Finance May Go Slower On Generative AI Adoption

Finance has in many ways spearheaded business automation in the past few years — but will that necessarily happen with artificial intelligence, particularly generative AI? 

Vendors are pushing that theme, but uncertainty around AI governance, controls, output accuracy, and security means history may not repeat itself.

In a session, “AI in the Audit — What Finance Executives Need to Know,” at the FEI’s virtual Corporate Financial Reporting Insights conference earlier this week, panelists revealed some of the early adoption strategies in finance and auditing.

Ed Moran (left), PayPal’s Hasitha Verma (right), and Mike Sawyer talk AI adoption. 

FEI’s Corporate Financial Reporting Insights Conference

 

Ed Moran, managing director of innovation at KPMG and panel moderator, said that the Big Four firm’s generative AI was filling “real capacity gaps,” including drafting policies and procedures and performing simple audit tests. “I think this is going to be as ubiquitous as spreadsheets,” he commented during a separate session on auditing regulation.

But finance teams, especially their controllers, will be weighing finance's need for completeness and accuracy against the potential benefits of quick implementations of generative AI.

Controls and Governance

Payments company PayPal Holdings has been using generative AI and exploring use cases in functions like compliance, customer service, fraud, human resources, and credit risk decision-making. But finance lags. “[Finance’s] beginning point is just to be skeptical and [ask] what are the controls necessary to realize it within our function, which, of course, [handles] lots of material nonpublic information,” said Hasitha Verma, PayPal’s chief accounting officer.

Companywide, PayPal is taking a centralized approach to governance. The payments company has set up an AI center of excellence to guide and train employees and develop and approve use cases. It also drafted a responsible AI framework. But in these early days, the most crucial piece might be the written guidelines for individual employee use of generative AI tools.




“One of the distinctions we’re making is using free public tools [like ChatGPT] versus subscription-based ones or vendor-provided services,” said Verma. When using free public AI tools, the guidelines tell employees never to upload sensitive, confidential corporate information. For a payments company, that includes its financials and “high-stakes” data on consumers, merchants, bank instruments, and credit and debit cards.

“Generative AI works by taking all the submitted data points and using them to generate some output. If you’re submitting confidential information, the AI tool may produce an output that substantially resembles one of those proprietary inputs,” Verma said.

Early Auditing Uses

The other panelist, audit partner Mike Sawyer of KPMG, said the Big Four firm is in the vanguard of companies using generative AI but is careful to use it responsibly.

KPMG has been trying algorithmic approaches in auditing for years, Sawyer said, but those required substantial upfront investment and programming. This past summer, the firm developed its own generative AI tool that reduces administrative burdens and assists with audit quality. Alongside the tool, KPMG released a “prompt library.” The first tier of the library explains large language models and describes how best to interact with KPMG’s tool.

A higher-level tier of the library helps enhance audit quality. A chatbot prompts engagement team members about documentation and other aspects of an engagement and compares the inputs to the relevant audit standards. The output is an observation like, “Hey, your documentation is great, but maybe think about these two or three things,” said Sawyer. 

Generative AI “can get things wrong,” said Sawyer, “but just interacting with it, reading prompts that have standards listed, those are huge triggers to think about everything that needs to go into the audit file.”


Somewhat ironically, a key barrier to the use of generative AI in finance could be auditability. “One of the challenges of AI tools is there’s a bit of opaqueness in the output, in the processes that generate the output,” said Verma. “How would our auditors look at it?”

Sawyer agreed. “We’re thinking about how do we apply the same kind of lens to completeness and accuracy of data and how management is controlling the data,” he said. “We’re seeing a lot of use of generative AI in low-risk areas,” he added, which is helping KPMG develop a model for how to audit it.
