
Analysis · Finance Leadership · 15 min read
Seventy-one percent of finance functions globally are now using artificial intelligence in some form. But the gap between the firms genuinely winning with it and the ones merely deploying it has widened sharply — and the interesting question is no longer whether to adopt, but what separates the leaders from everyone else.
I. The Gap
Seventy-one percent, and widening.
The most recent round of global survey data on artificial intelligence in finance tells a clearer story than any vendor deck ever will. Seventy-one percent of finance functions across twenty-three industrialised and emerging markets are now using AI in some form. Within the next three years, that figure will approach universality. The technology is no longer a signal of innovation; it is becoming the baseline expectation of how finance teams operate.
What changes beneath that headline number, however, is far more interesting. The firms surveyed were grouped into three maturity tiers: beginners, implementers, and leaders. Eighteen percent of firms are beginners, still piloting and evaluating. Fifty-eight percent are implementers, deploying AI in one or two functions but not yet achieving integrated firm-wide adoption. Twenty-four percent are leaders — firms that have embedded AI across finance and are generating compounding returns from that investment.
The gap between the leaders and everyone else is not, as one might assume, primarily a technology gap. It is an investment gap, a governance gap, a talent gap, and above all a strategic-intent gap. Leaders spend more, deploy across more functions, institutionalise governance earlier, and pull assurance from their auditors into the AI lifecycle itself. They are pulling ahead fast enough that the gap will likely continue to widen through the end of the decade.
This article is a close reading of what, specifically, distinguishes the leaders from the rest of the market — drawn from global survey data spanning 2,900 finance functions across six industries and twenty-three markets, and intended as a practical reference for CFOs, managing partners, and controllers who want to know which of their current practices matter and which are theatre.
II. The Maturity Framework
Three tiers, three very different trajectories.
| Tier | Share of Firms | Defining Characteristic |
|---|---|---|
| Leaders | ~24% | Integrated AI across multiple finance functions; measurable ROI; institutional governance |
| Implementers | ~58% | Active deployments in one or two functions; scaling not yet achieved |
| Beginners | ~18% | Piloting, evaluating, or planning; no production-scale deployment |
The distribution matters because the journey from each tier to the next is qualitatively different. The step from beginner to implementer is almost always a single successful production deployment in a high-volume, rules-based workflow. The step from implementer to leader is harder: it requires cross-functional integration, a serious governance model, and a willingness to reconsider how finance work is organised and priced. Most finance functions, looking at themselves honestly, sit in the large implementer middle.
III. The Geography
Geography is narrowing faster than most expected.
A year ago, a sensible working model for AI adoption in finance was that North America, Western Europe, and developed Asia-Pacific would dominate, with emerging markets following some years behind. That model is already outdated. Adoption in emerging economies lags the developed world, but not by much, and in several categories the gap is closing quickly.
The variation within regions is larger than the variation between them. US, German, and Japanese firms sit at or near the global frontier of AI deployment in finance. Italian and Spanish firms, despite sitting in the same regional economic bloc as Germany, trail visibly. Within the emerging-market cohort, Chinese and Indian firms are ahead of the global average on several adoption measures, while Saudi Arabian and several African markets remain in the early stages.
The implication for firms with international operations is straightforward: AI maturity cannot be inherited from a head office. Each market has its own trajectory, shaped by regulatory posture, local talent availability, and legacy technology stack. A successful global rollout depends on treating each country as its own project rather than as a copy-paste exercise from the most advanced operation.
| Region | Leader Share | Notable Markets |
|---|---|---|
| ASPAC | ~37% | Japan, China, India at the frontier; regional variance is substantial |
| North America | ~27% | US finance functions among the most advanced globally |
| Europe | ~22% | Germany ahead; Italy and Spain behind regional average |
| Middle East & Africa | ~24% | Considerable market-level variation |
| Latin America | ~20% | Smaller base but accelerating |
Two details stand out. First, ASPAC leads on raw leader share, driven substantially by Japanese and Chinese enterprise adoption. Second, the spread between regions is far smaller than the spread between individual markets within each region. Treating “Europe” or “Asia” as homogeneous blocs masks more than it reveals.
“Leaders have six use cases in production on average. Everyone else has three or four. The compounding returns come from the breadth of deployment, not the depth of any single project.”
— The Leader’s Advantage
IV. What Leaders Do Differently
Five behaviours that separate the top quartile.
When the survey data is sliced finely, five behaviours emerge that consistently distinguish the leaders from everyone else. None of them is exotic. All of them require deliberate effort from the CFO and, in most cases, from the managing partner or the board.
i. They spend nearly twice as much as everyone else.
Leader firms currently allocate approximately 12.5% of their IT budget to enterprise-wide AI activities, compared with roughly 7.4% for the rest of the market. Over the next three years, leader AI budget share is expected to rise to 16.5%. The rest of the market will narrow the gap but not close it. The pattern is spend discipline rather than raw spend: leaders move faster from pilot to production and reinvest the savings into the next wave of deployment.
ii. They deploy across more functions, earlier.
In accounting, 88% of leaders have selectively or widely adopted AI, compared to only 19% of everyone else. The pattern holds across every area of finance: financial planning, treasury management, risk management, tax operations. Leaders do not choose between use cases; they deploy across the portfolio. On average, a leader firm operates six AI use cases in production. The rest of the market operates three or four.
iii. They commit to generative AI early.
Approximately 38% of leaders are already selectively or widely using generative AI in financial reporting. Among non-leaders, that figure is approximately 3%. Within three years, 95% of leaders expect to be using generative AI in reporting — compared to 39% of the rest. The generative AI gap is larger than the traditional AI gap, and it is widening faster.
iv. They build internal AI capability rather than relying exclusively on vendors.
Leaders are far more likely to have either a dedicated AI team within finance or distributed AI specialists embedded in each function. Roughly two-thirds of leaders also draw on a centralised enterprise AI team outside of finance. External consultants and technology outsourcers are used by nearly half of leaders, but as accelerators rather than substitutes for internal capability.
v. They take governance seriously from day one.
More than twice as many leaders as non-leaders have placed AI risks and controls within the scope of their financial reporting frameworks. More than half of leaders obtain third-party assurance over AI processes and controls — more than double the rate of non-leaders. Published AI frameworks, defined review checkpoints, and documented audit trails are not corner-case concerns for the top quartile; they are table stakes.
| Use Case | Leaders | Everyone Else |
|---|---|---|
| Research & data analysis | 85% | 46% |
| Predictive analysis & planning | 81% | 46% |
| Risk management & cybersecurity | 78% | 45% |
| Performance evaluation & training | 75% | 33% |
| Data entry & document processing | 62% | 27% |
| Expense tracking & tax deductions | 52% | 27% |
| Fraud detection & prevention | 50% | 28% |
| Gen AI for document composition | 48% | 25% |
| Administrative automation | 43% | 27% |
| Custom virtual assistants | 39% | 19% |
| Regulatory monitoring | 33% | 21% |
The pattern is striking and consistent. Across every category measured, leader deployment rates run between roughly 1.6x and 2.3x the rest of the market. No single use case separates leaders from the pack; the separation comes from the aggregate breadth of deployment.
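The spread can be recomputed directly from the deployment table; a minimal Python check, with the percentages transcribed from the table above:

```python
# Leader vs. rest-of-market deployment rates (%), transcribed from the table above.
deployment = {
    "Research & data analysis": (85, 46),
    "Predictive analysis & planning": (81, 46),
    "Risk management & cybersecurity": (78, 45),
    "Performance evaluation & training": (75, 33),
    "Data entry & document processing": (62, 27),
    "Expense tracking & tax deductions": (52, 27),
    "Fraud detection & prevention": (50, 28),
    "Gen AI for document composition": (48, 25),
    "Administrative automation": (43, 27),
    "Custom virtual assistants": (39, 19),
    "Regulatory monitoring": (33, 21),
}

# Ratio of leader adoption to rest-of-market adoption, per use case.
ratios = {name: leaders / rest for name, (leaders, rest) in deployment.items()}

for name in sorted(ratios, key=ratios.get):
    print(f"{ratios[name]:.2f}x  {name}")
```

The lowest ratio is regulatory monitoring at roughly 1.6x; the highest is data entry and document processing at roughly 2.3x.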
V. The Return
ROI is real, and it is asymmetric.
The most frequent objection to AI investment in finance, raised particularly by firms that have not yet started, is uncertainty about return on investment. The survey data substantially resolves that question. Among firms currently using AI in finance, 47% report that ROI is meeting expectations and a further 19% report that it is ahead of or well ahead of expectations. Only 15% report outcomes below expectations.
The asymmetry by maturity tier is the more important finding. Among beginners, 25% report higher-than-expected ROI. Among implementers, 30%. Among leaders, 57%. The relationship is not coincidental. Leaders report between two and three times as many distinct benefits from their AI programmes as beginners do — an average of seven benefits per firm, compared to two or three for early-stage adopters. The compounding nature of those benefits is what drives the ROI gap.
This matters for firms currently sitting on the fence. The evidence suggests that the “wait and see” posture has become a costly one. Firms that start later will face a steeper adoption curve, compressed timelines, and a smaller pool of skilled staff to draw from. The returns are available now, they are proportional to deployment maturity, and the window to catch up is narrowing.
| Maturity Tier | Ahead of Expectations | Meeting Expectations | Below Expectations |
|---|---|---|---|
| Leaders | 57% | 39% | 5% |
| Implementers | 30% | 50% | 19% |
| Beginners | 25% | 52% | 23% |
The dispersion in outcomes is the clearest single argument against a cautious posture. Leaders are not only more likely to hit their ROI targets; they are dramatically more likely to beat them. The firms that moved earliest are now the firms seeing disproportionate returns, and the structural reasons for that advantage — accumulated data, trained staff, institutionalised governance — do not diminish with time.
VI. The Blind Spots
The issues nobody is paying enough attention to.
Finance executives consistently rate privacy, data integrity, and security as their most important AI concerns, and they direct management attention accordingly. The survey data, however, points to a more uncomfortable finding: the attributes that matter most to AI outcomes are not the ones currently receiving the most attention. Three specific blind spots deserve naming.
Transparency and explainability.
Because most modern AI systems behave as black boxes, stakeholders often have little basis to trust or challenge their outputs. When a finance leader cannot explain why a particular model flagged a particular transaction, that gap becomes a serious problem the moment an auditor, regulator, or litigator asks for the explanation. Transparency is not a nice-to-have; it is the basis on which AI outputs become defensible.
Accountability for automated outputs.
When AI drafts an accounting memo, reconciles an account, or flags a fraud risk, who is professionally responsible for the output? The answer cannot be “the system.” Firms that have not defined clear accountability points — which human approves which AI output, under what review standard — are exposing themselves to liability that will eventually crystallise.
Sustainability.
AI inference and training consume significant amounts of energy. For firms with sustainability commitments, the energy footprint of an expanding AI deployment sits in tension with the carbon targets. Very few finance functions have begun measuring this, much less reporting on it. Expect the conversation to catch up over the next eighteen months as sustainability disclosure frameworks mature.
VII. The Auditor’s Evolving Role
What companies now expect from their auditors.
One of the quieter but more consequential shifts captured in the research is how companies now expect their external auditors to engage with AI. The expectations have moved well beyond traditional control testing. Most surveyed organisations now expect auditors to conduct detailed reviews of their AI control environments. A large proportion want auditors to assess the maturity of their AI governance, provide third-party attestation over the use of specific AI technologies, and perform readiness and gap assessments ahead of scaled deployment.
The expectations flow in both directions. Companies also expect their auditors to be using AI themselves — in data analysis, anomaly detection, fraud identification, predictive analysis, and the general acceleration of the audit process. The survey data shows meaningful demand for real-time or near-real-time auditing, which represents a substantial departure from the annual cycle that has defined the profession for decades. The largest audit firms — Deloitte, EY, PwC, KPMG, and the next tier — have been investing heavily in AI-assisted audit platforms, with firms such as EY publicly foregrounding their AI-driven audit transformation in recent strategic communications.
Most notably, finance teams want more communication from their auditors on AI — and they are not currently getting it. Among leader firms, only 15% say their auditor communicates frequently with them about AI; 51% say they would like that frequency. The gap between current and desired communication is a genuine market opportunity for audit firms that can close it, and a liability for firms that continue to approach AI as a tangential client concern rather than a core audit subject.
| Activity | Traditional AI Demand | Generative AI Demand |
|---|---|---|
| Data analysis | 66% | 54% |
| Risk mitigation | 57% | 53% |
| Risk identification | 55% | 51% |
| Fraud detection | 53% | 45% |
| Predictive analysis | 50% | 32% |
| Speed up audit process | 45% | 29% |
| Real-time auditing | 39% | 33% |
| Document / data gathering | 37% | 37% |
| Trend analysis | 34% | 30% |
| Improve responsiveness | 32% | 35% |
“Finance teams want more communication from their auditors on AI. Only fifteen percent of leader firms say they are getting it. That gap is the single clearest market opportunity in professional services right now.”
— The Communication Gap
VIII. Recommendations
Seven steps, in sequence.
01
Prioritise AI in finance explicitly
Move beyond the base level of reconciliation and data entry. Build a portfolio of use cases that explicitly includes research, risk, cybersecurity, fraud detection, and predictive analysis. Breadth beats depth when you are trying to move from implementer to leader.
02
Develop a specific Gen AI plan
Generative AI is where the largest capability gap between leaders and everyone else currently sits. A plan that addresses use cases, data-sovereignty controls, accuracy standards, and intellectual property boundaries is worth more than a generic “AI strategy” memo.
03
Look beyond reporting
Most finance functions have concentrated their first AI deployments in accounting and reporting. The next frontier sits in treasury, risk management, and tax operations. Leaders have already expanded beyond reporting; the rest of the market should follow.
04
Build internal capability
Staff up within finance. External vendors and consultants are accelerators, not substitutes. A dedicated internal AI capability — whether a central team or specialists embedded by function — is a consistent leader characteristic and a durable competitive moat.
05
Tackle the barriers head on
Establish governance, invest in data cleanup, run ROI-validating pilots before scaling. The barriers do not go away on their own; they require deliberate programme management. Firms that treat them as execution problems rather than strategic obstacles move faster.
06
Watch the blind spots
Transparency, accountability, and sustainability receive disproportionately little attention relative to their actual importance. The firms that address them early will find themselves ahead when regulatory and stakeholder attention turns that direction — which it will.
07
Expect more from auditors
A 2026 audit relationship should involve active dialogue on AI governance, assurance over AI controls, and demonstrable use of AI within the audit itself. If that conversation is not currently happening, raise it. Audit firms that do not have a credible answer are signalling a durable gap in their own capability.
IX. What Comes Next
The next three years, in outline.
Global adoption of AI in finance is on a trajectory toward near-universal penetration. Within three years, the share of firms selectively or widely using AI in financial reporting is expected to rise from roughly 28% to approximately 83% across developed markets, and to 78% across the emerging markets included in the global sample. These numbers are striking because they describe a shift from “mainstream adoption” to “operating assumption” over an unusually compressed timeframe.
The generative AI component of that shift is more dramatic still. Across the global sample, nearly every surveyed firm expects to be piloting or actively using generative AI in financial reporting within three years. Among leaders, 95% expect to be selectively or widely deploying it. The maturity distribution captured in today’s survey will look very different twenty-four months from now, and the firms that emerge at the top of that new distribution will be the ones that treated the current window as an opportunity rather than a decision to defer.
Three trends are worth specific attention. First, agentic AI workflows — systems capable of planning and executing multi-step financial tasks autonomously — are moving from proof-of-concept to production in narrow domains. Second, the regulatory conversation on AI in finance is crystallising, particularly in the European Union, and will reshape vendor selection criteria over the next two years. Third, the professional standards bodies governing audit and assurance are actively developing guidance that will define what constitutes adequate human oversight of AI-generated financial outputs. Firms that build their practices now with these frameworks in mind will have substantially less rework to do when the rules arrive.
X. Reader Questions
Twenty-five questions, answered plainly.
How many finance functions globally are now using AI?
Approximately 71% of finance functions across twenty-three industrialised and emerging markets are using AI to some degree, with 41% using it moderately or widely. Within three years, that figure is expected to approach universality.
What defines a “leader” firm on AI adoption?
Leaders have integrated AI across multiple finance functions, achieved measurable ROI, institutionalised governance, and established internal capability. They comprise roughly 24% of the surveyed population.
What is the biggest gap between leaders and everyone else?
Breadth of deployment. Leaders operate approximately six AI use cases in production; the rest of the market operates three or four. The ROI advantage comes from compounding returns across the portfolio rather than from any single exceptional project.
Is ROI on AI in finance actually being realised?
Yes. Among firms currently deploying AI, 47% report that ROI is meeting expectations and a further 19% report that it is ahead of or well ahead of expectations. Only 15% report outcomes below expectations.
How much more does a leader firm spend on AI than a non-leader?
Approximately twice as much as a share of IT budget. Leaders currently allocate about 12.5% of IT budget to enterprise-wide AI activities, versus 7.4% for the rest of the market. Over three years, the leader share rises to approximately 16.5%.
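To see what those shares mean in absolute terms, a quick sketch; the dollar IT budget here is purely illustrative, since the survey reports shares rather than amounts:

```python
def ai_spend(it_budget: float, ai_share_pct: float) -> float:
    """Dollar AI spend implied by a given share of the IT budget."""
    return it_budget * ai_share_pct / 100

# Hypothetical $50M IT budget; the 12.5% / 7.4% shares are from the survey.
it_budget = 50_000_000
leader = ai_spend(it_budget, 12.5)   # $6,250,000
rest = ai_spend(it_budget, 7.4)      # $3,700,000
print(f"annual gap: ${leader - rest:,.0f}")  # annual gap: $2,550,000
```

At that illustrative scale, the leader allocation buys roughly an extra two and a half million dollars of AI capability every year, which is where the compounding advantage comes from.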
Which finance functions are furthest along?
Accounting and financial planning are the most mature, with nearly two-thirds of firms piloting or deploying AI. Treasury and risk management follow. Tax management lags, primarily because of data quality issues and regulatory complexity.
Why is tax the slowest to adopt AI?
Tax workflows are complex, regulation-heavy, and often dependent on data scattered across legacy systems. Traditional machine learning approaches struggle in this environment. Generative AI is changing that — but production-scale adoption in tax is further behind than in other finance functions.
What is the generative AI gap between leaders and others?
Approximately 38% of leaders are already selectively or widely using generative AI in financial reporting, compared to roughly 3% of non-leaders. Within three years, 95% of leaders expect to be using it, versus 39% of others.
Which regions are ahead globally?
ASPAC leads on raw leader share, driven by Japanese and Chinese enterprise adoption. North America follows. Europe trails both, with significant internal variation — Germany ahead, Italy and Spain behind. Emerging markets as a block are closing the gap faster than many observers expected.
What are the most commonly cited barriers?
Data security (57%), limited AI skills (53%), inconsistent data (48%), high implementation costs (45%), lack of transparency (40%), compliance concerns (39%), and potential for bias (37%). The ordering shifts as firms mature — early-stage concerns are cost and skills; later-stage concerns are data consistency and integration.
What are the major blind spots?
Transparency and explainability, accountability for automated outputs, and sustainability of AI energy consumption. All three receive less management attention than their actual importance warrants.
How many use cases does a leader typically run?
On average, six. For non-leaders, the average is closer to 3.6. The breadth is the strongest single predictor of ROI outcomes.
Do leaders build internal AI teams or outsource?
Both, but the pattern is deliberate. Leaders build internal capability — either a dedicated team within finance or distributed AI specialists embedded in each function — and use external vendors and consultants as accelerators rather than substitutes.
What governance practices distinguish leaders?
Published AI frameworks, AI risks and controls included in financial reporting scope, third-party assurance over AI processes and controls, documented human-review checkpoints for high-risk outputs. More than twice as many leaders as non-leaders have formalised these practices.
Do leaders get third-party assurance over AI?
More than half of leaders obtain third-party assurance over AI processes and controls — more than double the rate of non-leaders. They also routinely include AI controls assurance in the scope for vendor and third-party processes.
What do companies now expect from their external auditors on AI?
Detailed reviews of AI control environments, AI governance maturity assessments, third-party attestation over AI technology, and active use of AI within the audit process itself. Communication from auditors on AI is the single most common unmet expectation.
How are auditors using AI in their own work?
Data analysis, risk identification and mitigation, fraud detection, anomaly identification, predictive analysis, and accelerating the audit process. Real-time or near-real-time auditing is an emerging expectation that will likely reshape the audit cycle over the coming years.
Why do data quality problems delay AI projects so often?
Because AI deployment exposes underlying data hygiene issues rather than solving them. Inconsistent charts of accounts, duplicate vendor master records, and fragmented ledgers all become visible, and operationally problematic, when a model is asked to operate on them at scale.
Is staff resistance a real problem?
It rises with deployment maturity. Early pilots rarely trigger resistance because the scope is limited. As automation expands and affects visible job content, resistance becomes material. Firms that frame AI as augmentation rather than replacement — and back that framing with visible investment in staff reskilling — see much higher adoption.
Should we start with a single pilot or multiple?
One pilot, measured carefully, expanded after validation. A sequence of disciplined pilots is how most successful firm-wide rollouts are actually built. Multiple simultaneous pilots dilute management attention and make attribution of results harder.
What is the typical payback period?
In narrow, well-chosen pilots — reconciliation, document processing, expense categorisation — under twelve months is achievable. For broader platform migrations, two to three years is more realistic once data cleanup, training, and process redesign are included.
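The arithmetic behind that twelve-month figure is simple; a minimal undiscounted sketch, using hypothetical pilot numbers that are not drawn from the survey:

```python
import math

def payback_months(upfront_cost: float, monthly_net_saving: float) -> int:
    """Months until cumulative net savings cover the upfront cost.

    Undiscounted for simplicity; a real evaluation would also include
    ongoing licence fees, data-cleanup effort, and a discount rate.
    """
    if monthly_net_saving <= 0:
        raise ValueError("a pilot with no net savings never pays back")
    return math.ceil(upfront_cost / monthly_net_saving)

# Hypothetical reconciliation pilot: $180k to build and deploy,
# $20k/month of net staff-time savings once in production.
print(payback_months(180_000, 20_000))  # 9 -> inside the twelve-month window
```

The same function shows why broader platform migrations stretch to two or three years: the upfront cost grows much faster than the monthly saving once data cleanup and process redesign are counted.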
Why do emerging markets lag the developed ones?
Smaller technology budgets, less established digital infrastructure, and earlier-stage data governance practices. The gap is narrower than many observers expected, however, and is closing quickly — particularly in China, India, and several ASPAC markets.
How much of the IT budget will AI command over the next three years?
Across the surveyed population, average AI spend is expected to rise from approximately 8.5% of IT budget to 13.5% over three years. For leader firms specifically, the figure rises from 12.5% to approximately 16.5%.
What should a CFO do this quarter?
Three things. Commission an honest maturity assessment. Pick one use case outside accounting and financial reporting — treasury, risk, or tax — and run a structured pilot. Open a formal conversation with the firm’s external auditor about AI governance, assurance, and the auditor’s own AI capability.
What is the single biggest risk of waiting?
The compounding nature of the leader advantage. Firms that move later face a steeper adoption curve, compressed timelines, and a smaller pool of skilled staff. The evidence suggests that the window for catching up on favourable terms is narrowing rather than widening.
