75% of UK Financial Services Firms Now Use AI — What the 2024 Regulators’ Survey Actually Says

Regulatory Brief · UK · 5 min read

The UK regulators’ third survey of AI in financial services landed quietly in November 2024. Buried inside it are the numbers every CFO, compliance officer, and audit partner should have on hand.

— At a Glance —

  • 75% of surveyed UK financial services firms are already using AI. A further 10% plan to within three years.
  • Insurance leads sector adoption at 95%. Financial market infrastructure sits at 57%.
  • Third-party implementations now account for a third of all AI use cases, nearly double the 2022 share.
  • 55% of use cases involve some degree of automated decision-making. Only 2% are fully autonomous.
  • 46% of firms admit to only “partial understanding” of the AI technologies they operate.
  • Cybersecurity is the top-rated systemic risk both now and projected three years out.

The adoption curve has bent.

The Bank of England and the Financial Conduct Authority have now run three surveys on artificial intelligence use in UK financial services — in 2019, 2022, and again in 2024. The trajectory between surveys is the clearest single summary of how the sector has changed. In 2022, 58% of surveyed firms reported using AI. By late 2024, that figure had climbed to 75%, with a further 10% planning to deploy within three years. If projected forward on the current slope, close to universal adoption arrives within the next survey cycle.
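The slope between the two most recent surveys is easy to extrapolate. A minimal sketch, illustrative only: it assumes the 2022→2024 growth rate continues linearly, and the function name is ours, not the survey's:

```python
# Linear extrapolation of AI adoption from the two most recent survey points.
# Illustrative only: assumes the 2022 -> 2024 slope continues unchanged.

def extrapolate_adoption(year, points=((2022, 58.0), (2024, 75.0))):
    """Project adoption (%) at `year` from two (year, pct) survey points, capped at 100."""
    (y0, p0), (y1, p1) = points
    slope = (p1 - p0) / (y1 - y0)  # percentage points per year
    return min(100.0, p0 + slope * (year - y0))

print(extrapolate_adoption(2026))  # next plausible survey year -> 92.0
print(extrapolate_adoption(2028))  # capped at 100.0
```

On this naive projection, a 2026 survey would report adoption in the low nineties, consistent with the "close to universal within the next survey cycle" reading above.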

What is more revealing is the compression of the deployment timeline within firms that already use AI. The median respondent operates nine use cases today and expects to operate twenty-one within three years. Large UK banks already run a median of thirty-nine; international banks operating in the UK, forty-nine. The gap between AI-heavy firms and the rest of the market is already sizeable and appears to be widening.

— Sector Dynamics —

Insurance is quietly ahead; FMIs lag.

The sector-level numbers will surprise anyone who still thinks of banking as the frontier of AI in finance. Insurance led adoption at 95% of surveyed firms, with international banks close behind at 94%. UK retail banking — the sector that attracts most of the media attention — sat well inside the pack. At the other end of the distribution, only 57% of financial market infrastructure firms reported using AI, the largest adoption gap of any surveyed sector.

Operations and IT accounted for 22% of all reported use cases — twice the next-largest category, retail banking at 11%, with general insurance third at 10%. Optimisation of internal processes was the single most common application (41% of respondents), followed by cybersecurity (37%) and fraud detection (33%). Customer support, regulatory compliance, and further fraud-detection deployments are expected to be the largest incremental growth areas over the next three years.

— The Third-Party Problem —

One-third of AI now runs on someone else’s rails.

The single most consequential finding for systemic risk watchers is the sharp rise in third-party dependency. One in three AI use cases is now a third-party implementation, nearly double the 17% figure reported in 2022. Human resources, risk and compliance, and operations and IT show particularly high third-party rates — 65%, 64%, and 56% respectively.

The concentration within that third-party layer is where the systemic risk sits. The top three cloud providers account for 73% of all named providers. The top three model providers now account for 44%, up sharply from 18% in 2022. The top three data providers account for 33%, also up meaningfully. The conclusion is that the AI supply chain for UK financial services is not just increasingly outsourced; it is increasingly concentrated in a small number of non-UK vendors. Regulators flagged this directly as the systemic risk with the largest expected three-year increase.
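The trajectory is easiest to see as percentage-point changes between surveys. A small sketch using only the top-three shares quoted in this brief (the 2022 cloud share is not quoted, and the helper name is illustrative):

```python
# Year-on-year change in top-3 provider concentration, using the shares
# reported in the 2022 and 2024 surveys. 2022 cloud share not quoted here.
top3_share = {
    "cloud":  {2022: None, 2024: 73},
    "models": {2022: 18,   2024: 44},
    "data":   {2022: 25,   2024: 33},
}

def share_change(layer):
    """Percentage-point change in top-3 share between surveys; None if a year is missing."""
    s = top3_share[layer]
    if s[2022] is None or s[2024] is None:
        return None
    return s[2024] - s[2022]

for layer in top3_share:
    print(layer, share_change(layer))  # models: +26pp, data: +8pp
```

The 26-point jump in model-provider concentration is the figure regulators singled out as the systemic risk with the largest expected three-year increase.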

— Automation & Oversight —

Automated, but not autonomous.

Despite the hype around agentic AI, the survey paints a more measured picture of actual autonomy. Of all reported use cases, 55% involve some degree of automated decision-making. Within that, 24% are semi-autonomous — systems that can make routine decisions independently but escalate ambiguous or high-impact ones to human review. Only 2% of deployed use cases are fully autonomous. For firms considering procurement of AI with meaningful decision-making authority, the prevailing design pattern is clearly still human-in-the-loop.

— The Understanding Gap —

Forty-six percent of firms report only a partial understanding of the AI technologies they operate. The gap is driven largely by third-party models, which firms deploy but cannot fully inspect.

— Risk & Constraints —

The risks are mostly about data. The constraints are mostly about regulation.

Four of the top five risks reported by surveyed firms relate to data — privacy and protection, quality, security, and bias or representativeness. The fifth is model transparency. The risks projected to increase most over three years are third-party dependencies, model complexity, and embedded or “hidden” models inside vendor products. In plain terms: firms are worried about losing visibility and control as their AI stack becomes more outsourced and more complex.

On the regulatory side, data protection and privacy were cited as the single largest constraint on AI adoption — 23% of firms rated it a “large” constraint. Resilience, cybersecurity, and third-party rules followed, with the FCA’s Consumer Duty close behind. The complaint, notably, was not primarily about the stringency of the rules but about the burden of compliance and, in some cases, lack of clarity — 18% cited unclear treatment of intellectual property rights, 13% unclear application of the Consumer Duty to AI-driven decisions.

Non-regulatory constraints paint a familiar picture. Safety, security, and robustness of models ranked first. Insufficient talent and access to skills ranked second — 25% of firms rated it a “large” constraint. Appropriate transparency and explainability ranked third. Professional-services firms advising clients on AI deployment in financial services, including EY, have noted similar bottlenecks in their own client engagements across the sector.

UK Financial Services AI, 2022 → 2024 — Selected Shifts

  Metric                            2022           2024
  Firms using AI                    58%            75%
  Firms planning to use AI          14%            10%
  Use cases that are third-party    17%            33%
  Top 3 model providers’ share      18%            44%
  Top 3 data providers’ share       25%            33%
  Foundation model use cases        Not measured   17%

— What to Watch —

The next survey will look very different.

Three trends in the 2024 data are likely to define the 2026 or 2027 survey. First, foundation models already account for 17% of all use cases and are growing fast; the next survey will likely show them as the majority category. Second, the third-party concentration risk in cloud, models, and data is on a trajectory that will, absent intervention, become a named systemic concern for the FPC. Third, the gap between firms with “partial” and “complete” understanding of their own AI is a governance problem that will only compound as complexity rises.

For CFOs and audit partners advising UK financial services clients, the practical implication of the 2024 survey is unambiguous. The adoption question is settled. The questions that replace it — governance adequacy, third-party concentration risk, model transparency, talent availability — are harder, more expensive, and will occupy regulatory attention through the rest of the decade.

— Reader Questions —

Eight questions, answered briefly.

What did the 2024 UK survey actually measure?

It measured AI adoption, use-case distribution, third-party exposure, automated decision-making, materiality, perceived risks and benefits, and governance practices across 118 regulated UK financial services firms. It is the third such survey conducted jointly by the Bank of England and the FCA.

How much has adoption grown since 2022?

Firms reporting active AI use rose from 58% in 2022 to 75% in 2024. Factoring in firms planning to deploy within three years, the combined figure reaches 85%.

Which sector is furthest ahead?

Insurance, at 95% of surveyed firms. International banks follow at 94%. Financial market infrastructure firms trail at 57%.

What does the third-party concentration risk actually look like?

The top three cloud providers account for 73% of named providers. The top three model providers account for 44%, more than double the 2022 share. The top three data providers account for 33%. Concentration is accelerating across all three layers.

Are fully autonomous AI systems common in UK finance?

No. Only 2% of reported use cases involve fully autonomous decision-making. The prevailing design pattern remains human-in-the-loop, with 24% of use cases classified as semi-autonomous.

What are the biggest perceived risks?

Four of the top five relate to data: privacy, quality, security, and bias. Model transparency rounds out the top five. Looking three years out, respondents expect third-party dependencies, model complexity, and hidden vendor models to grow fastest as risk drivers.

What is the biggest regulatory constraint?

Data protection and privacy, with 23% of firms rating it a “large” constraint. Resilience and cybersecurity rules follow, with the FCA’s Consumer Duty in third place.

What should finance leaders do with these findings?

Assess third-party AI concentration risk in their own stack; pressure-test governance adequacy against the 16-framework benchmark the survey used; ensure that leadership understanding of deployed AI systems matches the material risk they carry; and plan for a regulatory environment in which the adoption question is settled and the governance question becomes central.
