
Cybersecurity is a primary risk for agentic AI use in financial services, with 48% of respondents flagging adversarial AI as a top concern. This is reinforced by recent news that Anthropic’s Claude model is often more capable than humans at hacking, making manual oversight of AI use in financial services problematic.
“Further complicating this problem space is a notable perception gap: AI vendors place less priority than industry and regulators on both adversarial AI threats (35% versus 50% industry, 57% regulators) and cyber/operational resilience (32% versus 46% industry, 59% regulators),” said the 2026 Global AI in Financial Services Report – Adoption, Impact and Risks.
Other top AI risks identified include model hallucinations, unreliable outputs, model opacity and lack of explainability, and market abuse.
“The scale and pace of AI adoption in financial services is genuinely remarkable – 4 in 5 firms are already deploying AI at some level, agentic systems have crossed into the mainstream and real productivity and profitability gains are being felt across the industry,” said Bryan Zhang, Executive Director of the Cambridge Centre for Alternative Finance.
“But our data also reveals a sector navigating without a map. Accountability for AI failures is unresolved, cyber vulnerabilities are compounding faster than they can be managed, and regulators are operating at half the pace of the institutions they oversee. The opportunity is enormous. So is the responsibility to get the governance right.”
Big gap between AI experimentation and firm-wide implementation
The survey results signal a “deep execution gap between early-stage experimentation and institution-wide AI integration”, with most financial-services industry AI use cases being back-office functions such as software engineering and data management rather than AI-powered customer support.
Fintechs lead incumbent financial services providers in using AI for customer support. Meanwhile, 76% of respondents at large financial institutions report finding it hard to measure the value of AI deployment.
“Overall, AI is primarily being used currently to improve execution, throughput and service rather than to fundamentally reconfigure business models, though 51% of more mature AI adopters are piloting or deploying new financial products powered by AI,” said the report. This signals “a potentially significant execution and business integration gap” regarding AI use in financial services.
Industry is far ahead of regulators in AI adoption
The report finds that industry respondents are far ahead of regulators in adoption and deep adoption of AI, with 48% of the 130 regulatory authorities surveyed reporting they are “still in the ‘exploring’ stage for AI adoption or not engaged with AI at all”.
Most organisations surveyed said they are building on external AI models rather than training their own from scratch, with OpenAI the most-used foundation model provider across all groups (76% of industry and 48% of regulators), followed by Google and Anthropic.
Report based on insights from 628 respondents
The report is based on insights from 628 respondent organisations, including 203 fintechs, 149 financial incumbents, 146 AI vendors and 130 central banks and other financial regulators across 151 jurisdictions around the world.
The global research was conducted by the Cambridge Centre for Alternative Finance, in partnership with Financial Innovation for Impact (FII), the Bank for International Settlements (BIS), the International Monetary Fund (IMF), the World Economic Forum (WEF), the World Bank Group, the Inter-American Development Bank (IDB), the Consultative Group to Assist the Poor (CGAP), the Arab Monetary Fund (AMF) and with the support of the UK Foreign, Commonwealth and Development Office (FCDO).
To access and download the full report: https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/2026-global-ai-in-financial-services-report/
Source: Cambridge Judge Business School