Artificial intelligence (“AI”) continues to reshape the UK financial services landscape in 2026, with consumers increasingly relying on AI-driven tools for financial guidance and firms deploying more autonomous systems across their businesses.
The Financial Conduct Authority (“FCA”), Prudential Regulation Authority (“PRA”) and Bank of England (“BoE”) (together “the Regulators”) have consistently signalled that AI will be overseen through existing regulatory frameworks, rather than through bespoke AI-specific rules. At the same time, political scrutiny is intensifying, supervisory expectations are rising, and the Regulators are investing heavily in sandbox initiatives and long-term reviews to test whether those frameworks remain fit for purpose.
This article explores the latest policy signals, supervisory initiatives and regulatory tools shaping the UK’s evolving approach to AI in financial services.
Pressure to Regulate AI in the Financial Services Sector Grows – But No New AI Rules Yet
Political and policy pressure on the current approach to AI regulation is growing, even as the Regulators continue to resist introducing AI-specific rules in favour of a technology-neutral, principles-based approach.
On 20 January 2026, the House of Commons Treasury Committee published a critical report warning that a “wait-and-see” approach to the use of AI in financial services – an approach it considers the Regulators to have adopted – risks serious harm to consumers and the broader financial system, if left unchecked. While the Committee’s recommendations are not binding, the report reflects heightened parliamentary scrutiny of AI deployment in financial services and signals rising expectations around regulatory clarity and preparedness. Amongst other things, the Committee called on the Regulators to:
- conduct AI-specific stress testing to assess systemic resilience;
- publish practical guidance by the end of 2026 on how existing consumer protection rules apply to AI, including clarity on senior manager accountability under the Senior Managers and Certification Regime (“SMCR”); and
- ensure that HM Treasury designates major AI and cloud providers as critical third parties under the new UK Critical Third Parties oversight regime (“UK CTP”).
On 27 January 2026, the FCA launched a long-term review into how AI could reshape retail financial services (the “Mills Review”). The FCA reiterated that it does not currently plan to introduce AI-specific rules, but acknowledged that existing supervisory frameworks may need to evolve as AI systems become more capable and autonomous. In particular, the FCA raised questions about how SMCR would operate where AI systems perform functions traditionally subject to direct human oversight. However, the FCA emphasised that “it would be premature” to recommend major regulatory or legislative changes at this stage.
These developments sit alongside a broader government push for regulators to take a more proactive stance on AI. In January 2026, the Department for Science, Innovation and Technology (DSIT) and the Department for Business and Trade (DBT) issued strategic letters to 19 regulators – including the FCA, BoE and PRA – directing them to publish plans for enabling safe AI-powered innovation and to report annually on their progress. On 1 April 2026, the BoE and PRA published their response, reiterating that they are maintaining a technology‑agnostic approach to regulation and keeping under review whether further action or guardrails may be needed. The regulators confirmed that monitoring and engagement with industry on AI will continue, including through:
- a fourth edition of the regulators’ biennial survey of AI adoption by the financial sector, to be re-run this year;
- a report to be published by the AI Consortium, a public-private platform set up by the Regulators last May to gather input from stakeholders on the capabilities, development, deployment and use of AI in financial services; and
- a new series of AI roundtables with banks and insurers, to be conducted by the PRA and BoE this year, to better understand the constraints firms may face in adopting AI.
Monitoring of Financial Stability and Prudential Engagement
The Financial Policy Committee (“FPC”), a BoE body responsible for monitoring systemic risks to the UK financial system and directing the PRA and FCA on macroprudential policy, has confirmed that – together with the PRA and FCA – it continues to monitor the development of AI-related risks to financial stability. The FPC’s April 2025 report on “Artificial intelligence in the financial system” highlighted the potential for systemic risk arising from the increasing use of AI: in banks’ and insurers’ core financial decision-making; in financial markets to inform trading and investment strategies and decisions; and within firms’ and third-party providers’ operational functions. While existing microprudential regulation (including SMCR) helps mitigate these risks, the FPC has indicated that it will continue to consider whether any macroprudential measures (in addition to the UK Critical Third Parties regime) may be required to safeguard the financial system as a whole.
The BoE and PRA also continue to engage actively with industry. On 16 February 2026, the BoE published a summary of AI roundtables with banks and insurers, which highlighted broad industry support for the PRA’s principles-based approach to AI governance, including Supervisory Statement 1/23 on model risk management. However, firms raised concerns about whether traditional model risk management and validation approaches can scale effectively in the context of widespread deployment of generative and agentic AI systems. Participants also questioned how the concept of a “human-in-the-loop” can be meaningfully applied as AI systems take on more decision-making functions. Firms further highlighted the operational challenges of managing AI risks across borders as jurisdictions adopt divergent regulatory approaches.
Regulators Explore Innovative Tools to Support Responsible AI Experimentation
Alongside their principles-based supervisory stance, the Regulators have invested heavily in regulatory tools designed to provide practical and responsible support for AI experimentation, and to deepen supervisory understanding of the use of AI in financial services.
In October 2024, the FCA launched its AI Lab, a dedicated initiative aimed at promoting safe innovation, improving regulatory insight into AI technologies, and providing firms with targeted support across the innovation lifecycle. Key components of the AI Lab include:
- Supercharged Sandbox – designed to lower barriers for firms without extensive in-house infrastructure by providing access to high-performance computing, enriched datasets and advanced AI tools;
- AI Live Testing – enabling firms to trial AI systems in controlled, real-world market conditions;
- AI Spotlight – showcasing real-world examples of how firms are experimenting with AI in financial services;
- AI Sprint – bringing together industry, academics, regulators, technologists and consumer representatives to inform the regulatory approach to AI; and
- AI Input Zone – enabling stakeholders to share views about current and future uses of AI.
In September 2025, the FCA published a feedback statement summarising industry responses to its April 2025 Engagement Paper, which set out the regulator’s proposal for an “AI Live Testing” pilot, as part of the existing AI Lab, aimed at supporting firms’ safe and responsible deployment of AI. Respondents expressed broad support for AI Live Testing, which was widely viewed as a valuable mechanism for building trust and transparency through closer regulator-firm collaboration. In particular, firms noted that AI Live Testing would help overcome “proof of concept paralysis”, whereby AI initiatives stall due to regulatory uncertainty. Respondents also highlighted the role of AI Live Testing in developing a shared understanding of complex AI issues such as model validation, bias detection and mitigation, and system robustness. In light of this strong industry support, the FCA proceeded to launch the pilot. The first cohort of firms joined AI Live Testing in October 2025, and a second cohort is expected to launch in April 2026, following an application window that ran from 19 January to 24 March 2026.
In addition, and in line with the UK’s pro-innovation agenda, on 26 March 2026 the FCA published its work programme for 2026/27, which confirms the expansion of the Supercharged Sandbox to a new cohort of firms. Participants will gain access to high-quality synthetic data to test innovative AI-driven financial products in a controlled environment. This reinforces the FCA’s strategy of enabling live experimentation rather than introducing new prescriptive rules.
Perimeter Questions and Unregulated AI-Driven Financial Guidance
On 26 March 2026, the FCA published its latest perimeter report, which highlights emerging risks at the edge of its regulatory remit – particularly the rapid growth of general-purpose AI tools offering financial advice or recommendations, such as AI-powered personal finance chatbots. The FCA notes that these tools may not fit neatly within existing regulatory frameworks, raising questions about whether current perimeter boundaries remain appropriate if consumer harm begins to materialise. The FCA has urged the government to consider whether regulatory boundaries should be updated if these unregulated services pose increasing risks.
Practical Takeaways
While no AI-specific rules have been introduced, regulatory expectations are rising. Firms are encouraged to take proactive steps to remain aligned with evolving supervisory priorities, including by:
- closely monitoring for further FCA guidance on how existing rules – particularly the Consumer Duty and SMCR – apply to AI-enabled business models;
- reviewing governance, explainability, and oversight frameworks for AI systems, especially those involving agentic or more autonomous capabilities, to ensure they meet current regulatory standards;
- closely monitoring developments under the UK CTP regime, particularly where firms rely on externally sourced AI or cloud service providers; and
- engaging with regulatory initiatives such as FCA sandboxes, live testing and calls for input (including the Mills Review) to help shape future policy and gain early insight into supervisory expectations.
The message for 2026 is clear: the window for innovation is open – but so is the door to greater scrutiny. Firms that act now to align with regulatory direction should be well-positioned to operate compliantly in an increasingly AI-enabled financial services landscape.
If you have any questions concerning the material discussed in this article, please contact a member of the team below.
