Sunday, April 12

UK financial regulators rush to assess risks of Anthropic’s latest AI model

UK financial regulators are holding urgent discussions with the government’s main cyber security watchdog and the country’s biggest banks to assess the risks posed by the latest AI model from Anthropic.

Officials at the Bank of England, the Financial Conduct Authority and HM Treasury are in talks with the National Cyber Security Centre to explore potential vulnerabilities in key IT systems revealed by Anthropic’s latest model.

Leading British banks, insurers and exchanges will be warned about the cyber security risks exposed by Anthropic’s latest model, Claude Mythos Preview, at a meeting with the regulators in the next fortnight, according to two people briefed on the talks.

The response by UK authorities follows a summons by US Treasury secretary Scott Bessent to leaders of some of the largest Wall Street banks to discuss the latest AI model’s advanced ability to detect cyber security vulnerabilities that could be exploited by bad actors.

When Anthropic announced the release of Mythos to select customers last week, the company said it had already “found thousands of high-severity vulnerabilities, including some in every major operating system and web browser”, some of which had gone undetected for decades.

The $380bn San Francisco start-up said it would “not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely”, adding: “The fallout — for economies, public safety, and national security — could be severe.”

The potential ramifications of the new AI system are on the agenda for the next meeting of the UK’s Cross Market Operational Resilience Group, which brings regulators and financial services companies together to discuss threats to the sector.

CMORG is co-chaired by Duncan Mackinnon, the Bank of England’s executive director for supervisory risk, and David Postings, the head of the UK Finance trade body for banks. 

Other members include senior representatives from eight of the biggest UK banks, four financial infrastructure providers and two insurers, as well as the NCSC, the FCA and HM Treasury. The agenda of the CMORG meeting was first reported by The Telegraph. The BoE declined to comment.

David Raw, managing director for resilience at UK Finance, said: “We are aware of the press reports on the Anthropic AI development and the risks highlighted.”

He added: “UK Finance engages with our members and through our public/private partnerships on any significant operational risks that could affect the resilience of the UK financial services sector.”

The BoE could also convene a meeting with financial institutions within one to two hours via its separate Cross Market Business Continuity Group when confronted with an urgent threat to the sector. But it has yet to do so in this case.

A number of major UK companies were targeted by hackers last year in cyber attacks that caused significant disruption to their operations, including retailers M&S, the Co-op Group and Harrods, in addition to Jaguar Land Rover.

The UK’s AI Security Institute, the government’s frontier AI model testing and risk research unit, has been evaluating Anthropic’s Mythos along with other leading models such as Claude and OpenAI’s ChatGPT.

But the government is weighing a plan to conduct standardised testing of the general-purpose AI models used by all UK lenders after the BoE warned them over their evaluation practices last year, the FT reported this month.

The BoE’s Prudential Regulation Authority, which regulates banks, told executives from lenders in two meetings last October that their AI model monitoring was “not frequent enough”, according to slides from the events.
