Sunday, April 12

AI is supercharging the cybersecurity fight


The steady advance of artificial intelligence models continues to raise serious concerns about the cybersecurity threats the technology poses to companies and institutions worldwide.

The latest example came Tuesday, when Anthropic (ANTH.PVT) announced a new cybersecurity initiative called Project Glasswing, meant to help companies find holes in their software by using the startup’s newest AI model, Claude Mythos Preview.

There’s just one wrinkle: Anthropic launched the initiative only because, during testing, it discovered that Claude Mythos Preview was surprisingly good at hacking software. Anthropic never designed the model for that purpose; the capability simply emerged.

That’s also why Anthropic isn’t releasing Claude Mythos Preview to the public. The company says it first needs to develop cybersecurity safeguards to “block the model’s most dangerous outputs.”

Now, the company is working with partners including Amazon (AMZN), Apple (AAPL), Microsoft (MSFT), and Nvidia (NVDA) through Project Glasswing to help them identify vulnerabilities in their software before similarly powerful models hit the market.

Claude Mythos Preview, Anthropic says, has already discovered thousands of flaws in existing programs and applications, including all of the popular operating systems and web browsers.

Anthropic launched Project Glasswing to help companies prepare for high-powered AI models capable of hacking their software. (Omar Marques/SOPA Images/LightRocket via Getty Images)

In one example of the model’s abilities, Anthropic said Mythos chained together several software vulnerabilities in Linux, the popular operating system that underpins much of the web, allowing an ordinary user to gain complete control of a computer.

It’s not just cutting-edge models like Mythos that represent a potential cybersecurity threat, though. According to experts, the industry’s move toward AI generally represents a new opportunity for hackers to breach systems worldwide.

“The level of opening that this provides attackers is dramatic,” Charles Harry, a research professor at the University of Maryland who leads the Center for Governance of Technology and Systems, told Yahoo Finance.

The good news? The same technology can be used to prevent cyberattacks, as long as defenders can keep pace.

Hackers have existed for decades, finding and exploiting software errors to steal data, take control of systems, and more.

Humans are, well, only human, and when they write code, they occasionally make mistakes. Despite rigorous testing, some of those errors get overlooked and ship in the finished software. Hackers spend their time scouring programs and applications for those bugs in the hopes of using them to break in. And tossing AI into the mix only complicates things further.
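A classic illustration of the kind of small, easy-to-miss mistake attackers hunt for is SQL injection. This is a hypothetical sketch (the function and table names are invented for illustration): the first version splices user input directly into a database query, while the second uses a parameterized query that treats the input as data rather than code.

```python
import sqlite3

def unsafe_user_exists(conn, username):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = "SELECT 1 FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone() is not None

def safe_user_exists(conn, username):
    # Fixed: a parameterized query keeps the input from being parsed as SQL.
    return conn.execute(
        "SELECT 1 FROM users WHERE name = ?", (username,)
    ).fetchone() is not None

# Minimal demo database with a single user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

payload = "nobody' OR '1'='1"  # classic injection payload
print(unsafe_user_exists(conn, payload))  # True: the always-true clause sneaks in
print(safe_user_exists(conn, payload))    # False: no user has that literal name
```

Both functions pass an ordinary test with a normal username, which is exactly why such bugs survive testing and ship: the flaw only shows up when someone feeds the code input its authors never anticipated.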

“AI is just yet another layer of complexity on an already complex technical ecosystem,” Harry said. Attackers can exploit that complexity by finding more ways into software.

Andrew Lohn, a senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), said that AI coding isn’t helping either.

“One thing that I worry about a little bit with AI, and [large language models] in general, is you can produce more code … so there’s more vulnerabilities,” he told Yahoo Finance.

“As more code gets written by AI, and more of the code gets inspected or certified by AI, then it raises more questions about where the vulnerabilities are,” Lohn said.

The massive push across the tech industry to take advantage of AI as quickly as possible could prompt some workers to cut corners, potentially exposing more companies to cyberattacks.

We’ve already seen it on the offensive side. Last month, security researcher Callum McMahon discovered a piece of malware that spread from an open-source project called LightLLM to the AI training company Mercor.

That in and of itself isn’t earth-shattering. But according to TechCrunch, McMahon and AI researcher Andrej Karpathy found that the malware was “vibe coded,” meaning the attacker created it using AI. The reason McMahon found it? The malware itself contained an error that crashed his computer.

And if those errors are happening on the offensive side, they’re certainly happening on the defensive side.

Anthropic also suffered a cybersecurity incident last week, though the company said the leak of some of its source code was the result of human error, not a cyberattack. Still, Lohn said the push to go faster makes similar instances more likely.

It’s not all negative news, though. Just as AI can be used to attack software, it can be used to defend it. Companies across the spectrum are building AI into their cybersecurity systems to detect attacks early and address them promptly.

Harry, however, cautioned that as organizations lean further into AI defense, malicious actors will look for increasingly clever ways to hide their attacks. And the fighting will continue.

Sign up for Yahoo Finance's Week in Tech newsletter.

Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.



