
Computer science professor studies the intersection of cybersecurity and AI 


Phishing attacks. Jailbreaking and social engineering attacks. Malicious advertisements. These have all become part of daily life for nearly everyone who depends on a web connection for their work and livelihood. University of Georgia cybersecurity expert Roberto Perdisci welcomes a greater awareness of the threats that have long been at the center of his professional life. 

“I initially came to the U.S. as a research scholar to deepen my doctoral work in Italy, which was at the intersection of machine learning and cybersecurity,” said Perdisci, Patty and D.R. Grimes Distinguished Professor of Computer Science and director of the UGA Institute of Cybersecurity and Privacy. “And back then, it was already a big thing.” 

Attracted to the U.S. to work with Wenke Lee, Regents’ Professor and John P. Imlay Jr. Chair in Software in the Georgia Tech College of Computing, Perdisci completed a postdoctoral fellowship with Lee after finishing his Ph.D. and joined the UGA faculty in 2010. 

The quick transition reflects what continues to be a high-demand field. “With all the development of AI agents, the intersection with cybersecurity has become even more prevalent,” he said. 

Perdisci’s research primarily focuses on using machine learning and AI as tools to improve cybersecurity, rather than on the safety and security of AI systems themselves. That distinction is instructive for understanding the many uses — and potential misuses — of large language models. 

The security and safety of AI models themselves concern attacks on AI systems, whether self-driving cars that encounter defaced road signs or nefarious attempts to trick an LLM such as OpenAI’s ChatGPT or Google’s Gemini with illicit or illegal prompts. “That’s known as jailbreaking, because the AI system is working within safeguards, and you’re trying to break those safeguards to misuse the model,” Perdisci said.  

“But we can also use AI systems to help improve the cybersecurity of a web browser,” he said, for instance when people encounter malicious web pages while navigating the internet. 

“This happens because either you visit a site that is not reputable, or you visit one that is reputable but includes an advertisement that is dynamically selected,” he said, noting that the publisher of a particular site doesn’t necessarily have control over its ads. “And sometimes malicious adverts can trigger the browser so that it ends up being redirected to a social engineering attack, like a fake software download or antivirus, a tech support scam or fake lottery type of scam, and so forth. These are all social engineering attacks performed on the web.” 

To help foil such attacks, Perdisci has led an NSF-funded project since 2021 in which his team has demonstrated that an AI model can be integrated within the browser itself. 

“As you browse the web, the AI model inspects both the visual appearance of the web pages and the textual appearance, not necessarily the HTML of the underlying pages, but how the page reads visually. A technology like Optical Character Recognition interprets the visual image that is rendered by the browser to determine whether it’s a scam,” he said. 
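Perdisci’s description suggests a pipeline of the general shape sketched below: capture the page as the browser renders it, run OCR over that image, and score the recovered text. This is a minimal sketch in Python using the pytesseract OCR library; the SCAM_PHRASES list and the looks_like_scam heuristic are illustrative assumptions standing in for the project’s trained models, not its actual implementation.

# Minimal sketch (assumption, not the NSF project's code) of OCR-based
# scam-page screening: rendered-page screenshot -> OCR text -> simple score.
from PIL import Image
import pytesseract  # wraps the Tesseract OCR engine

# Hypothetical phrase list; a real system would use a trained classifier over
# both visual and textual features, as Perdisci describes.
SCAM_PHRASES = [
    "your computer is infected",
    "call this number immediately",
    "you have won",
    "claim your prize",
]

def extract_page_text(screenshot_path: str) -> str:
    """OCR the rendered page image, capturing 'how the page reads visually'."""
    image = Image.open(screenshot_path)
    return pytesseract.image_to_string(image).lower()

def looks_like_scam(page_text: str) -> bool:
    """Placeholder heuristic: flag if any known social-engineering phrase appears."""
    return any(phrase in page_text for phrase in SCAM_PHRASES)

if __name__ == "__main__":
    text = extract_page_text("rendered_page.png")  # screenshot produced by the browser
    print("Possible social engineering page" if looks_like_scam(text) else "No obvious scam text")

In practice, a keyword match like this would be far too brittle; the point is only to show where a learned model would sit in the loop, consuming the page as the user sees it rather than its underlying HTML.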

Because of the scale and speed required by such a tool, the inspection can only be accomplished with ML and AI systems. And as with AI-enhanced browsers or LLMs such as ChatGPT and Gemini, AI models that interact with the internet on a person’s behalf rely on tools that attackers understand and plan for. 

“One thing they can do is construct web content in a way that doesn’t appear as malicious to a human being but could mislead the AI agent,” Perdisci said. As the use of AI models grows, more and newer vulnerabilities are being introduced through their tools and systems. 

“There are many different types of vulnerabilities that have been introduced because of the use of AI,” he said, “but this is true for any technology. It’s just that this is a new one, and as usual for any type of information system, that’s the pattern: features and functionalities come first, and security later. If the company makes money, the product survives and maybe they will think about adding more security investment.” 

Emerging threats color this brave new world, but Perdisci brings that moment and its urgency directly into the classroom, where his students can consider those threats, learn from them and build new security strategies.  

In 2025, Perdisci was selected as a recipient of an Amazon Research Award, one of four awardees in the AI for Information Security category. The award will support his project, ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection, which grew out of a 2024 internship one of his doctoral students completed at Amazon Web Services. 

Real-world examples of the challenges students will face in industry make them more attuned to tomorrow’s threats. Awareness of how AI works, and how it can work against people, still requires the human touch after all. 


