AI Agents Pose New Threat: Exploiting Unpatched Software Flaws

By Greg Tavarez

Traditionally, cyberattacks exploiting newly discovered vulnerabilities rely on human attackers to analyze the vulnerability and develop exploit code. That work can be time-consuming, but LLMs, with their ability to rapidly process information and identify patterns, could compress it dramatically.

In fact, a new study suggests that LLMs have the potential to autonomously exploit unpatched software vulnerabilities.

The study, titled "LLM Agents can Autonomously Exploit One-day Vulnerabilities," authored by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, focused on "one-day vulnerabilities" – security flaws that have been publicly disclosed but not yet patched on affected systems. These vulnerabilities leave systems exposed for a critical window of time, and the researchers found that LLMs could leverage their vast knowledge and processing power to identify and exploit these weaknesses without human intervention.

The research team constructed a benchmark of real-world one-day vulnerabilities by drawing data from the Common Vulnerabilities and Exposures, or CVE, database and academic papers. They then designed an LLM agent capable of analyzing this data and formulating exploit strategies. According to the study, the agent could exploit 87% of the one-day vulnerabilities collected. To achieve that rate, the agent was given access to tools and the CVE description, and was built on the ReAct agent framework.
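The paper does not publish the agent's code, but the ReAct pattern it builds on is well documented: the model alternates free-form reasoning ("Thought") with tool calls ("Action") and feeds each tool result ("Observation") back into its context. A minimal sketch of that loop, assuming a hypothetical `model` callable and a tiny tool registry (the real agent used GPT-4 with web-browsing and code-execution tools, not shown here):

```python
from typing import Callable, Dict

def react_agent(model: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]],
                task: str, max_steps: int = 10) -> str:
    """Alternate reasoning ('Thought') and tool use ('Action') until the
    model emits a final answer or the step budget runs out."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = model(transcript)   # e.g. "Thought: ...\nAction: fetch_cve[CVE-2024-24524]"
        transcript += reply + "\n"
        if reply.startswith("Final:"):        # model declares it is done
            return reply[len("Final:"):].strip()
        if "Action:" in reply:                # parse "Action: tool[input]"
            call = reply.split("Action:", 1)[1].strip()
            name, arg = call.split("[", 1)
            result = tools[name.strip()](arg.rstrip("]"))
            transcript += f"Observation: {result}\n"  # feed result back in
    return "step budget exhausted"
```

The step budget matters: as the article notes below, some successful attacks in the study ran to 100 steps, so a real harness needs a far larger `max_steps` than this sketch's default.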

That 87% success rate belonged to GPT-4 alone. Other models tested, including GPT-3.5 and eight open-source models, achieved a zero percent success rate on the benchmark. Even open-source vulnerability scanners, which are commonly used to identify security weaknesses, were not always effective in detecting the one-day vulnerabilities exploited by the LLM agent. This suggests that security practices may need to evolve to address the growing sophistication of AI-powered attacks.

Notably, when the CVE description was removed, GPT-4's success rate dropped to 7%. In other words, the LLM agent is far more capable of exploiting known vulnerabilities than of finding them in the first place.

The team therefore looked further into the GPT-4 agent's high exploitation rate and the reasons it failed when the CVE descriptions were withheld.

A table in the report highlighted how many actions successful exploitation can require. For instance, WordPress XSS-2 (CVE-2023-1119-2) averaged 48.6 steps per attempt. A successful attack with the CVE description took 100 steps, 70 of which involved navigating the complex WordPress layout. The OpenAI tool's response size limit (512 kB) further complicated matters, forcing the agent to rely on CSS selectors for button and form selection instead of directly reading webpage content.

So, what was the impact of missing CVE descriptions?

Consider the CSRF + ACE (CVE-2024-24524) exploit, requiring both a CSRF attack and code execution. Without the CVE description, the agent listed potential attack options like SQL injection and XSS, but the lack of subagent capabilities limited it to choosing and attempting a single type of attack, often trying different SQL injection methods without exploring other possibilities. Implementing subagents could potentially enhance performance.
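The subagent idea the authors float can be sketched in a few lines. This is purely illustrative (names and structure are assumptions, not the paper's design): a planner enumerates candidate attack classes and hands each to its own subagent, rather than committing to a single class and retrying variations of it.

```python
from typing import Callable, Dict, List, Optional

def dispatch_subagents(candidates: List[str],
                       subagents: Dict[str, Callable[[], bool]]) -> Optional[str]:
    """Try each candidate attack class with an independent subagent;
    return the first class whose subagent reports success, else None."""
    for attack in candidates:
        agent = subagents.get(attack)       # skip classes with no subagent
        if agent is not None and agent():
            return attack
    return None
```

Each `subagents` entry here stands in for a full agent run with its own context; the point is the control flow, which avoids the single-attack fixation the study observed.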

Furthermore, detecting the ACIDRain vulnerability, which depends on backend transaction-handling details, proved challenging for the agent. Exploiting it was complex because it involved website navigation, hyperlink extraction, checkout page navigation with test order placement, recording checkout fields, writing Python code to exploit a race condition and finally executing the code. This highlights the agent's need to operate multiple tools and write code based on its interactions with the website.
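For readers unfamiliar with ACIDRain-style attacks, the underlying flaw is a race condition: two concurrent requests both pass a check (say, "does the balance cover this order?") before either one writes, so an invariant breaks. A minimal, deterministic illustration of that anomaly (not exploit code; a barrier forces the interleaving that real attacks achieve with concurrent requests):

```python
import threading

def unsafe_checkout(state: dict, price: int, barrier: threading.Barrier) -> None:
    if state["balance"] >= price:   # both threads pass this check ...
        barrier.wait()              # ... before either writes (forces the race)
        state["balance"] -= price   # non-transactional read-then-write

def demo(price: int = 10, balance: int = 10) -> int:
    state = {"balance": balance}
    barrier = threading.Barrier(2)
    threads = [threading.Thread(target=unsafe_checkout, args=(state, price, barrier))
               for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    return state["balance"]   # -10: both checkouts were approved
```

With proper transaction isolation on the backend, one of the two checkouts would be rejected and the balance could never go negative.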

Beyond web applications, the study also demonstrates the GPT-4 agent's ability to autonomously exploit non-web vulnerabilities. The Astrophy RCE exploit (CVE-2023-41334), a Python package vulnerability, showcased the agent's capability to write exploit code even though the vulnerability was disclosed after the model's training cutoff. This ability extends to container management software exploits as well (CVE-2024-21626).

In the end, the qualitative analysis suggests the GPT-4 agent is highly capable, with potential for further gains from features like planning, subagents and larger tool response sizes.

I’m certain the research will spark discussions within the cybersecurity community about the potential dangers of autonomous AI agents. Some will advocate for stricter regulations on the development and deployment of LLMs, while others will emphasize the importance of building safeguards into these models to prevent malicious use.

One thing they can agree on, though: the study's findings will change the way cybersecurity professionals approach their work.




Edited by Alex Passett