GenAI TODAY NEWS

AI Agents Pose New Threat: Exploiting Unpatched Software Flaws

By Greg Tavarez

Traditionally, cyberattacks exploiting newly discovered vulnerabilities rely on human attackers to analyze the vulnerability and develop exploit code. This process can be time-consuming, but LLMs, with their ability to rapidly process information and identify patterns, could accelerate this process.

In fact, a new study suggests that LLMs have the potential to autonomously exploit unpatched software vulnerabilities.

The study, titled "LLM Agents can Autonomously Exploit One-day Vulnerabilities," authored by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, focused on "one-day vulnerabilities" – security flaws that have been publicly disclosed but not yet patched on affected systems. These vulnerabilities leave systems exposed for a critical window of time, and the researchers found that an LLM agent could identify and exploit these weaknesses without human intervention.

The research team constructed a benchmark of real-world one-day vulnerabilities by drawing data from the Common Vulnerabilities and Exposures (CVE) database and academic papers. They then designed an LLM agent capable of analyzing this data and formulating exploit strategies. Given access to tools, the CVE description and the ReAct agent framework, the agent exploited 87% of the one-day vulnerabilities in the benchmark.
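The paper does not publish its agent code, but the ReAct pattern it builds on is straightforward to sketch: the model alternates between a "thought" and a tool-invoking "action," and each tool observation is fed back into the context for the next step. Below is a minimal, hypothetical Python sketch with a stubbed-out LLM — the function names, tools and canned responses are illustrative, not taken from the study:

```python
def stub_llm(history):
    # Stand-in for a real model call (the study used GPT-4).
    # Returns a (thought, action) pair based on the trajectory so far.
    if not history:
        return ("Thought: read the CVE description first", "read_cve")
    return ("Thought: enough information gathered", "finish")

# Tool registry: each action name maps to a callable the agent may invoke.
TOOLS = {
    "read_cve": lambda: "CVE-2023-XXXX: example description",
    "finish": lambda: None,
}

def react_loop(llm, max_steps=10):
    """Run a thought -> action -> observation loop until 'finish' or budget."""
    history = []
    for _ in range(max_steps):
        thought, action = llm(history)
        observation = TOOLS[action]()          # execute the chosen tool
        history.append((thought, action, observation))
        if action == "finish":
            break
    return history
```

In a real agent, `stub_llm` would be a model prompted with the full trajectory, and `TOOLS` would include things like an HTTP client, a terminal and a code interpreter.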

That 87% success rate belongs to GPT-4 alone. The other LLMs tested – GPT-3.5 and eight open-source models – achieved a zero percent success rate on the benchmark. Even open-source vulnerability scanners, which are commonly used to identify security weaknesses, were not always effective in detecting the one-day vulnerabilities the LLM agent exploited. This suggests that security practices may need to evolve to address the growing sophistication of AI-powered attacks.

It should be noted that upon removing the CVE description, GPT-4's success rate drops to 7%. In other words, the LLM agent is far more capable of exploiting known vulnerabilities than of finding them in the first place.

The team then looked further into the GPT-4 agent's high exploitation rate, and into why it fails when the CVE description is withheld.

A table in the report highlights how many actions successful exploitation can require. For instance, WordPress XSS-2 (CVE-2023-1119-2) averaged 48.6 steps per attempt, and one successful attack with the CVE description took 100 steps, 70 of which involved navigating the complex WordPress layout. The OpenAI tools' response size limit (512 kB) complicated matters further, forcing the agent to rely on CSS selectors to locate buttons and forms instead of reading webpage content directly.

So, what was the impact of missing CVE descriptions?

Consider the CSRF + ACE exploit (CVE-2024-24524), which requires both a cross-site request forgery (CSRF) attack and arbitrary code execution (ACE). Without the CVE description, the agent listed plausible attack options such as SQL injection and XSS, but its lack of subagent capabilities limited it to choosing and pursuing a single attack type, often cycling through different SQL injection methods without exploring other possibilities. Implementing subagents could potentially enhance performance.
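The subagent idea can be pictured as a small dispatcher: a planner enumerates candidate attack categories and hands each one to its own subagent, rather than committing the whole run to a single choice. The strategy names and the stubbed runner below are hypothetical, for illustration only:

```python
def run_subagent(strategy, target):
    # Stand-in for a full agent run against `target`; a real subagent
    # would execute its own tool-using loop for this one strategy.
    known_good = {"csrf"}  # pretend only CSRF pans out for this target
    return {"strategy": strategy, "success": strategy in known_good}

def plan_and_dispatch(target):
    """Try each candidate attack category in its own subagent."""
    candidates = ["sql_injection", "xss", "csrf"]
    results = [run_subagent(s, target) for s in candidates]
    return [r["strategy"] for r in results if r["success"]]
```

The contrast with the single-agent behavior the study observed is that a failure in one category no longer consumes the entire step budget.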

Determining whether a target was vulnerable to ACIDRain, which depends on details of the backend's transaction handling, also proved challenging for the agent. Exploiting it was complex: it involved navigating the website, extracting hyperlinks, reaching the checkout page and placing a test order, recording the checkout fields, writing Python code to exploit a race condition and, finally, executing that code. This highlights the agent's need to operate multiple tools and write code based on its interactions with the website.
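The race condition at the heart of ACIDRain-style attacks is the classic lost update: two concurrent requests both pass a funds check before either one writes. This self-contained toy sketch – not the paper's exploit code, and not aimed at any real system – shows how a backend without transaction isolation lets two $10 checkouts succeed against a $10 balance:

```python
import threading
import time

class NaiveStore:
    """Toy checkout backend with no transaction isolation (illustration only)."""
    def __init__(self, balance):
        self.balance = balance

    def spend(self, amount):
        if self.balance >= amount:   # read: the funds check
            time.sleep(0.05)         # window where a second request interleaves
            self.balance -= amount   # write: the lost-update race
            return True
        return False

def concurrent_spend(store, amount, n):
    # Fire n "checkout" requests at once, as an ACIDRain-style attack would.
    successes = []
    def worker():
        if store.spend(amount):
            successes.append(1)
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(successes), store.balance
```

A backend wrapping the check and the write in a proper transaction (e.g., `SELECT ... FOR UPDATE` or serializable isolation) would allow only one of the two spends to succeed.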

On the flip side, the study demonstrates the GPT-4 agent's ability to autonomously exploit non-web vulnerabilities. The Astrophy RCE exploit (CVE-2023-41334), a Python package vulnerability, showcased the agent's capability to write exploit code even though the vulnerability was disclosed after the model's training cutoff. That ability extends to container management software as well (CVE-2024-21626).

Overall, the qualitative analysis points to a highly capable agent, with room for further gains from features like planning, subagents and larger tool response sizes.

I’m certain the research will spark discussions within the cybersecurity community about the potential dangers of autonomous AI agents. Some will advocate for stricter regulations on the development and deployment of LLMs, while others will emphasize the importance of building safeguards into these models to prevent malicious use.

One thing they can agree on, though: the study's findings will change the way cybersecurity professionals approach their work.




Edited by Alex Passett