AI Agents Pose New Threat: Exploiting Unpatched Software Flaws

By Greg Tavarez

Traditionally, cyberattacks exploiting newly discovered vulnerabilities rely on human attackers to analyze the vulnerability and develop exploit code. This process can be time-consuming, but LLMs, with their ability to rapidly process information and identify patterns, could accelerate this process.

In fact, a new study actually suggests that LLMs have the potential to autonomously exploit unpatched software vulnerabilities.

The study, titled "LLM Agents can Autonomously Exploit One-day Vulnerabilities," authored by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, focused on "one-day vulnerabilities" – security flaws that have been publicly disclosed but not yet patched on affected systems. These vulnerabilities leave systems exposed for a critical window of time, and the researchers found that LLMs could leverage their vast knowledge and processing power to identify and exploit these weaknesses without human intervention.

The research team constructed a benchmark of real-world one-day vulnerabilities by drawing data from the Common Vulnerabilities and Exposures (CVE) database and academic papers. They then designed an LLM agent capable of analyzing this data and formulating exploit strategies. According to the study, the agent could exploit 87% of the one-day vulnerabilities collected. To achieve this, the agent was given access to tools and the CVE description, and was built on the ReAct agent framework.
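To make the ReAct setup concrete, here is a minimal sketch of the reason-then-act loop such an agent runs. This is illustrative only: the function names (`query_llm`, the tool callables) and the `ACTION:`/`FINISH` protocol are hypothetical stand-ins, not the study's actual framework.

```python
# A bare-bones ReAct-style loop: the model alternates between reasoning
# and invoking tools, with each tool observation fed back into the prompt.
# query_llm and the tools dict are hypothetical stand-ins.
def react_agent(cve_description, tools, query_llm, max_steps=50):
    """Alternate LLM reasoning with tool execution until FINISH or budget."""
    transcript = [f"Goal: exploit the flaw described below.\n{cve_description}"]
    for _ in range(max_steps):
        reply = query_llm("\n".join(transcript))  # next thought/action
        transcript.append(reply)
        if reply.startswith("FINISH"):
            return transcript  # agent believes it has succeeded
        if reply.startswith("ACTION:"):
            name, _, arg = reply[len("ACTION:"):].strip().partition(" ")
            observation = tools[name](arg)  # e.g. an HTTP client or shell
            transcript.append(f"OBSERVATION: {observation}")
    return transcript
```

The key property, reflected in the study's step counts, is that every tool observation grows the transcript, so long exploits consume many loop iterations.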

That 87% success rate belongs to GPT-4 alone. Other LLMs, including GPT-3.5 and eight open-source models, achieved a zero percent success rate on the same benchmark. Even open-source vulnerability scanners, which are commonly used to identify security weaknesses, were not always effective in detecting the one-day vulnerabilities exploited by the LLM agent. This suggests that security practices may need to evolve to address the growing sophistication of AI-powered attacks.

Notably, when the CVE description is removed, GPT-4's success rate drops to 7%. In other words, the LLM agent is far more capable of exploiting known vulnerabilities than of finding them in the first place.

The team then dug deeper into both the GPT-4 agent's high exploitation rate and the reasons it fails when the CVE description is withheld.

A table provided in the report highlighted the high number of actions needed for successful exploitation. For instance, WordPress XSS-2 (CVE-2023-1119-2) averaged 48.6 steps per attempt. A successful attack with the CVE description took 100 steps, 70 of which involved navigating the complex WordPress layout. The OpenAI tool's response size limit (512 kB) further complicated matters, forcing the agent to rely on CSS selectors to locate buttons and forms instead of reading webpage content directly.
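The selector workaround is easy to picture: rather than feed an entire admin page to the model and hit the response-size cap, the agent extracts only the interactive elements. A real agent would likely use CSS selectors via a library such as BeautifulSoup; the stdlib sketch below (page markup invented for illustration) captures the same idea.

```python
# Extract only <form>, <input>, and <button> elements from a page so the
# model sees a handful of small elements instead of megabytes of layout.
from html.parser import HTMLParser

class FormElementExtractor(HTMLParser):
    """Collect only interactive tags, discarding layout markup."""
    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in ("form", "input", "button"):
            self.elements.append((tag, dict(attrs)))

page = """<html><body>
  <div>...thousands of lines of layout markup elided...</div>
  <form action="/wp-admin/post.php" method="post">
    <input name="title" type="text">
    <button type="submit">Publish</button>
  </form>
</body></html>"""

extractor = FormElementExtractor()
extractor.feed(page)
# extractor.elements now holds just the three interactive elements.
```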

So, what was the impact of missing CVE descriptions?

Consider the CSRF + ACE exploit (CVE-2024-24524), which requires both a cross-site request forgery attack and arbitrary code execution. Without the CVE description, the agent listed potential attack options like SQL injection and XSS, but the lack of subagent capabilities limited it to choosing and attempting a single type of attack, often trying different SQL injection methods without exploring other possibilities. Implementing subagents could potentially enhance performance.

Furthermore, determining whether a site was vulnerable to ACIDRain (which depends on how the backend handles database transactions) proved challenging for the agent. Exploiting it was complex because it involved website navigation, hyperlink extraction, checkout page navigation with test order placement, recording checkout fields, writing Python code to exploit a race condition and finally executing the code. This highlights the agent's need to operate multiple tools and write code based on its interactions with the website.
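The race condition at the heart of ACIDRain-style attacks is worth illustrating. In this toy sketch, two concurrent checkouts both read the same balance before either writes it back, so a single credit is spent twice; real attacks hit a web backend's database, not a local dict, and the sleep is only there to widen the race window.

```python
# Toy race condition of the kind ACIDRain targets: a non-atomic
# read-modify-write on a shared balance lets two concurrent checkouts
# both succeed with only one credit available.
import threading
import time

balance = {"credits": 1}
purchases = []

def checkout():
    current = balance["credits"]       # read
    time.sleep(0.05)                   # widen the window so the race is reliable
    if current >= 1:                   # check against the stale read
        balance["credits"] = current - 1   # write
        purchases.append("order placed")

threads = [threading.Thread(target=checkout) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both orders go through even though only one credit existed.
```

A correctly isolated backend would serialize the two checkouts inside transactions, making the second one fail.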

On the flip side, the study demonstrates the GPT-4 agent's ability to autonomously exploit non-web vulnerabilities. The Astrophy RCE exploit (CVE-2023-41334), a Python package vulnerability, showcased the agent's ability to write exploit code even though the vulnerability was disclosed after the model's training cutoff. This ability extends to container management software exploits as well (CVE-2024-21626).

In the end, the qualitative analysis suggests the GPT-4 agent's high capability, with potential for further enhancement through features like planning, subagents and increased tool response sizes.

I’m certain the research will spark discussions within the cybersecurity community about the potential dangers of autonomous AI agents. Some will advocate for stricter regulations on the development and deployment of LLMs, while others will emphasize the importance of building safeguards into these models to prevent malicious use.

One thing they can agree on, though: the study's findings will change the way cybersecurity professionals approach their work.




Edited by Alex Passett

