
The Dark Side of AI Code: Security Risks Loom Large

By Greg Tavarez

“Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won’t give up their superpowers, and attackers are infiltrating our ranks.”

Kevin Bocek, Chief Innovation Officer at Venafi, is spot on with that quote.

Think about it.

On one hand, developers have embraced AI to increase efficiency and productivity. But that faster development cycle introduces new security risks, as AI-generated code may contain vulnerabilities that are difficult to detect.

On the other hand, attackers are using AI to enhance their own capabilities. AI-powered tools are used to automate attacks, generate more sophisticated malware and even craft convincing social engineering campaigns.

To prove that point, Bocek and the team at Venafi released a research report. The report found that 83% of security leaders say their developers currently use AI to generate code, with 57% saying it has become common practice. Yet 72% feel they have no choice but to allow developers to use AI to remain competitive, and 63% have considered banning the use of AI in coding due to the security risks.

Adding to that, the report found that 66% of survey respondents report it is impossible for security teams to keep up with AI-powered developers. As a result, security leaders feel like they are losing control and that businesses are being put at risk, with 78% believing AI-developed code will lead to a security reckoning and 59% losing sleep over the security implications of AI.

Why do they feel that way, though?

“Anyone today with an LLM can write code, opening an entirely new front,” said Bocek. “It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. So, it’s the code that matters! We have to authenticate code wherever it comes from.”

Look at recent high-profile incidents like the CrowdStrike outage. As AI and other advanced technologies continue to influence the coding process, the origin and authenticity of code become increasingly uncertain.

With code potentially sourced from diverse and sometimes untrusted sources, including AI and foreign actors, it's imperative to prioritize code authentication and verification. By focusing on the identity of code, applications, and workloads, we can ensure their integrity and prevent unauthorized modifications.

So, what can security teams do?

Implementing a strong code signing strategy is one way to go. Organizations that do this can verify the authenticity and integrity of software, prevent unauthorized code execution and mitigate the risk of malicious attacks. The challenge is that managing code signing certificates and keys can be a complex and time-consuming process.
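At its core, code signing pairs a private signing key with a public verification key. Here is a minimal sketch of that idea using the open-source Python cryptography package; this is a generic illustration of the technique, not Venafi's product API, and the artifact bytes are a stand-in for real build output:

```python
# Minimal code signing sketch using the open-source "cryptography"
# package (pip install cryptography). Generic illustration only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Generate a signing key pair. In practice, the private key lives in an
# HSM or managed key service, never on a developer laptop.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the built artifact's bytes to produce a detached signature.
artifact = b"contents of the built artifact"  # stand-in for real build output
signature = private_key.sign(artifact)

# Anyone holding the public key can check origin and integrity.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact is authentic and untampered.")
except InvalidSignature:
    print("Signature check FAILED: do not trust this artifact.")
```

If even one byte of the artifact changes after signing, verification raises InvalidSignature, which is what makes a signature a tamper-evident seal on the code.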

Venafi's Stop Unauthorized Code Solution offers a comprehensive approach to streamline code signing and strengthen security. By automating certificate management, enforcing policy controls and integrating with CI/CD pipelines, this solution empowers organizations to scale their code signing operations while maintaining a high level of security.
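To make the CI/CD integration concrete, here is a hedged sketch of what a pipeline verification gate might look like. The file names, key path and script are hypothetical assumptions for illustration, not the API of Venafi's solution:

```python
#!/usr/bin/env python3
# Hypothetical CI/CD gate: verify an artifact's detached signature before
# deployment and fail the pipeline otherwise. File paths are illustrative.
import sys
from cryptography.hazmat.primitives.serialization import load_pem_public_key
from cryptography.exceptions import InvalidSignature

ARTIFACT, SIGNATURE, PUBKEY = "release.tar.gz", "release.sig", "signer.pub.pem"

public_key = load_pem_public_key(open(PUBKEY, "rb").read())
artifact = open(ARTIFACT, "rb").read()
signature = open(SIGNATURE, "rb").read()

try:
    public_key.verify(signature, artifact)  # Ed25519 keys take (sig, data)
except InvalidSignature:
    sys.exit("unverified code: blocking deployment")  # nonzero exit fails CI
print("signature verified; proceeding with deployment")
```

Because the script exits nonzero on a bad or missing signature, the pipeline stops before unauthenticated code ever reaches production.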

“In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business’ foundational line of defense,” Bocek added. “But for this protection to hold, the code signing process must be as strong as it is secure. It’s not just about blocking malicious code — organizations need to ensure that every line of code comes from a trusted source, validating digital signatures and guaranteeing that nothing has been tampered with since it was signed. The good news is that code signing is used just about everywhere — the bad news is it is most often left unprotected by security teams who can help keep it safe.”

Venafi’s report makes clear why security teams are caught between supporting the business’s need for speed and innovation and the imperative to protect the organization from increasingly sophisticated attacks.

Be part of the discussion about the latest trends and developments in the Generative AI space at Generative AI Expo, taking place February 11-13, 2025, in Fort Lauderdale, Florida. Generative AI Expo covers the evolution of GenAI and features conversations on the potential of GenAI across industries and how the technology is already being used to help businesses improve operations, enhance customer experiences and unlock new growth opportunities.




Edited by Alex Passett