Gen-AI-Today

GenAI TODAY NEWS

Free eNews Subscription

A10 Networks Shields AI Innovations with Enhanced Security

By Greg Tavarez

AI, particularly generative AI, has changed data centers and hybrid cloud infrastructures, ultimately reshaping the technological landscape. This we know. The downside to AI applications becoming increasingly integrated into various industries is that they expand the attack surface and make security a big concern.

According to a survey by KPMG, over three-fourths of respondents anticipate facing heightened data privacy and security risks due to the adoption of GenAI.

One of the primary concerns centers around the security of LLMs, including proprietary and open-source models like LLAMA 3.1. These models, while powerful, can be susceptible to various security threats like data poisoning, model extraction and prompt injection. Data poisoning involves introducing malicious data into the training dataset to compromise the model's output. Model extraction aims to steal the model's intellectual property by extracting its parameters. Prompt injection involves manipulating the input prompts to elicit unintended or harmful responses from the model.

Therefore, there is a need for security teams to develop new strategies to safeguard AI applications in production environments.

And A10 Networks, a provider of security and infrastructure solutions for on-premises, hybrid cloud and edge-cloud environments, plans to address these concerns by expanding its high-performance infrastructure and security solutions to now include an AI firewall and LLM safety tooling.

Organizations that put LLMs into production require new insights and easy-to-integrate solutions to deliver security and availability of inference workloads. These include tools that discover vulnerabilities such as responding to prompts that cause the model to hallucinate or divulge information that is proprietary or personally identifiable in nature. Additional tools would be required to, in some instances, break a secured model to help identify ways to make the model stronger.

New solutions like an AI firewall, as proposed by A10 Networks, can be used to secure production LLMs. For those of you unfamiliar with it, an AI firewall is a new approach to securing a new set of applications. It is a solution that inspects the request and response path of traffic to an AI Inference application.

An AI firewall can be applied to enforce specific policies, for example, to drop prompts that can be harmful to the untested and tested LLM. An AI firewall operates at a high speed to minimize latency and to work in conjunction with a proxy, or it may have a proxy function built in to terminate encrypted traffic and process traffic directly.

“Building on our expertise in helping secure service providers and hybrid cloud infrastructures, we are driving our AI strategy forward to develop new approaches,” said Dhrupad Trivedi, President and CEO, A10 Networks. “An AI firewall can help secure these new and evolving applications, while minimizing latency and maximizing availability so AI applications perform as they are intended.”

The long-story-short of this? An AI firewall will take actions to increase the security of AI applications, while also helping to provide availability and low latency.



Get stories like this delivered straight to your inbox. [Free eNews Subscription]

GenAIToday Editor

SHARE THIS ARTICLE
Related Articles

Shape a Responsible Future for Generative AI at Generative AI Expo 2025

By: Greg Tavarez    1/17/2025

A panel session at Generative AI Expo 2025 will unpack issues such as bias, privacy and transparency.

Read More

Turn AI Challenges into Opportunities at Generative AI Expo 2025

By: Greg Tavarez    1/17/2025

"Overcoming Challenges in AI Implementation" will explore critical obstacles businesses face when deploying AI solutions and how to overcome those cha…

Read More

AI Adoption in Application Development Faces IT Roadblocks

By: Greg Tavarez    1/13/2025

OutSystems revealed in its global 2025 State of Application Development report the obstacles IT professionals face when developing modern applications…

Read More

Responsible GenAI Adoption: Privacy, Safety, and the Path to Resilience

By: Greg Tavarez    1/9/2025

Findings from the 2024 State of AI in Cybersecurity Survey show us how industry leaders are approaching responsible GenAI adoption.

Read More

-->