Across our GenAI Today site, we often write articles about how organizations are finding a wealth of new possibilities thanks to generative AI. For many, however, these advancements can seem out of reach due to perceived high costs, intricate integration processes and lengthy deployment times.
Further complicating matters are the limitations of pre-built large language models (LLMs). While convenient, these models often lack training on the current, industry-specific data organizations need to glean valuable insights or automate repetitive tasks effectively.
In essence, while pre-built LLMs offer a readily available starting point, they may not be tailored to an organization's specific needs, which limits the potential benefits of generative AI.
What's needed is a simple, affordable and powerful infrastructure solution that makes AI more accessible for enterprises. Enter NetApp AIPod with Lenovo ThinkSystem servers for NVIDIA OVX, a converged infrastructure optimized for the generative AI era.
NetApp and Lenovo are long-time collaborators in the tech industry (over five years strong). Their new solution is a converged infrastructure system, meaning it combines essential hardware and software components in a single, pre-integrated package.
The solution specifically supports three key technologies: NVIDIA NIM inference microservices, which streamline the process of using pre-trained AI models; the NVIDIA AI Enterprise platform for building and deploying generative AI applications; and a technique called retrieval-augmented generation, or RAG. RAG allows businesses to leverage their own valuable data for AI tasks without the need for massive, time-consuming model training from scratch. This translates to faster and more cost-effective AI adoption for businesses.
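To make the RAG idea concrete, here is a minimal sketch of the pattern: relevant in-house documents are retrieved for a query, then prepended to the prompt sent to a model. All names, the word-overlap scoring and the sample documents below are illustrative assumptions, not part of the NetApp, Lenovo or NVIDIA products.

```python
# Toy illustration of retrieval-augmented generation (RAG):
# retrieve the most relevant business documents for a query,
# then assemble an augmented prompt for an LLM.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive word-overlap with the query (a stand-in
    for the vector similarity search a real RAG pipeline would use)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical in-house documents the model was never trained on.
docs = [
    "Q3 revenue grew 12 percent driven by storage sales.",
    "The cafeteria menu changes every Monday.",
    "Storage sales were strongest in the enterprise segment.",
]
print(build_prompt("How did storage sales perform?", docs))
```

Because the model answers from retrieved context rather than retrained weights, the organization's data stays current without any from-scratch training, which is the cost and time advantage the solution is built around.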
NetApp AIPod brings together the best of all worlds for businesses looking to dive into AI. It combines NetApp's reliable storage solutions with their expertise in hybrid cloud data management. This foundation is bolstered by Lenovo's high-performance ThinkSystem servers, perfectly suited for NVIDIA's cutting-edge OVX architecture.
The system also boasts NVIDIA L40S GPUs for improved processing power and NVIDIA Spectrum-X Networking for smooth data flow. This comprehensive package simplifies AI adoption by integrating seamlessly with existing systems.
“The NetApp AIPod with Lenovo ThinkSystem servers for NVIDIA OVX transforms enterprise AI by delivering a pre-integrated, high-performance solution that accelerates the deployment and scaling of generative AI workloads," said Sandeep Singh, Senior Vice President and General Manager, Enterprise Storage at NetApp. "Through our collaboration with NVIDIA and Lenovo, NetApp is empowering organizations to harness the capabilities of AI more efficiently.”
Additionally, NetApp, Lenovo and NVIDIA prioritize security throughout the infrastructure, giving businesses peace of mind. With NetApp AIPod, organizations unlock the full potential of AI for tasks like chatbots, knowledge management and object recognition.
“As customers deploy AI, they demand business critical availability, ease of management, and infrastructure efficiency,” said Kirk Skaugen, President of Lenovo Infrastructure Solutions Group. “The NetApp AIPod with Lenovo ThinkSystem servers for NVIDIA OVX delivers optimized and validated solutions to make generative AI more accessible for businesses of every size.”
NetApp AIPod with Lenovo for NVIDIA OVX will be available by summer of 2024.
Edited by Alex Passett