Gen-AI-Today


Red Hat OpenShift AI 2.9: The Next Chapter in Hybrid Cloud AI Innovation

By Greg Tavarez

Customers face a multitude of hurdles when transitioning AI models from the experimental phase to real-world applications. Soaring hardware costs create a barrier that forces organizations to contend with the financial burden of implementing AI solutions. Data privacy concerns add another layer of complexity: companies are hesitant to share sensitive information with SaaS-hosted models, which raises questions about data security and control.

The rapidly evolving nature of GenAI further complicates the issue. Organizations struggle to establish a dependable core AI platform that can keep pace with these advancements, and as a result they fail to get the most out of the technology.

Industry research firm IDC further draws attention to the critical role of infrastructure modernization in successful AI adoption. Enterprises must revamp existing applications and data environments to accommodate the demands of AI, and breaking down silos between current systems and storage platforms is essential for seamless data flow. Sustainability of the underlying infrastructure also matters, as does strategically selecting deployment locations across cloud, data center and edge environments for optimal performance.

Red Hat interprets these findings as a call for AI platforms to offer adaptability. As companies progress through their AI journey, their needs and resources will inevitably evolve. A flexible platform ensures continued support throughout this process.

In that vein, Red Hat, a provider of open-source solutions, announced upgrades to Red Hat OpenShift AI, its open hybrid AI and machine learning platform. Built on Red Hat OpenShift, the platform allows businesses to create and deploy AI-powered applications across hybrid cloud environments. Red Hat OpenShift AI offers a flexible, scalable and adaptable platform that supports predictive and generative models, deployable on-premises or in the cloud.

Key highlights of the updated platform include:

  • Model Serving at the Edge: This technology preview allows deploying AI models in remote locations using single-node OpenShift to enable inference capabilities in resource-limited environments with unreliable or no network connectivity. It offers a consistent operational experience throughout core, cloud and edge deployments, with built-in observability features.
     
  • Enhanced Model Serving: Users can employ multiple model servers for various tasks, including predictive and GenAI. This includes support for KServe, a Kubernetes custom resource definition for managing model serving; the vLLM and Text Generation Inference Server (TGIS) serving engines for LLMs; and the Caikit-nlp-tgis runtime for natural language processing tasks. This consolidated platform for predictive and GenAI applications reduces costs and streamlines operations.
     
  • Distributed Workloads with Ray: Red Hat OpenShift AI leverages Ray, a framework for accelerating AI workloads, along with CodeFlare and KubeRay for distributed processing and training on multiple cluster nodes. CodeFlare simplifies task orchestration and monitoring, while central queuing and management features optimize resource utilization and allocation, including GPUs.
     
  • Improved Model Development: Project workspaces and additional workbench images offer data scientists greater flexibility. These images include support for popular IDEs and toolkits like VS Code and RStudio (currently in technology preview), along with enhanced CUDA capabilities for diverse use cases and model types.
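The KServe runtimes noted above expose models through KServe's Open Inference Protocol (v2) REST API. As a rough sketch only (the host and model name below are hypothetical placeholders, not anything Red Hat documents), a client request against such an endpoint might be assembled like this:

```python
import json
import urllib.request


def build_v2_request(input_name, datatype, shape, data):
    """Build a request body following KServe's Open Inference Protocol (v2)."""
    return {
        "inputs": [
            {
                "name": input_name,
                "datatype": datatype,  # e.g. "FP32", "INT64", "BYTES"
                "shape": shape,
                "data": data,
            }
        ]
    }


def infer(host, model_name, body):
    """POST the request to a v2 inference endpoint (host/model are placeholders)."""
    url = f"http://{host}/v2/models/{model_name}/infer"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = build_v2_request("input-0", "FP32", [1, 4], [0.1, 0.2, 0.3, 0.4])
    print(json.dumps(body, indent=2))
```

Because every v2-compliant runtime accepts the same request shape, the same client code can talk to predictive and generative model servers alike, which is part of what makes the consolidated serving platform practical.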
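The distributed-workloads bullet describes a fan-out/fan-in pattern: CodeFlare and KubeRay schedule shards of a training or inference job across cluster nodes and gather the results. This is not Ray's API, but the pattern can be illustrated with a standard-library analogue (`score_batch` is a made-up stand-in for one shard of work):

```python
from concurrent.futures import ThreadPoolExecutor


def score_batch(batch):
    """Stand-in for one shard of work (e.g. scoring a single data partition)."""
    return sum(x * x for x in batch)


def run_distributed(batches, max_workers=4):
    """Fan shards out across workers and gather results in order,
    much as a Ray-style scheduler would across cluster nodes."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(score_batch, batches))


if __name__ == "__main__":
    batches = [[1, 2], [3, 4], [5, 6]]
    print(run_distributed(batches))  # [5, 25, 61]
```

In a real deployment, Ray replaces the thread pool with remote tasks on cluster nodes, and the central queuing and management features mentioned above decide which nodes (and GPUs) each shard lands on.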

These enhancements reflect Red Hat's commitment to open source principles and customer choice in the realm of intelligent workloads, spanning everything from the underlying hardware to development tools like Jupyter and PyTorch. The goal is faster innovation, increased productivity and the ability to integrate AI seamlessly into daily operations.

Red Hat's AI strategy prioritizes flexibility within hybrid cloud environments to help organizations refine pre-trained models with their own data and leverage various hardware and software accelerators. Red Hat OpenShift AI 2.9 delivers on these needs through AI/ML advancements as well as Red Hat’s partner ecosystem.

“Enterprises need a more reliable, consistent and flexible AI platform that can increase productivity, drive revenue and fuel market differentiation,” said Ashesh Badani, Senior Vice President and Chief Product Officer, Red Hat. “Red Hat’s answer for the demands of enterprise AI at scale is Red Hat OpenShift AI, making it possible for IT leaders to deploy intelligent applications anywhere across the hybrid cloud while growing and fine-tuning operations and models as needed to support the realities of production applications and services.”




Edited by Alex Passett