FriendliAI Makes LLMs Easy: Managed Service Streamlines Generative AI Deployment

GenAI TODAY NEWS



By Greg Tavarez

Generative AI has changed how industries operate, and it has been exciting to watch that transformation unfold since the technology's surge began. We won't risk sounding like a broken record, though; we're sure you're already aware of GenAI's benefits and the opportunities therein.

However, many organizations still face a widespread adoption hurdle: complex infrastructure requirements. Deploying and serving large language models (LLMs) demands expertise in containerization and in managing high-performance computing resources like GPUs. This technical barrier often limits the technology to well-resourced organizations with dedicated AI teams.

FriendliAI, a frontrunner in inference serving for generative AI, aims to bridge this gap with Friendli Dedicated Endpoints, a managed service offering built upon the foundation of their Friendli Container technology. This new addition to the Friendli Suite streamlines the deployment of LLMs by automating complex processes and delivering cost-effective, high-performance custom model serving.

Friendli Dedicated Endpoints functions as the managed cloud alternative to Friendli Container. Friendli Container, already adopted by startups and large enterprises alike, allows for the deployment of LLMs at scale within private environments. It achieves reductions in GPU costs through the power of the Friendli Engine, a highly GPU-optimized engine that also serves as the core of Friendli Dedicated Endpoints.

Friendli Dedicated Endpoints also simplifies the entire LLM development and serving process through automation, encompassing everything from model fine-tuning and cloud resource procurement to deployment monitoring. In practice, users can fine-tune and deploy cutting-edge quantized models like Llama 2 or Mixtral with just a few clicks, thanks to the Friendli Engine's power. This opens Friendli's GPU-optimized serving capabilities to users of all technical backgrounds.

In the announcement, Byung-Gon Chun, CEO of FriendliAI, emphasized the importance of making generative AI accessible to a wider audience and highlighted its potential to drive innovation and boost organizational productivity.

“Friendli Dedicated Endpoints eliminates the burden of infrastructure management,” Chun said. “This allows our customers to unlock the full potential of generative AI with the Friendli Engine. Whether it's text generation, image creation, or anything else, our service opens doors to endless possibilities for users regardless of their technical expertise.”

To be more specific, here are the key features offered by Friendli Dedicated Endpoints.

Dedicated GPU instances allow users to reserve entire GPUs for their custom generative AI models to guarantee consistent and dependable access to high-performance computing resources.

Also, a single GPU powered by the optimized Friendli Engine delivers performance equivalent to up to seven GPUs running a vanilla LLM. This translates to cost savings of 50% to 90% on GPUs and up to 10 times faster response times for queries.
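To put those numbers in perspective, here is a minimal back-of-the-envelope sketch (our own illustration, not FriendliAI's published methodology, assuming GPU cost scales linearly with GPU count) of how consolidating two to seven vanilla-LLM GPUs onto one optimized GPU lands in the quoted 50% to 90% savings range:

```python
# Hypothetical sanity check of the stated GPU savings.
# Assumption: one optimized GPU replaces several GPUs serving a
# vanilla LLM, and GPU cost scales linearly with GPU count.

def gpu_cost_savings(vanilla_gpus: int, optimized_gpus: int = 1) -> float:
    """Fraction of cost saved when consolidating vanilla GPUs onto optimized ones."""
    return 1 - optimized_gpus / vanilla_gpus

# At the claimed best case of a 7-to-1 consolidation:
best_case = gpu_cost_savings(7)    # ~0.857, i.e. roughly 86% savings
# At a more modest 2-to-1 consolidation:
modest_case = gpu_cost_savings(2)  # 0.5, i.e. 50% savings
```

Both figures fall inside the 50% to 90% band the company quotes, consistent with consolidation ratios between roughly 2x and 7x.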

Furthermore, Friendli Dedicated Endpoints automatically adapts to fluctuating workloads and handles failures seamlessly. This includes features like automated failure management and auto-scaling, which adjusts resource allocation based on real-time traffic patterns. In other words, say hello to uninterrupted operations and optimal resource utilization during peak demand periods.
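The scaling behavior described above follows a familiar target-tracking pattern. As a hypothetical sketch (not FriendliAI's actual implementation), a service might size its replica pool like this, where `target_rps_per_replica` is an assumed tuning parameter:

```python
import math

def desired_replicas(current_rps: float, target_rps_per_replica: float,
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    """Target-tracking rule: enough replicas to keep per-replica load at target."""
    if target_rps_per_replica <= 0:
        raise ValueError("target_rps_per_replica must be positive")
    needed = math.ceil(current_rps / target_rps_per_replica)
    # Clamp to configured bounds so traffic bursts cannot exhaust the
    # resource budget and quiet periods do not scale the service to zero.
    return max(min_replicas, min(max_replicas, needed))

# Traffic spikes from 90 to 450 requests/sec against a 100 rps/replica target:
print(desired_replicas(90, 100))   # 1 replica suffices
print(desired_replicas(450, 100))  # scale out to 5 replicas
```

Real autoscalers layer cooldown windows and health checks on top of a rule like this, which is presumably where the automated failure management mentioned above comes in.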

By eliminating technical hurdles and optimizing GPU usage, FriendliAI aims to remove infrastructure constraints as a barrier to innovation in generative AI.

“We're excited to welcome new users on our mission to make generative AI models fast and affordable,” Chun said.

By offering a user-friendly managed service with exceptional performance and efficiency, Friendli Dedicated Endpoints has the potential to better equip a wider range of users to leverage the power of LLMs and unlock new possibilities in various fields.




Edited by Alex Passett
