
Details on TensorOpera's Infrastructure Team-Up with Aethir

By Alex Passett

Back in June, Silicon Valley-based AI company TensorOpera introduced Fox-1, its open-source small language model (SLM) with, per the official press release, “1.6 billion parameters, outperforming other models in its class,” reportedly including models from industry giants like Google and Apple. This isn’t the company’s first rodeo, either; since early 2022, TensorOpera has provided GenAI-platform-at-scale solutions that give users what they need to build and commercialize their own GenAI applications easily and economically.

What’s more, Fox-1 features an innovative architecture that the company describes as “78% more advanced than many other models.”
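
Because Fox-1 is released as an open-source model, developers can try it outside of TensorOpera's platform as well. The snippet below is a minimal sketch of loading and prompting a small language model with the Hugging Face transformers library; the "tensoropera/Fox-1-1.6B" repository id and the generation settings are illustrative assumptions, not details taken from TensorOpera's documentation.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for the open-source Fox-1 checkpoint;
# substitute whatever id TensorOpera actually publishes.
MODEL_ID = "tensoropera/Fox-1-1.6B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain in one sentence what a small language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding keeps the example deterministic; settings are illustrative only.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))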

Which brings us to more recent news:

TensorOpera has partnered with Aethir, a distributed cloud infrastructure provider aiming to spearhead the next evolution of cloud compute.

Through the partnership, Aethir has equipped TensorOpera with the advanced GPU resources necessary for Fox-1’s continued training. Aethir’s previous collaborations with NVIDIA Cloud Partners, infrastructure funds, and other enterprise-grade hardware providers have established a global, large-scale GPU cloud, which delivers cost-effective, scalable GPU resources offering the high throughput, substantial memory capacity, and efficient parallel processing that model training demands.

With the support of Aethir’s decentralized cloud infrastructure, TensorOpera now has the tools to streamline AI development that demands high network bandwidth and ample GPU power.

“I am thrilled about our partnership with Aethir,” said Salman Avestimehr, CEO and co-founder of TensorOpera. “In the dynamic landscape of generative AI, the ability to efficiently scale up and down during various stages of model development and in-production deployment is essential. Aethir’s decentralized infrastructure offers this flexibility, combining cost-effectiveness with high-quality performance. Having experienced these benefits firsthand during the training of our Fox-1 model, we decided to integrate Aethir’s GPU resources into TensorOpera’s AI platform to empower developers with the resources necessary for pioneering the next generation of AI technologies.”

Read more here.




Edited by Greg Tavarez