GenAI TODAY NEWS

Details on TensorOpera's Infrastructure Team-Up with Aethir

By Alex Passett

Back in June, Silicon Valley-based AI company TensorOpera introduced Fox-1, its open-source small language model (SLM) with, according to the official press release, “1.6 billion parameters, outperforming other models in its class,” reportedly including models from industry giants like Google and Apple. This isn’t the company’s first rodeo, either; since early 2022, TensorOpera has provided GenAI-platform-at-scale solutions, giving users what they need to build and commercialize their own GenAI applications easily and economically.

What’s more, Fox-1 features an innovative architecture that is, according to the company, “78% more advanced than many other models.”

This is what leads us to more recent news:

TensorOpera established a partnership with Aethir, a distributed cloud infrastructure provider aiming to spearhead the next evolution of cloud compute.

Through the partnership, Aethir has equipped TensorOpera with the advanced GPU resources necessary for Fox-1’s additional training. Aethir’s previous collaborations with NVIDIA Cloud Partners, infrastructure funds, and other enterprise-grade hardware providers have established a global, large-scale GPU cloud. This led (and still leads) to the delivery of cost-effective and scalable GPU resources, which supply the high throughput, substantial memory capacity, and efficient parallel processing that large-scale model training demands.

Thus, with the support of Aethir’s decentralized cloud infrastructure, TensorOpera has the tools to streamline AI development that requires higher network bandwidth and ample GPU power.

“I am thrilled about our partnership with Aethir,” said Salman Avestimehr, CEO and co-founder of TensorOpera. “In the dynamic landscape of generative AI, the ability to efficiently scale up and down during various stages of model development and in-production deployment is essential. Aethir’s decentralized infrastructure offers this flexibility, combining cost-effectiveness with high-quality performance. Having experienced these benefits firsthand during the training of our Fox-1 model, we decided to integrate Aethir’s GPU resources into TensorOpera’s AI platform to empower developers with the resources necessary for pioneering the next generation of AI technologies.”

Read more here.




Edited by Greg Tavarez