Sarvam Open-Sources 30B, 105B

7 March 2026

What happened

Sarvam AI open-sourced its Sarvam 30B and Sarvam 105B reasoning models, trained from scratch in India under the IndiaAI mission. Both use a Mixture-of-Experts (MoE) Transformer backbone optimised for efficient deployment across diverse hardware, from flagship GPUs to personal devices. Sarvam 30B (trained on 16T tokens) powers the Samvaad conversational agent, while Sarvam 105B (trained on 12T tokens) drives the Indus AI assistant for complex reasoning. Per Sarvam's announcement, both models outperform significantly larger models on Indian language benchmarks, and their MoE architecture scales parameter count without increasing compute per token, keeping inference costs practical.
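Sarvam has not published its routing implementation here, but a minimal sketch of top-k expert routing shows why parameter count and per-token compute decouple in an MoE layer: adding experts grows the weight count, while each token is still processed by only a fixed number of them. All names below (MoELayer, n_experts, top_k) are illustrative, not Sarvam's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy MoE feed-forward layer. Doubling n_experts doubles the
    parameter count, but each token is still routed to only top_k
    experts, so FLOPs per token stay roughly constant."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Independent feed-forward experts; only top_k run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        logits = self.router(x)                           # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)    # per-token expert picks
        weights = F.softmax(weights, dim=-1)              # normalise over picks
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

# Usage: 10 tokens, 8 experts, but each token touches only 2 experts.
layer = MoELayer(d_model=64, d_ff=256, n_experts=8, top_k=2)
y = layer(torch.randn(10, 64))
```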

Why it matters

This release gives platform engineers and CTOs production-ready, sovereign-trained MoE models optimised for diverse hardware, reducing reliance on external providers for critical AI infrastructure. Because the MoE architecture adds parameters without adding compute per token, inference costs stay low, a key metric for procurement teams evaluating large-scale deployments. The move aligns with India's broader strategic push for AI autonomy, offering a local, high-performance option for building complex AI applications and conversational agents, particularly for Indian-language use cases.

Source: sarvam.ai
