Sarvam AI LLMs: India’s New Voice-Optimized Models Are Shaking Up the Global AI Landscape

On February 18, 2026, Bengaluru-based AI startup Sarvam launched two indigenous large language models trained specifically on Indian languages, marking a pivotal moment for India’s growing AI sovereignty movement. The announcement at the India AI Impact Summit represents more than just another model release: it signals that Indian AI language models are ready to compete globally while staying rooted in local needs.

These aren’t your typical chatbots. The launch covers a 30-billion-parameter and a 105-billion-parameter model: the first handles real-time conversations with a 32,000-token context window, while the second offers 128,000 tokens for more complex tasks. What makes Sarvam AI’s new LLMs especially compelling is their focus on solving problems that global giants often ignore: real-time voice AI models optimized for India’s linguistic diversity and infrastructure constraints.

The timing couldn’t be better. While OpenAI, Google, and Anthropic dominate headlines, millions of Indians interact primarily in regional languages through voice. Sarvam’s models are built to be used through voice commands and are accessible in 22 Indian languages, a competitive advantage in a country of 1.45 billion where the vast majority can’t read, write, or type in English. This focus on voice-first AI applications positions these models uniquely in the global market.

Breaking Down the Technical Architecture Behind Sarvam AI LLMs

The engineering behind these Sarvam AI LLMs reveals careful thought about efficiency and scale. Sarvam-30B contains 30 billion parameters with a context window of 32,000 tokens, trained on 16 trillion tokens and designed to operate efficiently while delivering high-quality responses using fewer computational resources. This isn’t just impressive on paper—it translates to lower costs and faster inference times that matter for real-world deployment.

Meanwhile, the larger sibling takes things further. The Sarvam-105B model, with 105 billion parameters and a 128,000-token context window, is designed for more complex reasoning and large-scale analytical tasks. Both models use a mixture-of-experts architecture that activates only a fraction of parameters at a time, dramatically reducing computing costs without sacrificing performance.

The AI reasoning capabilities of these models stand out. Sarvam-30B uses 19 layers with 128 experts and a top-6 routing strategy, while Sarvam-105B scales to 32 layers and routes over the same 128 experts with top-8 selection. This architectural choice lets the models handle everything from casual conversations to analyzing complex financial statements in real time.
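Sarvam hasn’t published its model code, but the top-k routing idea behind these figures is simple enough to sketch. The snippet below is an illustrative, simplified mixture-of-experts layer (single token, linear experts, NumPy only); the dimensions and random weights are placeholders, not Sarvam’s actual architecture.

```python
import numpy as np

def moe_layer(x, gate_w, expert_ws, top_k):
    """Route one token through the top-k of many experts (illustrative sketch).

    x         : (d,) token hidden state
    gate_w    : (d, n_experts) router weights
    expert_ws : list of n_experts (d, d) expert weight matrices
    top_k     : experts activated per token (6 or 8 in Sarvam's stated configs)
    """
    logits = x @ gate_w                       # router score for each expert
    top = np.argsort(logits)[-top_k:]         # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only the selected experts run, so per-token compute scales with top_k,
    # not with the total expert count -- the source of the cost savings.
    return sum(w * (expert_ws[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 64, 128                        # 128 experts, as at launch
x = rng.standard_normal(d)
gate_w = rng.standard_normal((d, n_experts))
expert_ws = [rng.standard_normal((d, d)) for _ in range(n_experts)]
out = moe_layer(x, gate_w, expert_ws, top_k=6)  # top-6 routing, per Sarvam-30B
print(out.shape)
```

With top-6 routing over 128 experts, only about 5% of the expert weights touch any given token, which is why a 105B-parameter model can be claimed to run far cheaper than a dense model of the same size.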

How Sarvam AI vs Gemini Comparisons Stack Up

Performance benchmarks tell an interesting story. Numbers shared at the launch show the models performing competitively against international models including Gemma 27B, Mistral-32-24B, Nemotron-30B, Qwen-30B, and GPT-OSS-20B on tasks measuring mathematical reasoning, coding accuracy, and general problem-solving. The Sarvam AI vs Gemini debate becomes particularly fascinating when you consider cost efficiency alongside raw performance.

What surprised many industry watchers was the claim about efficiency gains. Co-founder Pratyush Kumar explained that although the 105-billion-parameter model is only about one-sixth the size of the 600-billion-parameter DeepSeek R1, it was trained from scratch and delivers comparable intelligence while running cheaper than Google’s Gemini Flash. These aren’t minor differences; they represent fundamentally different approaches to LLM optimization for voice applications.

The real-world demonstrations proved compelling. During the summit, the model analyzed a company balance sheet in real time, providing precise answers to detailed financial and contextual questions. This capability positions Sarvam AI LLMs as serious contenders for enterprise applications where speed and accuracy both matter.

Sovereign AI India Initiative Gains Momentum

The government’s role in developing these Indian AI language models deserves attention. IT Minister Ashwini Vaishnaw selected Sarvam AI as the first startup from 67 shortlisted companies to develop India’s first indigenous foundational model under the IndiaAI Mission. This sovereign AI India push reflects strategic thinking about technological independence and data governance.

The support extends beyond just funding. Sarvam AI secured approximately Rs 99 crore in subsidies for acquiring 4,096 NVIDIA H100 GPUs, providing access to cutting-edge hardware crucial for training advanced models. This infrastructure support from the IndiaAI Mission addresses a major bottleneck that typically hampers AI startups in emerging markets.

The collaboration with global partners strengthens the ecosystem. The startup announced key partnerships with global tech players like Qualcomm, Bosch, and Nokia for product integration and deployment. These partnerships aren’t just symbolic—they create distribution channels that can bring multilingual LLM performance to millions of devices.

Real-Time Voice AI Models Designed for India’s Reality

Voice optimization sets these models apart from competitors. The models support all 22 scheduled Indian languages and are optimized for voice-first interactions. This isn’t a minor feature—it’s the core design philosophy that shapes every architectural decision.

The practical applications look promising. A video demo showed a user pressing a dedicated AI button on a feature phone to converse with an AI assistant in a local language, getting guidance on government schemes or local markets. This vision of AI accessibility extends far beyond smartphone users to the hundreds of millions who rely on feature phones.

Edge deployment capabilities add another dimension. The company is using edge models that take up only megabytes of space, can run on most phones with existing processors, and can work offline. For a country where connectivity remains spotty outside urban centers, offline functionality transforms AI from a luxury into practical infrastructure.
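Sarvam hasn’t published its edge model sizes or compression techniques, but back-of-envelope arithmetic shows why megabyte-scale models are plausible: checkpoint size is roughly parameter count times bits per weight. The parameter counts and bit-widths below are illustrative assumptions, not Sarvam’s figures.

```python
def model_size_mb(n_params, bits_per_weight):
    """Approximate size of a weights-only checkpoint, in megabytes."""
    return n_params * bits_per_weight / 8 / 1e6

# Illustrative parameter counts; quantizing from 16-bit to 4-bit weights
# cuts storage 4x, which is one standard route to phone-sized models.
for n_params, bits in [(250e6, 16), (250e6, 4), (50e6, 4)]:
    print(f"{n_params/1e6:.0f}M params @ {bits}-bit: "
          f"{model_size_mb(n_params, bits):.0f} MB")
```

By this arithmetic, a 50-million-parameter model quantized to 4 bits fits in about 25 MB, comfortably within reach of existing phone storage and processors.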

The Vikram chatbot demonstration captured attention. Sarvam showcased “Vikram,” a multilingual AI chatbot capable of conversing in Hindi, Punjabi, Marathi, and other Indian languages, even on feature phones. Named after physicist Vikram Sarabhai, the chatbot represents the kind of voice-first AI applications that could bring generative AI benefits to populations previously unable to access advanced AI tools.

Multilingual LLM Performance Across Indian Languages

Language support goes beyond simple translation. The AI models are trained using 22 Indian languages on high-quality datasets, including varied financial documents, literature, newspapers, historic texts, and more. This comprehensive training approach ensures the models understand cultural context and linguistic nuances that generic models often miss.

The training methodology reveals careful attention to representation. The company chose to have a 28% representation of Hindi and 8% representation each for 9 other languages—Bengali, Gujarati, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu—together representing the mother tongue of over 70% of the Indian population. This balanced approach to Indian AI language models ensures that major linguistic communities all receive appropriate attention.

Code-mixing support addresses real-world usage patterns. The company chose to support three forms of Indian language representation based on day-to-day usage: formal native script; code-mixed text, which combines multiple languages in a single conversation; and transliterated text, where Indian languages are written in the Latin script. Anyone who’s observed how Indians actually communicate recognizes this as essential for practical deployment.

Enterprise Applications and Commercial Viability

The business model extends beyond just releasing models. The startup announced the Sarvam Startup Programme, providing free API credits worth ₹10 crore to startups. This ecosystem-building approach mirrors successful strategies from companies like AWS and Google Cloud, which recognized early that developer adoption drives long-term platform success.
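For developers weighing those credits, an integration would look much like any hosted chat-completion API. The sketch below is hypothetical: the endpoint URL, field names, and model identifier are placeholders, not Sarvam’s actual API, so check the official API documentation before building against it.

```python
import json
import urllib.request

# Hypothetical endpoint -- consult Sarvam's real API docs for the actual URL.
API_URL = "https://api.example.com/v1/chat"

def build_request(prompt, model="sarvam-30b", language="hi-IN",
                  api_key="YOUR_KEY"):
    """Assemble a chat-completion style HTTP request (constructed, not sent).

    All field names here are illustrative guesses at a typical API shape.
    """
    payload = {
        "model": model,
        "language": language,   # target Indic language for the reply
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Namaste! Mujhe sarkari yojanaon ke baare mein batao.")
print(req.full_url)
```

Sending the request is a one-liner with `urllib.request.urlopen(req)`; the sketch stops short of that since the endpoint is a placeholder.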

The Sarvam for Work platform targets specific enterprise needs. The company outlined plans to build specialized AI systems, including coding-focused models and enterprise tools under a product called Sarvam for Work, and a conversational AI agent platform called Samvaad. These focused applications show understanding that generic models alone don’t solve specific business problems.

Industrial applications are gaining traction. Sarvam launched ‘Pravah’, an “AI token factory” that will generate tokens for industrial use across a variety of models, making AI available at a fraction of the cost. This infrastructure play positions the company to benefit from the entire ecosystem’s growth rather than just selling individual model access.

The dubbing capabilities open media opportunities. The platform supports AI video dubbing, with Sarvam Studio generating high-fidelity dubs in 11 Indian languages, with participants in an expert study preferring Sarvam Studio for overall quality and production readiness. Media companies seeking to expand into regional markets represent a significant potential customer base.

Investment Backing and Growth Trajectory

The funding landscape reflects investor confidence. Founded in 2023, Sarvam has raised more than $50 million in funding and counts Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners among its investors. These aren’t just any investors—they’re firms with track records of spotting transformative technology companies early.

The rapid timeline from founding to product launch stands out. Less than three years from inception to releasing competitive LLMs represents remarkable execution speed. This pace suggests strong technical talent and effective organization that can iterate quickly based on real-world feedback.

The product launch strategy shows ambition. The launch is part of Sarvam’s ‘14-day, 14-launch’ programme, which included ‘Kaze’ AI glasses that can listen, understand, respond, and capture what the user sees. This aggressive launch cadence builds momentum and demonstrates the breadth of the company’s technology platform beyond the core LLMs.

Challenges and Competitive Landscape

The competitive environment remains intense. The startup’s claims of outperforming larger global models place Sarvam AI directly in the path of tech giants, with the true test lying in sustained performance across diverse real-world applications and competitive pricing. Benchmarks matter, but production reliability and support determine long-term success.

Dependency on government support raises sustainability questions. Access to cutting-edge hardware is critical for training advanced LLMs, but the long-term sustainability may hinge on the continuity and scale of subsidy programs, as well as the company’s ability to translate subsidized development into profitable commercial operations. The transition from government-backed research to self-sustaining business represents a critical challenge.

Global players aren’t standing still. OpenAI launched IndQA in November, a benchmark designed to evaluate how well AI models understand and reason about questions in various Indian languages, while Anthropic has added support for 10 Indic languages to Claude. These moves show that international companies recognize India’s importance and are adapting their products accordingly.

Future Roadmap and Open Source Commitment

The open source strategy could accelerate adoption. Sarvam said it plans to open source the 30B and 105B models, though it did not specify whether the training data or full training code would also be made public. Even partial open sourcing enables researchers and developers to build on top of these foundations, potentially creating network effects that benefit the entire ecosystem.

Scaling philosophy emphasizes practical applications. Co-founder Pratyush Kumar said the company wants to be mindful in how they do the scaling, understanding the tasks which really matter at scale. This measured approach contrasts with the “bigger is always better” mentality and suggests focus on solving real problems rather than chasing parameter counts.

The edge AI strategy opens new distribution channels. Tushar Goswamy, head of Edge AI at Sarvam, said through edge AI they want to bring intelligence to every phone, laptop, car, and even a new generation of devices. This vision extends AI beyond traditional computing devices into everyday objects, potentially creating entirely new categories of AI-powered experiences.

Industry Recognition and Global Impact

The praise from industry leaders carries weight. Sundar Pichai said the developer energy he finds in India is second to none, and that Sarvam’s recent work developing local AI models shows what is actually happening on the ground. When Google’s CEO specifically highlights a startup’s work at a major summit, it signals that global tech leadership recognizes India’s growing AI capabilities.

The broader sovereign AI movement gains momentum. India unveiled three major sovereign AI models at the India AI Impact Summit 2026, marking a strong push for homegrown AI systems and signaling India’s shift from using global AI tools to creating its own infrastructure. Sarvam AI LLMs lead this movement but they’re part of a larger ecosystem that includes multiple players working toward technological self-reliance.

The competitive positioning in emerging markets matters. India’s linguistic diversity and infrastructure constraints mirror challenges faced across South Asia, Africa, and Latin America. Success in India could create templates for AI deployment in other emerging markets where incumbent solutions remain inadequate.

Sarvam AI LLMs represent more than just another set of language models entering an increasingly crowded market. They embody a thesis that AI success in emerging markets requires purpose-built solutions rather than adaptations of Western products. The focus on real-time voice AI models, multilingual LLM performance across 22 languages, and deployment strategies that work on feature phones reflects deep understanding of India’s unique context.

The sovereign AI India movement backing these efforts provides both resources and strategic direction that startups rarely receive. With over $50 million in funding, partnerships with global giants like Qualcomm and Nokia, and government support through the IndiaAI Mission, the infrastructure exists to scale these solutions rapidly.

Yet challenges remain substantial. Competition intensifies as OpenAI, Google, and Anthropic adapt their products for Indian markets. The transition from government-subsidized development to commercially sustainable operations will test business model viability. Technical performance in controlled demos must translate to reliable production deployments across diverse use cases.

The next 12-18 months will prove crucial. Can these Indian AI language models maintain performance advantages as incumbents improve their offerings? Will the open source strategy accelerate adoption or enable competitors to leapfrog Sarvam’s innovations? Most importantly, will voice-first AI applications achieve the transformative impact on accessibility that their creators envision?

For anyone tracking AI’s global evolution, Sarvam AI deserves close attention. Their success or struggle will shape how we think about AI development beyond Silicon Valley and Beijing. Whether they ultimately triumph or stumble, they’re already demonstrating that the future of AI looks different when built by diverse teams solving problems their communities actually face.

Ready to explore Indian AI language models for your business? Visit Sarvam AI’s platform to learn more about their APIs, try their voice-optimized solutions, or join their startup programme for free credits. The voice-first AI revolution is happening now—and it speaks your language.


Frequently Asked Questions

What makes Sarvam AI LLMs different from ChatGPT or Gemini?

Sarvam AI LLMs are specifically optimized for voice-first interactions across 22 Indian languages, trained on Indian datasets including regional literature, financial documents, and cultural texts. Unlike general-purpose models, they’re designed to work efficiently on feature phones, support offline operation through edge deployment, and understand code-mixing patterns common in Indian communication. The models use mixture-of-experts architecture to deliver competitive performance at lower computational costs.

How do the Sarvam-30B and Sarvam-105B models differ in capabilities?

Sarvam-30B contains 30 billion parameters with a 32,000-token context window, optimized for real-time conversations, customer support, and applications requiring fast responses with lower computational resources. Sarvam-105B features 105 billion parameters with a 128,000-token context window, designed for complex reasoning tasks like financial analysis, long-document summarization, and multi-step problem-solving. Both models were trained from scratch in India using sovereign compute infrastructure.

What languages do Sarvam AI models support?

Sarvam AI LLMs support all 22 scheduled Indian languages including Hindi, Bengali, Gujarati, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu, and others. The models are trained with 28% Hindi representation and 8% each for nine other major languages, covering over 70% of India’s population. They support three representation forms: formal native script, code-mixed combinations, and transliterated text, matching how Indians actually communicate in daily life.

Are Sarvam AI models available for commercial use?

Yes, Sarvam AI offers commercial access through their API platform. They’ve launched the Sarvam Startup Programme providing ₹10 crore in free API credits to startups. The company offers various products including Sarvam for Work (enterprise tools), Samvaad (conversational AI platform), Bulbul V3 (text-to-speech), and Pravah (AI token factory). Enterprise solutions include voice agents, dubbing services supporting 11 languages, and document intelligence capabilities.

What is the IndiaAI Mission’s role in developing these models?

The IndiaAI Mission, launched with ₹10,000 crore funding, selected Sarvam AI as the first company from 67 applicants to build India’s indigenous foundational LLM. The government provided approximately ₹99 crore in subsidies for acquiring 4,096 NVIDIA H100 GPUs and access to sovereign compute infrastructure. This support aims to reduce reliance on foreign AI platforms, ensure data sovereignty, and develop AI capabilities tailored to Indian governance and public services.

How does Sarvam AI’s performance compare to global models?

Benchmarks show Sarvam AI LLMs performing competitively against Gemma 27B, Mistral-32-24B, Nemotron-30B, Qwen-30B, and GPT-OSS-20B across mathematical reasoning, coding accuracy, and problem-solving tasks. The 105B model reportedly outperforms DeepSeek R1 (600B parameters) on several benchmarks despite being one-sixth the size, while offering better cost efficiency than Google’s Gemini Flash. Real-world demonstrations included analyzing company balance sheets and providing contextually accurate responses in multiple Indian languages.

What devices can run Sarvam AI models?

Sarvam AI has developed edge models requiring only megabytes of space that run on feature phones, smartphones, and can operate offline. Through partnerships with Nokia and HMD, they’re deploying conversational AI assistants accessible via dedicated buttons on feature phones. The company is also working with Bosch to integrate AI assistants into cars and has launched Sarvam Kaze smart glasses. This multi-device strategy aims to bring AI to populations beyond traditional smartphone users.