Sarvam AI has announced its made-in-India large language models, which will look to compete with global rivals.

Sarvam has announced its made-in-India LLMs with 30B and 105B parameters.
Sarvam AI has announced not one but two made-in-India large language models at the India AI Impact Summit 2026 this week. The Sarvam 30B and Sarvam 105B will be a core part of India's sovereign AI ecosystem over the next few years, and the company is doubling down on its efforts not just to perform in India but also to compete with larger and more powerful AI models.
The Indian government has sought locally built AI models, and companies like Sarvam are showing they can build LLMs of global standard with fewer resources while delivering comparable results.
Sarvam 30B And 105B AI Models: What Do They Offer?
The company says both AI models have been trained and built from the ground up and are primed for the government's sovereign AI mission. This means the data remains within India's borders and users don't have to share it with global AI companies.
Using fewer parameters than Google or OpenAI has sparked a debate about these models' performance, but the company claims its mixture-of-experts architecture lets it keep reasoning quality high while running the models efficiently.
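For readers curious how a mixture-of-experts layer keeps the number of active parameters low, here is a minimal illustrative sketch in PyTorch. The eight experts, the layer size and the top-2 routing are assumptions for the example only; Sarvam has not published its exact design. The idea is that a router scores all experts for every token but only the top-scoring few actually run, so most of the model's weights sit idle on any given pass.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoELayer(nn.Module):
        # Illustrative only: 8 experts and top-2 routing are assumptions,
        # not details Sarvam has disclosed.
        def __init__(self, dim=64, num_experts=8, top_k=2):
            super().__init__()
            self.experts = nn.ModuleList(
                [nn.Linear(dim, dim) for _ in range(num_experts)]
            )
            self.router = nn.Linear(dim, num_experts)
            self.top_k = top_k

        def forward(self, x):
            # The router scores every expert, but only the top-k run per
            # token, so most parameters stay inactive on each forward pass.
            scores = F.softmax(self.router(x), dim=-1)
            weights, picked = scores.topk(self.top_k, dim=-1)
            weights = weights / weights.sum(dim=-1, keepdim=True)
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for idx, expert in enumerate(self.experts):
                    mask = picked[..., slot] == idx
                    if mask.any():
                        out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
            return out

    tokens = torch.randn(4, 64)          # 4 token embeddings
    print(TinyMoELayer()(tokens).shape)  # torch.Size([4, 64])

In this toy layer, each token touches only 2 of the 8 experts, which is the broad mechanism companies cite when they claim large-model quality at a fraction of the per-token compute.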
The benefit of a 30B model is that Sarvam says it can run on regular smartphones without any drop in output quality or response speed on device. It can handle multilingual translation and content creation, and offer a conversational AI experience.
It can converse in local languages such as Punjabi and Hindi, among others. Sarvam claims the smaller model will bring AI to feature phones, though we will wait to see that happen. Building the models locally and with fewer resources should help Sarvam price its AI chatbot more affordably than Google and OpenAI in the country.
Sarvam 105B AI Model: The Indian Challenger To ChatGPT And Gemini?
In terms of reach, the Sarvam 30B may have more potential, but you cannot ignore the extra quality the Sarvam 105B brings to the table. Built with 105 billion parameters, this model is suited for complex tasks and handles more information than the smaller version.
By that we mean summarising long-form articles, digging deeper into research material, and other kinds of complex data handling. Sarvam claims the 105B outperforms China's DeepSeek R1 model on reasoning benchmarks, while showing strong credentials against the GPTs and Geminis of this world.
A demo at the AI Summit let us see the Sarvam 105B in action: it was fed complex business data and answered most queries with precision.
Sarvam has also talked about taking on global AI models, and we could see it do that in the near future. The company has additionally announced AI-powered smart glasses, in the vein of the Meta Ray-Bans, which Prime Minister Narendra Modi tried out earlier this week; they will be available in the country around May.
Delhi, India
February 19, 2026, 12:41 IST

