India put AI innovation on display this week as the host of the AI Impact Summit.
The event brought together industry leaders, government agencies, educational institutions and startups, which shared how they’re working with Nvidia to drive AI in the world’s most populous country.
These initiatives support the IndiaAI Mission, a government effort that’s infusing India’s AI ecosystem with more than $1 billion to bolster the nation’s compute capacity and foster the development of sovereign AI datasets, frontier models and applications.
Nvidia Cloud Partners Boost India AI Infrastructure
To achieve its AI ambitions, India is investing heavily in its computing infrastructure. Under the IndiaAI Compute Pillar, the nation is building out its AI cloud offerings with systems including tens of thousands of Nvidia GPUs.
Nvidia is collaborating with next‑generation cloud providers Yotta, L&T and E2E Networks to deliver advanced AI factories to meet India’s growing need for AI compute and enable it to develop AI models and services that drive innovation.
- Yotta is a hyperscale data center and cloud provider building large‑scale sovereign AI infrastructure for India, branded as Shakti Cloud, powered by over 20,000 Nvidia Blackwell Ultra GPUs. Its campuses in Navi Mumbai and Greater Noida deliver GPU‑dense, high‑bandwidth AI cloud services on a pay‑per‑use model, designed to make advanced AI training and inference affordable and compliant for Indian enterprises and public sector customers.
- Larsen & Toubro (L&T) is building sovereign, gigawatt-scale Nvidia AI factory infrastructure in India to reinforce the country’s position as a global AI powerhouse in alignment with the IndiaAI Mission. The roadmap includes initial expansions in Chennai to 30 megawatts as well as a new 40-megawatt facility in Mumbai. These facilities will power sovereign cloud workloads and hyperscale deployments, delivering secure, energy‑efficient infrastructure for advanced AI applications.
- E2E Networks is building an Nvidia Blackwell GPU cluster on its TIR platform, hosted at the L&T Vyoma Data Center in Chennai. The TIR cloud compute platform will feature Nvidia HGX B200 systems and Nvidia Enterprise software as well as Nvidia Nemotron open models to supercharge sovereign development across agentic AI, healthcare, finance, manufacturing and agriculture.
India’s AI cloud infrastructure will both host workloads and manufacture intelligence, supporting model training, fine-tuning and high‑scale inference. Capacity within these data centers will be reserved for model builders, startups, researchers and enterprises to build, fine-tune and deploy AI in India.
In addition, Netweb Technologies is launching its Tyrone Camarero AI Supercomputing systems built on the Nvidia Grace Blackwell architecture. The Nvidia GB200 NVL4 platforms — manufactured in India by Netweb under the government’s “Make in India” mission — feature four Nvidia Blackwell GPUs and two Nvidia Grace CPUs to power scientific computing, model training and inference.
Another key goal of the IndiaAI Mission — led by its Innovation Center Pillar — is to develop and deploy foundation models trained on India-specific data and domestic AI infrastructure.
For a nation as multilingual as India — with 22 constitutionally recognized languages and more than 1,500 others recorded by the country’s census — frontier AI models are a powerful tool to help its more than 1.4 billion residents interact with technology in their primary language.
Organizations across the country are building AI applications with Nvidia Nemotron to support public-sector services, financial systems and enterprise operations in multiple languages.
Nvidia Nemotron open models, datasets, tools and libraries enable organizations to build frontier speech, language and multimodal models at scale and across languages for government, consumer and enterprise applications.
The Nemotron collection includes India-specific datasets such as Nemotron-Personas-India, an open dataset of 21 million fully synthetic Indic personas built from publicly available census data using NeMo Data Designer, enabling population-scale sovereign AI development.
Adopters in India of Nemotron — and NeMo Curator, an open library for multilingual and multimodal data curation — include:
- BharatGen, a sovereign AI initiative supported by the Government of India aimed at strengthening the country’s multilingual and multimodal AI ecosystem. As part of this effort, BharatGen has developed a 17-billion-parameter mixture-of-experts (MoE) model from the ground up, using the Nvidia NeMo framework for pretraining and the NeMo RL library for post-training. The open source models are designed to power applications across public services, agriculture, security and cultural preservation.
- Chariot, a company building AI systems for speech and multimodal communication. Using the NeMo framework, Chariot is developing an 8-billion-parameter model for real-time text-to-speech, supporting applications that improve accessibility and digital interaction across consumer and enterprise use cases.
- Commotion, backed by Tata Communications, which has developed an AI operating system to automate complex enterprise workflows. By integrating Nvidia Nemotron models and speech capabilities, the platform enables governed, production-grade AI deployments, helping enterprises scale AI across critical business operations.
- CoRover.ai, which has deployed Nvidia Nemotron Speech open models and Nvidia Riva libraries for end-to-end, ultralow-latency speech AI — including the Nvidia Riva Whisper v3 model for multilingual automatic speech recognition in English, Hindi and Gujarati. Powering customer service applications for the Indian Railway Catering and Tourism Corporation, CoRover’s platform supports around 10,000 concurrent users and more than 5,000 daily ticket bookings.
- Gnani.ai, which offers enterprises a multilingual agentic AI platform that can interact with customers through voice and text. Gnani is developing a 14-billion-parameter speech-to-speech model based on Nvidia Nemotron Speech models, datasets and NeMo libraries — running through Nvidia Cloud Partner E2E Networks — with plans to expand to a 32-billion-parameter model. By fine-tuning the Nvidia Nemotron Speech model for Indic languages, Gnani has achieved a 15x reduction in inference costs, enabling the company to scale to support more than 10 million calls per day for customers in telecom, banking and hospitality.
- National Payments Corporation of India (NPCI), which operates India’s retail payment and settlement systems and is deploying AI models to support digital financial services. Building on its production deployment of the AI-powered UPI Help Assistant — a pilot initiative for India’s Unified Payments Interface (UPI) — NPCI is exploring training FiMi, a financial model for India, using the Nvidia Nemotron 3 Nano model and its own datasets. The model, fine-tuned with the NeMo framework, will support multilingual customer service across India’s banking ecosystem.
- Sarvam.ai, a leader in full-stack sovereign generative AI that provides enterprise-grade multimodal, speech-to-text, text-to-speech, translation and reasoning models. The company is open sourcing its Sarvam-3 series of text and multimodal large language model variants, trained on 22 Indic languages, English, math and code. Sarvam is using NeMo Curator to construct high-quality multilingual training data while adopting a subset of Nvidia Nemotron datasets. The foundation models were pretrained from scratch at 3-billion, 30-billion and 100-billion parameter sizes using the Nvidia NeMo framework and Megatron-LM, and post-trained with NeMo RL. Training was conducted on Nvidia H100 GPUs through Nvidia Cloud Partners, including Yotta. With these sovereign models, Sarvam.ai’s new Pravah platform enables production-grade inference for Indian government and enterprise applications.
- Soket.ai, which is using a modern large-model training stack on open Nvidia Nemotron technologies, including Nvidia Megatron and Nvidia NeMo. These open source components enable scalable experimentation, training stability and efficient GPU usage, while preserving full control over the model’s data, design and life cycle.
- Tech Mahindra, which has developed an 8-billion-parameter foundation model tailored for Indian languages and dialects. The model, built with Nemotron, is being designed for use in classrooms, where it can help make educational materials available in a wider range of Indian languages including Hindi, Maithili and Dogri. The team generated synthetic data with Nemotron libraries and tools such as NeMo Data Designer and conducted supervised fine-tuning with NeMo AutoModel.
- Zoho, which is advancing its Zia LLM platform with proprietary models built using Nvidia NeMo on the Nvidia Blackwell and Hopper platforms, integrated across its software-as-a-service applications. This privacy-first architecture delivers contextual, production-grade AI for critical business workflows like customer relationship management and finance, ensuring technology sovereignty and enterprise security at a global scale.
Under its Application Development Initiative Pillar, the IndiaAI Mission is supporting high-impact AI applications — and its Startup Financing Pillar aims to democratize funding availability for AI entrepreneurs across the country.
Nvidia is collaborating with government agencies, research institutions, venture capital firms and startups to advance projects aligned with these goals.