Nvidia has unveiled its Alpamayo family of open AI models, simulation tools and datasets designed to accelerate the next era of safe, reasoning-based autonomous vehicle (AV) development.
AVs must safely operate across an enormous range of driving conditions. Rare, complex scenarios, often called the “long tail,” remain some of the toughest challenges for autonomous systems to safely master.
Traditional AV architectures separate perception and planning, which can limit scalability when new or unusual situations arise.
End-to-end learning has driven significant progress, but overcoming these long-tail edge cases requires models that can reason safely about cause and effect, especially when situations fall outside a model’s training experience.
The Alpamayo family introduces chain-of-thought, reasoning-based vision-language-action (VLA) models that bring humanlike thinking to AV decision-making. These systems can think through novel or rare scenarios step by step, improving driving capability and explainability, which is critical to scaling trust and safety in intelligent vehicles. They are underpinned by the Nvidia Halos safety system.
“The ChatGPT moment for physical AI is here — when machines begin to understand, reason and act in the real world,” says Jensen Huang, founder and CEO of Nvidia. “Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions — it’s the foundation for safe, scalable autonomy.”
Complete, open ecosystem
Alpamayo integrates three foundational pillars — open models, simulation frameworks and datasets — into a cohesive, open ecosystem that any automotive developer or research team can build upon.
Rather than running directly in-vehicle, Alpamayo models serve as large-scale teacher models that developers can fine-tune and distill into the backbones of their complete AV stacks.
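To illustrate this teacher-student pattern, below is a minimal distillation sketch in PyTorch. The TeacherVLA and StudentPolicy classes, tensor shapes and loss are hypothetical stand-ins, not Alpamayo’s actual interfaces; a real pipeline would distill from Alpamayo 1’s trajectory and reasoning outputs rather than these toy networks.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: the real teacher would be a fine-tuned Alpamayo
# model; the student would be a developer's compact in-vehicle planner.
class TeacherVLA(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 20))

    def forward(self, video):
        return self.net(video)  # predicted trajectory: 10 (x, y) waypoints

class StudentPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 20))

    def forward(self, video):
        return self.net(video)

teacher, student = TeacherVLA().eval(), StudentPolicy()
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

for _ in range(100):                   # one distillation pass over dummy clips
    video = torch.randn(8, 3, 64, 64)  # stand-in for a batch of camera frames
    with torch.no_grad():
        target = teacher(video)        # teacher trajectory used as a soft label
    loss = nn.functional.mse_loss(student(video), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the distilled student aims to keep the teacher’s driving behavior while meeting in-vehicle latency and compute budgets.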
At CES, Nvidia is releasing:
- Alpamayo 1: The industry’s first chain-of-thought reasoning VLA model designed for the AV research community, now available on Hugging Face. With a 10-billion-parameter architecture, Alpamayo 1 takes video input and generates trajectories alongside reasoning traces that show the logic behind each decision. Developers can adapt Alpamayo 1 into smaller runtime models for vehicle development, or use it as a foundation for AV development tools such as reasoning-based evaluators and auto-labeling systems. Alpamayo 1 ships with open model weights and open-source inferencing scripts (a minimal download sketch follows this list). Future models in the family will feature larger parameter counts, more detailed reasoning capabilities, more input and output flexibility, and options for commercial usage.
- AlpaSim: A fully open-source, end-to-end simulation framework for high-fidelity AV development, available on GitHub. It provides realistic sensor modeling, configurable traffic dynamics and scalable closed-loop testing environments, enabling rapid validation and policy refinement (a toy closed-loop sketch appears below).
- Physical AI Open Datasets: The industry’s most diverse large-scale open AV dataset collection, containing 1,700+ hours of driving data gathered across a wide range of geographies and conditions and covering the rare, complex real-world edge cases essential for advancing reasoning architectures. These datasets are available on Hugging Face.
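Because the Alpamayo 1 weights and inferencing scripts ship on Hugging Face, fetching them should follow the standard Hub workflow. Here is a minimal download sketch; the repo id "nvidia/alpamayo-1" is a hypothetical placeholder, so check the official model card for the real identifier.

```python
from huggingface_hub import snapshot_download

# "nvidia/alpamayo-1" is an illustrative repo id, not the confirmed one;
# the official Alpamayo 1 model card on Hugging Face lists the real path.
local_dir = snapshot_download(repo_id="nvidia/alpamayo-1")
print(f"Weights and inferencing scripts downloaded to: {local_dir}")
```

The released inferencing scripts then handle the video-in, trajectory-plus-reasoning-out generation.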
Together, these tools enable a self-reinforcing development loop for reasoning-based AV stacks: models trained and fine-tuned on the open datasets can be distilled for in-vehicle use, validated in closed-loop simulation, and improved as failure cases surface and feed back into training.
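To make the closed-loop portion of that development loop concrete, here is a toy, self-contained sketch of the pattern a framework like AlpaSim automates at scale, with realistic sensors and traffic in place of this single-variable car-following model. Every name and equation below is an illustrative assumption, not AlpaSim’s API.

```python
class ToyScenario:
    """Hypothetical stand-in for one simulated traffic scenario; the real
    AlpaSim framework supplies sensor models and traffic dynamics."""
    def reset(self):
        self.gap_m = 30.0                    # distance to a lead vehicle
        return {"gap_m": self.gap_m}

    def step(self, ego_speed_mps):
        lead_speed_mps = 8.0                 # constant-speed lead vehicle
        self.gap_m += (lead_speed_mps - ego_speed_mps) * 0.1  # 10 Hz tick
        return {"gap_m": self.gap_m}, self.gap_m <= 0.0

def policy(obs):
    # Naive stand-in policy: speed proportional to gap, capped at 15 m/s.
    return min(15.0, obs["gap_m"] * 0.5)

def run_closed_loop(scenario, steps=600):
    """Roll the policy forward; a collision marks a failure case to mine."""
    obs = scenario.reset()
    for _ in range(steps):
        obs, collided = scenario.step(policy(obs))
        if collided:
            return False
    return True

print("episode safe:", run_closed_loop(ToyScenario()))
```

In a real loop, failure episodes like these become new training and evaluation data, closing the cycle between the datasets, the models and the simulator.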
Broad AV industry support
Mobility and industry players including Lucid, JLR, Uber and Berkeley DeepDrive are exploring Alpamayo as they develop reasoning-based AV stacks that will enable level 4 autonomy.
“The shift toward physical AI highlights the growing need for AI systems that can reason about real-world behavior, not just process data,” says Kai Stepper, vice president of ADAS and autonomous driving at Lucid Motors. “Advanced simulation environments, rich datasets and reasoning models are important elements of the evolution.”
“Open, transparent AI development is essential to advancing autonomous mobility responsibly,” says Thomas Müller, executive director of product engineering at JLR. “By open-sourcing models like Alpamayo, Nvidia is helping to accelerate innovation across the autonomous driving ecosystem, giving developers and researchers new tools to tackle complex real-world scenarios safely.”
“Handling long-tail and unpredictable driving scenarios is one of the defining challenges of autonomy,” says Sarfraz Maredia, global head of autonomous mobility and delivery at Uber. “Alpamayo creates exciting new opportunities for the industry to accelerate physical AI, improve transparency and increase safe level 4 deployments.”
“Alpamayo 1 enables vehicles to interpret complex environments, anticipate novel situations and make safe decisions, even in scenarios not previously encountered,” says Owen Chen, senior principal analyst at S&P Global. “The model’s open-source nature accelerates industry-wide innovation, allowing partners to adapt and refine the technology for their unique needs.”
“The launch of the Alpamayo portfolio represents a major leap forward for the research community,” says Wei Zhan, codirector of Berkeley DeepDrive. “Nvidia’s decision to make this openly available is transformative as its access and capabilities will enable us to train at unprecedented scale — giving us the flexibility and resources needed to push autonomous driving into the mainstream.”
Beyond Alpamayo, developers can tap into Nvidia’s library of tools and models, including those from the Nvidia Cosmos and Nvidia Omniverse platforms. Developers can fine-tune the released models on proprietary fleet data, integrate them into the Nvidia DRIVE Hyperion architecture built with Nvidia DRIVE AGX Thor accelerated compute, and validate performance in simulation before commercial deployment.