Nvidia's accelerated computing ecosystem spans sectors including automotive, financial services, healthcare and life sciences, industrial, media and entertainment, quantum, retail, robotics and telecom.

“All of these different vectors of AI have platforms that Nvidia provides,” says Jensen Huang, CEO of Nvidia.

Delivering the opening keynote of the GTC 2026 conference, Huang highlighted Nvidia's broad range of CUDA-X libraries, which he described as the "crown jewels" of the company.

“We’re updating these all the time,” he added.

Huang highlighted the rise of "AI natives" — brand-new companies, some well-known, such as OpenAI and Anthropic, and some still emerging. "This last year, it just skyrocketed," Huang said, citing $150-billion of investment in venture-backed startups and walking through the history of the technologies that sparked the latest technology boom.

As a result of this boom, the computing demand for Nvidia GPUs is “off the charts,” he said. “I believe computing demand has increased by 1-million times over the last few years.”

Huang said he now sees at least $1-trillion in revenue from 2025 through 2027.


Vera Rubin and beyond

Nvidia Vera Rubin is a generational leap in full-stack computing comprising seven breakthrough chips, five rack-scale systems and one revolutionary supercomputer for agentic AI. The platform includes the new Nvidia Vera CPU and BlueField-4 STX storage architecture.

“When we think Vera Rubin, we think the entire system, vertically integrated, complete with software, extended end to end, optimised as one giant system,” Huang said.

Looking beyond Vera Rubin, Nvidia’s next major architecture is Feynman.

It will include a new CPU, Nvidia Rosa, named for Rosalind Franklin, whose X‑ray crystallography revealed the structure of DNA and reshaped modern biology, Huang said.

As Franklin exposed the hidden architecture of life, Rosa is built to orchestrate the full structure of agentic AI workloads — moving data, tools and tokens efficiently across GPUs, LPUs, storage and networking.

Rosa anchors a new platform that pairs LP40, Nvidia’s next‑generation LPU, with Nvidia BlueField‑5 and CX10, connected through Nvidia Kyber for both copper and co‑packaged optics scale‑up, and Nvidia Spectrum‑class optical scale‑out, Huang said.

Together, the Feynman generation advances every pillar of the AI factory: compute, memory, storage, networking and security.

And to help accelerate the scale-out of new AI capacity, Huang announced the Nvidia Vera Rubin DSX AI Factory reference design and the Nvidia Omniverse DSX Blueprint. He also introduced DSX Air — part of Nvidia DSX Sim within the DSX platform — a software-as-a-service offering for logically simulating AI factories before they are built.

Finally, Huang announced Nvidia is going to space. Its new Vera Rubin architecture honours the astronomer whose work revealed dark matter, and future systems like Nvidia Space-1 Vera Rubin are being designed to bring AI data centres into orbit, extending accelerated computing from Earth to space.


Nvidia NemoClaw

Huang spotlighted OpenClaw, an open source project from developer Peter Steinberger that he called “the most popular open source project in the history of humanity.”

“OpenClaw has open sourced the operating system of agentic computers … Now, OpenClaw has made it possible for us to create personal agents,” Huang said.

With a single command, developers can pull down OpenClaw, stand up an AI agent and begin extending it with tools and context. Nvidia is announcing support for OpenClaw across its platform, making it easier for developers to safely build, deploy and accelerate AI agents on Nvidia‑powered infrastructure.

“Every single company in the world today has to have an OpenClaw strategy,” Huang said.

To ensure this technology can be deployed securely inside enterprises, Huang introduced the Nvidia OpenShell runtime and the Nvidia NemoClaw stack — combining policy enforcement, network guardrails and privacy routing. These technologies can serve as “the policy engine of all the SaaS companies in the world,” Huang said.

In addition, Nvidia is expanding its open model ecosystem with a new Nemotron Coalition, rallying partners around six frontier model families: Nvidia Nemotron (language and reasoning), Nvidia Cosmos (world and vision), Nvidia Isaac GR00T (general‑purpose robotics), Nvidia Alpamayo (autonomous driving), Nvidia BioNeMo (biology and chemistry) and Nvidia Earth‑2 (weather and climate).


Physical AI 

Nvidia is extending AI from digital agents into physical AI that can navigate the real world.

Huang said Nvidia’s robotaxi‑ready platform is drawing new automaker partners, including BYD, Hyundai, Nissan and Geely.

He also highlighted a partnership with Uber to deploy these vehicles into its ride‑hailing network.

Beyond automakers, Nvidia is working with industrial software giants and robotics leaders such as ABB, Universal Robots and KUKA to integrate its physical AI models and simulation tools, enabling the deployment of smarter robots on manufacturing lines. It is also working with telecom providers like T‑Mobile as base stations evolve into edge AI platforms.