At CES 2026, AMD chair and CEO Dr Lisa Su detailed in the show’s opening keynote how the company’s extensive portfolio of AI products and deep cross-industry collaborations are turning the promise of AI into real-world impact.
The keynote showcased major advancements from the data center to the edge, with partners including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci and Illumina detailing how they are using AMD technology to power AI breakthroughs.
“At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone,” says Su. “As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in both training and inference. AMD is building the compute foundation for this next phase of AI through end-to-end technology leadership, open platforms, and deep co-innovation with partners across the ecosystem.”
The blueprint for yotta-scale compute
Compute infrastructure is the foundation of AI, and accelerating adoption is driving an unprecedented expansion from today’s 100 zettaflops of global compute capacity to a projected 10+ yottaflops in the next five years.
Building AI infrastructure at yotta-scale will require more than raw performance; it demands an open, modular rack design that can evolve across product generations, combining leadership compute engines with high-speed networking to connect thousands of accelerators into a single, unified system.
The AMD “Helios” rack-scale platform is the blueprint for yotta-scale infrastructure, delivering up to 3 AI exaflops of performance in a single rack. It’s designed to deliver maximum bandwidth and energy efficiency for trillion-parameter training.
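To put those figures in perspective, a quick back-of-envelope sketch (an illustration only, assuming the global-capacity and per-rack numbers are quoted in comparable low-precision AI FLOPS, which the keynote does not specify) shows that the jump from 100 zettaflops to 10 yottaflops is roughly a 100x expansion, and that reaching it would take millions of rack-scale systems of this class:

# Back-of-envelope arithmetic for the scale figures quoted above.
# Assumption: the global-capacity and per-rack figures use comparable
# (low-precision AI) FLOPS, which the keynote does not specify.
ZETTA, EXA, YOTTA = 10**21, 10**18, 10**24

today_capacity = 100 * ZETTA        # ~100 zettaflops of global compute today
projected_capacity = 10 * YOTTA     # projected 10+ yottaflops within five years
helios_rack = 3 * EXA               # one "Helios" rack: up to 3 AI exaflops

print(projected_capacity / today_capacity)   # 100.0 -> roughly a 100x expansion
print(projected_capacity / helios_rack)      # ~3.3 million rack-equivalents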
“Helios” is powered by AMD Instinct MI455X accelerators, AMD EPYC “Venice” CPUs and AMD Pensando “Vulcano” NICs for scale-out networking, all unified through the open AMD ROCm software ecosystem.
At CES, AMD provided an early look at “Helios” and, for the first time, unveiled the full AMD Instinct MI400 Series accelerator product portfolio, while previewing the next-generation MI500 Series GPUs.
The latest addition to the MI400 Series is the AMD Instinct MI440X GPU, designed for on-premises enterprise AI deployments. The MI440X will power scalable training, fine-tuning and inference workloads in a compact, eight-GPU form factor that integrates seamlessly into existing infrastructure.
The MI440X builds on the recently announced AMD Instinct MI430X GPUs, which are designed to deliver leadership performance and hybrid computing for high-precision scientific, HPC and sovereign AI workloads. MI430X GPUs will power AI factory supercomputers around the world, including Discovery at Oak Ridge National Laboratory and the Alice Recoque system, France’s first exascale supercomputer.
AMD shared additional details at CES on the next-generation AMD Instinct MI500 GPUs, planned to launch in 2027. The MI500 Series is on track to deliver up to a 1,000x increase in AI performance compared to the AMD Instinct MI300X GPUs introduced in 2023.
Built on next-generation AMD CDNA 6 architecture, advanced 2nm process technology and cutting-edge HBM4E memory, MI500 GPUs will deliver leadership performance at every level.
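As a rough illustration of that pace (a sketch based only on the endpoints quoted above, not an AMD figure): sustaining a 1,000x gain over the four years separating the 2023 MI300X from a 2027 MI500 launch implies compounding performance by roughly 5.6x every year.

# Illustrative only: the compound annual improvement implied by a 1,000x
# gain between the 2023 MI300X and a 2027 MI500 launch.
years = 2027 - 2023
implied_annual_gain = 1000 ** (1 / years)
print(round(implied_annual_gain, 2))   # ~5.62x per year, compounded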
Enabling AI PC experiences everywhere
AI is becoming a foundational part of the PC experience, with billions of users interacting directly with AI both locally on the device and through the cloud. At CES, AMD introduced new products that expand its AI PC portfolio and deepen developer support across the ecosystem.
The next-generation AMD Ryzen AI 400 Series and Ryzen AI PRO 400 Series platforms feature a 60 TOPS NPU, improved efficiency and full AMD ROCm platform support for seamless cloud-to-client AI scaling. The first systems ship in January 2026, with broader OEM availability in Q1 2026.
AMD also expanded its breakthrough on-device AI compute offerings with the Ryzen AI Max+ 392 and Ryzen AI Max+ 388, which support models of up to 128 billion parameters with 128GB of unified memory. These platforms enable advanced local inference, content creation workflows and incredible gaming experiences in premium thin-and-light notebooks and small form factor (SFF) desktops.
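The memory arithmetic behind that claim is worth spelling out; this is a rough sketch that assumes the quoted 128-billion-parameter figure refers to quantized weights (AMD does not state the precision), since at full 16-bit precision such a model would not fit in 128GB:

# Rough memory footprint of a 128-billion-parameter model at several weight
# precisions. Assumption: weights dominate; activations and KV cache need
# additional headroom beyond these figures.
params = 128e9

for bits_per_weight in (16, 8, 4):
    gigabytes = params * bits_per_weight / 8 / 1e9
    print(f"{bits_per_weight}-bit weights: ~{gigabytes:.0f} GB")

# 16-bit: ~256 GB (would not fit in 128GB of unified memory)
#  8-bit: ~128 GB (weights alone just fit)
#  4-bit:  ~64 GB (fits with headroom for activations and KV cache)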
For developers, the Ryzen AI Halo Developer Platform brings powerful AI development capabilities to a compact SFF desktop PC, delivering leadership tokens-per-second-per-dollar with high-performance Ryzen AI Max+ Series processors. Ryzen AI Halo is expected to be available in Q2 2026.
AI transforming the physical world
AMD introduced Ryzen AI Embedded, a new portfolio of embedded x86 processors designed to power AI-driven applications at the edge.
From automotive digital cockpits and smart healthcare to physical AI for autonomous systems, including humanoid robotics, the new P100 and X100 Series processors deliver high-performance, efficient AI compute for the most constrained embedded systems.