For the first time, chip manufacturer AMD has showcased a static display of its “Helios” rack scale platform at the Open Compute Project (OCP) Global Summit in San Jose.
Built on the new Open Rack Wide (ORW) specification introduced by Meta, “Helios” extends the AMD open hardware philosophy from silicon to system to rack – representing a major step forward in open, interoperable AI infrastructure.
AMD says “Helios” provides the foundation to deliver the open, scalable infrastructure that will power the world’s growing AI demands.
Designed to meet the demands of gigawatt-scale data centres, the new ORW specification defines an open, double-wide rack optimised for the power, cooling, and serviceability needs of next-generation AI systems.
By adopting ORW and OCP standards, “Helios” provides the industry with a unified, standards-based foundation to develop and deploy efficient, high-performance AI infrastructure at scale.
“Open collaboration is key to scaling AI efficiently,” says Forrest Norrod, executive vice-president and GM, Data Center Solutions Group at AMD. “With ‘Helios’, we’re turning open standards into real, deployable systems – combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads.”
The AMD “Helios” rack scale platform integrates open compute standards including OCP DC-MHS, UALink, and Ultra Ethernet Consortium (UEC) architectures, supporting both open scale-up and scale-out fabrics.
The rack features quick-disconnect liquid cooling for sustained thermal performance, a double-wide layout for improved serviceability, and standards-based Ethernet for multi-path resiliency.
As a reference design, “Helios” enables OEMs, ODMs, and hyperscalers to adopt, extend, and customise open AI systems quickly – reducing deployment time, improving interoperability, and supporting efficient scaling for AI and HPC workloads.