AMD has announced the latest accelerator and networking solutions that it says will power the next generation of AI infrastructure at scale: AMD Instinct MI325X accelerators, the AMD Pensando Pollara 400 NIC, and the AMD Pensando Salina DPU.

AMD Instinct MI325X accelerators, it says, set a new standard in performance for Gen AI models and data centres.

Built on the AMD CDNA 3 architecture, AMD Instinct MI325X accelerators are designed for exceptional performance and efficiency for demanding AI tasks spanning foundation model training, fine-tuning and inferencing. Together, these products enable AMD customers and partners to create highly performant and optimised AI solutions at the system, rack, and data centre level.

“AMD continues to deliver on our roadmap, offering customers the performance they need and the choice they want to bring AI infrastructure, at scale, to market faster,” says Forrest Norrod, executive vice-president and GM, Data Center Solutions Business Group at AMD. “With the new AMD Instinct accelerators, EPYC processors, and AMD Pensando networking engines, the continued growth of our open software ecosystem – and the ability to tie this all together into optimised AI infrastructure – AMD underscores the critical expertise to build and deploy world-class AI solutions.”

AMD Instinct MI325X extends leading AI performance

AMD Instinct MI325X accelerators deliver industry-leading memory capacity and bandwidth, with 256GB of HBM3E memory and 6.0TB/s of bandwidth, offering 1.8X more capacity and 1.3X more bandwidth than the NVIDIA H200. The AMD Instinct MI325X also offers 1.3X greater peak theoretical FP16 and FP8 compute performance than the H200.
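
As a quick sanity check on those multipliers (assuming the baseline is the NVIDIA H200's published 141GB of HBM3E and 4.8TB/s of bandwidth, figures not stated in this article), the rounded ratios line up:

```latex
\[
\frac{256\ \mathrm{GB}}{141\ \mathrm{GB}} \approx 1.8\times,
\qquad
\frac{6.0\ \mathrm{TB/s}}{4.8\ \mathrm{TB/s}} = 1.25 \approx 1.3\times
\]
```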

This leadership memory and compute can provide up to 1.3X the inference performance of the H200 on Mistral 7B at FP16, 1.2X on Llama 3.1 70B at FP8, and 1.4X on Mixtral 8x7B at FP16.

AMD Instinct MI325X accelerators are currently on track for production shipments in Q4 2024 and are expected to have widespread system availability from a broad set of platform providers including Dell Technologies, Eviden, Gigabyte, Hewlett Packard Enterprise, Lenovo, Supermicro and others starting in Q1 2025.

AMD next-gen AI networking

AMD is leveraging the most widely deployed programmable DPU for hyperscalers to power next-gen AI networking. AI networking is split into two parts: the front-end, which delivers data and information to an AI cluster, and the back-end, which manages data transfer between accelerators and clusters. It is critical to ensuring CPUs and accelerators are utilised efficiently in AI infrastructure.

To effectively manage these two networks and drive high performance, scalability, and efficiency across the entire system, AMD has introduced the AMD Pensando Salina DPU for the front-end and the AMD Pensando Pollara 400, the industry’s first Ultra Ethernet Consortium (UEC) ready AI NIC, for the back-end.

The AMD Pensando Salina DPU is the third generation of the world’s most performant and programmable DPU, bringing up to 2X the performance, bandwidth, and scale of the previous generation. Supporting 400G throughput for fast data transfer rates, the AMD Pensando Salina DPU is a critical component in AI front-end network clusters, optimising performance, efficiency, security, and scalability for data-driven AI applications.

The AMD Pensando Pollara 400, powered by the AMD P4 programmable engine, is the industry’s first UEC-ready AI NIC. It supports next-gen RDMA software and is backed by an open networking ecosystem. The Pollara 400 is critical for providing leadership performance, scalability, and efficiency in accelerator-to-accelerator communication across back-end networks.

Both the AMD Pensando Salina DPU and AMD Pensando Pollara 400 are sampling with customers in Q4 2024 and are on track for availability in the first half of 2025.

AMD AI software delivering new capabilities for GenAI

AMD says it continues to invest in software capabilities and the open ecosystem, delivering powerful new features in the AMD ROCm open software stack.

Within the open software community, AMD is driving support for AMD compute engines in the most widely used AI frameworks, libraries, and models, including PyTorch, Triton, Hugging Face, and many others.

This work translates to out-of-the-box performance and support with AMD Instinct accelerators on popular generative AI models like Stable Diffusion 3; Meta Llama 3, 3.1, and 3.2; and more than one million models on Hugging Face.
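
To illustrate what that out-of-the-box support looks like in practice, here is a minimal sketch of running a Hugging Face model with a ROCm build of PyTorch. The model name and prompt are illustrative choices, not taken from AMD's announcement, and gated models such as Llama require a Hugging Face access token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# On ROCm builds of PyTorch, AMD Instinct GPUs are exposed through the
# familiar "cuda" device API (via HIP), so existing GPU code runs unchanged.
device = "cuda" if torch.cuda.is_available() else "cpu"

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative; any causal LM on Hugging Face works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

prompt = "Explain HBM3E memory in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```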

Beyond the community, AMD continues to advance its ROCm open software stack, bringing the latest features to support leading training and inference performance on Gen AI workloads. ROCm 6.2 now includes support for critical AI features like the FP8 datatype, Flash Attention 3, kernel fusion, and more.
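
As a rough sketch of how two of those features surface to developers, assuming PyTorch as the interface (the article does not specify one): recent PyTorch builds expose FP8 tensor formats, and scaled_dot_product_attention can dispatch to a fused flash-attention-style kernel when the hardware and backend support it. Shapes below are illustrative.

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# FP8 datatype: float8_e4m3fn is one of the standard 8-bit float formats,
# halving memory versus FP16 for weights and activations.
x = torch.randn(4, 4, dtype=torch.float16, device=device)
x_fp8 = x.to(torch.float8_e4m3fn)

# Fused attention: this call may dispatch to a flash-attention-style kernel
# when one is available for the current device and dtype.
q, k, v = (torch.randn(1, 8, 128, 64, dtype=torch.float16, device=device) for _ in range(3))
out = F.scaled_dot_product_attention(q, k, v)
print(x_fp8.dtype, out.shape)
```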

With these new additions, ROCm 6.2 – compared to ROCm 6.0 – provides up to a 2.4X performance improvement on inference and 1.8X on training for a variety of LLMs.