NVIDIA has announced the fourth-generation NVIDIA DGX system, believed to be the first AI platform built with the new NVIDIA H100 Tensor Core GPUs.
DGX H100 systems deliver the scale demanded to meet the massive compute requirements of large language models, recommender systems, healthcare research and climate science. Packing eight NVIDIA H100 GPUs per system, connected as one by NVIDIA NVLink, each DGX H100 provides 32 petaflops of AI performance at the new FP8 precision, six times more than the prior generation.
DGX H100 systems are the building blocks of the next-generation NVIDIA DGX POD and NVIDIA DGX SuperPOD AI infrastructure platforms.
The latest DGX SuperPOD architecture features a new NVIDIA NVLink Switch System that can connect up to 32 nodes with a total of 256 H100 GPUs.
Providing 1 exaflop of FP8 AI performance, six times more than its predecessor, the next-generation DGX SuperPOD expands the frontiers of AI with the ability to run massive LLM workloads with trillions of parameters.
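The performance figures above follow from simple multiplication. A minimal sketch of the arithmetic, assuming roughly 4 petaflops of FP8 throughput per H100 GPU (a rounded figure inferred from the per-system number quoted in this article, not an official specification):

```python
# Back-of-the-envelope check of the FP8 performance figures.
# Assumption: ~4 PFLOPS of FP8 per H100 GPU (rounded, inferred
# from the 32-petaflop per-system figure in the article).
FP8_PFLOPS_PER_H100 = 4
GPUS_PER_DGX_H100 = 8
NODES_PER_SUPERPOD = 32

# Per-system: 8 GPUs x 4 PFLOPS = 32 PFLOPS of FP8 AI performance.
dgx_h100_pflops = FP8_PFLOPS_PER_H100 * GPUS_PER_DGX_H100

# Per-SuperPOD: 32 nodes x 8 GPUs = 256 GPUs.
superpod_gpus = GPUS_PER_DGX_H100 * NODES_PER_SUPERPOD

# 32 nodes x 32 PFLOPS = 1,024 PFLOPS, i.e. roughly 1 exaflop.
superpod_pflops = dgx_h100_pflops * NODES_PER_SUPERPOD

print(dgx_h100_pflops, superpod_gpus, superpod_pflops)
```

The per-GPU figure is the only assumption; the system and SuperPOD totals then match the numbers quoted in the announcement.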
“AI has fundamentally changed what software can do and how it is produced. Companies revolutionizing their industries with AI realize the importance of their AI infrastructure,” said Jensen Huang, founder and CEO of NVIDIA. “Our new DGX H100 systems will power enterprise AI factories to refine data into our most valuable resource – intelligence.”
NVIDIA will be the first to build a DGX SuperPOD with the new AI architecture to power the work of NVIDIA researchers advancing climate science, digital biology and the future of AI.
Its “Eos” supercomputer is expected to be the world’s fastest AI system after it begins operations later this year, featuring a total of 576 DGX H100 systems with 4,608 H100 GPUs.
NVIDIA Eos is anticipated to provide 18.4 exaflops of AI computing performance, four times faster AI processing than the Fugaku supercomputer in Japan, which is currently the world’s fastest system. For traditional scientific computing, Eos is expected to provide 275 petaflops of performance.
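The Eos totals can be sanity-checked from the per-system figures given earlier, namely eight GPUs and 32 FP8 petaflops per DGX H100:

```python
# Sanity-check of the Eos aggregate figures using the
# per-system numbers quoted in the article.
EOS_SYSTEMS = 576
GPUS_PER_SYSTEM = 8
PFLOPS_PER_SYSTEM = 32  # FP8 AI performance per DGX H100

# 576 systems x 8 GPUs = 4,608 H100 GPUs.
eos_gpus = EOS_SYSTEMS * GPUS_PER_SYSTEM

# 576 systems x 32 PFLOPS = 18,432 PFLOPS, i.e. ~18.4 exaflops.
eos_exaflops = EOS_SYSTEMS * PFLOPS_PER_SYSTEM / 1000

print(eos_gpus, eos_exaflops)
```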
DGX H100 systems easily scale to meet the demands of AI as enterprises grow from initial projects to broad deployments.
In addition to eight H100 GPUs with an aggregate of 640 billion transistors, each DGX H100 system includes two NVIDIA BlueField-3 DPUs to offload, accelerate and isolate advanced networking, storage and security services.
Eight NVIDIA ConnectX-7 Quantum-2 InfiniBand networking adapters provide 400 gigabits per second of throughput to connect with computing and storage – double the speed of the prior-generation system. And fourth-generation NVLink, combined with NVSwitch, provides 900 gigabytes per second of connectivity between every GPU in each DGX H100 system, 1.5 times more than the prior generation.
DGX H100 systems use dual x86 CPUs and can be combined with NVIDIA networking and storage from NVIDIA partners to make flexible DGX PODs for AI computing of any size.
The DGX H100 nodes and H100 GPUs in a DGX SuperPOD are connected by the NVLink Switch System and NVIDIA Quantum-2 InfiniBand, providing a total of 70 terabytes per second of bandwidth – 11 times more than the previous generation. Storage from NVIDIA partners will be tested and certified to meet the demands of DGX SuperPOD AI computing.
DGX H100 systems will be available from NVIDIA partners beginning in the third quarter of this year.