The full-stack Nvidia accelerated computing platform has demonstrated high performance in the latest MLPerf Training v4.0 benchmarks.

Nvidia more than tripled the performance on the large language model (LLM) benchmark, based on GPT-3 175B, compared to the record-setting Nvidia submission made last year.

Using Eos, an AI supercomputer featuring 11 616 Nvidia H100 Tensor Core GPUs connected with Nvidia Quantum-2 InfiniBand networking, Nvidia achieved this feat through larger scale (more than triple the 3 584 H100 GPU submission of a year ago) and extensive full-stack engineering.

Thanks to the scalability of the Nvidia AI platform, Eos can now train massive AI models like GPT-3 175B even faster, and that performance translates into significant business opportunities.

The Nvidia H200 Tensor Core GPU builds on the strengths of the Hopper architecture, with 141GB of HBM3e memory and over 40% more memory bandwidth than the H100 GPU. In its MLPerf Training debut, the H200 extended the H100's performance by up to 47%.

Additionally, submissions using a 512 H100 GPU configuration are now up to 27% faster compared to just one year ago due to numerous optimisations to the Nvidia software stack. This improvement highlights how continuous software enhancements can significantly boost performance, even with the same hardware.

The work also delivered nearly perfect scaling. As the number of GPUs increased by 3,2x — going from 3 584 H100 GPUs last year to 11 616 H100 GPUs with this submission — so did the delivered performance.
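The scale-up factor quoted above follows directly from the two GPU counts; a quick check in plain Python:

```python
# GPU counts from the two submissions cited above.
gpus_2023 = 3584
gpus_2024 = 11616

scale = gpus_2024 / gpus_2023
print(f"Scale-up factor: {scale:.2f}x")  # roughly 3.24, reported as 3,2x

# "Nearly perfect" scaling means delivered performance grew by roughly
# the same factor as the GPU count, i.e. per-GPU throughput stayed
# close to constant even as the cluster more than tripled in size.
```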

As enterprises seek to customise pretrained large language models, LLM fine-tuning is becoming a key industry workload. MLPerf introduced a new LLM fine-tuning benchmark this round, based on the popular low-rank adaptation (LoRA) technique applied to Meta Llama 2 70B.

The Nvidia platform excelled at this task, scaling from eight to 1 024 GPUs, with the largest-scale Nvidia submission completing the benchmark in a record 1,5 minutes.
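For readers unfamiliar with the technique, low-rank adaptation freezes the pretrained weight matrix W and learns only a low-rank update BA, so fine-tuning touches far fewer parameters than full training. A minimal NumPy sketch of the idea (illustrative only; the layer sizes and the `alpha` and `rank` values are arbitrary toy choices, not taken from the Llama 2 70B benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank, alpha = 512, 512, 8, 16  # toy sizes, not benchmark values

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the adapted layer
# initially matches the pretrained one exactly.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def lora_forward(x):
    """LoRA-adapted linear layer: y = Wx + (alpha / rank) * BAx."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical before any training

# Parameter savings: only A and B are trained, W stays frozen.
full = W.size
lora = A.size + B.size
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```

With these toy shapes the trainable parameter count drops to about 3% of the full matrix; the savings grow as the layer dimensions increase relative to the rank, which is what makes the technique attractive at 70B-parameter scale.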

Nvidia also accelerated Stable Diffusion v2 training performance by up to 80% at the same system scales submitted last round. These advances reflect numerous enhancements to the Nvidia software stack, showcasing how software and hardware improvements go hand-in-hand to deliver top-tier performance.

On the new graph neural network (GNN) test based on R-GAT, the Nvidia platform with H100 GPUs excelled at both small and large scales. The H200 delivered a 47% boost on single-node GNN training compared to the H100. This showcases the powerful performance and high efficiency of Nvidia GPUs, which make them ideal for a wide range of AI applications.

Ten Nvidia partners submitted results, including ASUS, Dell Technologies, Fujitsu, Gigabyte, Hewlett Packard Enterprise, Lenovo, Oracle, Quanta Cloud Technology, Supermicro and Sustainable Metal Cloud.