The Nvidia Blackwell platform set records in the latest MLPerf Inference V5.0 benchmarks.
The results marked Nvidia’s first MLPerf submission using the Nvidia GB200 NVL72 system, a rack-scale solution designed for AI reasoning, and the testing reflected some of the most challenging inference scenarios.
The latest updates to MLPerf Inference, a peer-reviewed industry benchmark of inference performance, include the addition of Llama 3.1 405B, one of the largest and most challenging-to-run open-weight models. The new Llama 2 70B Interactive benchmark features much stricter latency requirements compared with the original Llama 2 70B benchmark, better reflecting the constraints of production deployments in delivering the best possible user experiences.
In addition to the Blackwell platform, the Nvidia Hopper platform demonstrated exceptional performance across the board, with performance increasing significantly over the last year on Llama 2 70B thanks to full-stack optimisations.
New Records
The GB200 NVL72 system — connecting 72 Nvidia Blackwell GPUs to act as a single, massive GPU — delivered up to 30x higher throughput on the Llama 3.1 405B benchmark over the Nvidia H200 NVL8 submission this round. This feat was achieved through more than triple the performance per GPU and a 9x larger Nvidia NVLink interconnect domain.
While many companies run MLPerf benchmarks on their hardware to gauge performance, only Nvidia and its partners submitted and published results on the Llama 3.1 405B benchmark.
Production inference deployments often have latency constraints on two key metrics. The first is time to first token (TTFT), or how long it takes for a user to begin seeing a response to a query given to a large language model. The second is time per output token (TPOT), or how quickly tokens are delivered to the user.
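Both metrics can be computed directly from the arrival timestamps of streamed tokens. The sketch below is illustrative only — the function name and timings are hypothetical, not drawn from any MLPerf harness — but it shows how TTFT and TPOT relate to a streamed response:

```python
# Hedged sketch: deriving TTFT and TPOT from token arrival timestamps.
# latency_metrics and all values here are illustrative assumptions.

def latency_metrics(request_time: float, token_times: list[float]) -> tuple[float, float]:
    """Return (TTFT, mean TPOT) in seconds for one streamed response."""
    # Time to first token: delay before the user sees anything.
    ttft = token_times[0] - request_time
    if len(token_times) > 1:
        # Time per output token: average gap between consecutive tokens.
        tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    else:
        tpot = 0.0
    return ttft, tpot

# Example: request sent at t=0, first token at 0.5 s, then one every 0.1 s.
ttft, tpot = latency_metrics(0.0, [0.5, 0.6, 0.7, 0.8])
```

A tighter TPOT limit therefore forces the system to sustain a higher token delivery rate per user, which is what makes the Interactive benchmark harder than the original.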
The new Llama 2 70B Interactive benchmark has a 5x shorter TPOT and 4.4x lower TTFT — modeling a more responsive user experience. On this test, Nvidia’s submission using an Nvidia DGX B200 system with eight Blackwell GPUs tripled performance over a submission using eight Nvidia H200 GPUs, setting a high bar for this more challenging version of the Llama 2 70B benchmark.
Combining the Blackwell architecture and its optimised software stack delivers new levels of inference performance, paving the way for AI factories to deliver higher intelligence, increased throughput and faster token rates.
This MLPerf round, 15 partners submitted results on the Nvidia platform, including ASUS, Cisco, CoreWeave, Dell Technologies, Fujitsu, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Oracle Cloud Infrastructure, Quanta Cloud Technology, Supermicro, Sustainable Metal Cloud and VMware.