MLCommons has published results for its industry AI performance benchmark, MLPerf Training v4.0, which demonstrate the choice that Intel Gaudi 2 AI accelerators give enterprises and customers.

Community-based software simplifies generative AI (GenAI) development, while industry-standard Ethernet networking enables flexible scaling of AI systems.

For the first time on the MLPerf benchmark, Intel submitted results for a large Gaudi 2 system of 1 024 accelerators, trained on the Intel Tiber Developer Cloud, to demonstrate Gaudi 2 performance and scalability, as well as Intel’s cloud capacity, for training MLPerf’s GPT-3 175B-parameter benchmark model.

“The industry has a clear need: address the gaps in today’s generative AI enterprise offerings with high-performance, high-efficiency compute options,” says Zane Ball, Intel corporate vice-president and general manager of DCAI product management.

“The latest MLPerf results published by MLCommons illustrate the unique value Intel Gaudi brings to market as enterprises and customers seek more cost-efficient, scalable systems with standard networking and open software, making GenAI more accessible to more customers.”

More customers want to benefit from GenAI but are unable to because of cost, scale and development requirements. Only 10% of enterprises successfully moved GenAI projects into production last year; Intel’s AI offerings address the challenges businesses face in scaling such initiatives.

Intel Gaudi 2 is an accessible, scalable solution that has proven its ability to handily train large language models (LLMs) from 70-billion to 175-billion parameters.

The soon-to-be-released Intel Gaudi 3 accelerator will bring a leap in performance, as well as openness and choice to enterprise GenAI.

The MLPerf results show Gaudi 2 remains the only MLPerf-benchmarked alternative to the Nvidia H100 for AI compute. Intel’s GPT-3 time-to-train (TTT) result of 66,9 minutes, achieved on a 1 024-accelerator Gaudi 2 system in the Tiber Developer Cloud, demonstrates strong scaling performance on ultra-large LLMs within a developer cloud environment1.

The benchmark suite featured a new measurement: fine-tuning the Llama 2 70B-parameter model using low-rank adaptation (LoRA). Fine-tuning LLMs is a common task for many customers and AI practitioners, making it a relevant benchmark for everyday applications.

Intel’s submission achieved a time-to-train of 78,1 minutes on eight Gaudi 2 accelerators.
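
For context, LoRA works by freezing a model’s pre-trained weights and training small low-rank adapter matrices instead. The sketch below shows the general shape of such a setup using the open-source Hugging Face PEFT library; it is a minimal illustration rather than Intel’s actual submission code, and the model name, rank and target modules are assumptions chosen for the example.

    # Minimal LoRA fine-tuning sketch using the Hugging Face PEFT library.
    # Illustrative only: the model name, rank and target modules below are
    # example choices, not the MLPerf submission's settings.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

    # LoRA freezes the base weights and trains small low-rank adapter
    # matrices injected into the attention projections, reducing the
    # number of trainable parameters by orders of magnitude.
    lora_config = LoraConfig(
        r=16,                                 # rank of the adapter matrices
        lora_alpha=32,                        # adapter scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only a tiny fraction is trainable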

Intel’s submission used the open-source Optimum Habana library, leveraging DeepSpeed ZeRO-3 to optimise memory efficiency and scaling during large-model training, as well as Flash-Attention-2 to accelerate attention mechanisms.
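
As a rough illustration of what a ZeRO-3 setup involves, the fragment below shows a DeepSpeed configuration of the kind that can be passed to a Hugging Face trainer. The values are placeholders chosen for the example, not those of Intel’s submission.

    # Illustrative DeepSpeed ZeRO-3 configuration; the values are
    # placeholders, not Intel's MLPerf submission settings. ZeRO stage 3
    # partitions optimiser states, gradients and model parameters across
    # workers, cutting per-device memory during large-model training.
    ds_config = {
        "train_micro_batch_size_per_gpu": "auto",
        "bf16": {"enabled": True},
        "zero_optimization": {
            "stage": 3,            # also partition the parameters themselves
            "overlap_comm": True,  # overlap communication with computation
            "stage3_gather_16bit_weights_on_model_save": True,
        },
    }
    # With Hugging Face transformers, this dict can be passed directly,
    # e.g. TrainingArguments(..., deepspeed=ds_config).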

The benchmark task force, led by the engineering teams from Intel’s Habana Labs and Hugging Face, is responsible for the reference code and benchmark rules.

To date, high costs have priced many enterprises out of the market. Gaudi is starting to change that. At Computex, Intel announced that a standard AI kit including eight Intel Gaudi 2 accelerators with a universal baseboard (UBB), offered to system providers at $65 000, is estimated to be one-third the cost of comparable competitive platforms. A kit including eight Intel Gaudi 3 accelerators with a UBB lists at $125 000, estimated to be two-thirds the cost of comparable competitive platforms2.