Dell Technologies and Red Hat are working together to bring Red Hat Enterprise Linux AI (RHEL AI) to Dell PowerEdge servers. RHEL AI is a foundation model platform built on an AI-optimised operating system that enables users to develop, test and deploy artificial intelligence (AI) and generative AI (GenAI) models more seamlessly.

The collaboration will help organisations more readily implement successful AI and machine learning (AI/ML) strategies to scale their IT systems and power enterprise applications across their businesses.

RHEL AI brings together open source-licensed Granite large language models (LLMs) from IBM Research, InstructLab model alignment tools based on the LAB (Large-scale Alignment for chatBots) methodology, and a community-driven approach to model development through the InstructLab project. The solution is packaged as an optimised, bootable Red Hat Enterprise Linux (RHEL) image for individual server deployments across the hybrid cloud. It is also included as part of Red Hat OpenShift AI, Red Hat’s hybrid cloud machine learning operations (MLOps) platform, for running models and InstructLab at scale across distributed cluster environments.
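As a rough illustration only, a minimal sketch of how an application might consume such a deployment, assuming a Granite model is already being served on a PowerEdge node through RHEL AI's vLLM-based, OpenAI-compatible inference endpoint; the URL, port and model identifier below are placeholders rather than details from the announcement:

    import requests

    # Placeholder endpoint and model name: adjust to match the Granite model
    # actually being served (the serving runtime exposes an OpenAI-compatible
    # chat completions API).
    ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local endpoint
    MODEL = "granite-7b-lab"  # hypothetical model identifier

    payload = {
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Summarise what InstructLab model alignment does."}
        ],
        "temperature": 0.2,
    }

    # Send the chat request and print the model's reply.
    response = requests.post(ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])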

Joe Fernandes, vice-president and GM: GenAI foundation model platforms at Red Hat, says: “AI by nature requires extensive resources spanning enabled servers, compute power and GPUs. As organisations evaluate and implement gen AI use cases, it is imperative that they build on a platform that is able to scale with their business while also providing the agility to experiment and develop AI-driven innovations.

“By collaborating with Dell Technologies to validate and empower RHEL AI on Dell PowerEdge servers, we are enabling customers with greater confidence and flexibility to harness the power of gen AI workloads across hybrid cloud environments and propel their business into the future.”

Arun Narayanan, senior vice-president at Dell Technologies, adds: “Validating RHEL AI for AI workloads on Dell PowerEdge servers provides customers with greater confidence that the servers, GPUs, and foundational platforms are tested and validated on an ongoing basis. This simplifies the gen AI user experience and accelerates the process to build and deploy critical AI workloads on a trusted software stack.”

Bob Pette, vice-president: enterprise platforms at Nvidia, comments: “In today’s fast-paced market, it is critical for organisations to be equipped with validated and trusted AI-enabled solutions to kick-start their gen AI use cases. Red Hat and Dell will extend gen AI capabilities for customers with an optimised experience for Nvidia accelerated computing, including Nvidia H100 Tensor Core GPUs, with Dell PowerEdge servers and RHEL AI.”