Nutanix has announced a collaboration with Nvidia aimed at helping enterprises more easily adopt generative AI (GenAI).
Through the integration of Nvidia NIM inference microservices with Nutanix GPT-in-a-Box 2.0, customers will be able to build scalable, secure, high-performance GenAI applications across the enterprise and at the edge.
Today, most AI innovation is centred on the public cloud, because that is where the infrastructure and tooling needed to support AI applications are most readily available. Additionally, only the largest enterprises, with teams of data scientists, have made meaningful progress in GenAI adoption.
However, most enterprises are looking to invest in supporting their AI strategy, including boosting their investment at the edge, according to the State of Enterprise AI report. What’s missing is a fast track for organisations to mainstream GenAI beyond the public cloud, across the enterprise, and at the edge.
Nutanix’s integration of Nvidia NIM microservices will let customers use Nutanix GPT-in-a-Box 2.0, built on the company’s data services and compute platform, to simplify AI model deployment and run enterprise AI/ML applications more effectively and efficiently. This will expand access to the growing catalogue of Nvidia NIM microservices across the enterprise and at the edge, helping to fast-track GenAI initiatives without requiring a team of data scientists.
Nutanix’s collaboration with Nvidia aims to simplify a process many enterprises find challenging today: making all the decisions required to stand up AI solutions. These include choosing among hundreds of thousands of models, serving engines, and supporting infrastructure, often while lacking the new skill sets needed to deliver GenAI solutions to their customers.
Nutanix GPT-in-a-Box simplifies building an AI-ready stack, integrated with Nutanix Objects and Nutanix Files for model and data storage, enabling customers to maintain control over their data. New features delivered in GPT-in-a-Box 2.0 will also automate deploying and running inference endpoints for a wide range of AI models and secure access to the model using fine-grained access control and auditing.
Running on top of the Nutanix Cloud Platform, NIM microservices will enable seamless AI inferencing on a wide range of models, including open-source community models, Nvidia AI Foundation models, and custom models, leveraging industry-standard application programming interfaces. To support the integration, Nutanix also announced certification for the Nvidia AI Enterprise 5.0 software platform for streamlining the development and deployment of production-grade AI, including Nvidia NIM.
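NIM inference endpoints follow the widely used OpenAI-compatible chat-completions convention. As a rough sketch of what a client request might look like — the endpoint URL and model name below are illustrative placeholders, not values from the announcement — a payload could be built like this:

```python
import json

# Hypothetical values: a real deployment supplies its own endpoint
# URL and model name. These are illustrative placeholders only.
NIM_URL = "http://nim.example.internal:8000/v1/chat/completions"
MODEL = "meta/llama3-8b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for an inference endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarise our Q3 support tickets.")
print(json.dumps(payload, indent=2))

# Sending the request is then an ordinary HTTP POST, e.g. with the
# `requests` library (commented out here to keep the sketch offline):
#   response = requests.post(NIM_URL, json=payload, timeout=30)
```

Because the interface is a standard REST API, existing OpenAI-compatible client libraries can typically be pointed at such an endpoint simply by changing the base URL.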
“Enterprises are looking to simplify GenAI adoption, and Nutanix enables customers to move to production more easily while maintaining control, privacy, and cost,” says Tarkan Maner, chief commercial officer at Nutanix. “This collaboration will add to this value by making it even easier for customers to leverage Nvidia’s latest innovation with NIM.”
Manuvir Das, vice-president of enterprise computing at Nvidia, comments: “Across every industry, enterprises are working to efficiently integrate AI into the cloud and data platforms that power their operations. The integration of Nvidia NIM into Nutanix GPT-in-a-Box gives enterprises an AI-ready solution for rapidly deploying optimised models in production.”