Agentic AI is redefining scientific discovery, unlocking research breakthroughs and innovations across industries. Through a deepened collaboration, Nvidia and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC.

At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. The platform is intended to help research and development departments across industries accelerate time to market for new products, while speeding up and broadening the end-to-end discovery process for scientists.

Microsoft Discovery will integrate the Nvidia Alchemi NIM microservice, which optimises AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation.

The platform will also integrate Nvidia BioNeMo NIM microservices, tapping into pre-trained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries.
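NIM microservices are typically deployed as containers that expose an HTTP inference endpoint. As a rough illustration of what calling such a service involves, the sketch below assembles a JSON inference request for a molecular property prediction model. The endpoint URL, route, and payload schema here are assumptions for illustration only; the real request shape is defined by each NIM's own API reference.

```python
import json

# Hypothetical endpoint for a locally deployed NIM microservice --
# the actual route and port depend on the deployment.
NIM_URL = "http://localhost:8000/v1/infer"  # assumed, not a documented route

def build_request(smiles_list):
    """Assemble a JSON inference request for a hosted chemistry model.

    `smiles_list` holds molecules in SMILES notation; the property
    name below is illustrative, not the Alchemi NIM's real schema.
    """
    return json.dumps({
        "input": {"molecules": [{"smiles": s} for s in smiles_list]},
        "properties": ["formation_energy"],  # hypothetical property key
    })

# Build a request for ethanol and benzene and show the payload;
# in practice this JSON would be POSTed to NIM_URL.
payload = build_request(["CCO", "c1ccccc1"])
print(payload)
```

The point of the sketch is the workflow, not the schema: researchers interact with the accelerated model through a simple request/response API rather than managing the underlying GPU inference stack.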

In testing, researchers at Microsoft used Microsoft Discovery to identify a novel coolant prototype with promising properties for immersion cooling in data centres in under 200 hours, rather than the months or years required by traditional methods.

Microsoft is rapidly deploying hundreds of thousands of Nvidia Blackwell GPUs using Nvidia GB200 NVL72 rack-scale systems across AI-optimised Azure data centres around the world, boosting performance and efficiency. Customers including OpenAI are already running production workloads on this infrastructure today.

Microsoft expects each of these Azure AI data centres to offer 10x the performance of today’s fastest supercomputer in the world and to be powered by 100% renewable energy by the end of this year.

Azure’s ND GB200 v6 virtual machines – built on this rack-scale architecture with up to 72 Nvidia Blackwell GPUs per rack and advanced liquid cooling – deliver up to 35x more inference throughput compared with previous ND H100 v5 VMs accelerated by eight Nvidia H100 GPUs, setting a new benchmark for AI workloads.

This scale and performance are underpinned by custom server designs, high-speed Nvidia NVLink interconnects and Nvidia Quantum InfiniBand networking – enabling seamless scaling to thousands of Blackwell GPUs for demanding generative and agentic AI applications.