Amazon Web Services (AWS) and OpenAI have announced a multi-year strategic partnership under which OpenAI will run and scale its core artificial intelligence (AI) workloads on AWS infrastructure, effective immediately.

Under the new $38 billion agreement, which is expected to grow over the next seven years, OpenAI gains access to AWS compute comprising hundreds of thousands of state-of-the-art Nvidia GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads.

AWS has deep experience running large-scale AI infrastructure securely and reliably, with clusters exceeding 500K chips. AWS's leadership in cloud infrastructure, combined with OpenAI's pioneering advancements in generative AI, will help millions of users continue to get value from ChatGPT.

The rapid advancement of AI technology has created unprecedented demand for computing power. As frontier model providers push their models to new heights of intelligence, they are increasingly turning to AWS for the performance, scale, and security it can deliver.

OpenAI will immediately start utilising AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026, and the ability to expand further into 2027 and beyond.

The infrastructure deployment that AWS is building for OpenAI features an architectural design optimised for AI processing efficiency and performance.

Clustering the Nvidia GPUs (both GB200s and GB300s) via Amazon EC2 UltraServers on the same network enables low-latency communication across interconnected systems, allowing OpenAI to run workloads efficiently. The clusters are designed to support a range of workloads, from serving inference for ChatGPT to training next-generation models, with the flexibility to adapt to OpenAI's evolving needs.
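To give a sense of what clustering GPU instances on a shared low-latency network involves, here is a minimal, illustrative boto3 sketch (not OpenAI's or AWS's actual deployment code) that places instances into a cluster placement group and attaches an Elastic Fabric Adapter (EFA) interface. The AMI, subnet, security group, placement-group name, and instance type shown are hypothetical placeholders standing in for whatever the real deployment uses.

```python
"""Illustrative sketch: launch GPU instances on a shared low-latency fabric.

All resource IDs and the instance type below are placeholders, not the
actual configuration described in the article.
"""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances onto the same high-bandwidth
# network segment, which keeps GPU-to-GPU latency low.
ec2.create_placement_group(
    GroupName="gpu-training-cluster",  # hypothetical name
    Strategy="cluster",
)

# Launch GPU instances into that placement group with an EFA network
# interface for high-throughput, low-latency interconnect.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="p5.48xlarge",            # stand-in GPU instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "gpu-training-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
        "InterfaceType": "efa",
    }],
)

print([instance["InstanceId"] for instance in response["Instances"]])
```

The same pattern scales from a handful of nodes to large training or inference fleets; the placement strategy and EFA networking are what keep the interconnected systems behaving like a single low-latency cluster.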

“Scaling frontier AI requires massive, reliable compute,” says OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

Matt Garman, CEO of AWS, adds: “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”