Today, OpenAI announced a strategic partnership with Amazon Web Services (AWS) that will allow the maker of ChatGPT to run its advanced AI workloads on AWS infrastructure. The deal takes effect immediately.
AWS is providing OpenAI with Amazon EC2 UltraServers, which feature hundreds of thousands of Nvidia GPUs and can scale to tens of millions of CPUs for advanced generative AI workloads.
The seven-year deal represents a $38 billion commitment and will help OpenAI “rapidly expand compute capacity while benefiting from the price, performance, scale, and security of AWS”, the official press release says. It continues: “AWS has unusual experience running large-scale AI infrastructure securely, reliably, and at scale–with clusters topping 500K chips. AWS’s leadership in cloud infrastructure combined with OpenAI’s pioneering advancements in generative AI will help millions of users continue to get value from ChatGPT”.
All of the AWS capacity covered by this deal will be deployed before the end of 2026, with an option to expand further from 2027 onwards. The deployment's architecture clusters Nvidia GPUs (both GB200s and GB300s) on the same network for low-latency communication across interconnected systems, letting OpenAI run its workloads with optimal performance.
Source