AWS, OpenAI Announce $38 Billion AI Partnership

Amazon Web Services (AWS) and OpenAI have announced a strategic multi-year partnership valued at $38 billion to provide advanced computing infrastructure for OpenAI's core artificial intelligence (AI) workloads. Under the agreement, OpenAI will gain access to hundreds of thousands of state-of-the-art NVIDIA graphics processing units (GPUs), with the ability to expand to tens of millions of central processing units (CPUs) to support large-scale AI operations.
The partnership addresses the rapidly growing demand for computing power across the AI industry. As model providers develop increasingly sophisticated systems, they need massive infrastructure that can handle complex computational tasks reliably and securely. AWS will deploy custom infrastructure designed specifically for OpenAI's requirements, featuring clusters of NVIDIA GPUs connected through Amazon EC2 UltraServers to enable low-latency processing across interconnected systems.
The deployment will support multiple AI workloads, from serving inference for ChatGPT to training next-generation models. Full capacity is targeted before the end of 2026, with room for further expansion into 2027 and beyond. AWS has a demonstrated track record of running large-scale AI infrastructure securely, including experience managing clusters of more than 500,000 chips.
AWS and OpenAI have worked together before to broaden access to AI models. OpenAI's open-weight foundation models are available through Amazon Bedrock, AWS's fully managed service for accessing high-performing foundation models. That expansion brought additional model options to millions of AWS customers for applications such as coding, scientific analysis, and mathematical problem-solving.