
Anthropic Expands Google Cloud TPU Deal Worth Billions


Anthropic has announced a major expansion of its partnership with Google Cloud, securing access to up to one million Tensor Processing Unit (TPU) chips in a deal valued at tens of billions of dollars. The artificial intelligence startup confirmed the agreement will bring more than one gigawatt of computing capacity online in 2026, marking the largest expansion of its TPU usage to date.

The expansion aims to provide Anthropic with the computational resources necessary to train and serve future generations of its Claude AI models. The company selected Google's TPUs for their price-performance and efficiency, as well as its existing experience using the chips for model training and deployment.

Krishna Rao, Chief Financial Officer at Anthropic, stated that the partnership will help the company continue growing the computing infrastructure needed to advance AI development. He emphasized that customers ranging from Fortune 500 companies to AI-native startups rely on Claude for critical business operations, and the expanded capacity will enable the company to meet exponentially growing demand while maintaining competitive model performance.

Thomas Kurian, Chief Executive Officer at Google Cloud, noted that Anthropic's decision to significantly expand TPU usage reflects the strong price-performance and efficiency Anthropic has experienced with the chips over several years. Google Cloud continues to enhance the efficiency and capacity of its TPU infrastructure, building on a mature AI accelerator portfolio that includes Ironwood, its seventh-generation TPU.

Industry analysts estimate that establishing a one-gigawatt data center requires approximately 50 billion dollars in investment, with around 35 billion dollars typically allocated for chip procurement alone. The scale of this agreement underscores the massive capital requirements driving the AI industry as companies race to develop increasingly sophisticated models.
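Taking the analyst figures above at face value, the implied cost split can be checked with a quick back-of-the-envelope calculation (the numbers are illustrative estimates from the article, not confirmed deal terms):

```python
# Rough cost breakdown for a one-gigawatt AI data center,
# using the analyst estimates cited above (illustrative only).
total_cost_usd = 50e9   # ~$50 billion per gigawatt of capacity
chip_cost_usd = 35e9    # ~$35 billion of that for chip procurement

chip_share = chip_cost_usd / total_cost_usd
print(f"Chips account for ~{chip_share:.0%} of the build-out cost")
# → Chips account for ~70% of the build-out cost
```

In other words, on these estimates roughly 70 percent of a gigawatt-scale build-out goes to accelerators alone, which is why chip-supply deals of this size dominate AI infrastructure spending.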

Anthropic and Google Cloud first established their strategic partnership in 2023, when Anthropic selected Google Cloud as a key infrastructure provider for training its AI models. Through this collaboration, Claude models became available to businesses via Google Cloud’s Vertex AI platform and Google Cloud Marketplace.

Today, thousands of businesses utilize Claude models on Google Cloud infrastructure, including notable companies such as Figma, Palo Alto Networks, and Cursor. The partnership has proven valuable for both organizations as enterprise adoption of AI technologies continues to accelerate.

Anthropic now serves more than 300,000 business customers, and the number of large accounts representing more than 100,000 dollars in run-rate revenue has grown nearly sevenfold in the past year. This rapid customer growth has driven the need for substantially increased computing capacity to support model training, inference operations, and ongoing research and development activities.

The expanded computational resources will also support more thorough testing, alignment research, and responsible deployment practices at scale. Anthropic has positioned itself as a company focused on AI safety and building models specifically designed for enterprise use cases.

Despite this major expansion with Google Cloud, Anthropic maintains a diversified compute strategy across multiple chip platforms. The company continues to work with three primary chip platforms: Google's TPUs, Amazon's Trainium chips, and NVIDIA's GPUs. This multi-platform approach allows Anthropic to advance Claude's capabilities while maintaining strong partnerships across the industry.

Anthropic remains committed to its partnership with Amazon, which serves as the company’s primary training partner and cloud provider. The company continues collaborating with Amazon on Project Rainier, a massive compute cluster featuring hundreds of thousands of AI chips distributed across multiple data centers in the United States.

The announcement comes as the AI industry experiences insatiable demand for computing chips and infrastructure. Companies are investing massive amounts of capital to develop technology capable of matching or surpassing human intelligence. Rival OpenAI recently signed multiple deals potentially costing over one trillion dollars to secure approximately 26 gigawatts of computing capacity, enough to power roughly 20 million American homes.
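The homes comparison above can be sanity-checked: dividing 26 gigawatts across roughly 20 million households implies an average draw per home consistent with typical US residential consumption (a quick estimate, using only the figures in the article):

```python
# Sanity-check on the "20 million American homes" comparison:
# 26 GW spread across ~20 million homes implies the average
# continuous draw per household.
capacity_w = 26e9          # 26 gigawatts, in watts
homes = 20e6               # ~20 million homes

avg_per_home_kw = capacity_w / homes / 1e3
print(f"~{avg_per_home_kw:.1f} kW average per home")
# → ~1.3 kW average per home
```

That works out to about 1.3 kW per household, close to the average US residential load, so the comparison is internally consistent.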

Anthropic was founded in 2021 by former researchers from OpenAI and has adopted a measured approach emphasizing efficiency, diversification, and a concentrated focus on enterprise customers. The company’s annual revenue run rate is nearing seven billion dollars, representing substantial growth driven by enterprise product adoption.

The Claude coding assistant generated millions in annualized revenue within just two months of launch, making it the company’s fastest-growing product. The Claude series of language models operates on Google’s TPUs, Amazon’s custom Trainium chips, and NVIDIA’s GPUs, with each platform designated for specific tasks like training, inference, and research.

Anthropic’s capacity to distribute workloads across multiple vendors allows for optimization concerning cost, performance, and power limitations. Industry insiders note that each dollar spent on computing resources extends further under this multi-cloud model compared to setups confined to a single vendor.

Google has invested approximately three billion dollars in Anthropic over the past couple of years, while Amazon has invested around eight billion dollars, significantly outpacing Google's stake. However, Anthropic maintains control over model design, pricing, and customer relationships, with no exclusivity granted to any cloud vendor.
