
How Anthropic and Google plan to challenge NVIDIA’s AI dominance

Anthropic, Google, and Broadcom struck an important compute deal, reshaping AI infrastructure and aiming to challenge NVIDIA’s dominance.

Published on April 9, 2026


© Anthropic

Team IO+ selects and features the most important news stories on innovation and technology, carefully curated by our editors.

American AI lab Anthropic is ramping up its compute capacity. Semiconductor infrastructure company Broadcom disclosed that it will supply 3.5 gigawatts of computing capacity by 2027, enough to power 35 million laptops simultaneously.
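As a sanity check on that comparison, 3.5 gigawatts spread across 35 million devices works out to about 100 watts per laptop, a plausible figure for a machine under heavy load (the per-device wattage is an assumption, not a figure from the announcement):

```python
# Back-of-the-envelope check: 3.5 GW across 35 million laptops
capacity_watts = 3.5e9        # 3.5 gigawatts in watts
laptops = 35_000_000
watts_per_laptop = capacity_watts / laptops
print(watts_per_laptop)  # 100.0
```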

Under this partnership, the maker of Claude AI is expanding its use of Google Cloud’s tensor processing units (TPUs), Google’s most advanced AI chips. In October, the AI lab had already secured a deal for over a gigawatt of compute capacity.

Like other AI developers, including France’s Mistral AI, Anthropic is moving towards owning and operating its own infrastructure. As part of the agreement, Anthropic will also have access to next-generation custom silicon through 2031, a move intended to challenge NVIDIA’s dominance in the AI chip market.

Anthropic’s financial and customer growth 

Anthropic’s infrastructure expansion stems directly from explosive financial growth. Concurrently with the announcement of the deal with Broadcom and Google, the AI lab reported an annualized revenue run rate of $30 billion. 

The run rate is a projection of a company’s annual revenue based on a shorter recent period, typically a quarter, scaled up to a full year. The figure reported by Anthropic represents a massive surge from the $9 billion run rate recorded at the end of 2025. 
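The calculation described above is straightforward to sketch. The figures below are illustrative, not Anthropic's reported quarterly numbers:

```python
# Annualized run rate: scale revenue from a shorter recent period
# (typically a quarter) up to a full 12 months.
def annualized_run_rate(period_revenue: float, period_months: int) -> float:
    """Project annual revenue from a shorter reporting period."""
    return period_revenue * (12 / period_months)

# e.g. $7.5B booked in a single quarter annualizes to a $30B run rate
print(annualized_run_rate(7.5e9, 3))  # 30000000000.0
```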

Enterprise adoption drives this revenue spike. Anthropic now serves over 1,000 business customers who spend more than $1 million annually on its Claude services. This high-value customer base doubled in less than two months, up from just 500 in February 2026. As customer demand rises, Anthropic’s infrastructure has to scale for the company to maintain its competitive edge against rivals like OpenAI, whose revenue run rate recently declined to $24 billion.

By committing to a direct hardware partnership, Anthropic secures the raw processing power necessary to train its next generation of foundation models without relying entirely on third-party cloud availability or facing sudden capacity constraints. 

The 3.5 GW blueprint

The agreement centers on 3.5 gigawatts of computing capacity starting in 2027, forming the core of Anthropic’s $50 billion commitment to United States infrastructure. This capacity will be deployed across four locations in Ohio, Iowa, Texas, and Georgia to minimize strain on the electricity grid while ensuring redundancy for clients.

Building infrastructure at this unprecedented scale costs an estimated $30 billion to $35 billion per gigawatt. Anthropic plans to fund this buildout through an initial public offering slated for late 2026. 

A credible alternative to NVIDIA

The 2027 infrastructure rollout is timed to coincide with the mass production of Google's seventh-generation TPU. This version will leverage Broadcom’s advanced 2-nanometer process technology and features integrated optical interconnects designed specifically to support massive multi-gigawatt cluster scaling without data bottlenecks.

The deal includes strict commercial performance requirements to protect Anthropic's investment. For instance, the hardware must deliver a 40% reduction in training time for AI models compared with the previous generation. These performance mandates ensure that the custom hardware maintains a highly competitive cost-per-token ratio compared to NVIDIA’s upcoming Blackwell and Rubin platforms. Broadcom will manufacture the Google-authorized TPU racks, which Anthropic will then install directly into its own proprietary data centers.
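A 40% cut in training time is a larger gain than it may sound: training in 0.6x the time corresponds to roughly a 1.67x effective speedup in throughput. A quick check of the arithmetic:

```python
# A 40% reduction in training time means the same workload finishes
# in 60% of the time, i.e. an effective speedup of 1 / 0.6.
reduction = 0.40
speedup = 1 / (1 - reduction)
print(round(speedup, 2))  # 1.67
```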

Anthropic’s negotiation wins

Anthropic and Google have signed a supply deal running until 2031. Under the agreement, Google retains ownership of the designs for its AI chips. But Anthropic negotiated an important carve-out: it gets first claim on 60% of all new TPUs produced over the next five years. This means that even if chips become scarce industry-wide, Anthropic is near the front of the queue.

The deal goes further than just guaranteed quantities. Anthropic can also ask Google to tweak the chips themselves — specifically the parts that handle the heavy mathematical calculations AI models rely on — to better suit how Anthropic's software works. This kind of influence over chip design used to be the exclusive territory of large, self-contained companies like Google and Apple, which design and use their own hardware in-house. By shaping the chips that run its models, Anthropic can train AI faster and more cheaply, which should speed up future versions of Claude.

Market reaction

Financial markets responded quickly to the announcement. Broadcom's stock jumped more than 6% on the news, with analysts interpreting the deal as strong confirmation that Broadcom's bet on custom AI chips is paying off — some now project its AI chip revenue could surpass $100 billion by 2027.

NVIDIA, whose chips have dominated AI computing, saw a small drop in pre-market trading after the announcement. The signal from investors was clear: Broadcom and Google have shown they can build custom chips at the scale and performance level that the biggest AI companies require, meaning NVIDIA now faces credible competition.

A sliding-doors moment for AI leadership?

The scale of this investment has implications well beyond Anthropic itself. The $50 billion commitment to building infrastructure inside the United States keeps critical AI capabilities on American soil, a move that reinforces American leadership in computing technology.

Until recently, AI development has been almost entirely dependent on NVIDIA’s chips. By demonstrating that custom-built alternatives can work at this scale, the Anthropic-Google-Broadcom partnership opens the door to a more competitive hardware market, which should drive down costs and accelerate progress for everyone.

The first gigawatt of this new infrastructure is due to come online in 2026, and the industry will be watching closely. If the TPU-based deployment hits its performance targets, it will likely push other major AI companies to follow a similar playbook — building or commissioning their own custom chips rather than defaulting to NVIDIA. AI leadership is no longer just about writing better software. Increasingly, it depends on who controls the physical infrastructure: the chips, the data centers, and the power supply to run them.