
Anthropic's AI Chip Deal With Broadcom and Google: Why This Three-Way Partnership Matters

AI · Tech · AI Chips · Infrastructure
Karan Gosrani
Team Converzoy

On April 6, 2026, Anthropic announced what might be the most interesting infrastructure deal in the AI industry this year. Not because of the dollar amount (though $21 billion is nothing to sneeze at), but because of how it works. Three companies, each with a different superpower, locking arms to build the compute backbone for the next generation of AI.

Here is the deal in plain English: Broadcom will manufacture Google's custom TPU chips, and Anthropic will use those chips to train and run Claude. Starting in 2027, Anthropic gets access to 3.5 gigawatts of TPU compute capacity, on top of the 1 gigawatt already flowing in 2026. For context, 3.5 gigawatts is roughly enough electricity to power a city of 2.5 million people. That is a staggering amount of compute.
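The city-of-2.5-million comparison checks out as a back-of-envelope calculation. The sketch below assumes an average per-person electricity draw of roughly 1.4 kW, a figure in line with typical US consumption and not taken from the announcement itself:

```python
# Back-of-envelope check: how many people could 3.5 GW supply?
# Assumption (not from the article): ~1.4 kW average draw per person,
# roughly the US per-capita average.
capacity_watts = 3.5e9        # 3.5 gigawatts of compute capacity
per_capita_watts = 1.4e3      # assumed average demand per person

people = capacity_watts / per_capita_watts
print(f"{people / 1e6:.1f} million people")
```

With that assumption the math lands right at 2.5 million people, though the exact figure shifts with whatever per-capita number you plug in.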

But the raw numbers are not the interesting part. The interesting part is why these three specific companies need each other, and what it tells us about where AI infrastructure is heading.

What Each Player Brings to the Table

Broadcom: The Chip Whisperer

Broadcom is the kind of company most people outside of tech have never heard of, but they are arguably the linchpin of this deal. They do not design AI models. They do not run cloud platforms. What they do is manufacture custom silicon at a scale and quality that very few companies on Earth can match.

Google designed the TPU (Tensor Processing Unit), their custom AI chip that has been powering Google's own AI workloads for years. But designing a chip and manufacturing it at massive scale are two very different problems. Broadcom signed a long-term agreement with Google to develop and supply future generations of TPUs. Think of it like this: Google is the architect, Broadcom is the construction company that actually builds the skyscraper.

Mizuho analysts estimate Broadcom will pull in $21 billion in AI revenue from Anthropic alone in 2026, jumping to $42 billion in 2027. Those are eye-watering numbers that show just how central custom chip manufacturing has become to the AI industry.

Google: The Silicon Architect

Google has been designing TPUs since 2015, long before the current AI boom. That head start matters. While NVIDIA dominates the AI chip market with its GPUs, Google TPU chips are purpose-built for the specific math that AI training and inference require. They are not general-purpose graphics cards repurposed for AI. They are designed from the ground up for matrix multiplication and the other operations that make large language models tick.
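To make "purpose-built for matrix multiplication" concrete, here is a toy illustration (plain NumPy, not Anthropic's or Google's actual code, and with hypothetical layer sizes): a single transformer feed-forward layer boils down to two large matrix multiplies, which is precisely the workload TPU hardware is designed to accelerate.

```python
import numpy as np

# Hypothetical sizes for illustration only.
d_model, d_ff, tokens = 512, 2048, 16

x = np.random.randn(tokens, d_model)   # activations for a batch of 16 tokens
W1 = np.random.randn(d_model, d_ff)    # first projection weights
W2 = np.random.randn(d_ff, d_model)    # second projection weights

# The feed-forward block: matmul, nonlinearity, matmul.
h = np.maximum(x @ W1, 0)              # big matrix multiply + ReLU
y = h @ W2                             # big matrix multiply back down
print(y.shape)                         # one layer's output: (16, 512)
```

Stack dozens of layers like this, scale the matrices up by orders of magnitude, and you can see why hardware dedicated to this one operation pays off.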

For Google, this deal deepens a relationship that started in October 2025. Anthropic was already running workloads on Google Cloud, and Claude is available through Google's Vertex AI platform. Now Google gets a massive, committed customer for its custom silicon, which justifies continued investment in TPU development. It is a virtuous cycle: more customers mean more investment in better chips, which attract more customers.

Anthropic: The Compute-Hungry Model Builder

Anthropic is the company that actually needs all this compute. Training frontier AI models like Claude requires astronomical amounts of processing power, and the appetite only grows with each generation. With a revenue run rate that just crossed $30 billion (up from $9 billion at the end of 2025), Anthropic has the financial muscle to sign deals at this scale.

But here is what makes Anthropic's approach genuinely clever: they are not putting all their eggs in one basket. Anthropic trains Claude on AWS Trainium chips (Amazon is their primary cloud partner, with Project Rainier running roughly 500,000 Trainium 2 chips in Indiana), Google TPUs, and NVIDIA GPUs. This multi-hardware strategy gives them leverage in negotiations and resilience if any single chip supply chain hits a snag. In an industry where compute is the new oil, diversifying your supply is just smart business.

Why the Three-Way Partnership Model Matters

This deal is a signal that the AI industry is maturing beyond the "buy NVIDIA GPUs and stack them in a data center" playbook. Here is why this three-way structure is significant:

1. Custom AI chips are eating the AI world. NVIDIA still dominates, but the trend toward custom silicon is accelerating. Google TPU, Amazon Trainium, and even Meta's in-house chips all point to the same conclusion: when you are spending billions on compute, even a small efficiency gain from custom-designed chips translates into enormous savings. The era of one-size-fits-all AI hardware is fading.

2. Separation of concerns actually works. The beauty of this deal is that each company focuses on what it does best. Google designs chips. Broadcom builds them. Anthropic builds AI models. Nobody is trying to do everything in-house. This specialization means each layer can iterate independently.

3. It creates healthy competition at every layer. Anthropic is not locked into Google's chips. Google is not dependent on a single AI customer. Broadcom is not exclusive to any one chip designer. This keeps everyone honest and drives innovation.

4. The numbers suggest AI compute demand is far from peaking. Going from 1 gigawatt in 2026 to 3.5 gigawatts in 2027 is a 3.5x increase in a single year. And that is just one company's demand. If you needed evidence that the AI infrastructure 2026 buildout is still in its early innings, this is it.

The Bigger Picture for AI Infrastructure in 2026

Zoom out, and this deal fits into a broader pattern. In November 2025, Anthropic committed to investing $50 billion in U.S. AI infrastructure. Most of the new compute capacity from this Broadcom-Google deal will be located in the United States. That is not just a business decision. It is a strategic one, as governments worldwide increasingly view AI compute as a matter of national security.

The deal also highlights a fascinating tension in the AI industry. On one hand, companies like Anthropic need massive, centralized compute clusters to train frontier models. On the other hand, they need resilience and flexibility, which means spreading workloads across multiple chip architectures and cloud providers. This push and pull between concentration and diversification is going to define AI infrastructure strategy for years to come.

For Anthropic specifically, crossing the 1,000-customer milestone (each spending over $1 million annually) and hitting a $30 billion run rate means the revenue is there to justify these enormous infrastructure bets. This is not speculative spending. It is being funded by real, rapidly growing demand for Claude across enterprises.

What to Watch Next

A few things to keep an eye on as this deal unfolds:

First, the TPU v7p rollout. Anthropic secured nearly 1 million Google TPU v7p units. How these next-gen custom AI chips perform compared to NVIDIA's latest offerings will say a lot about whether custom silicon can truly challenge NVIDIA's dominance.

Second, Anthropic's multi-cloud balancing act. They are now deeply invested in both AWS and Google Cloud infrastructure. Managing workloads across competing cloud providers, each with their own custom chips, is a genuinely hard engineering problem. How well they execute this will be a competitive advantage (or liability).

Third, the ripple effects on Broadcom's stock and strategy. If the Mizuho estimates are even close to right ($21 billion in 2026, $42 billion in 2027 from Anthropic alone), Broadcom is rapidly becoming one of the most important companies in the AI supply chain. That kind of customer concentration is both an opportunity and a risk.

The Bottom Line

This is not just another big tech deal. It is a blueprint for how AI infrastructure might get built going forward: specialized companies collaborating at massive scale, each contributing their core competency. Google designs the chips. Broadcom builds them. Anthropic uses them to push the boundaries of what AI can do.

The old model of vertically integrated tech companies doing everything themselves is giving way to something more like the semiconductor industry's own supply chain, where design, fabrication, and application are handled by different specialists. If this Anthropic-Broadcom-Google partnership delivers on its promises, expect to see more deals like it.

One thing is clear: the companies that figure out the right partnerships for AI compute are going to have a serious edge. And right now, Anthropic is playing that game better than almost anyone.
