SpaceX Is Building Its Own GPUs. The Nvidia Dependency Problem Just Got Personal.

SpaceX is preparing for a $1.75 trillion IPO this summer, and buried in its S-1 filing is a disclosure that says a lot about where the AI infrastructure race is heading.
The company has warned prospective investors it is planning "substantial capital expenditures" — and among the items listed is manufacturing its own GPUs. SpaceX already spent $20.74 billion on capital expenditures in 2025, driven largely by AI infrastructure investment. The in-house chip plan suggests that number is going up, and that SpaceX has decided the risk of depending on external chip suppliers is no longer acceptable.
The S-1 is direct about why: "We do not have long-term contracts with many of our direct chip suppliers." For a company operating at SpaceX's scale, that's a vulnerability it's clearly decided to eliminate.
The Terafab Connection
This isn't a standalone SpaceX initiative. It connects to a broader project Elon Musk has been building across his companies.
SpaceX, xAI, and Tesla are jointly developing what they're calling Terafab — an AI chip manufacturing complex planned for Austin, Texas. Tesla is planning to use Intel's 14A manufacturing process to produce chips there. The goal is to give Musk's network of companies a vertically integrated compute stack: their own chips, built in their own facility, trained on their own infrastructure, running in their own products.
The ambition is significant. Nvidia currently holds an overwhelming share of the AI training and inference chip market. The companies that have tried to build around that dependency — Google with TPUs, Amazon with Trainium, now Apple pushing its Neural Engine toward AI workloads — have done so over years with enormous investment. SpaceX is announcing its intention to join that list while simultaneously preparing for a public offering that would make it one of the most valuable companies ever listed.
Why This Is Happening Now
The timing is not accidental. GPU supply has been the single biggest operational constraint for any company running large AI workloads over the past two years. Nvidia's chips are expensive, allocation is competitive, and delivery timelines have been unpredictable. Companies that secured supply early have advantages over companies that didn't, and that dynamic has no obvious near-term resolution.
We've written about how this constraint is reshaping infrastructure strategy across the industry. [Anthropic locked in a deal with Broadcom and Google for custom AI silicon](https://converzoy.com/insights/anthropic-broadcom-google-ai-chips-partnership) specifically to reduce Nvidia dependency. [Amazon committed $33 billion to Anthropic](https://converzoy.com/insights/amazon-anthropic-33-billion-deal) and built Project Rainier around 500,000 of its own Trainium chips. The pattern is consistent: companies operating at the frontier are concluding that renting compute from Nvidia is a competitive liability, not just a cost issue.
SpaceX's situation is slightly different from a pure AI lab. Its core business is rockets and satellites, and AI is embedded across those systems — autonomous landing, Starlink network management, manufacturing robotics. For Musk's broader vision of what SpaceX becomes, the AI compute requirement is only going to grow. Locking in supply at the silicon level makes sense if you believe, as Musk clearly does, that the companies that control their own compute in ten years will have structural advantages over those that don't.
What It Means for the Chip Market
Every major tech company building its own silicon makes Nvidia's long-term dominance slightly less certain — though Nvidia's lead is deep enough that the threat is measured in years, not quarters.
The more immediate implication is for the chip manufacturing industry itself. The choice of Intel's 14A process for Terafab is a significant endorsement at a time when Intel has been under pressure to prove its foundry business is competitive with TSMC. If Musk's companies follow through, it gives Intel a flagship customer for its most advanced process node and adds credibility to the argument that the US can develop a viable domestic alternative to TSMC for advanced chip production.
That geopolitical dimension matters in the current environment. US semiconductor policy has been focused on reducing dependence on Taiwanese manufacturing, and a large-scale chip production facility in Austin — even if primarily for private use — fits that broader industrial policy direction.
The [AI data center capacity problem](https://converzoy.com/insights/ai-data-center-delays-power-shortage-2026) we covered earlier this year is partly a chip supply problem and partly a power and physical infrastructure problem. SpaceX's GPU plan doesn't solve the power side, but it signals that the largest players are treating compute supply as a strategic priority worth billions in capital expenditure rather than a procurement challenge to be managed quarterly.
Whether Terafab ships on time and at the performance targets Musk has in mind is a separate question. His manufacturing ambitions have a mixed track record on timelines. But the direction is clear, and it's the same direction every serious AI company is moving in: own your compute, or be at the mercy of someone who does.