The Former OpenAI CTO Just Signed a Billion-Dollar Deal With Google. Here's Why It Matters.

Fourteen months ago, Mira Murati was OpenAI's Chief Technology Officer - the person who oversaw the technical development of GPT-4, DALL-E, and the infrastructure that made ChatGPT the fastest-growing consumer product in history. Then she left to build her own lab.
This week, at Google Cloud Next in Las Vegas, Google announced that Thinking Machines Lab - Murati's startup - has signed a new multibillion-dollar deal for Google Cloud infrastructure powered by Nvidia's latest GB300 chips. A 14-month-old company, founded by someone who just left the most talked-about AI company in the world, is now the third frontier AI developer in line for Google's most advanced compute capacity, behind only Anthropic and Meta.
That's a fast start.
What Thinking Machines Lab Is
Murati left OpenAI in February 2025. The exit was sudden enough to generate weeks of speculation about what happened and what she'd do next. The answer came quickly: she raised a $2 billion seed round at a $12 billion valuation and started Thinking Machines Lab.
The company launched its first product in October - a tool called Tinker that automates the creation of custom frontier AI models. The idea is that building a specialised AI model for a specific use case is still too hard and too expensive for most organisations. Tinker is Murati's answer to that problem: make it accessible enough that companies without deep ML teams can create models tuned to their actual needs.
It's a smart wedge. The frontier model race - the one OpenAI, Anthropic, and Google are running - is expensive, compute-intensive, and increasingly winner-takes-most. The custom model market is fragmented, underserved, and growing as more businesses realise that a general-purpose model doesn't do their specific job as well as a focused one would. Murati knows the frontier side better than almost anyone. She's betting the more interesting opportunity right now is one step downstream.
What the Google Deal Actually Means
The agreement gives Thinking Machines access to Google's A4X Max virtual machines, which run on Nvidia GB300 chips and deliver roughly 2x the speed of the previous GPU generation. Google's Jupiter network handles the weight transfers for Thinking Machines' reinforcement learning workloads - the compute-intensive training process that shapes how a model behaves.
The deal is not exclusive. Thinking Machines can still use other cloud providers alongside Google, which is standard for companies at this stage. What it does provide is a serious compute runway at a time when GPU access is one of the biggest constraints for any AI lab trying to train at scale.
The timing matters too. Google announced the deal at Cloud Next, which means it's as much a signal to the market as it is an infrastructure agreement. Google is saying: the serious people coming out of the most serious labs are choosing our infrastructure. That's a meaningful data point in a cloud compute market where AWS, Google, and Microsoft are all competing aggressively for frontier AI workloads.
The Broader Pattern Here
Murati is not the only person to leave a frontier AI lab and immediately attract serious capital and partnerships. The talent concentration at OpenAI, Anthropic, Google DeepMind, and a handful of others has created a generation of founders who exit with credibility, networks, and technical knowledge that takes most startups a decade to accumulate.
What makes Thinking Machines interesting is the specific gap it's targeting. General-purpose models are getting better and cheaper by the quarter. But the gap between a general-purpose model and a model that's genuinely excellent at one specific domain - medical coding, legal contracts, customer support at scale, financial analysis - is still large. And bridging that gap currently requires ML expertise that most companies don't have.
If Tinker works the way Murati intends, it starts to look like what AWS did for cloud infrastructure: take something that only large, technically sophisticated organisations could do, and make it accessible enough that everyone else can do it too.
For businesses already thinking about how AI fits into their specific workflows - not just using a general chatbot, but AI that understands their context and their customers - that kind of tool is worth watching. We've been covering how the infrastructure layer for AI is being built out rapidly, from [Amazon's $33 billion bet on Anthropic](https://converzoy.com/insights/amazon-anthropic-33-billion-deal) to [Google's agentic cloud push](https://converzoy.com/insights/google-cloud-next-2026-agentic-cloud). Thinking Machines is a different kind of bet - not on building the most powerful model, but on making powerful models accessible to everyone who can't afford to train one.
Whether Tinker becomes that infrastructure layer is still an open question. But Murati has earned the right to try. And Google, at least, thinks it's worth a few billion dollars to find out.