What happened
Google's Tensor Processing Units (TPUs), application-specific integrated circuits optimised for deep learning, now challenge Nvidia's GPU dominance. Google's Gemini 3 AI model, trained on TPUs, demonstrates performance comparable to ChatGPT. Meta is negotiating to deploy Google TPUs, initially via cloud rentals in 2026 and then direct hardware purchases in 2027, a multi-billion-dollar shift away from its reliance on Nvidia. This could reduce Nvidia's market share from 90% to 70% by 2030, while Google's TPU share may grow from 5% to 25%. TPUs offer superior cost-performance for AI inference, a workload projected to comprise 75% of AI compute by 2030.
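Cost-performance comparisons of the kind driving these procurement decisions typically reduce to cost per unit of inference throughput. A minimal sketch of that arithmetic, using purely illustrative rental rates and throughput figures (none of these numbers come from the article or any vendor):

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Cost in USD to serve one million tokens at a given hourly
    rental rate and sustained inference throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical figures for illustration only -- not actual GPU/TPU pricing.
gpu_cost = cost_per_million_tokens(hourly_rate_usd=4.00, tokens_per_second=900)
tpu_cost = cost_per_million_tokens(hourly_rate_usd=3.20, tokens_per_second=1000)
print(f"GPU: ${gpu_cost:.3f}/M tokens, TPU: ${tpu_cost:.3f}/M tokens")
```

Under these made-up inputs the accelerator with the lower hourly rate and higher throughput wins on both axes; in practice the comparison also depends on utilisation, batching, and model-specific optimisation.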
Why it matters
The shift to Google TPUs introduces a new operational dependency and raises due diligence requirements for procurement and IT infrastructure teams. Organisations previously reliant on Nvidia's GPU ecosystem must now evaluate integrating a distinct ASIC architecture, which affects existing hardware and software stacks. Platform operators and AI/ML engineering teams will need new skills to manage and optimise TPU-based workloads. The new vendor relationship also creates a different form of vendor lock-in, demanding careful consideration of long-term support and ecosystem compatibility.
Related Articles

Nvidia Faces AI Chip Challenge
Qualcomm Enters AI Arena
OpenAI Announces 'Code Red' Focus
OpenAI's Dominance Under Threat
