Google Challenges Nvidia's AI Dominance

26 November 2025

What happened

Google is actively promoting its custom-designed Tensor Processing Units (TPUs) to major enterprises, expanding their availability beyond Google's own operations. The latest Trillium TPU delivers 4.7 times the performance of its predecessor, with pods scaling to 256 chips. These TPUs, which power Gemini and other Google AI applications serving over a billion users, are now being deployed in smaller cloud providers' data centres. Google is also strengthening the TPU ecosystem with SparseCore improvements, increased HBM capacity, and faster inter-chip interconnects, alongside its JAX AI Stack, a modular, production-ready software platform. Nvidia currently holds approximately 80% of the AI accelerator market, an advantage built largely on its CUDA software ecosystem.
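Part of the portability pitch behind JAX is that the same compiled code runs unchanged across backends. A minimal, illustrative sketch (not from the source; the function and array shapes here are hypothetical examples) shows a jit-compiled operation that would execute on CPU, GPU, or TPU depending on the host:

```python
# Minimal sketch: JAX programs are hardware-agnostic. The same
# jit-compiled function runs on CPU, GPU, or TPU without code changes,
# which is central to the portability argument for the JAX AI Stack.
import jax
import jax.numpy as jnp

@jax.jit
def attention_scores(q, k):
    # Scaled dot-product scores, a core operation in transformer models.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)

q = jnp.ones((4, 8))
k = jnp.ones((4, 8))
scores = attention_scores(q, k)

# Reports which backend JAX selected on this machine.
print(jax.devices()[0].platform)
print(scores.shape)  # (4, 4)
```

On a TPU host the same script would compile to TPU kernels via XLA; no device-specific code is required.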

Why it matters

As Google's TPUs reach a broader market, particularly through smaller cloud providers, organisations weighing diverse AI infrastructure take on a new hardware dependency. Procurement and platform teams face increased due-diligence requirements when evaluating the long-term viability and support ecosystem of non-Nvidia hardware, especially given Nvidia's entrenched market share and CUDA software. This creates a potential control gap in standardising AI acceleration platforms, and IT architecture teams must assess compatibility and integration complexity with existing AI development workflows and toolchains.

Source: ft.com

AI generated content may differ from the original.
