What happened
Google has expanded the availability of its Tensor Processing Units (TPUs) to third-party organisations, a departure from its previous practice of offering the chips only through its own cloud infrastructure. Meta is reportedly in discussions to deploy TPUs in its data centres by 2027 and to rent Google Cloud TPU capacity from 2025. The initiative includes an on-premise deployment option, intended to give customers greater data control and help them meet regulatory requirements. Google's TPUs, reportedly cheaper than competitors' offerings, were used to train the Gemini 3 AI model, and the company aims to capture 10% of the AI chip market.
Why it matters
The arrival of third-party and on-premise TPU options broadens the vendor landscape for AI infrastructure, but it also complicates hardware procurement and supply chain management. IT procurement teams and platform operators face heavier due diligence obligations: new hardware specifications to evaluate, integration challenges to plan for, and unfamiliar support models to assess. Adopting TPUs also creates a dependency on Google for core AI processing, which decision-makers must weigh in long-term strategic planning and vendor lock-in assessments.