What happened
Google, Microsoft, Amazon, and Meta are investing billions in specialised AI data centres, fundamentally reshaping global tech infrastructure. These facilities combine high-performance GPUs, TPUs, advanced networking, and sophisticated cooling and energy systems to support generative AI training and inference. Google deploys its custom TPUs, AWS offers its Trainium and Inferentia chips, and Nvidia remains a crucial supplier of H100 GPUs to nearly all major AI operations. By bringing AI model training and deployment in-house, these companies aim to secure a strategic competitive advantage.
Why it matters
Access to frontier AI models will increasingly depend on proprietary infrastructure. Procurement teams face escalating costs and supply-chain risk for high-performance AI hardware, particularly GPUs and specialised processors, as sustained demand outstrips supply. Platform engineers must prioritise energy efficiency and advanced cooling in data centre design: these facilities now consume as much electricity as small cities, driving a shift towards renewable energy partnerships and sourcing.