What happened
Amazon introduced Trainium3, its third-generation AI training chip. Fabricated on a 3nm process, it is four times faster and 40% more energy-efficient than Trainium2. New UltraServers, each hosting 144 Trainium3 chips, can be linked to provide up to one million chips for a single application, offering up to 50% cost savings compared to Nvidia GPUs. Amazon is also developing Trainium4 to support Nvidia's NVLink Fusion technology, enabling interoperability with Nvidia GPUs and letting customers combine the Trainium stack with Nvidia's compute portfolio.
Why it matters
Trainium3, together with its planned NVLink Fusion interoperability, adds a new dependency for platform operators managing heterogeneous AI compute environments. It raises the due diligence bar for procurement and IT security teams, which must now evaluate interoperability, performance consistency, and vendor lock-in risks when scaling AI infrastructure across different hardware architectures. The shift also creates an oversight burden for compliance teams tracking data sovereignty and processing locations within a hybrid compute strategy.
Related Articles

OpenAI Bets Big on AWS
AWS Enhances AI Agent Builder
Nvidia unveils autonomous driving AI
China's AI Training Migration
