Amazon has released its third-generation AI training chip, Trainium3, claiming significant performance and efficiency improvements: the new chip is four times faster and 40% more energy-efficient than its predecessor, Trainium2. Each UltraServer hosts 144 Trainium3 chips, and thousands of UltraServers can be linked together, providing up to one million chips for a single workload. This allows AI models to be trained and deployed faster and at lower cost.
Trainium3 is built using a 3nm process and is designed to handle demanding generative AI workloads. Amazon is also working on Trainium4, which will support Nvidia's NVLink Fusion technology, enabling interoperability with Nvidia GPUs. This collaboration allows customers to scale their existing infrastructure by combining the Trainium stack with Nvidia's compute portfolio. Amazon claims that Trainium3 offers up to 50% cost savings compared to Nvidia GPUs.
Amazon's strategy is to offer a cheaper alternative to Nvidia while strengthening its own AI capabilities. Alongside the chip, the company is rolling out new Trainium3-based servers, positioning itself as a strong competitor in the cloud AI infrastructure market.