Amazon Web Services used its re:Invent conference to spotlight aggressive investment in proprietary AI chips and enterprise model-building tools, marking one of its strongest bids yet to compete with Nvidia while expanding its role in custom foundation model development.
AWS introduced Trainium3, the next generation of its in-house training accelerator, which it says delivers four times the performance of the current Trainium2 at lower power consumption. Amazon CEO Andy Jassy revealed that Trainium2 has already reached a multi-billion-dollar annual revenue run rate, with more than one million chips deployed and over 100,000 companies using it, largely through Bedrock. AWS CEO Matt Garman confirmed major demand from Anthropic, which is training its upcoming Claude models on more than 500,000 Trainium2 chips via Project Rainier, Amazon’s largest AI compute cluster to date.
Beyond hardware, AWS introduced expanded enterprise customization features across Amazon Bedrock and SageMaker AI, enabling organizations to build and fine-tune frontier-grade models without managing infrastructure. Developers can now access serverless customization workflows, ranging from point-and-click interfaces to natural-language, agent-guided model building, as well as reinforcement-learning-based fine-tuning. These capabilities apply to AWS’s homegrown Nova models and select open-source models such as Llama and DeepSeek.
Together with the newly launched Nova Forge service, which lets customers commission proprietary variants of Nova models for $100,000 per year, AWS is positioning itself to win share not just on model performance, but on price, sovereignty, and differentiation. As enterprises increasingly seek private training environments and bespoke intelligence, AWS is betting that its vertically integrated stack — hardware, networking, and customizable AI — can capture meaningful business even in a market where Nvidia remains the standard.