Amazon Web Services has unveiled a major push into enterprise-controlled AI infrastructure and autonomous software development, introducing AI Factories for on-prem deployment alongside a new suite of frontier AI agents. Together, they represent AWS's most assertive move yet to meet rising demand for secure, sovereign AI workloads and for AI systems that operate with minimal human intervention.
AI Factories are full-stack systems installed directly inside a customer’s own data center, where the organization provides the facility and power while AWS manages the integrated AI environment. Developed in collaboration with Nvidia, the platform blends Blackwell GPUs with AWS technologies such as the Trainium3 chip, Bedrock model services, and SageMaker AI tooling. The initiative aims to satisfy government and corporate customers who require strict control over sensitive data — allowing AI to run locally without sending information to external clouds.
AWS also introduced three autonomous frontier agents designed to automate coding, security, and DevOps work. The centerpiece is the Kiro autonomous agent, which continually learns a company's coding practices and operates independently for extended periods, completing complex updates across multiple systems without frequent human oversight. Supporting agents, including the AWS Security Agent and DevOps Agent, identify vulnerabilities, test performance, and enforce operational standards as new code is deployed.
While competitors like Microsoft are also rolling out Nvidia-powered infrastructure for private AI workloads, AWS is positioning itself at the intersection of cloud intelligence and on-prem control. By binding hardware, sovereign deployment, and persistent agentic automation into a single strategy, the company is signaling a shift toward hybrid AI environments where enterprise data never leaves the building — and AI systems increasingly act as autonomous teammates.