OpenAI has confirmed it reached an agreement allowing the U.S. Department of Defense to deploy its AI models within classified environments, following a high-profile breakdown in negotiations between the Pentagon and Anthropic. Chief Executive Officer Sam Altman acknowledged the deal was finalized under time pressure but stated it embeds clear safeguards prohibiting mass domestic surveillance, fully autonomous weapons, and high-stakes automated decision systems such as social credit scoring.
The agreement follows federal action against Anthropic after its Chief Executive Officer Dario Amodei refused to permit unrestricted military use of its models. President Donald Trump directed agencies to phase out Anthropic’s technology, and Defense Secretary Pete Hegseth designated the company a supply-chain risk. More than 60 OpenAI employees and hundreds at Google signed a letter supporting safeguards around surveillance and autonomous weapons.
OpenAI stated its deployment model relies on cloud-based APIs, retention of its internal safety stack, oversight by cleared personnel, and contractual protections aligned with U.S. law. Katrina Mulligan, OpenAI's head of national security partnerships, emphasized that the deployment architecture limits integration into weapons systems. Altman said the objective was to de-escalate tensions between the government and AI providers while maintaining principled boundaries, even amid public scrutiny and competitive pressure.