OpenAI Finalizes Classified Defense AI Agreement Following Pentagon Dispute with Anthropic

OpenAI has confirmed it reached an agreement allowing the U.S. Department of Defense to deploy its AI models within classified environments, following a high-profile breakdown in negotiations between the Pentagon and Anthropic. Chief Executive Officer Sam Altman acknowledged the deal was finalized under time pressure but stated it embeds clear safeguards prohibiting mass domestic surveillance, fully autonomous weapons, and high-stakes automated decision systems such as social credit scoring.

The agreement follows federal action against Anthropic after its Chief Executive Officer Dario Amodei refused to permit unrestricted military use of its models. President Donald Trump directed agencies to phase out Anthropic’s technology, and Defense Secretary Pete Hegseth designated the company a supply-chain risk. More than 60 OpenAI employees and hundreds at Google signed a letter supporting safeguards around surveillance and autonomous weapons.

OpenAI stated its deployment model relies on cloud-based APIs, retention of its internal safety stack, oversight by cleared personnel, and contractual protections aligned with U.S. law. Katrina Mulligan, head of national security partnerships, emphasized that the deployment architecture limits integration into weapons systems. Altman said the objective was to de-escalate tensions between the government and AI providers while maintaining principled boundaries, even amid public scrutiny and competitive pressure.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes articles on the space in a tone accessible to the average reader.
