Pentagon–Anthropic Dispute Escalates Over AI Access as OpenAI Deal Advances and Legal Challenge Looms

A dispute between Anthropic and the U.S. Department of Defense (DoD) has intensified after negotiations over a $200 million contract collapsed, triggering a broader conflict over how artificial intelligence can be used in military operations. The disagreement centers on whether the Pentagon should have unrestricted access to Anthropic’s AI systems.

Dario Amodei, CEO of Anthropic, refused to approve contract language that would allow the military to deploy the company’s AI models for “any lawful use.” Amodei argued the company would not permit its technology to support domestic mass surveillance or fully autonomous weapons systems lacking human oversight. Following the breakdown in negotiations, the Pentagon instead reached an agreement with OpenAI, whose leadership, including CEO Sam Altman and President Greg Brockman, approved a defense contract allowing the use of its AI systems for lawful purposes.

The conflict escalated when the Department of Defense designated Anthropic a supply-chain risk, a classification typically reserved for foreign adversaries. The designation requires defense contractors to certify they are not using Anthropic’s models in work tied to Pentagon contracts. Despite the move, the U.S. military continues to rely on Anthropic’s Claude models within Palantir’s Maven Smart System, which supports operational analysis in ongoing U.S. military activity in the Middle East.

Amodei said Anthropic plans to challenge the designation in federal court, arguing the decision is legally unsound and unnecessarily punitive. He stated that the designation applies narrowly to Pentagon contracts and does not broadly restrict customers from using Claude in unrelated work.

Reports also indicate that discussions between Anthropic and Pentagon official Emil Michael have resumed in an effort to reach a compromise that would allow continued military access to the company’s AI models while addressing the company’s restrictions on surveillance and autonomous weapons use.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes articles on the space in a tone accessible to the average reader.
