Anthropic Pushes Back as Pentagon Seeks Broader Military Access to AI Models

Anthropic is reportedly resisting pressure from the U.S. Department of Defense to allow its AI models, including Claude, to be used for all lawful military purposes, highlighting growing tensions between AI safety commitments and defense adoption of generative AI technologies. According to reporting by Axios, the Pentagon has made similar requests to multiple AI developers, including OpenAI, Google, and xAI, as it seeks broader integration of advanced AI systems into defense operations.

The dispute centers on usage-policy boundaries, particularly Anthropic’s restrictions around fully autonomous weapons systems and large-scale domestic surveillance. A company spokesperson indicated that discussions with defense officials have focused on policy limits rather than specific military missions.

The disagreement follows earlier reporting that Claude had been used in a U.S. military operation targeting Venezuelan leader Nicolás Maduro, underscoring how quickly generative AI tools are entering national-security workflows.

The Pentagon is reportedly considering the future of a $200 million contract with Anthropic as negotiations continue, reflecting the broader challenge of aligning AI innovation, safety commitments, and military demand.

James Dargan

James Dargan is a writer and researcher at The AI Insider. He focuses on the AI startup ecosystem and writes about the space in a style accessible to general readers.

