Anthropic is reportedly resisting pressure from the U.S. Department of Defense to allow its AI models, including Claude, to be used for all lawful military purposes, a standoff that highlights growing tension between AI safety commitments and the Pentagon's push to adopt generative AI. According to reporting by Axios, the Pentagon has made similar requests of multiple AI developers, including OpenAI, Google, and xAI, as it seeks to integrate advanced AI systems more broadly into defense operations.
The dispute centers on usage-policy boundaries, particularly Anthropic's restrictions on fully autonomous weapons systems and large-scale domestic surveillance. A company spokesperson indicated that discussions with defense officials have focused on these policy limits rather than on specific military missions.
The disagreement follows earlier reporting that Claude was used in a U.S. military operation targeting Venezuelan leader Nicolás Maduro, underscoring how quickly generative AI tools are entering national-security workflows.
The Pentagon is reportedly considering the future of a $200 million contract with Anthropic as negotiations continue, reflecting the broader challenge of aligning AI innovation, safety commitments, and military demand.