The Pentagon is considering ending its relationship with Anthropic, the developer of the Claude AI model, following a dispute over limitations the company seeks to impose on military applications of its technology, Axios reported Sunday.  

The friction follows a separate report about the use of Claude during the raid that led to the capture of former Venezuelan president Nicolas Maduro. The AI was reportedly used through Anthropic’s existing partnership with the software firm Palantir.

A senior official told Axios that the Pentagon is pushing for more permissive use of AI technology, arguing that the Department of Defense must be able to deploy it for “all lawful purposes,” ranging from weapons development and intelligence collection to direct battlefield operations.

Anthropic has consistently argued against the unrestricted use of AI models for military purposes, specifically designating two areas as off-limits: the mass surveillance of Americans and the development of fully autonomous weaponry.

“Everything's on the table, including dialing back the partnership with Anthropic or severing it entirely,” the official said. “But there'll have to be an orderly replacement for them, if we think that's the right answer.”

A person holds a smartphone displaying the logo of “Claude,” an AI language model by Anthropic, with the company’s logo visible in the background, December 29, 2025. (Illustrative). (credit: Cheng Xin/Getty Images)

In a statement to Axios, Anthropic maintained that it remains “committed to using frontier AI in support of US national security.”

Pentagon’s AI portfolio

The Pentagon currently utilizes four primary AI models in its operations: Google’s Gemini, OpenAI’s ChatGPT, xAI’s Grok, and Anthropic’s Claude.

Claude is currently the only model authorized for operations involving classified documents, a result of a $200 million contract signed between the Pentagon and Anthropic in 2025.

According to Axios, Google, OpenAI, and xAI have developed specialized versions of their models for the Pentagon, lifting certain restrictions that apply to their civilian counterparts.

The Pentagon is now moving toward a new unified agreement with these providers to ensure their models can be used in both classified and unclassified operations under the “all lawful purposes” standard.

The official claimed that at least one company has already agreed to these terms, while two others are reportedly showing more flexibility than Anthropic.