OpenAI will begin working with the Department of Defense to provide AI services for classified documents, CEO Sam Altman announced on Friday night.

“Tonight, we reached an agreement with the Department of War [the Trump administration’s rebranding of the Department of Defense] to deploy our models in their classified network,” Altman said in a statement.

According to a report by Axios, no new contract has yet been signed between the Pentagon and OpenAI; the agreement would allow ChatGPT to be used securely in classified settings.

According to Altman, the Department of Defense agreed to OpenAI’s two main requirements for using its technology: “Prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.”

“We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs [specialized technical professionals working directly with customers to deploy AI models into production environments] to help with our models and to ensure their safety, we will deploy on cloud networks only,” Altman announced.

Trump clashes with Anthropic over AI use by US Gov’t

The decision follows an announcement by US President Donald Trump ordering all federal agencies to halt their use of software developed by Anthropic, the company behind the AI model Claude.

"I am directing every federal agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!" Trump said in a Truth Social post.

Anthropic’s requirements were reportedly the same as OpenAI's, with a prohibition on using Claude for mass surveillance of Americans or to develop fully autonomous weapons.

Altman also asked for “all AI companies to be treated the same way” during his agreement announcement, saying that OpenAI has “expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”

In a statement, Anthropic said it would challenge any risk designation by the Department of Defense in court.

"We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said.

"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

The conflict stems from a report by The Wall Street Journal alleging that Claude was used, through Anthropic’s existing partnership with the software firm Palantir, in the operation to capture former Venezuelan president Nicolas Maduro.

Reports also said the Pentagon was working with Google and xAI to establish more permissive contracts for using their AI models (Gemini and Grok, respectively) with classified material.