Could the secret weapon behind the success of the recent joint Israeli-American military operation in Iran, which resulted in the assassination of former Supreme Leader Ali Khamenei and other senior officials, be the same chatbot millions use every day?

According to a report by the Wall Street Journal, the United States military used Anthropic’s artificial intelligence model, "Claude," to assist in the Israeli-US strikes against Iran.

Dr. Michael C. Horowitz of the Council on Foreign Relations, a former Deputy Assistant Secretary of Defense, noted that US Central Command (CENTCOM) has been "one of the most forward-leaning US commands when it comes to experimenting with emerging technologies."

The operation in Iran was not an isolated incident. The Wall Street Journal reported that the Pentagon had previously used Claude during the January operation to capture Venezuelan President Nicolas Maduro. Dr. Horowitz suggested the AI’s role was likely focused on open-source intelligence (OSINT). "My bet is that it was used for something like looking at maps or checking Venezuelan media sources, like real-time monitoring of Venezuelan social media feeds to try to give the American military more information."

However, while the AI may have contributed to tactical successes in Tehran and elsewhere, a rift has opened between Silicon Valley and the defense establishment. The Pentagon parted ways with Anthropic after the company refused to lift safety guardrails designed to prevent its AI from assisting in lethal operations.

“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” US President Donald Trump wrote on Truth Social after the announcement that the White House had severed ties with Anthropic.

“We will decide the fate of our country - not some out-of-control, radical left AI company run by people who have no idea what the real world is all about,” he wrote.

The integration of Large Language Models (LLMs) into the kill chain represents a paradigm shift in modern warfare. Steve Feldstein, a Senior Fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace, told the Post that these commercial tools are dual-use.

"This is a tool that has both intelligence and surveillance purposes, and prospectively has purposes as well when it comes to lethal devices, lethal operations," Feldstein said.

While the notion of a chatbot pulling the trigger remains science fiction, its role in logistical support is already a reality. Emil Michael, the Under Secretary of Defense for Research and Engineering, said in an interview with CBS that the military’s initial interest in tools like Claude stemmed from the complexity of modern deployments.

"In the military context, there's a lot of logistics," Michael said. "How do I get something from one place to another? How much stuff do I have in either place? What do I need to move efficiently forward? What supplies might I need for a certain mission?"

"I worry a lot about the unknowns," said Dario Amodei, CEO of Anthropic, in a 60 Minutes interview with CBS. "I don't think we can predict everything for sure. But precisely because of that, we're trying to predict everything we can. We're thinking about the misuse."

Horowitz noted that tech companies' hesitation is often practical rather than moral. "The objection to autonomous weapon systems was not moral or ethical. Their objection was that they thought the technology wasn't ready for prime time yet."

The report highlights that while the US debates the ethics of AI in warfare, Israel has already integrated these systems deeply into its military architecture. The IDF’s use of AI in the Gaza Strip for target generation has been a subject of intense international scrutiny.

"Israel is one of the countries that uses them very well, called 'decision support systems,'" Feldstein noted. He explained that these systems are used "to identify suspects at a mass scale in order to then conduct lethal strikes. So, trying to identify where Hamas is, where are Hamas militants located? Geolocation, taking cell phone calls, taking text messages."

Pentagon pivots from Anthropic to OpenAI

With Anthropic out of the picture, the Pentagon has pivoted to a competitor with fewer qualms about military applications: OpenAI, the maker of ChatGPT.

On Friday, OpenAI CEO Sam Altman announced that the company would begin working with the Department of Defense to provide AI services on its classified networks.

“Tonight, we reached an agreement with the Department of War [the Trump administration’s rebranding of the Department of Defense] to deploy our models in their classified network,” Altman said in a statement.

"What we're trying to do is we're trying to use it for all lawful use cases," Michael said in an interview. "As long as it's lawful, we want to treat it like any other technology."

However, Feldstein warned that swapping one AI for another does not solve the inherent risks of algorithmic warfare, particularly regarding hallucinations or bias.

"If it inserts its own biases when it provides information, I think it raises questions about how trustworthy that information is," Feldstein warned. "If you're relying on a system to provide intelligence information that you potentially would use for targeting, would you want to work on a system that shows biases that may not actually give you fully accurate information?"

As global tensions rise, the line between a search engine and a weapon of war is becoming increasingly blurred. What began as a tool for writing code and poetry is now, according to defense officials, a critical component of lethal force projection.

Tobias Holcman and Shir Perets contributed to this report.