Chinese AI laboratories DeepSeek, Moonshot, and MiniMax have run industrial-scale campaigns to extract and replicate data from Anthropic's Claude, the AI company said in a statement Tuesday.

Anthropic is the artificial intelligence company that developed Claude, the popular series of large language models.

"We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models," Anthropic said in its statement. "These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions."

According to Anthropic, the three labs used distillation, a method that involves training "a less capable model on the outputs of a stronger one." While distillation is often used as a legitimate training method, Anthropic noted that the companies' use of it here was "illicit distillation."
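In broad terms, distillation trains a student model to imitate a teacher model's outputs — often by matching the teacher's probability distribution over next tokens rather than a single "correct" answer. The toy sketch below illustrates that idea with hypothetical logits over a three-token vocabulary; it is a minimal pedagogical example, not a reconstruction of any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical toy logits over a 3-token vocabulary (illustrative values only).
teacher_logits = [2.0, 0.5, -1.0]
student_logits = [0.0, 0.0, 0.0]  # untrained student: uniform distribution

T = 2.0  # distillation temperature: softens the teacher's "soft labels"
teacher_probs = softmax(teacher_logits, T)

# Gradient descent on the student's logits to minimize KL(teacher || student).
# The gradient with respect to each student logit is proportional to
# (student_prob - teacher_prob), so each step nudges the student toward the teacher.
lr = 1.0
for _ in range(100):
    student_probs = softmax(student_logits, T)
    grads = [s - t for s, t in zip(student_probs, teacher_probs)]
    student_logits = [z - lr * g for z, g in zip(student_logits, grads)]

# After training, the student's distribution closely matches the teacher's.
final_kl = kl_divergence(teacher_probs, softmax(student_logits, T))
```

In a real distillation pipeline the "teacher outputs" would be responses collected from the stronger model at scale — which is why Anthropic characterizes mass automated querying of Claude for training data as illicit distillation.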

Possible national security concern

Anthropic emphasized that the danger of illicit distillation is not merely commercial but could also become a national security concern.

"Anthropic and other US companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities," the statement explained.

It further noted that "foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."

The statement concluded by detailing Anthropic's findings on each lab's campaign and described four new defensive measures to prevent future incidents of this kind: systems to identify "distillation attack patterns," intelligence sharing with other AI labs, stronger verification systems, and the development of countermeasures.

"As we noted above, distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers," Anthropic concluded. "We are publishing this to make the evidence available to everyone with a stake in the outcome."