United States artificial intelligence firm Anthropic is accusing three prominent Chinese AI labs of illegally extracting capabilities from its Claude model to advance their own, claiming it raises national security concerns.
The Chinese unicorns – DeepSeek, Minimax and Moonshot AI – created over 24,000 fraudulent accounts and trained their models using over 16 million exchanges with Claude, a process known as distillation, Anthropic alleged in a Monday blogpost. Distillation is a common training method in the AI industry, with frontier labs often distilling their own models to make cheaper versions for customers. But most leading proprietary AI model providers, including Anthropic, explicitly ban such practices by third parties. Claude is not available in China.

The accusations come after Anthropic's rival OpenAI made similar allegations earlier this month, in a memo sent to the US House Select Committee on China, claiming that DeepSeek and other Chinese AI companies had been illegally distilling its ChatGPT models over the past year.
DeepSeek shocked the industry last year when it launched a powerful model that came close to matching industry frontrunners like ChatGPT – while requiring fewer computing resources.
The development challenged the prevailing wisdom at the time that training advanced models requires ever more processing power, and raised questions about the effectiveness of US technology export controls.
OpenAI then said it was reviewing evidence that DeepSeek “may have improperly distilled” its models. In the memo this month, OpenAI said DeepSeek’s rapid advancements were based on “its ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs”.
DeepSeek has yet to comment publicly on OpenAI’s allegations.
Anthropic warned that illicitly distilled models may lack the safety guardrails that it and other US model providers implement, and that they could pose national security risks if used, for example, for cybercrime or bioweapons development.
These models could also enable “authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” it said. “The window to act is narrow.”