Key Takeaways
- Anthropic alleges that three Chinese AI companies fraudulently trained their models on Anthropic’s Claude model.
- Together, the campaign included over 16 million exchanges from about 24,000 accounts, Anthropic says.
- Rival OpenAI has also accused DeepSeek of this type of “distillation.”
AI might be eating its own, as Anthropic says that three China-based AI companies — DeepSeek, MiniMax Group Inc., and Moonshot — all violated the terms of service for Anthropic’s Claude AI.
Anthropic claims that its AI rivals prompted Claude more than 16 million times combined, across thousands of fraudulent accounts.
The goal, allegedly, was to train and improve their own AI models using Claude’s responses, a process called “distillation” in the AI business.
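The basic idea behind distillation can be sketched in a few lines of code. This is a toy illustration only, not anyone's actual pipeline: the function names and the stand-in "teacher" are invented for the example, and a real distillation run would fine-tune a neural network on millions of teacher responses rather than simply collecting them.

```python
# Toy sketch of "distillation": a student model learns from a teacher
# model's outputs instead of from original human-labeled data.
# All names here are illustrative; no real model or API is involved.

def teacher_model(prompt: str) -> str:
    # Stand-in for a large "teacher" model (e.g., a frontier LLM).
    return f"Teacher's answer to: {prompt}"

def build_distillation_dataset(prompts):
    # Step 1: query the teacher at scale and record its responses
    # as (prompt, response) training pairs.
    return [(p, teacher_model(p)) for p in prompts]

def train_student(dataset):
    # Step 2: fine-tune the student to imitate the teacher's responses.
    # A real implementation would minimize a loss between student and
    # teacher outputs; here we only report how much data was used.
    return {"examples_seen": len(dataset)}

prompts = ["What is distillation?", "Explain transformers."]
dataset = build_distillation_dataset(prompts)
student = train_student(dataset)
print(student["examples_seen"])  # 2
```

At real scale, step 1 is where the alleged 16 million exchanges would come in: the more teacher responses collected, the more imitation data the student has to learn from.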
Over 16 Million Exchanges From 24,000 Accounts
Writing in a Monday blog post, Anthropic said that the “industrial-scale campaigns” from the three AI laboratories generated “over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
The three labs weren’t equally invested in distilling Claude, according to Anthropic, which claims in its post that DeepSeek generated “over 150,000 exchanges” with Claude, while Moonshot AI generated “over 3.4 million” and MiniMax generated “over 13 million.”
Ever since DeepSeek launched its competitive, affordable R1 model in 2025, China has produced many more text, video, and image AI models that cost far less than their US alternatives, undercutting adoption of US-built software. Now, those US companies are trying to stop them.
OpenAI Argued Against DeepSeek Distillation, Too
Anthropic is leaning into the framing of a multinational AI competition, and it’s not alone.
The company’s biggest US rival, OpenAI, told US lawmakers earlier in February that DeepSeek was relying on distillation as a part of its “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs,” reports Business Insider.
OpenAI and Anthropic may be in competition, but it seems they can set aside their differences to push back against distillation by overseas rivals.
How Big of a Problem Is Distillation?
Not everyone agrees that distillation is a problem, and for a few different reasons.
One response points out that distillation can help one AI model catch up to another, but it can never help that model surpass the one it is distilling. From this perspective, it isn’t a tactic that will produce the genuine innovation an AI company needs in order to get ahead of the competition.
Another perspective argues that distillation is actually good for the industry, since it encourages competition by preventing any single AI lab from getting too far ahead of the rest.
Nevertheless, this new accusation from Anthropic takes the ongoing conversation about AI in a new direction.