Anthropic Accuses Trio of Chinese AI Firms of Illicitly Exploiting Claude to Train Rival Models
The global race for Artificial Intelligence (AI) supremacy has entered a contentious new chapter. Anthropic, the San Francisco-based AI safety and research company behind the acclaimed Claude model, has officially accused three prominent Chinese tech startups of systematic intellectual property exploitation. According to a detailed blog post and recent reports, DeepSeek, Moonshot AI, and MiniMax allegedly bypassed regional restrictions and terms of service to "siphon" intelligence from Claude to enhance their own Large Language Models (LLMs).
This revelation follows a similar disclosure by OpenAI, suggesting a coordinated or widespread effort by Chinese firms to bridge the technological gap using American-developed AI.
The Allegations: 16 Million Interactions via 24,000 "Ghost" Accounts
In a startling disclosure, Anthropic revealed that the scale of the exploitation was both vast and calculated. The company alleges that DeepSeek, Moonshot, and MiniMax engaged in more than 16 million interactions with the Claude model. To facilitate this without triggering security protocols, the firms reportedly utilized approximately 24,000 fake accounts.
These accounts were used to circumvent Anthropic's regional access restrictions (Claude is not officially available in mainland China) and to bypass the standard usage limits designed for individual users. Anthropic categorizes these actions not merely as a breach of contract, but as a sophisticated campaign to "harvest" the reasoning capabilities of its proprietary technology.
Understanding "Model Distillation": The Shortcut to AI Power
At the heart of the controversy is a technical process known as "Model Distillation." In the context of AI development, distillation is a technique where a smaller or newer model (the "student") is trained using the outputs of a larger, more established, and more powerful model (the "teacher").
By analyzing how Claude responds to complex queries, the Chinese "student" models can effectively mimic Claude's logic, style, and problem-solving frameworks. While distillation is a legitimate research tool when used with permission, Anthropic argues that doing so without a license, and against the Terms of Service, constitutes a "theft of intelligence."
Why Distillation Matters
Training a frontier AI model from scratch requires tens of thousands of specialized chips (like NVIDIA’s H100s) and billions of dollars in investment. Distillation allows a company to bypass much of the "trial and error" of initial training, significantly reducing the cost and time required to achieve high-level performance. Anthropic claims these Chinese firms are using Claude as a "shortcut" to catch up to Western AI capabilities.
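The teacher-student mechanics described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions (a tiny logistic "teacher" and "student" on synthetic data); nothing here reflects Anthropic's systems or the accused firms' actual pipelines:

```python
import math
import random

# Toy sketch of model distillation (illustrative only). A "student" model
# is trained to match the soft output probabilities of a fixed "teacher"
# model, instead of ground-truth labels -- the core of the teacher-student
# technique described in the article.

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a linear-logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Fixed "teacher" weights: stands in for a large frozen model that is
# only queried through its outputs (e.g. via an API).
w_teacher, b_teacher = [2.0, -1.0], 0.5

# Queries sent to the teacher, and the soft targets it returns.
data = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(200)]
soft_targets = [predict(w_teacher, b_teacher, x) for x in data]

# "Student" starts from scratch and is fit to the teacher's soft targets
# by stochastic gradient descent on cross-entropy.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(500):
    for x, t in zip(data, soft_targets):
        p = predict(w, b, x)
        g = p - t                     # d(cross-entropy)/d(logit)
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# After training, the student reproduces the teacher's decisions.
agreement = sum(
    (predict(w, b, x) > 0.5) == (t > 0.5) for x, t in zip(data, soft_targets)
) / len(data)
print(f"student/teacher agreement: {agreement:.0%}")
```

Real LLM distillation operates on token distributions and often softens the teacher's logits with a temperature, but the essential pattern is the same as in this toy: the student never sees ground truth, only the teacher's outputs.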
A Breakdown of the Targets: DeepSeek, Moonshot, and MiniMax
Anthropic provided specific details on what each company appeared to be targeting during their millions of interactions:
- DeepSeek: This startup reportedly focused on Claude's reasoning capabilities. Specifically, Anthropic noted that DeepSeek sought to build alternatives free of the Western safety filters embedded in Claude, potentially creating a model tailored to a different regulatory or political environment.
- Moonshot AI: Their interactions were centered on automated reasoning and tool-use. Moonshot allegedly utilized Claude to refine its models' abilities in programming and complex data analysis.
- MiniMax: This firm focused heavily on automated programming and coordination tools, seeking to bolster its model's efficiency in writing code and managing multi-step tasks.
The National Security Implication: Models Without Guardrails
Perhaps the most alarming part of Anthropic's report is the warning regarding national security. Anthropic is famously a "safety-first" company, spending significant resources on Constitutional AI to ensure its models do not provide instructions for illegal acts, biological warfare, or cyberattacks.
When a model is "distilled" illicitly, the resulting model gains the "intelligence" of the teacher but often lacks the safety guardrails. Anthropic warns that if these distilled models are released as open source, they could provide bad actors with high-level capabilities without any of the built-in protections that Western companies are required to implement.
"The threat transcends any single company or region," Anthropic stated in its blog. "The opportunity to act is limited."
The Policy Shift: A Call for Stricter Export Controls
Anthropic is using this incident to advocate for a specific policy response: stricter export controls on high-end semiconductors.
The logic is straightforward: if Chinese firms cannot access the GPUs needed to train massive models from scratch, they are forced to rely on distillation. Anthropic argues that by further tightening chip export controls while hardening the "API walls" around American AI models, the U.S. can prevent its technology from being used to build rival systems that might eventually undermine American interests.
This stance aligns with the current trend in Washington D.C., where both the Biden and Trump administrations have expressed concerns over China’s rapid AI advancement.
The Broader Context: The US-China AI Cold War
This incident does not exist in a vacuum. Earlier this month, OpenAI (the creator of ChatGPT) reportedly briefed U.S. lawmakers on similar tactics used by DeepSeek. The narrative emerging from Silicon Valley is one of a "siege," where American innovation is being systematically mined by overseas competitors.
Anthropic, which recently reached a valuation in the tens of billions and has received massive backing from tech giants like Google and Amazon, finds itself at the center of this geopolitical storm. By going public with these allegations, Anthropic is signaling that the era of "open access" to AI might be closing in favor of a more "fortified" approach to model distribution.
Conclusion
The allegations against DeepSeek, Moonshot, and MiniMax represent a pivotal moment in AI ethics and international trade. As AI models become the most valuable intellectual property on the planet, the methods used to protect them—and the tactics used to "borrow" from them—will define the next decade of tech development.
For now, DeepSeek, Moonshot, and MiniMax have not issued formal rebuttals to the specific claim of 24,000 fake accounts. As the U.S. government looks more closely at these reports, we can expect significant tightening of how AI companies verify their users and how they monitor the "traffic" that flows through their digital borders.
