Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts to access its Claude AI model and improve their own models.
The labs — DeepSeek, Moonshot AI, and MiniMax — allegedly generated more than 16 million exchanges with Claude through those accounts using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China’s AI development.
Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products.
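To illustrate the technique at a high level: classic distillation trains a smaller "student" model to imitate the output distribution of a larger "teacher." The sketch below shows the standard temperature-scaled distillation loss (the Hinton-style objective); it is a generic illustration of the method, not Anthropic's account of how these specific attacks worked, and API-based distillation instead fine-tunes a student on responses sampled from the teacher.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The student is trained to minimize this loss, pulling its predictions
    toward the teacher's "soft labels."
    """
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher incurs near-zero loss;
# a mismatched student incurs a larger one.
loss_same = distillation_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2])
loss_diff = distillation_loss([3.0, 1.0, 0.2], [0.2, 1.0, 3.0])
```

In an API-based setting, the "teacher logits" are unavailable; instead, the attacker collects millions of prompt-response pairs from the commercial model and fine-tunes on them, which is why the alleged campaigns involved so many exchanges.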
DeepSeek first made waves a year ago when it released its open-source R1 reasoning model that nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model, which reportedly can outperform Anthropic’s Claude and OpenAI’s ChatGPT in coding.
The attacks differed in scale. Anthropic tracked more than 150,000 exchanges from DeepSeek that appeared aimed at improving foundational logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries.
Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open-source model, Kimi K2.5, and a coding agent.
MiniMax’s 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action as the firm redirected nearly half its traffic to siphon capabilities from the latest Claude model when it launched.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and easier to identify, but it is calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”
The distillation attacks come at a time when American chip exports to China are still hotly debated. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls increases China’s AI computing capacity at a critical time in the global race for AI dominance.
Anthropic says that the scale of extraction DeepSeek, MiniMax, and Moonshot performed “requires access to advanced chips.”
“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” per Anthropic’s blog.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank and co-founder of CrowdStrike, told TechCrunch he’s not surprised to see these attacks.
“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact,” Alperovitch said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Anthropic also said distillation doesn’t only threaten to undercut American AI dominance, but could also create national security risks.
“Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities,” reads Anthropic’s blog post. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that is multiplied if those models are open-sourced.
TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.