Anthropic Alleges Massive AI Model Distillation by Chinese Firms Amid Pentagon Tensions

Anthropic alleged Chinese AI firms DeepSeek, Moonshot AI and MiniMax conducted large-scale distillation attacks on Claude, even as the company faces tense Pentagon negotiations over military safeguards and defence contracts
Anthropic CEO Dario Amodei. Credit: ANI

Anthropic has alleged that it identified “industrial-scale distillation attacks” on its reasoning models by the Chinese artificial intelligence firms DeepSeek, Moonshot AI and MiniMax, even as the company faces mounting tensions with the US defence establishment over military use of its AI system, Claude.

In a post on X, Anthropic said, "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."

“Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems,” Anthropic said.

“These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community,” the company added.

Anthropic, an artificial intelligence research company founded in 2021 by former OpenAI researchers including siblings Dario Amodei and Daniela Amodei, focuses on building reliable, interpretable and steerable AI systems with a strong emphasis on safety.

What makes Anthropic’s Claude models and constitutional AI approach distinctive?


The firm is best known for developing the Claude family of large language models designed to be helpful, honest and harmless.

Its research centres on “constitutional AI”, a method that guides models using explicit principles rather than relying solely on human feedback.

Backed by major investors such as Google and Amazon, the company aims to advance AI technology responsibly while minimising societal and ethical risks.

DeepSeek, founded in 2023, is known for developing open-weight large language models such as DeepSeek-R1 and DeepSeek-V3, which use efficient architectures like mixture-of-experts to deliver high performance at significantly lower computational cost compared with many Western AI systems.

Its chatbot became one of the most downloaded AI apps in early 2025 and is widely noted for strong reasoning, long-context understanding and multilingual capabilities, though it has also attracted regulatory scrutiny and national security concerns in several countries over data privacy and training practices.

DeepSeek, Moonshot AI and MiniMax have not released official statements countering Anthropic’s claims.

However, the allegations have placed the Chinese AI companies under heightened public scrutiny. At present, neither the US nor China has laws that specifically govern such cross-border model distillation.

Why is Anthropic’s Claude at the centre of a high-stakes Pentagon meeting?

The claims surfaced ahead of a reported high-stakes meeting between Anthropic leadership and the US Department of Defense. According to Axios, US Defense Secretary Pete Hegseth has summoned Amodei to the Pentagon over the military’s use of Claude.

Citing sources, Axios reported that the talks are expected to be tense, with one senior defence official describing it as “not a friendly meeting” and a make-or-break moment in the negotiations.

Claude is currently the only AI model deployed within the military’s classified systems and is considered among the most capable tools for sensitive defence and intelligence work.

The Defense Department and Anthropic signed a $200 million pilot contract last year, but tensions escalated following a January 9 memo from Hegseth urging AI companies to remove restrictions on their technology, prompting a renegotiation of terms.

The Pentagon has expressed frustration with Anthropic’s refusal to fully lift safeguards. While the company has indicated it may ease some limits, it has sought to retain restrictions on mass surveillance of Americans and the development of fully autonomous weapons.

Will Pentagon pressure force Anthropic to broaden access to Claude?

An Anthropic spokesperson told Axios that discussions are “productive” and being held “in good faith,” adding that the company remains committed to supporting US national security.

Defence officials, however, said negotiations have stalled and warned that Anthropic could be designated a “supply chain risk,” potentially voiding its contracts and restricting other Pentagon partners from using Claude.

The Pentagon has also signed an agreement with Elon Musk’s AI firm xAI and is nearing a deal with Google for its Gemini model, according to The New York Times.

Officials reportedly hope these agreements will increase pressure on Anthropic to broaden access to Claude.

Axios added that Hegseth is expected to present Amodei with an ultimatum, though replacing Anthropic would be complex due to the deep integration of its AI systems into defence infrastructure.

Ahead of the meeting, Anthropic published the blog post alleging that the three Chinese AI firms had siphoned information from its models to improve their own systems, linking the dispute over AI safeguards to broader geopolitical and security concerns surrounding advanced artificial intelligence.

(With inputs from ANI)