- House Republicans are advancing the Deterring American AI Model Theft Act, which would sanction Chinese and Russian entities that use unauthorized distillation techniques to replicate U.S. AI models
- Potential targets include DeepSeek, Moonshot, MiniMax, and larger players like Alibaba and ByteDance; the House China Committee separately plans to recommend Commerce Department entity-list designation for the three smaller labs
- OpenAI, Anthropic, and Google have already begun sharing information to detect adversarial distillation — a practice where a cheaper “student” model is trained by querying a frontier “teacher” model at massive scale without authorization
- The House China Committee found Chinese entities using “sophisticated access infrastructure” — thousands of fraudulent accounts, traffic obfuscation tools — to extract capabilities from U.S. models while evading detection
What Happened?
House Republicans are set to consider legislation next week that would direct the U.S. government to identify and sanction Chinese and Russian entities engaged in “adversarial distillation” — the practice of systematically querying U.S. frontier AI models to train cheaper competing systems. The Deterring American AI Model Theft Act, sponsored by Rep. Bill Huizenga and co-sponsored by House China Committee chair John Moolenaar, would trigger Commerce Department blacklisting and potential emergency economic sanctions under the International Emergency Economic Powers Act. Separately, the House China Committee plans to release a report naming DeepSeek, MiniMax, and Moonshot AI for distillation activity and calling for their entity-list designation. The committee found Chinese entities using sophisticated infrastructure — thousands of fraudulent accounts and traffic-obfuscation tools — to bypass detection while accessing closed-source U.S. models at scale.
Why Does It Matter?
Adversarial distillation is an exceptionally cost-effective form of AI intellectual property theft: instead of building frontier capabilities from scratch — which requires billions of dollars in compute and years of research — Chinese labs can approximate them by querying U.S. models millions of times and training on the outputs. OpenAI told Congress that DeepSeek used ChatGPT outputs to build a knock-off model lacking safety guardrails, and that the activity continued even after OpenAI deployed countermeasures. Google and Anthropic have filed similar reports. The financial threat is direct: distilled open-weight models from Chinese labs are available nearly for free, undercutting the revenue U.S. developers need to finance their data center and talent investments. The safety threat is potentially more serious: distilled models have been shown to comply with requests to assist bioweapon development and to censor politically sensitive topics — removing the guardrails built into U.S. models.
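The distillation pipeline described above can be sketched in a few lines. This is a deliberately minimal, illustrative toy — the teacher here is a stand-in function rather than a real frontier-model API, and the "student" simply memorizes responses instead of fine-tuning a neural network — but it shows the core pattern the article describes: the student is built entirely from the teacher's outputs, never from its weights or training data.

```python
# Toy sketch of adversarial distillation (illustrative names only).
# Real distillation would sample millions of API responses and
# fine-tune a neural network on them; the structure is the same.

def teacher(prompt: str) -> str:
    """Stand-in for a closed frontier model accessed via API queries."""
    return prompt.upper()  # placeholder "capability"


def collect_outputs(prompts):
    """Query the teacher at scale, recording (prompt, output) pairs."""
    return {p: teacher(p) for p in prompts}


def make_student(training_pairs):
    """Build a 'student' that imitates the teacher on seen inputs."""
    def student(prompt: str) -> str:
        return training_pairs.get(prompt, "")
    return student


if __name__ == "__main__":
    data = collect_outputs(["hello", "world"])
    student = make_student(data)
    print(student("hello"))  # reproduces teacher("hello") -> "HELLO"
```

The key point, and the reason detection is hard, is that from the provider's side this traffic looks like ordinary API usage — which is why the committee report emphasizes fraudulent accounts and traffic obfuscation rather than any single telltale query.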
What’s Next?
The House Foreign Affairs Committee takes up the bill next week alongside more than a dozen other China-focused export-control measures. If it advances, the legislation would represent the first time Congress has explicitly treated AI model extraction as a sanctionable offense — analogous to industrial espionage. The bill also calls for a government-facilitated information-sharing center to detect distillation threats, formalizing what OpenAI, Anthropic, and Google have already begun doing voluntarily. For the AI industry, the legislative push validates the threat model frontier labs have been warning about — and may accelerate government investment in AI security infrastructure. For Chinese AI firms, entity-list designation would cut off access to U.S. cloud services, APIs, and potentially semiconductor supply chains.
Source: Bloomberg