- OpenAI, Anthropic, and Google are sharing intelligence through the Frontier Model Forum to detect and block “adversarial distillation” — Chinese labs systematically extracting capabilities from U.S. AI models to build cheap imitations
- U.S. officials estimate unauthorized distillation costs Silicon Valley AI labs billions in annual profit; Anthropic specifically identified DeepSeek, Moonshot, and MiniMax as offenders in February
- Distillation allows Chinese labs to replicate cutting-edge U.S. capabilities at a fraction of the cost, while stripping out safety guardrails designed to prevent misuse for bioweapons or cyberattacks
- The collaboration is currently limited by antitrust uncertainty — the firms are pushing the Trump administration for clearer legal guidance to expand intelligence sharing
What Happened?
Three rival AI giants — OpenAI, Anthropic, and Google — have begun quietly collaborating to combat adversarial distillation: the practice of using a leading AI model’s outputs to train a cheaper imitation. The firms are sharing threat intelligence through the Frontier Model Forum, the industry nonprofit they co-founded with Microsoft in 2023, to detect, attribute, and block unauthorized extraction attempts. OpenAI confirmed its participation, pointing to a recent memo to Congress accusing DeepSeek of “free-riding” on U.S. frontier lab capabilities. Anthropic and Google declined to comment, though both have separately identified Chinese labs exploiting the technique.
Why Does It Matter?
The collaboration is remarkable precisely because it unites companies locked in fierce commercial competition. The shared concern is that Chinese AI labs — particularly those releasing open-weight models like DeepSeek — are systematically extracting the most advanced capabilities from proprietary U.S. models at low cost, then releasing those capabilities globally without the safety guardrails that U.S. labs build in. This creates both a national security risk (distilled models could be used to design bioweapons or enable cyberattacks) and a serious commercial threat (free open-weight Chinese models undercut the pricing of expensive proprietary U.S. products). The information-sharing model mirrors a standard practice in the cybersecurity industry, where firms regularly pool threat data to strengthen collective defenses.
What’s Next?
The information-sharing effort is currently limited by antitrust uncertainty — the firms don’t yet know how much competitive data they can legally pool, and are pushing the Trump administration for clearer guidance. The AI Action Plan has already called for an information-sharing and analysis center for this purpose. Meanwhile, DeepSeek is expected to release a major model upgrade, and Chinese labs continue to proliferate open-weight models that many in the industry suspect are built at least partly on distilled U.S. capabilities. The Frontier Model Forum collaboration may be the opening move in what could become a much longer battle over who controls the frontier of AI.
Source: Bloomberg