- VP JD Vance convened an April call with the CEOs of OpenAI, Anthropic, Google, Microsoft, and SpaceX to warn that Mythos — Anthropic’s most advanced AI model — could enable autonomous cyberattacks on banks, hospitals, and water plants that local governments are ill-equipped to handle.
- The White House is weighing an executive order to create a formal oversight process for the most advanced AI models, potentially modeled on FDA drug approval — a sharp reversal from its previously hands-off, pro-growth AI stance.
- National Cyber Director Sean Cairncross has been tapped to lead the administration’s Mythos response and has asked Anthropic to limit access to the model; Treasury Secretary Bessent has separately briefed top banking executives on the risks.
- White House AI adviser David Sacks is pushing back, calling the reaction an overreaction, while AI safety advocates are cheering the shift as a long-overdue acknowledgment of frontier model risks.
What Happened?
An April White House briefing on Anthropic’s Mythos model — which can autonomously identify software vulnerabilities — alarmed VP JD Vance enough to convene a rare joint call with the CEOs of OpenAI (Sam Altman), Anthropic (Dario Amodei), Google (Sundar Pichai), Microsoft (Satya Nadella), and SpaceX (Elon Musk). Vance warned that models like Mythos could enable cyberattacks on critical infrastructure that local institutions aren’t equipped to defend against. The White House has since asked Anthropic to pause the broader rollout of Mythos and tapped National Cyber Director Sean Cairncross to coordinate the response. Amodei has also met with Treasury Secretary Bessent and Chief of Staff Susie Wiles in an effort to resolve a months-long feud between the company and the administration.
Why Does It Matter?
This is a significant inflection point in U.S. AI policy. The Trump administration had positioned itself as the antithesis of Biden-era AI safety oversight — focused on winning the AI race against China and removing regulatory barriers. Mythos has forced a reckoning: a model capable of autonomous cyberattacks is not just a competitive asset but a national security liability. A proposed executive order creating an FDA-like review process for frontier models would represent a dramatic policy U-turn and put the administration in tension with its tech-industry allies. The internal fault lines are real: Sacks and the pro-growth camp versus Cairncross and the national security apparatus.
What’s Next?
Watch for whether Trump signs an executive order creating formal AI oversight mechanisms — any announcement would be a landmark shift. OpenAI is already limiting access to its analogous GPT-5.5-Cyber model after consulting the administration. The U.S.-China AI safety talks ahead of the May 14–15 Trump-Xi summit add urgency to getting the domestic governance framework right. The battle between Sacks’s “hands-off” camp and Cairncross’s national-security camp will define the administration’s AI posture for the rest of the term.
Source: The Wall Street Journal