President Trump arrived at his desk in January with a clear mandate for artificial intelligence: get government out of the way. Fifteen months later, the White House is preparing to vet every major new AI model before it reaches the public.
The reversal reflects a hard reality that even the most anti-regulation administration cannot dodge. A new generation of AI systems has become so powerful and potentially dangerous that leaving them unchecked is no longer politically viable, regardless of ideological commitment to free markets.
The pivot accelerated after Anthropic's Mythos model raised alarms across the federal government and national security circles. The model demonstrated an ability to identify cybersecurity vulnerabilities with unusual speed and precision. OpenAI's GPT-5.5 now matches those capabilities, and Chinese laboratories are racing to develop their own versions.
Just two months ago, the Pentagon had declared Anthropic a "supply chain risk" and effectively blacklisted the company. Now the White House is crafting guidance to allow federal agencies to work around that designation and adopt new Anthropic models. The White House also briefed executives from Anthropic, Google, and OpenAI on the emerging plans last week.
The administration is moving toward an executive order that would establish formal government vetting for all new AI models before public release. The order would create a working group of technology executives and federal officials to design the oversight process, with options including formal government review and early access for federal agencies to new systems before they launch commercially.
Some officials want the government to receive first access to new models, though not necessarily the power to block their release. The White House cyber office is simultaneously developing an AI security framework that would require the Pentagon to safety-test models before federal, state, and local governments deploy them.
Industry sources say major AI companies are cooperating with the White House effort. The Trump administration recognizes that the advancing capabilities of these systems present genuine risks, and the labs recognize that working with the government now beats facing stricter rules later. Sources involved in the conversations suggest an agreement could materialize within weeks.
This represents a stunning reversal from Trump's first year, when the administration systematically dismantled Biden-era AI safety initiatives. On his first day, Trump rescinded Biden's AI executive order, which had required developers to conduct safety evaluations and report on models with potential military applications. Vice President JD Vance told the AI Action Summit in Paris months later that the future belonged to those who built boldly, not those who "hand-wrung about safety."
The administration's core priority remains unchanged: winning the race against China in AI development. But the speed at which these models have advanced has forced even committed deregulators to acknowledge that some guardrails are necessary. The White House remains skeptical of regulation in principle, yet increasingly willing to deploy it in practice when national security is at stake.
As author James Rodriguez put it: "This isn't a principled reversal so much as a collision between Trump's deregulation instincts and the reality that AI capabilities are outpacing his comfort level with letting markets run wild."