OpenAI Pitches Spy Allies on Cyber-Powered AI Model

OpenAI is making the rounds with intelligence partners and federal agencies this week, demonstrating a powerful new cybersecurity tool designed to identify vulnerabilities faster than human teams can manage. The company held a showcase in Washington on Tuesday for roughly 50 government cyber defenders, unveiling its GPT-5.4-Cyber model to officials spanning national security agencies and day-to-day federal operations.

The briefing represents an aggressive push into a market where AI companies and government are colliding over access, control, and risk. Both OpenAI and competitor Anthropic have rolled out specialized cyber models in recent weeks, each taking different approaches to gatekeeping.

OpenAI is operating on two tracks. It plans to release a more restrictive version of GPT-5.4-Cyber to mainstream users like local water utilities, while offering a less constrained variant through its Trusted Access for Cyber program to vetted defenders. This strategy lets the company argue it's spreading capability responsibly while still serving those who need maximum power.

At Tuesday's event, OpenAI Chief Global Affairs Officer Chris Lehane framed the dual approach as essential for organizations that lack resources to hire top security talent. The company's head of national security policy, Sasha Baker, pitched deeper partnerships with federal agencies, proposing that OpenAI help prioritize which security problems matter most and serve as a hub for sharing threat intelligence across sectors.

OpenAI is also beginning briefings with Five Eyes members this week, the intelligence-sharing alliance linking the United States with Australia, Canada, New Zealand, and the United Kingdom. The company wants those nations to vet and adopt the cyber model.

Anthropic took a more restrictive path with its own model, Mythos Preview. The company withheld a public release, citing cybersecurity risks, and handed access to only about 40 companies and organizations. At least two sit within the federal government. That cautious approach became more complicated after the Pentagon labeled Anthropic a supply chain risk, though the NSA is still testing Mythos despite the designation.

The real draw for both models is speed. Government agencies are saddled with aging systems that are expensive and time-consuming to secure. The latest AI tools promise to automate the hunt for exploitable flaws, compressing work that once took weeks or months into days. Most companies already using either model are running them against their own internal networks to find problems before adversaries do.

That's where the pressure comes in. Defenders want these tools now. Malicious actors want them even more. Neither the AI companies nor the government can afford to move slowly.

Author James Rodriguez: "OpenAI's two-track strategy is smart politics but raises real questions about whether 'safeguards' on one version mean anything when the unrestricted model is out there with vetted insiders."
