Anthropic has built an AI system that does something remarkable and unsettling: it hunts down hidden security flaws in operating systems and browsers, then exploits them without human intervention. The company announced its Claude Mythos model this month but refuses to release it publicly, comparing the risk to handing a burglar a master key to every building on the internet.
What makes Mythos dangerous is not that it invents new attack methods. Rather, it turns latent weaknesses into something immediately actionable. Hacking has always been hard, requiring rare expertise, and AI tools collapse that barrier. A vulnerability that a skilled cybersecurity researcher might take months to find, Mythos locates in minutes.
Anthropic is sharing Mythos only with a select group under an initiative called Project Glasswing. The 40 named partners are all American, embedded in U.S.-led digital infrastructure. Britain's government received access as well, allowing its AI Security Institute to test the model. British ministers emerged from that briefing with a stark warning: most businesses are nowhere near prepared for what AI-powered attacks will look like.
The company's defense strategy amounts to a controlled vulnerability hunt. It wants partners to patch holes before criminals find them. Yet this approach hinges entirely on trusting a private corporation with the keys to the kingdom. When reports surfaced recently of unauthorized access incidents, that trust became harder to maintain. The question lurking beneath the headlines is whether any single company should hold this kind of power.
There is a flip side. When Mozilla tested Mythos on Firefox, it uncovered ten times more flaws than human testers had found, and Mozilla fixed them. None of those vulnerabilities were beyond human expertise in principle, only in practice. What changed is speed and scale: an AI can find and catalog security gaps across entire systems in days.
The geopolitical dimension adds another layer. The U.S. government's embrace of Anthropic represents a striking reversal. In February, the Pentagon labeled the company a security risk and cut off lucrative contracts after Anthropic refused to let its technology be used for mass surveillance or autonomous weapons; OpenAI got those deals instead. Now, with Mythos in the picture, the calculus has shifted. The White House is signaling a pivot from treating AI firms as contractors to treating them as strategic partners.
This matters because whoever controls the most advanced AI models gains a tangible advantage over adversaries. Anthropic has positioned itself as the ethical choice in the AI race, but that brand identity took a hit last year with a $1.5 billion settlement over piracy allegations. The company's narrative about Mythos has been shaped as much by public relations as by the underlying technology. Researchers have suggested that smaller, cheaper models deployed at scale can achieve similar results, which raises questions about whether Mythos truly represents a breakthrough or simply reflects where the entire field is heading.
The deeper concern is structural. Without international coordination on cybersecurity standards, the global internet risks fragmenting. Instead of a shared commons, the web could splinter into competing security alliances, each nation and bloc patching its own systems and trusting nothing beyond its borders. What was once a connected world would become a collection of walled gardens, each more heavily guarded and more isolated than the last.
As author James Rodriguez puts it: "Anthropic built something powerful, but the real story is whether we're ready to live in a world where AI finds every weakness and private companies decide how to handle it."