In the span of two months, artificial intelligence has moved from theoretical concern to practical emergency. Six developments in the past 60 days paint a picture of a technology accelerating faster than any safeguard can keep pace, even as its own architects admit the stakes are higher than they ever anticipated.
Start with the sheer speed. Anthropic, the AI safety company founded by defectors from OpenAI, has achieved something no business in American history has managed. Its annualized revenue shot from $1 billion at the end of 2024 to $9 billion a year later, and hit $30 billion this month. For context: Standard Oil under Rockefeller, Google at its tech-boom peak, pandemic-era Zoom. None scaled organic revenue this fast from this large a base. More than 1,000 companies now spend at least $1 million annually on Claude, Anthropic's flagship model. That number doubled in under two months.
Yet growth alone isn't the story. The real story is what happens when capability outpaces disclosure.
Anthropic has built a model called Claude Mythos Preview that it will not release to the general public. The company itself explains why: the model can find and exploit critical security vulnerabilities in the operating systems and browsers that power the modern internet. It is that powerful and that dangerous. Access is deliberately limited to 40-plus organizations tasked with defense. Anthropic imposed this restriction on its own. No regulator required it. No law demanded it. The company chose caution. Not every competitor will.
Meanwhile, AI is now writing AI. At Anthropic, nearly all code created internally flows from Claude itself. The company's official position is blunt: it builds Claude with Claude. The same dynamic runs through OpenAI. A senior researcher there recently disclosed that he no longer writes code by hand. OpenAI's chief scientist has set 2028 as the target for a fully autonomous AI researcher. Anthropic's co-founder anticipates a pivotal decision point between 2027 and 2030, when AI systems will be allowed to recursively improve themselves. He calls that step the ultimate risk.
As models grow more powerful, transparency shrinks. Stanford's 2026 AI Index Report found the Foundation Model Transparency Index dropped from 58 to 40 out of 100 in a single year. Its conclusion is stark: the most capable models are now the least transparent. The federal government has imposed no meaningful requirements to disclose how these systems are trained or what they can do.
The human toll is already visible. In April, a 20-year-old from Texas traveled to San Francisco and firebombed the home of OpenAI CEO Sam Altman in the middle of the night. Two days later, someone else fired shots at the same house. In between, the attacker attempted to force his way into OpenAI headquarters. His manifesto named AI executives as targets and expressed intent to kill.
Altman responded with a blog post that reads like a confession. Hours after an attempt on his life, the man behind the world's most adopted AI product wrote: "The fear and anxiety about AI is justified. Power cannot be too concentrated. We are witnessing the largest change to society in a long time, and perhaps ever. We are all learning something new very quickly, and sometimes we will need to change our minds rapidly as the technology develops."
The market is already pricing in the disruption. In ten weeks, AI erased $2 trillion from the combined value of public software companies as investors realized, week by week, which human professions the latest models would eliminate: coding, legal research, financial analysis, real estate services. Relative to the sector's size, that destruction exceeded the damage of the dot-com bust or the 2008 financial crisis.
None of these six facts stands alone as routine industry news. Together, in 60 days, they form a portrait of technology whose growth, power, and risk have escaped the understanding of the public and the grasp of government. The people building it are saying so openly. The person running the largest AI platform on Earth, hours after someone tried to kill him for it, admitted the public's fear is justified and that nobody really knows what comes next.
Author James Rodriguez: "When the industry's own leaders start sounding like they're issuing warnings from the future, it's time to stop treating this as a normal business story and start acting like the emergency it is."