Anthropic's crisis moment: Quality complaints, security breaches, and OpenAI circling

Anthropic is careening through a minefield of operational problems that threaten to undermine years of developer goodwill at the worst possible time. The AI startup faces mounting turbulence across product quality, infrastructure capacity, security, and pricing transparency just as it prepares for a potential public offering that could value the company at nearly $800 billion.

The stakes are towering. Anthropic has tripled revenue to $30 billion this year on the strength of Claude, its flagship AI model, and developer devotion. OpenAI is now actively capitalizing on every stumble.

The trouble began two months ago and has spiraled. Users complained that Opus 4.6, Anthropic's top-tier model, had degraded in performance. The company released Opus 4.7 with benchmark improvements, but the rollout backfired. Customers griped about higher token costs, software bugs, and inconsistent results. Trust cracked.

The problems piled on. Anthropic's servers are throttling under demand, forcing stricter usage caps and occasional outages for customers who depend on the service. A software update accidentally exposed internal Claude Code repositories, giving outsiders a rare peek at the product's architecture. The company is now investigating claims that unauthorized users accessed Mythos, its most advanced model, which Anthropic had deliberately kept under wraps due to cybersecurity risks.

Then came the price shock. Users discovered Tuesday that Claude Code, a central feature on the $20-per-month Pro plan, was no longer available. The company quickly called it a limited test, but the damage was done. Customers interpreted the move as the opening salvo of a broader pricing restructuring that would wall off key features behind higher paywalls.

Anthropic insisted business fundamentals remain strong. Revenue continues climbing even as the company nudges enterprise clients toward consumption-based pricing models. A recent standoff with the Pentagon over AI ethics actually boosted Claude's visibility among developers and raised the app to the top of the U.S. App Store.

"We've seen extraordinary demand for Claude over the past several months, and our team is doing everything we can to scale quickly and responsibly," an Anthropic spokesperson said. "We know it hasn't always been smooth, and we're grateful to our community for the patience and feedback as we work through it."

The competitive pressure is intense. OpenAI is weaponizing Anthropic's stumbles. A leaked memo from OpenAI's chief revenue officer Denise Dresser accused Anthropic of overstating its revenue run rate by billions and claimed the company cultivated an elitist image. CEO Sam Altman said on a podcast that Anthropic relied on "fear-based marketing." When users turned on Anthropic over the Claude Code pricing test, OpenAI engineers openly mocked the company on social media, with Altman cheering from the sidelines.

Both company leaders once spoke about the AI race having room for multiple winners. Their recent conduct tells a different story. Altman and Anthropic CEO Dario Amodei are locked in a winner-take-most competition ahead of dual IPOs, and each stumble by the other is treated as a kill shot.

Anthropic built its reputation on technical excellence and developer trust. A cascade of security lapses, product confusion, and capacity problems threatens to chip away at that foundation precisely when investor scrutiny will be highest.

Author James Rodriguez: "Anthropic's real problem isn't a single misstep; it's that it got caught scrambling on multiple fronts and customers saw the seams."