Claude's silent downgrade sparks uproar among power users

Anthropic's Claude is facing a credibility crisis among its most demanding users. Across GitHub, Reddit, and X, engineers and developers are documenting what they say is a noticeable drop in the model's reasoning ability, accuracy, and nuance on complex tasks.

An AMD senior director posted bluntly on GitHub: "Claude has regressed to the point it cannot be trusted to perform complex engineering." Others have compiled side-by-side output comparisons and benchmark tests suggesting the deterioration is real and measurable.

The timing is awkward. Anthropic is currently testing Mythos, a more powerful model, even as users complain that the default Claude experience has gotten worse. That contrast has fueled speculation that the company deliberately scaled back Claude's reasoning to cut costs or preserve compute resources for its next-generation work.

Anthropic denies the most dramatic version of that narrative. The company acknowledged adjusting the default reasoning level in Claude Code but says the changes had nothing to do with compute constraints or development of newer models. Boris Cherny, who heads Claude Code, pointed out last month on X that users can manually toggle between low-effort (faster) and high-effort (smarter) modes, and that this preference sticks between sessions.
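The distinction Cherny describes has a rough public analogue in Anthropic's API, where callers can enable "extended thinking" and cap how many tokens the model spends reasoning before it answers. A minimal sketch using the Anthropic Python SDK follows; the model name, token budget, and prompt are illustrative, and this is not necessarily the same mechanism Claude Code uses under the hood:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A "high-effort" style request: extended thinking enabled with an explicit
# reasoning budget. Omitting the `thinking` parameter yields the faster,
# "low-effort" behavior. Values here are illustrative, not Claude Code defaults.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},  # must be < max_tokens
    messages=[
        {"role": "user", "content": "Review this locking scheme for deadlocks: ..."}
    ],
)

# The response interleaves reasoning blocks with the final answer.
for block in response.content:
    if block.type == "thinking":
        print("[reasoning]", block.thinking[:200], "...")
    elif block.type == "text":
        print("[answer]", block.text)
```

The practical point of the dispute is the default, not the ceiling: a budget like the one above is opt-in per request, whereas the complaints concern what users get when they don't touch the knob at all.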

When analyst Patrick Moorhead asked Claude itself to comment on the furor, the model offered a measured response: "Anthropic made real configuration changes that objectively reduced default thinking depth across all surfaces including claude.ai, but the most extreme 'secret nerfing' narrative overstates what happened."

The admission that "real configuration changes" occurred does little to quell user frustration. Another factor may be psychological habituation: as AI becomes routine, users notice flaws they previously overlooked, and expectations ratchet upward. Magic fades into mundanity.

But perception shapes reality in markets. Power users who depend on Claude for coding and research workflows need consistent, dependable performance. A model that sometimes underperforms is a model users stop trusting, regardless of the technical explanation.

The broader issue runs deeper than one model's reputation. Anthropic, like other frontier AI companies, is increasingly stratifying access. Advanced capabilities are moving behind higher paywalls, restricted API tiers, and invitation-only programs. The company recently shifted large enterprise customers onto usage-based token pricing, directly tying intelligence to spending power.

That creates a widening gap: those with budgets can access the best systems, while the rest get a degraded or downright mediocre experience. The default experience shrinks as the premium tier expands.

The question now is whether Anthropic's basic tier will continue to decline even as its frontier models vault forward. If so, the company risks turning casual users into skeptics while pushing serious users toward competitors.

Author James Rodriguez puts it bluntly: "When a company's own AI admits to 'real configuration changes' that reduced performance, damage control is already too late."
