HACKER Q&A
📣 sahli

Is Claude Getting Worse?


It feels like most Claude Code users have already noticed a quality drop in the Claude models. As a Claude Pro subscriber (Web version; I don't use Claude Code), I’ve seen a clear decline over the last couple of weeks. I can’t complete tasks in a single turn anymore. Claude often stops streaming because it hits some internal tool-call/turn limit, so I have to keep pressing “Continue.” Each continuation has to re-feed context, which quickly burns through tokens and quota. The model also makes more mistakes and fails to fully complete tasks it used to handle reliably.

This is especially frustrating because Sonnet 4.6 was a real step up: it could produce long, correct code in one pass much more often. That seems basically gone now.

As a paying Pro user, I honestly find myself using free alternatives like DeepSeek and Z.ai (GLM) more than Claude lately. I’ve also stopped touching Opus entirely—it’s so token-hungry that it drains my weekly quota too fast to be practical.

Is Anthropic trying to limit usage or drive people away?


  👤 LatencyKills Accepted Answer ✓
I've been an engineer for almost 30 years (@ MS & Apple). I've been using Claude to perform code reviews for several of my macOS apps (that I wrote without AI).

A few weeks ago, its performance was impressive and helpful.

In the last week, it has been unusable. It is now getting confused, suggests architecture changes that make absolutely no sense, and has even started ignoring my stop hooks (and then arguing with me that they "aren't necessary").


👤 palata
It was working really well, so I paid for a yearly subscription. Two weeks later, it got to a point where I wouldn't pay for it if I had a choice.

I wonder if the business model is: "make it great, get people to pay for yearly subscriptions, then make it bad again". At least I won't make that mistake ever again; it proved that such services can't be trusted.

I'll only pay for monthly subscriptions in the future, so that when they screw me I can stop paying.


👤 the_inspector
For my part, I couldn't get Claude Projects working at all, even though it worked perfectly about two months ago. And since Anthropic introduced a chatbot as the only line of support, help is not on its way.

👤 throwawayffffas
My guess is they're using more aggressive quantization. I haven't noticed it much myself, since I recently began running Qwen 3.5 locally.

👤 loolhahalmao
vibing prod has its consequences. i don't believe they're purposefully trying to make their products worse, but it's the result of not reviewing or testing their own code and then trying to stem costs, resulting in higher cost for users and worse quality.

👤 niobe
100%. Claude is, I'm sorry to say, basically nerfed.

I downgraded from Max to Pro this month and will cancel my subscription next month. I would suggest others who feel similarly do the same. The only way to signal to these companies that this model-enshittification cycle is unacceptable is to vote with your feet.


👤 jazz9k
It's most likely an intentional downgrade, so they can sell a better model to corporate and enterprise clients. This was bound to happen, especially since all of these companies are bleeding cash.

👤 jqpabc123
Is Anthropic trying to limit usage or drive people away?

They are most likely attempting to become profitable.

AI companies have consumed eye-watering amounts of venture capital that they are under increasing pressure to justify. To do this, they will have to raise prices, degrade performance, or both.

A lot of people don't seem to grasp the epic proportions of what is taking place here. Consultants at Bain & Co. estimated that justifying current AI spending will require $2 trillion in annual AI revenue by 2030.

By comparison, this is more than the combined revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.

For most companies, this means that AI will have to become their primary technology expense, far exceeding their current budgets.