This was a real issue, and Anthropic recently acknowledged it:
https://www.anthropic.com/engineering/april-23-postmortem
Of course, it sucks when companies screw up ... but at the same time, they "paid everyone back" by removing limits for a while, and (more importantly to me) they were transparent about the whole thing.
I have a hard time seeing any other major AI provider being this transparent, so while I'm annoyed at Claude ... I respect how they handled it.
Yes, that was one issue. It's not the general degradation I've been talking about, though, which is ongoing.
I recall reading similar tales of woe about other providers here on HN. I think gradually dialling back capability as capacity becomes strained under user load is part of the MO of all the big AI companies.
Amusingly, when a coworker went looking for this postmortem, they found a different one covering three Claude issues that caused degraded output. Those were in the platform, not in Claude Code:
https://www.anthropic.com/engineering/a-postmortem-of-three-...
I think there's a certain amount of running with scissors going on here. I appreciate the transparency, but the time to remediation here seems pretty long compared to the rate of new features.