Mounting evidence that Claude Max users are put into one big compute pool. Demand increased dramatically with OpenAI's DoD PR snafu (even though Anthropic was already working with the DoD? But I digress...). The pool hit a ceiling. Anthropic has no compute left to give. Hence people maxing out after one query. "Working on it" means finding a way to distill Claude Code that isn't enough of a quality degradation to be noticed[0], in order to get the compute pool operational again. The distillation will continue until uptime improves.
[0] As of this writing, it's noticeable. Lots of "should I continue?" and "you should run this command if you want to see that information." Roadblocks I hadn't seen in a year-plus.
The trend on the status page[1] does not inspire confidence. Beginning to wonder if this might be a daily thing.
No one is going to like this answer, but there’s a simple solution: pay for API tokens and adjust your use of CC so that the actions you have it take are worth the cost of the tokens.
It’s great to buy dollars for a penny, but the guy selling ’em is going to want to charge a dollar eventually…
HN’s guidelines say ‘Don’t editorialize’. The original title here is ‘[BUG] Claude Code login fails with OAuth timeout on Windows’, which is more specific and less clickbaity.
My biggest frustration right now is the apparent complete loss of background agent functionality. Permissions seem badly botched for background agents right now, and when one fails, the foreground agent just takes over the task despite:
1. Me not wanting that for context management reasons
2. It burning tokens on an expensive model.
Literally a conversation that I just had:
* ME: "Have sonnet background agent do X"
* Opus: "Agent failed, I'll do it myself"
* Me: "No, have a background agent do it"
* Opus: Proceeds to do it in the foreground
* Flips keyboard
This has completely broken my workflows. I'm stuck waiting for Opus to monitor a basic task and destroy my context.
Looks to be sourced from an outage:
I'm finding queries are taking about 3x as long as they used to regardless of whether I use Sonnet or Opus (Claude Code on Max)
Wonder what the next AI winter trigger would be. Coding agent client collapsing under its own tech debt?
I'm more surprised that OpenAI is so heavily subsidising their ChatGPT subscriptions. With Plus you can do a lot more than with Claude's 5x Max. Is it an expense they can just afford while people haven't yet migrated over from CC?
The commenters here don't seem to realize this was posted during yesterday's outage, which affected login for most Claude Code users.
Isn't it a little weird that we trust this app to help us build some of the most important parts of our business, while the company that vends it keeps breaking it in unique ways?
At my workplace we have been sticking with older versions, and now stick to the stable release channel.
Is this really relevant news? Please share more bug reports from popular services and tools. Feels a tiny bit biased. My CC has been just fine for at least three weeks.
If you prepare a token ahead of time with "claude setup-token" (presuming you're not already locked out and still have one), you can run "CLAUDE_CODE_OAUTH_TOKEN=sk.. claude" to use your account.
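Spelled out as a shell session (the token value is elided; both the subcommand and the env var are the ones named above):

```shell
# One time, while you can still log in: mint a long-lived OAuth token.
claude setup-token

# During a login outage, skip the interactive OAuth flow by passing
# the saved token via the environment instead:
CLAUDE_CODE_OAUTH_TOKEN=sk.. claude
```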
Run LLMs locally. Otherwise suffer service disruptions and very likely price hikes in the future.
If Anthropic's reliability becomes a meme, they risk brand death, Microsoft-style. Got to hand it to them though, they're really living that “AI writes all of our code and it should write your code too” life.
IME this isn't just a 'Claude Code' problem, I'm seeing extremely degraded / unresponsive performance using Opus 4.6 in Cursor.
98% uptime is not great. Our eng department is thinking about going half-and-half with Codex, but of course there’s a switching cost.
15000 milliseconds! Makes me laugh. I've had the same issue! Usually happens in the morning
I solved this by upgrading Claude Code, closing down all instances, closing my browser, starting Claude again, and doing a /login.
Simply put, Anthropic does not have enough compute.
Not sure how Claude and CC have become the de facto best given that gpt 5.3 codex and 5.4 exist. This space moves so fast that you should be testing your workflows on different models at least once a quarter, prudently once a month.
I'm getting "Prompt is too long" a lot today
I stopped using Claude Code several months ago and I can't say I've missed it.
There was constant drama with CC. Degradation, low reliability, the harness conspiring against you, etc. – these things are not new. Its burgeoning popularity has only made them worse. Anthropic is always doing something to shoot themselves in the foot.
The harness does cool things, don't get me wrong. But it comes with a ton of papercuts that don't belong in a professional product.
For a lot of my work, I'm pretty happy with OpenCode + GLM-4.7-Flash-REAP-23B-A3B-Q4_K_M.gguf running in llama.cpp.
Free and local.
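For anyone curious, the serving side is just llama.cpp's OpenAI-compatible server. A minimal sketch of my kind of setup (the context size, port, and GPU-offload flags here are illustrative, tune for your hardware):

```shell
# Serve the quantized GGUF over an OpenAI-compatible HTTP API.
# -c:   context window (raise it if you have the RAM)
# -ngl: number of layers to offload to GPU (0 for CPU-only)
llama-server \
  -m GLM-4.7-Flash-REAP-23B-A3B-Q4_K_M.gguf \
  --host 127.0.0.1 --port 8080 \
  -c 32768 -ngl 99
# Then point OpenCode at http://127.0.0.1:8080/v1 as an
# OpenAI-compatible provider.
```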
How are they making billions with reliability like that?
How is coding "solved" then?
Unless they meant "all code that needs to be written has already been written," so their mission is to prevent any new code from being written via a kind of bait and switch?
I found that telling Claude it is trying to defraud you and making you spend money often gets it back on track; it returns to previous performance briefly until it again starts doing nonsense.
I think Anthropic's model has a conflict of interest. They seem to have nerfed the models so that it takes more iterations (and more of your money) to get the result, whereas e.g. Opus used to get things right the first time.
This was an outage.
I really don't understand the way Claude does rate limiting, particularly the 5 hour limit. I can get on at 11:30, blow through my limit doing some stupid shit like processing a pile of books into my llm-wiki, and then get notified that I've used 90% of my 5 hour session limit and I have to wait for noon (aka wait 10 minutes) for the five hour limit to reset. Baffling.
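My guess at the rule (not documented behavior, just what fits the observations): the 5-hour window is anchored to the top of the hour of the session's first message, not rolling from your current activity. Under that assumption, a session whose first message was at 7:05 resets at noon no matter when you burn through the quota:

```python
from datetime import datetime, timedelta

def session_reset(first_message: datetime) -> datetime:
    """Assumed rule: the 5-hour window starts at the top of the
    hour of the first message, not at the message itself."""
    anchor = first_message.replace(minute=0, second=0, microsecond=0)
    return anchor + timedelta(hours=5)

# First message at 07:05 -> reset at 12:00, so hitting 90% of the
# limit at 11:50 really does mean only a ten-minute wait.
print(session_reset(datetime(2026, 2, 3, 7, 5)))  # 2026-02-03 12:00:00
```

Which would make the "baffling" near-instant resets a quirk of the anchoring, not a bug.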
Anyone played much with JetBrains’ LLM agent?
I’ve been toying around at home with it and I’ve been fine with its output mostly (in a Java project ofc), but I’ve run into a few consistent problems
- The thing always trips up validating its work. It consistently tries to use PowerShell in a WSL environment where I don’t have it installed. It also seems to struggle with relative/absolute paths when running commands.
- Pricing makes no sense to me; JetBrains’ offering has its own layer of abstraction in “credits” that just seems so opaque.
Then again, I mostly use this stuff for implementing tedious utilities/features. I’m not going fully agent-written and still do a lot of hand tweaks to the code, because it’s still faster to just do it myself sometimes. Mostly all from the IDE still.
Antigravity has become near-unusable too for the last week with Opus. Continual capacity alerts mean tasks stop running.
Not worth the money now, will be canceling unless fixed soon.
The eternal return of https://xkcd.com/303/
It started again.
Claude is now making itself unavailable after it was on vacation yesterday.
Maybe you should consider... local models instead?
The solution is clearly more vibe coding at anthropic.
I doubt even the core engineers know how to begin debugging that spaghetti code.
I upgraded to the 20x plan and hit the weekly limit within 24 hours. I was running some fairly large tasks, but was still surprised it hit the weekly limit so quickly. Now I can't use it for 6 more days :( I didn't even have time to ask it to help set up logs or something to track my usage before I hit the limit.
As much as people on Hacker News complain about subscription models for productivity and creativity suites, the open-armed embrace of subscription development tools (services, really), which seek to offload the very act itself, makes me wonder how and why so many people are eager to dive right in. I get it. LLMs are cool technology.
Is this a symptom of the same phenomenon behind the deluge of disposable JavaScript frameworks of just ten years ago? Is it peer pressure, fear of missing out? At its root, I suspect so; of course I would imagine it's rare for the C-suite to have ever mandated the usage of a specific language or framework, and LLMs represent an unprecedented lever of power to have an even bigger shot at first mover's advantage, from a business perspective. (Yes, I am aware of how "good enough" local models have become for many.)
I don't really have anything useful nor actionable to say here regarding this dialling back of capability to deal with capacity issues. Are there any indications of shops or individual contributors with contingency plans on the table for dialling back LLM usage in kind to mitigate these unknowns? I know the calculus is such that potential (and frequently realised) gains heavily outweigh the risks of going all in, but, in the grander scheme of time and circumstance, long-term commitments are starting to look more obviously risky. I am purposefully trying to avoid "begging the question" here; if this were some other tool or service instead of LLMs, reactions to these events would have been far more pragmatic, with less reticence to invest time in in-house solutions when dealing with flaky vendors.