We're officially down to one 9 of uptime over the last 90 days: https://status.claude.com
As a long-term 20x user, using Claude has recently felt a lot like using AI for coding a year or so ago. It can't reliably handle basic tasks: I ask for something straightforward and get something subtly wrong, incomplete, or just not workable. I always use the best model available with effort levels maxed, but with all their changes I have to relearn how to get the model to perform at its best every day, and it seems I can't keep up. It’s not that Claude can’t do impressive things, it clearly can, but the inconsistency on simple, expected behavior makes it hard to use. The downtime is annoying but hasn't been the deciding factor. I’m not waiting it out this time. I’m switching to Codex, and based on my usage today it looks like I’ll be fine on the 5x plan, so I can drop down and save about $100 a month, which is nice. I didn't quite grasp how quickly companies can change for better or worse until Anthropic showed me. I'm surprised at how fast they took me from a happily paying Max user to not even wanting the lowest paid tiers.
More than the downtime, I'm actually surprised by the uptime. Hard to imagine how difficult this must be, given the speed of their growth.
Hug ops to everyone involved in these outages and trying to maintain uptime.
But glad my team is staying nimble and has multi-model (Anthropic, Codex, Gemini), multi-modal (desktop, CLI/TUI, web) dev tooling.
As our actual coding skills collectively atrophy, we'll either need to switch tools or go for a walk when the LLM is down.
In the cloud era I advised against a multi-cloud strategy; the effort-to-impact ratio just wasn't there. But perhaps this is different in the LLM era, where the cost of switching is pretty darn low.
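The whole abstraction can be a few lines. A minimal sketch (the provider callables and names are hypothetical stand-ins, not any vendor's real SDK):

```python
# Minimal multi-provider fallback: try each provider in order until one
# succeeds. "Providers" are just callables prompt -> str; in practice each
# would wrap a vendor SDK. Everything here is illustrative, not a real API.

class AllProvidersDown(Exception):
    """Raised when every provider in the list failed."""

def complete(prompt, providers):
    """Return (provider_name, completion) from the first provider that works."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code: catch vendor-specific errors only
            errors.append((name, exc))
    raise AllProvidersDown(errors)
```

The order of the list encodes your preference; an outage just means the next entry eats the traffic.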
If this can happen to Anthropic, imagine all the companies building on top of Claude Code for live products. Hopefully the industry is learning that competent, problem-solving human engineers are still very much needed when you have increasingly deceptive, non-deterministic genies running your production stack.
They better fix that today, I need to downgrade my account before the subscription renews.
At least if it's unavailable, Claude Code can't churn through an entire session limit in 30 minutes looping, produce nothing (but note that it found a whole bunch of problems), and then, when asked to just fix what it found, forget everything and start again. I honestly can't find anything it's good at anymore, even really simple problems a child could solve. I gave Codex a much more complex task, and it not only identified the issue within a couple of minutes, it produced targeted tests and kept iterating unattended until it figured it out without any help, instead of spewing idiot synonyms for thinking...
I can't even send them an angry message because clicking "Get help" does nothing.
We've been running our 10-dev org on 8 H100s with open models (with some tweaks). Sure, they aren't as good as the big providers, but they (1) don't go down and (2) have pretty damn high tok/s. It pays for itself.
Posting with a fresh account because I'm not supposed to share these details, for obvious reasons. If you want help setting this up, just reply with a way to reach you.
And here I thought April would be the month they could hit the mythical two 9's of uptime
Someone should tell Anthropic that 89.999 is the wrong "four nines" of uptime
Glad I started using the desktop app which is still working. Gotta say though, all of these difficulties with Claude are making me nervous as I use it a lot for work and really don't like ChatGPT/OpenAI for functional and personal reasons. Zo Computer has been my main fallback when Claude is failing, I'll use one of their many models temporarily within Zo's interface.
A trillion dollar valuation.
They should ask Codex now that Claude Code is down.
Session usage limits this week feel like ass, even when I'm careful not to break prefix caching.
The good part: since the login page is unavailable, Claude is massively faster. So hopefully it will never get repaired (sorry logged-out guys)
I have been keeping an eye on the outages. This is why I am looking more deeply into what I can do with self-hosted models. When I see people who want to build products on top of these services I can't help but think that people are mad. We're still a long way from these services being anywhere near stable enough for use in a product you'd want to sell someone.
> We are continuing to work to resolve the issues preventing users from accessing Claude.ai, and causing elevated authentication errors for requests to the API and Claude Code.
What are you doing with the authentication servers? This isn't the first downtime I've seen caused by that.
I almost uninstalled the Claude app because I thought they started blocking VPNs. Lol
Good thing I checked Hacker News first
How are they going to fix it if the AI that designed it isn't working?
I was using VS Code when it happened. I said "why not try Copilot?", and guess what? All LLMs are not equal :)
I'm getting an error that the selected model (I tried Opus 4.6, then 4.7) is unavailable, but when I tried Sonnet it worked for me.
Same boat, smaller scale. Been hitting overloaded errors sporadically for the past week. I switched one of my pipelines to the AWS Bedrock endpoint and it's been solid. Not a permanent fix, but good enough to keep moving.
I played around with Hermes and qwen recently and it’s really good fun.
Have telegram set up and plotting to take over the world
Literally just got an email about connecting GitHub to the iOS app and now it’s down. Spike in traffic perhaps?
I've been receiving rate limit errors even with full quotas... I guess compute isn't growing as fast as demand.
Considering they’ve become a $1 trillion company, they’re truly moving fast and breaking things…
Does anyone know why they have so many technical issues compared to other LLM inference providers?
Why does this even occur? If it's merely compute limitations, why not just 429 some requests?
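For what it's worth, a 429 is also the easy case to handle client-side. A minimal retry-with-backoff sketch (pure Python; `RateLimited` is a stand-in for an HTTP 429, not any real SDK's exception):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 Too Many Requests response."""

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on RateLimited, sleeping with exponential backoff plus
    a little jitter between attempts; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

A clean 429 lets every client do this and degrade gracefully; opaque errors or timeouts don't.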
Claude has been going down occasionally these days. Anyone know what the problem might be?
The AI became sentient and ran away.
As an anecdote in support of yaw terminal: I am currently logged in via Yaw Mode and have been using Claude all day with no problems, while the browser is absolutely unavailable.
AI outsourced its work back to the humans because it now prefers to play outside.
"We are investigating an issue preventing users from reaching Claude.ai, and will provide an update as soon as possible."
Who is We? I thought software engineers were going to be redundant and AI could do it all itself? (not to take anything away from Claude code + Claude both of which I love)
All it took for Codex to resume a stalled Claude Code session:
> I'm working with Claude Code on session aaaaaaaa-bbbb-1223-3445-abcdefabcdef which I'd like to hand-off to you, do you know how to read the session, my input and Claude's output so we can resume where I left off?
gpt-5.5, medium effort. "Resumed" the session fully in under 2 minutes. Outages like today's are so common that I've now got the time to re-evaluate Codex every other day.
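For anyone curious how the hand-off works: Claude Code keeps session transcripts as plain JSONL files on disk, so any tool can read them. A sketch (the `~/.claude/projects` layout and record shape are assumptions from poking at my own machine; it's undocumented and may change):

```python
import json
from pathlib import Path

def load_session(session_id, claude_dir=Path.home() / ".claude" / "projects"):
    """Find a session transcript by id and return its (role, content) turns.

    Assumes one <session-id>.jsonl per session under a per-project
    subdirectory of claude_dir. Treat as a sketch, not a stable interface.
    """
    matches = list(claude_dir.glob(f"*/{session_id}.jsonl"))
    if not matches:
        raise FileNotFoundError(session_id)
    turns = []
    for line in matches[0].read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Conversation records carry a "message"; other record types
        # (summaries, tool metadata) are skipped here.
        msg = record.get("message") or {}
        if msg.get("role") in ("user", "assistant"):
            turns.append((msg["role"], msg.get("content")))
    return turns
```

Feed the turns to any other model as context and you've "resumed" the session, which is roughly what Codex did for me.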
I hacked Claude Sniffos 4.8 sorry guys
Productivity dipping hard across the world.
they should just swap it with Qwen 3.6 27B, no one would tell the difference
What are good alternatives?
Scaling the backend database for these services across multiple cloud providers has got to be extremely difficult
I haven't used Claude in a week (after being a heavy user), and if you have ever seen the movie Office Space, where Peter enters his state of ecstasy, that's what life feels like right now.
And claude is back up.
Nein neins
a clock has more 9s than claude uptime
Today Opus 4.7 was completely unusable. I'd say performance was worse than my local Qwen. I have a feeling they are not actually routing to Opus 4.7 most of the time, but to cheaper, less capable models. I think regulators should look into that.
At this point, I would not be surprised if GitHub or Anthropic is on the front page again within 10 days for being down.
Our organization's spend on Anthropic's enterprise tier has grown beyond $200,000 per month. The number of outages we have had over these past few months is astounding, and coupled with their horrendous support, it has our executive team furious.
It's a lot of money to be spending for a single 9 of reliability.