I've started getting some 401 errors on a subscription again and oauth seems to be struggling to restore the session. Is it just me?
Does anyone recall how to code manually? I certainly don't :-)
It's really amazing how the stability of these platforms has gone down in the last year or so.
Are they going to extend my subscription time as a result? It ends today, but I was locked out an hour or so ago, and I'm not sure if that was actually due to this outage.
All the vibe coding is clearly not working out too well.
Can someone that's worked at one of these big companies honestly explain how it happens that when these guys are down, it's never for like 10-15 mins ... it's always 1-2+ hours? Do they not have mechanisms in place to revert their migrations and deployments? What goes on behind the scenes during these "outages"?
I don't know about down but I use the VS Code extension on a Pro plan (that I'm considering upgrading from) and it's been slower than molasses flowing uphill in winter for me this afternoon. I'm (a) feeling unwell, and (b) up against a deadline, so this is starting to damage my calm.
Login failed: Request failed with status code 500
Good times..
Same issue. Getting an "internal server error" message
I was wondering the same. They just updated the status page, but it was showing green for a while and I couldn't log in.
I use Big-AGI [1] as a self-hosted open-source LLM workspace, and it's quite telling that when you add API keys for Anthropic, it shows a note reading "Experiencing Issues? Check Anthropic status" that it doesn't show for any other model provider.
[1] https://github.com/enricoros/big-AGI (no affiliation)
Yes they are down --
/login
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Login
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
Esc to cancel

Time for all of the cosplaying developers to sit around and twiddle their thumbs.
Looks like I'm debugging this issue myself.
Can people drop a good LocalLlama setup that I can run on M4?
Experienced the same... they logged me out of Claude Code a few minutes ago. And when I log in, it makes me wait >15000ms for the auth (which exceeds their own cutoff), so auth fails!
Slightly OT, but I've been using OpenAI's GPT 5.4 on Codex and so far finding it more convincing than Claude with Opus 4.6 at maximum thinking for my use cases.
I'm more interested in helping with design and architecture rather than having it author tons of code.
Keep in mind that OpenAI has a way more generous tier at $20 than Anthropic's, and I think you can even use Codex for free with the latest models, so give it a shot; you may find it better than you expected, and a solid backup to Claude.
The OAuth `redirect_url` points to localhost, so the login redirect hangs.
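For anyone wondering why a localhost redirect would hang: CLI tools usually do OAuth by spinning up a throwaway HTTP server on a localhost port and pointing `redirect_uri` at it. A rough sketch of that pattern below — all endpoint names, parameters, and helpers here are hypothetical illustrations, not Claude Code's actual implementation. If nothing ends up listening on that port (or the browser never completes the redirect), the CLI just sits there until its timeout fires.

```python
# Hypothetical sketch of the loopback-redirect OAuth pattern used by
# many CLIs. The endpoint and client_id are placeholders, NOT the real
# Anthropic values.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlencode, urlparse, parse_qs

AUTH_ENDPOINT = "https://auth.example.com/oauth/authorize"  # placeholder

def build_auth_url(client_id: str, port: int) -> str:
    """Build the authorization URL the CLI would open in a browser."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": f"http://localhost:{port}/callback",
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

class CallbackHandler(BaseHTTPRequestHandler):
    """One-shot handler that captures ?code=... from the redirect."""
    code = None

    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        CallbackHandler.code = qs.get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab.")

    def log_message(self, *args):  # silence per-request logging
        pass

def wait_for_code(port: int, timeout: float = 15.0):
    """Serve exactly one request, or give up after `timeout` seconds
    (roughly what the 'timeout of 15000ms exceeded' error suggests)."""
    server = HTTPServer(("localhost", port), CallbackHandler)
    server.timeout = timeout  # handle_request() returns after this
    server.handle_request()   # blocks for one request or the timeout
    server.server_close()
    return CallbackHandler.code
```

If the auth server never issues the redirect (e.g. during an outage), `handle_request()` times out and the CLI reports exactly the kind of "OAuth error: timeout of 15000ms exceeded" people are seeing here.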
I can't log in to my subscription.
Auth is failing, session kicked out.
Weird, it all looks good to me. Using both the Chat and Code variants without issues.
I found it absurdly slow yesterday.
Yup, I get:
OAuth token has expired. Please obtain a new token or refresh your
existing token.
And their /login page doesn't work.

claudown
claude looking for that $500k salary, too
It's down for me
Yep same here:
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
Same here:
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
Looks like it is back.
woohoo, break time!
seems to be.
Nope. Me too.
It's down. I hate this
[dead]
[dead]
[dead]
Same issue here. What was the name of that question and answer site again where you had to manually copy and paste code from? ;-)
Official status is still green: https://status.claude.com/
But downdetector is clear: https://downdetector.com/status/claude-ai/
/edit: there's an official incident now: https://status.claude.com/incidents/jm3b4jjy2jrt