Hacker News

bryanhogan · yesterday at 4:03 PM · 10 replies

Claude.ai is now at 98.85% uptime. There have been so many frustrations with Claude / Anthropic lately (very heavy usage limits, questionable A/B testing, etc.).

Claude status: https://status.claude.com/

I have been really happy with my Codex subscription lately, but it feels like these things change every other day. The OpenCode Go subscription for trying out GLM, Kimi, Qwen, DeepSeek and friends also looks useful.

Nonetheless, Opus 4.6 is a very capable model, but justifying a Claude subscription gets more and more difficult. I think I might just use it occasionally through OpenRouter or as part of something like Cursor (although I'm not sure about the value of that subscription either).

OpenCode Go: https://opencode.ai/go

Cursor: https://cursor.com


Replies

oefrha · yesterday at 4:41 PM

There were periods where I was entirely unable to use Claude Code for an hour or more because the auth gateway kept returning 500s or timing out. There was an "elevated errors" incident shown on status.claude.com, but zero minutes of downtime were recorded (not even a "partial outage"). So the real uptime should be even worse.

rubslopes · yesterday at 4:29 PM

April has been a crazy month for open-weights models. I've been using Claude Code for work and Kimi 2.6 for personal projects, and Kimi has been very good. GLM-5.1 is also great. Qwen, Mimo and DeepSeek I need to test some more, but they have all been producing good results. I have the impression that they are all at, or close to, the same level as Sonnet 4.6.

dubcanada · today at 11:05 AM

If only OpenCode weren't super buggy: it is really bad about not returning responses, wasting tokens, duplicating responses, lagging, etc. It is nowhere near Claude Code's level, not even close. Even Codex, which is also not near Claude, is much better than OpenCode.

nclin_ · yesterday at 9:00 PM

Over the last few days I've seen more degradations, and I canceled my Max subscription.

Presumptuous and wrong "memories" from a one-off command that affect all future commands; repeated/nonsensical phrases in messages; novel display bugs that make going back in the conversation impossible (I can't tell where I am); lack of basic forking features (resume a current convo in a second CC instance -> fork = no history for that convo?); poor/unclear reasoning; a new set of unclear folksy phrases (it really wants to "cut code" all of a sudden).

Qwen + OpenCode has been a game changer: it runs very well on a 4090 for basic/exploratory/private tasks, and being able to switch between frontier models (via OpenRouter in my case) to avoid vendor lock-in feels like basic hygiene.

There's also the homo economicus psychological difference between having a token budget to use up, and a cost per token. I'm more thoughtful about my usage now.

loloquwowndueo · yesterday at 4:43 PM

> Claude.ai is now at a 98.85% uptime.

So, at least better than GitHub, right? :)

egeozcan · yesterday at 5:28 PM

Codex randomly stops working because of some silly cybersecurity detector, with an insane number of false positives. The last time it happened, I was just having it write me a small tool to translate the text in my clipboard. What cybersecurity risk? The code wasn't even published, or remotely related to anything hacking-adjacent. I'm always letting AI write the boring CRUD tools that I don't want to code myself.

It's bordering on being useless.

tappio · yesterday at 5:24 PM

Over the past week I have used OpenCode Go with DeepSeek V4 Pro and Claude Code with Opus 4.7 side by side and... they are both good. They are different, each with its good and bad sides... but they do get things done. OpenCode in particular has been a very enjoyable experience. Thank you, Anthropic, for all the downtime; I probably would not have explored alternatives otherwise. I can vouch for the OpenCode Go sub!

biztos · today at 2:30 AM

The "nines" measure of uptime is not some divine law. Even 80% Claude uptime would still be great value for money.
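To put those percentages in concrete terms, here is a quick back-of-the-envelope sketch (the 30-day month and the function name are just illustrative assumptions, not anything from the thread):

```python
def downtime_hours(uptime_pct: float, period_hours: float = 720.0) -> float:
    """Hours of downtime implied by an uptime percentage over a period
    (default period: a 30-day month, i.e. 720 hours)."""
    return (1 - uptime_pct / 100.0) * period_hours

# 98.85% uptime over a 30-day month ~ 8.3 hours down
print(round(downtime_hours(98.85), 1))
# "three nines" (99.9%) ~ 0.7 hours down
print(round(downtime_hours(99.9), 1))
# 80% uptime ~ 144 hours (6 full days) down
print(round(downtime_hours(80.0), 1))
```

So 98.85% is roughly a third of a day of downtime per month, versus six full days at 80%.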

You just need to have some idea of what to do when your frontier model is not available. Use Qwen? Read the code you've been generating?

Multi-model coding tools seem like the obvious, sane path forward, but the Will to Lockin is strong.

selfawareMammal · yesterday at 7:27 PM

The new Codex limits make it unusable, though. I switched to OpenCode.

qingcharles · yesterday at 7:26 PM

Codex has been pretty reliable. Google's API is a trash fire of 503s on their paid models. Copilot is a lottery too.