How can an OpenClaw user use 6 times what a human subscriber is using when I'm four hours into the week and 15% of my weekly limit is already used up, just by coding? OpenClaw can't use 600% of my weekly limits.
Not sure what tier you're on.
Basically: morning spin-up eats a lot of tokens because the cache is cold. This has actually gotten worse now that Opus supports a 1M-token context.
So: compact before closing up for the night (it shrinks the cache that has to be spun back up); and since the default cache lifetime is 5 minutes, keep a heartbeat running when you step away from the keyboard so the cache stays warm.
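The heartbeat idea above can be sketched in a few lines. Everything here is an assumption for illustration: `CACHE_TTL_S` mirrors the 5-minute default mentioned above, and `send_ping` stands in for whatever cheap request (e.g. a one-token completion) keeps your provider's cache entry alive — it is not a real OpenClaw or Claude API call.

```python
import time

# Assumed numbers, not provider facts: 5-minute cache TTL, ping a minute early.
CACHE_TTL_S = 5 * 60
SAFETY_MARGIN_S = 60

def ping_interval(ttl_s=CACHE_TTL_S, margin_s=SAFETY_MARGIN_S):
    """Seconds to wait between keep-alive pings so the cache never expires."""
    return max(ttl_s - margin_s, 1)

def heartbeat(send_ping, stop_after=3, interval_s=None):
    """Call send_ping() on a schedule.

    send_ping is a hypothetical callable: any minimal request that touches
    the cached prompt counts. stop_after bounds the loop for this sketch;
    a real heartbeat would run until you come back to the keyboard.
    """
    interval = ping_interval() if interval_s is None else interval_s
    for _ in range(stop_after):
        send_ping()
        time.sleep(interval)
```

With the assumed numbers, `ping_interval()` is 240 seconds — comfortably inside the 5-minute window. Whether the token cost of the pings beats re-warming the cache depends on how long you're away.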
Also, things like web research eat context like crazy. Keep those in separate sessions, and ask for an md report with the key findings to feed back into your main one.
This isn't an exhaustive list, and it's potentially subtly wrong in places. But it's a good band-aid.
https://news.ycombinator.com/item?id=47616297
Know what's funny? OpenClaw might actually burn fewer tokens than a naive Claude Code user, if configured correctly. %-/
Man, I run 3-5 sessions an evening for 5-6 hours, longer on weekends, and feel like I'm barely using what I paid for. I've only hit the five-hour limits a handful of times. I'm genuinely baffled when I hear people blowing through tokens several times faster than me. Are you going out of your way to design complex subagent workflows or something? I just let Claude Code use subagents when it wants to, without any extra direction to use them.
Without data, this is just a bunk excuse to defend the walled garden practices.
With data, it's an engineering target.
They could just 429 badly behaved clients.
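Server-side, "just 429 them" usually means per-client throttling along these lines. A minimal sketch, assuming a token-bucket policy with made-up rates; nothing here reflects Anthropic's actual limiter.

```python
import time

class TokenBucket:
    """Per-client token bucket: requests that find it empty get HTTP 429.

    rate_per_s and burst are illustrative knobs; the now parameter is
    injectable so the refill logic can be tested with a fake clock.
    """

    def __init__(self, rate_per_s, burst, now=time.monotonic):
        self.rate = rate_per_s
        self.burst = burst
        self.tokens = float(burst)   # start full
        self.now = now
        self.last = now()

    def allow(self):
        # Refill proportionally to elapsed time, capped at the burst size.
        t = self.now()
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller responds with 429 Too Many Requests
```

A badly behaved client drains its bucket and gets 429s until it backs off, while normal interactive use never notices the limit.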
>How can an OpenClaw user use 6 times what a human subscriber is using when I'm four hours into the week and 15% of my weekly limit is already used up, just by coding?
Perhaps because your Claude agent usage is not representative of the average user, and is closer to average OpenClaw user levels...