Well, I can share my experience from a few days ago. I gave the same task (a major refactor) to both Claude and Codex.
Codex finished in 5 minutes; Claude was still spinning after 20. Claude also burned through all my usage, about twice over (the 5-hour window rolled over in the middle of the task, so the usage for that one task added up to 192% of a window). Codex's usage was 9%. So, roughly a 21x difference there, lol
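For what it's worth, the ratio checks out from the numbers above (the variable names here are just mine):

```python
# Sanity check on the usage ratio: 192% of a quota window vs. 9%.
claude_usage_pct = 192  # percent of the 5-hour window, per the task above
codex_usage_pct = 9     # percent

ratio = claude_usage_pct / codex_usage_pct
print(f"Claude used ~{ratio:.1f}x more quota than Codex")  # ~21.3x
```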
They're saying there are bugs lately in how usage is measured, but buggy usage tracking isn't exactly more encouraging...
So I was on task #4 with Codex while Claude was still spinning on #1.
I didn't like the results Codex gave me, though. It has a habit of doing "technically what you asked, but not what a normal human would have wanted."
So given "Claude is great but I can't actually use it much" and "Codex is cheap and fast but kinda sucks," the current optimum seems to be having Claude write detailed specs and delegating the execution to Codex. (OpenAI isn't banning people for using third-party orchestration, so this is actually something you could do without problems. Not the reverse, though.)
> Claude was still spinning after 20 minutes.
I have been using Claude Code on a mid-sized codebase (~2000 files, ~1M lines of code) for over a year and have never had to wait that long. I'm also on the Max plan and have not hit these limits at all.