Hacker News

strongpigeon · yesterday at 6:32 PM

It's interesting that they charge more for the >200k token window, even though the benchmark score seems to drop significantly past that point. That's based on the Long Context benchmark score they posted, though perhaps I'm misunderstanding what it implies.


Replies

Tiberium · yesterday at 6:57 PM

They don't actually seem to charge more for >200k tokens on the API. Neither OpenRouter nor OpenAI's own API docs mention increased pricing for >200k context for GPT-5.4. I think the 2x usage limit for higher context applies specifically to using the model through a subscription in Codex.

simianwords · yesterday at 6:43 PM

This is exactly what I would expect. Why do you find it surprising?