Hacker News

tyleo · today at 3:35 PM · 2 replies

This is foolish. High token use is associated with worse output. If you fill your model's context window you are going to burn a lot more tokens, but the labs literally publish charts showing how models degrade at high context use.

This is analogous to measuring productivity by LoC output.


Replies

Insanity · today at 7:01 PM

High token usage does not mean high token usage in the same session / context window. But yeah, context rot hits hard; I find that with Codex/GPT5.4, after about 50% context window usage it's hard to get anything useful out of it on a moderately sized codebase.
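(A minimal sketch of the kind of check implied here, not from the thread: estimating how full a context window is before sending a prompt. The 200k window size, the tiktoken o200k_base encoding, and the helper name `context_fill` are assumptions for illustration; the 50% threshold is the commenter's anecdote.)

```python
import tiktoken

CONTEXT_WINDOW = 200_000   # assumed model context window, in tokens
WARN_FRACTION = 0.5        # anecdotal point where output quality starts to drop

# o200k_base is one of tiktoken's bundled encodings; swap in whatever
# tokenizer matches the model you actually use.
enc = tiktoken.get_encoding("o200k_base")

def context_fill(messages: list[str]) -> float:
    """Return the fraction of the assumed context window used by these messages."""
    used = sum(len(enc.encode(m)) for m in messages)
    return used / CONTEXT_WINDOW

if __name__ == "__main__":
    history = ["You are a coding assistant.", "Refactor util.py to remove globals."]
    fill = context_fill(history)
    if fill > WARN_FRACTION:
        print(f"Context {fill:.0%} full - consider starting a fresh session.")
    else:
        print(f"Context {fill:.0%} full.")
```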

drivingmenuts · today at 4:06 PM

> This is analogous to measuring productivity by LoC output

True, but it looks like productivity to people whose own productivity is measured by how busy their subordinates appear to be.