Hacker News

gizmodo59 · yesterday at 11:44 AM

My take on long context for many frontier models: the issue isn't whether they support it, but that accuracy drops drastically as you increase the context. Even if a model claims to support a 10M-token context, in reality it doesn't perform well when you saturate it. Curious to hear others' perspectives on this.
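One common way to quantify this degradation is a "needle in a haystack" test: hide a known fact at varying depths in progressively longer contexts and measure how often the model retrieves it. Below is a minimal sketch of such a harness; `perfect_reader` is a hypothetical stand-in for a real model call (any actual API client would be substituted there), and the filler text, question, and function names are illustrative assumptions, not any particular benchmark's implementation.

```python
def make_haystack(needle: str, n_filler: int, depth: float) -> str:
    """Build a long context with `needle` inserted at relative position `depth` (0.0-1.0)."""
    filler = ["The sky was clear and the grass was green."] * n_filler
    filler.insert(int(depth * n_filler), needle)
    return " ".join(filler)

def needle_accuracy(ask, needle: str, answer: str, lengths, depths):
    """For each context length, the fraction of insertion depths at which
    `ask` (a callable: prompt -> response text) recovers `answer`."""
    results = {}
    for n in lengths:
        hits = sum(
            answer in ask(make_haystack(needle, n, d) + "\nWhat is the secret number?")
            for d in depths
        )
        results[n] = hits / len(depths)
    return results

# Hypothetical stand-in for a model call; a real test would query an LLM here.
def perfect_reader(prompt: str) -> str:
    return "42" if "secret number is 42" in prompt else "unknown"

acc = needle_accuracy(
    perfect_reader,
    needle="The secret number is 42.",
    answer="42",
    lengths=[100, 1000],
    depths=[0.1, 0.5, 0.9],
)
```

With a real model in place of the stub, accuracy at the larger `lengths` values would reveal the drop-off the comment describes, often well before the advertised context limit.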


Replies

kridsdale3 · yesterday at 12:15 PM

This is my experience with Gemini. Yes, I really can put an entire codebase and all the docs and pre-dev discussions and all the inter-engineer chat logs in there.

I still see the model becoming more intoxicated as the turn count gets high.

vessenes · yesterday at 1:11 PM

Agreed. That said, in general a 1M-context model has a larger usable window than a 260k-context model.