Man, I hope so - I hit the context limit really quickly in many of my use cases, and a compaction event inevitably means another round of corrections and fixes to the current task.
Though I'm wary of treating it as a magic-bullet fix - it can already be pretty "selective" about which documentation it actually takes into account as the existing 200k context fills up.
Is this a case of me doing it wrong, or do you think accuracy holds up well enough given how much context you often need to stuff into it?
lmao what are you building that actually justifies needing 1M tokens on a task? People are spending all this money to do magic tricks on themselves.
Hello,
I check the context usage percentage, and above ~70% I ask it to generate a continuation prompt for a new chat session, to avoid compaction.
It works fine and saves me from spending precious tokens on context compaction.
Maybe you should try it.
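For anyone who wants to automate that habit, here's a minimal sketch of the idea: check usage against a threshold and, once crossed, emit a hand-off request. The 200k limit and 70% threshold come from this thread; the function names and the prompt template are purely illustrative, not any tool's actual API.

```python
# Sketch of the manual hand-off workflow described above.
# Assumed values: 200k-token limit, ~70% hand-off threshold.

CONTEXT_LIMIT = 200_000   # tokens (model-dependent; assumed here)
HANDOFF_THRESHOLD = 0.70  # start a fresh session above ~70% usage

def should_hand_off(used_tokens: int, limit: int = CONTEXT_LIMIT) -> bool:
    """Return True once context usage crosses the threshold."""
    return used_tokens / limit >= HANDOFF_THRESHOLD

def handoff_prompt(task_summary: str) -> str:
    """Illustrative template asking the model to write its own
    continuation prompt for a fresh session."""
    return (
        "We are close to the context limit. Summarize the current task, "
        "key decisions, and remaining steps as a standalone prompt "
        "for a new session.\n"
        f"Task so far: {task_summary}"
    )

if should_hand_off(150_000):  # 75% of 200k -> time to hand off
    print(handoff_prompt("refactoring the parser module"))
```

The point is just to make the 70% check mechanical instead of eyeballing the percentage each time.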