Long context will be solved and effectively capped, turned into a Θ(1) operation or, at worst, Θ(log n). People don't have infinite perfect recall, so agents don't need it either. There are also really good solutions to it that just aren't explored enough right now, since transformer architectures are where everyone is dumping money and time. I suspect someone will very soon have a much better system that simply takes over, and then the idea of context limits will be a thing of the past. I've actually built something myself that allows infinite context/perfect recall in Θ(1) (minor asterisk here, as there has to be, but meh). I know others have solutions too.
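To give a rough idea of what Θ(1) recall could even mean here (this is a generic illustration, not the system described above — `MemoryStore`, `remember`, and `recall` are made-up names): any hash-indexed external memory gives amortized-constant-time exact-match lookup no matter how much has been stored, which is one way the "minor asterisk" shows up — it's amortized O(1), not strict worst-case Θ(1).

```python
# Hypothetical sketch of an external memory with amortized O(1) recall.
# Not any real library or the author's system; names are illustrative.

class MemoryStore:
    """Append-only memory with hash-keyed recall.

    Python's dict gives amortized O(1) insert and lookup, so recall
    cost does not grow with the number of stored items. The caveat
    ("minor asterisk"): hashing and occasional table resizing make
    this amortized constant time, not strict worst-case constant time.
    """

    def __init__(self):
        self._items = {}

    def remember(self, key, value):
        # Insert/overwrite: amortized O(1).
        self._items[key] = value

    def recall(self, key, default=None):
        # Exact-match lookup: amortized O(1), independent of store size.
        return self._items.get(key, default)


store = MemoryStore()
for i in range(100_000):
    store.remember(f"fact-{i}", i * i)

print(store.recall("fact-123"))  # cost does not depend on the 100k items stored
```

Real long-context schemes need more than exact-match keys (similarity search, forgetting policies, and so on), which is where the log(n) fallback and the asterisk tend to come from.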