Humans do have an upper limit on working memory, which I see as the closest analogue to the "O(N^2) attention curse" that caps an LLM's context window.
That limit doesn't stop an LLM from manipulating its own context window to make full use of whatever capacity it has. Today's tools like file search and context compression are crude versions of that.
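To make "context compression" concrete, here's a toy sketch of the idea: once the conversation approaches some token budget, collapse the older messages into a summary and keep only the recent ones verbatim. Everything here is illustrative — `compress_context`, the budget numbers, and the crude truncation-as-summary stand-in are all assumptions, not any real tool's API.

```python
def compress_context(messages, budget=2000, keep_recent=4):
    """Toy context compression: summarize old messages past a rough token budget.

    `messages` is a list of strings. The word-count "tokenizer" and the
    truncation-based "summarizer" are crude stand-ins for real components.
    """
    def rough_tokens(msgs):
        # Whitespace word count as a cheap proxy for token count.
        return sum(len(m.split()) for m in msgs)

    if rough_tokens(messages) <= budget:
        return messages  # still fits; nothing to compress

    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real system would call a summarization model here; we just
    # keep the first 40 characters of each old message as a stand-in.
    summary = "SUMMARY: " + " | ".join(m[:40] for m in old)
    return [summary] + recent
```

The point isn't the specific heuristics — it's that the model (or its harness) actively rewrites its own context rather than passively overflowing it.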
The human brain's prediction loop is Bayesian in nature.
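For readers who haven't seen it spelled out, the core of a Bayesian update is just: posterior ∝ prior × likelihood, renormalized. A minimal sketch with made-up numbers (the hypotheses and probabilities are illustrative only):

```python
def bayes_update(prior, likelihood):
    """One Bayesian update step: posterior(h) ∝ prior(h) * P(evidence | h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant P(evidence)
    return {h: p / z for h, p in unnorm.items()}

# Toy example: prior belief it will rain, updated on seeing dark clouds.
prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.9, "dry": 0.2}  # P(dark clouds | hypothesis)
posterior = bayes_update(prior, likelihood)
```

The "loop" part is that the posterior from one step becomes the prior for the next, as new evidence keeps arriving.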