Why can't the agent harness dynamically decide whether outputs should go into the context or not? It could check with an LLM to determine whether the verbatim output seems important and, if not, store the full output locally and replace it in the prompt with a brief summary and a unique ID. Then it could expose a tool so the full output can be retrieved later if necessary. That's roughly how humans do it: you scroll through your terminal, make quick decisions about which parts you can ignore, and maybe come back later when you realize "oh, I should probably read that whole stack trace".
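A minimal sketch of the stash-and-retrieve half of this idea (all names here, like `OutputStore` and `read_output`, are hypothetical, not from any real harness):

```python
import uuid

# Hypothetical sketch: keep full tool output out of the context window
# while keeping it retrievable by ID.
class OutputStore:
    def __init__(self):
        self._outputs = {}  # id -> full output text

    def stash(self, output: str, summary: str) -> str:
        """Store the full output locally; return a short stub for the prompt."""
        output_id = uuid.uuid4().hex[:8]
        self._outputs[output_id] = output
        return (f"[output {output_id}: {summary} -- "
                f"call read_output('{output_id}') for the full text]")

    def read_output(self, output_id: str) -> str:
        """The tool exposed to the model for retrieving stashed output later."""
        return self._outputs[output_id]
```

The stub stands in for the verbatim output in the prompt, and `read_output` would be registered as an ordinary tool so the model can pull the full text back in when it decides it matters.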
It wouldn't even need to send the full output to make the decision; it could just send something like "npm run build produced 500 lines of output and succeeded, do we need to read the output?" and, based on the rest of the conversation, the LLM could answer yes or no.
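The cheap relevance check could look something like this (a sketch; `ask_llm` is a stand-in for whatever LLM call the harness already makes):

```python
# Hypothetical sketch: decide whether to include full output using only
# metadata (command, line count, exit status) plus recent conversation,
# never the output text itself.
def should_include_output(command: str, line_count: int, succeeded: bool,
                          recent_context: str, ask_llm) -> bool:
    status = "succeeded" if succeeded else "failed"
    question = (
        f"`{command}` produced {line_count} lines of output and {status}. "
        "Given the conversation so far, does the agent need to read the "
        "full output? Answer yes or no."
    )
    answer = ask_llm(recent_context + "\n\n" + question)
    return answer.strip().lower().startswith("yes")
```

Since the question is a couple of sentences regardless of how large the output was, the check stays cheap even for very long outputs.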
Isn't that what subagents do to a certain degree?