I'm suspicious of their results with regards to tool usage.
It's unsurprising that round-tripping long content through an LLM results in corruption. Frequent LLM users already know not to do that.
They claim that tool use didn't help, which surprised me... but they also said:
> To test this, we implemented a basic agentic harness (Yao et al., 2022) with file reading, writing, and code execution tools (Appendix M). We note this is not an optimized state-of-the-art agent system; future work could explore more sophisticated harnesses.
And yeah, their basic harness consists of read_file() and write_file() - that's just round-tripping with an extra step!
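To make the round-trip problem concrete, a harness like that presumably boils down to something like this (my own sketch, not the paper's actual code), where every edit requires the model to regenerate the entire document as a tool argument:

```python
# Hypothetical sketch of a read_file/write_file harness, NOT the paper's code.
# The point: write_file takes the *entire* new document as a model-generated
# string, so every edit round-trips the whole file through the LLM.

def read_file(path: str) -> str:
    # Whole document goes into the model's context.
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def write_file(path: str, content: str) -> str:
    # Whole document comes back out of the model, token by token.
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"wrote {len(content)} characters to {path}"
```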
The modern coding agent harnesses put a LOT of work into the design of their tools for editing files. My favorite current example of that is the Claude edit suite described here: https://platform.claude.com/docs/en/agents-and-tools/tool-us...
The str_replace and insert commands are essential for avoiding round-trip risky edits of the whole file.
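For illustration, a str_replace-style tool works roughly like the sketch below (my approximation, not Anthropic's implementation): the model only emits the old and new snippets, and the harness applies the change locally, so the rest of the document never passes back through the model.

```python
# Rough approximation of a str_replace-style edit tool (not Anthropic's code).
# The model supplies only old_str and new_str; the untouched parts of the file
# never flow through the model, so they can't be silently corrupted.

def str_replace(path: str, old_str: str, new_str: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    count = text.count(old_str)
    if count == 0:
        return "error: old_str not found"
    if count > 1:
        return f"error: old_str matched {count} times; provide more context"
    with open(path, "w", encoding="utf-8") as f:
        f.write(text.replace(old_str, new_str, 1))
    return "edit applied"
```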
They do at least provide a run_python() tool, so it's possible the better models figured out how to run string replacement using that. I'd like to see their system prompt and if it encouraged Python-based manipulation over reading and then writing the file.
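If a model did figure that out, the code it passes to run_python() might look something like this (purely illustrative; the file name and strings are invented):

```python
# Illustrative guess at how a model might use a run_python tool to edit a
# document in place instead of rewriting the whole thing. File name and the
# snippets being swapped are made up for the example.
path = "report.md"
with open(path, "r", encoding="utf-8") as f:
    text = f.read()

# Targeted change: only these two snippets come from the model.
text = text.replace("Q3 revenue grew 12%", "Q3 revenue grew 14%", 1)

with open(path, "w", encoding="utf-8") as f:
    f.write(text)
```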
Update: found that harness code here https://github.com/microsoft/delegate52/blob/main/model_agen...
The relevant prompt fragment is:
> You can approach the task in whatever way you find most effective: programmatically or directly by writing files
As with so many papers like this, the results of the paper reflect more on the design of the harness that the paper's authors used than on the models themselves. I'm confident an experienced AI engineer / prompt engineer / pick your preferred title could get better results on this test by iterating on the harness itself.
It's worth noting that Claude Code itself doesn't use the `insert` tool. (It also uses a custom edit tool, not the suite's predefined str_replace.)
Also, as someone who has been developing agentic code tools since before Claude Code, I'm skeptical that str_replace provides an accuracy improvement over a full rewrite.
Back in the day, when SOTA models would do lazy coding like `// ... rest of the code ...`, a full rewrite wasn't easy. Search/replace was fast and efficient and avoided the lazy coding, but it came with a slight accuracy drop.
Today that accuracy drop might be minimal or absent, but I'm not sure it would lead to improvements like preventing document corruption.
Only sort of related, but I would love to see a harness with ed as the primary file editing/reading tool. Half the bash Claude runs seems to be sed anyway; having some state persist in ed would seem to help.
What does one do when a full editor consumes too much bandwidth^H tokens? Use ed, the standard editor!
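For what it's worth, a harness could drive ed non-interactively; here's a minimal sketch (the file name and the edit are made up), where only the ed commands come from the model, not the file contents:

```python
import subprocess

# Minimal sketch of driving ed(1) from a harness; the file name and the
# substitution are invented for illustration.
ed_script = (
    "/def main/\n"             # jump to the first line matching the pattern
    "s/old_name/new_name/\n"   # substitute on that line only
    "w\n"                      # write the file back to disk
    "q\n"                      # quit ed
)
subprocess.run(["ed", "-s", "script.py"], input=ed_script, text=True, check=True)
```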
I think your argument makes sense, but my understanding is that adding the document to the context and spitting it back out is prone to corruption in any scenario.
I think this is closely related to other sources saying that even if you have a huge context, the attention mechanism itself doesn't reliably reference back across it, so any task involving bigger contexts is prone to errors.
Because I have some preconceptions about this, maybe I'm just assuming that's what they were saying. Am I missing something?
Any rando can publish research nowadays. It means nothing. Just like "X country published N research papers last year". It is noise. In a world where it was required to attach age, experience level, and country of origin to every comment, research paper, or post on the internet, it would shatter the conviction we mistakenly have towards the information we receive.
This team is inexperienced and it shows.
The noise to signal ratio will get worse, even in "academia". Brace yourselves. The kids are growing up in this new world.
Yeah, this is a bit of a strawman of an LLM task.
On editing tasks, one should only allow programmatic editing commands; the text shouldn't flow through the LLM at all. The LLM should analyze the text and emit commands to achieve a feedback-directed goal.
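Sketching what that could mean in practice (none of this is from the paper; the command format is invented): the model emits small structured edit commands, the harness applies them, and the outcome is reported back so the model can verify or retry.

```python
# Rough sketch of "emit commands, don't emit text": the model returns small
# structured edits, the harness applies them and returns the result as
# feedback. Command names and fields are invented for illustration.
import json

def apply_edit(path: str, command: dict) -> str:
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    if command["op"] == "replace":
        if text.count(command["find"]) != 1:
            return "rejected: pattern must match exactly once"
        text = text.replace(command["find"], command["with"], 1)
    elif command["op"] == "append":
        text += command["text"]
    else:
        return f"rejected: unknown op {command['op']!r}"
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return "applied"

# Example of what the model might emit as a tool-call argument.
result = apply_edit("notes.md", json.loads('{"op": "replace", "find": "teh", "with": "the"}'))
print(result)  # fed back to the model so it can verify or retry
```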
People love to interpret the results in the most negative way possible because it's a threat to their occupation and identity. I refer to HN specifically.
The fact of the matter is, if you want to edit a document by reading the document and then regurgitating the entire document with said edits... a human will do worse than a 25% degradation. It's possible for a human to achieve 0% degradation, but the human will have to ingest the document hundreds of times to reach a state called "memorization". The equivalent in an LLM is called training. If you train a document into an LLM, you can get parity with the memorized human edit in this case.
But the above is irrelevant. The point is LLMs have certain similarities with humans. You need to design a harness such that an LLM edits a document the same way a human would: Search and surgical edits. All coding agents edit this way, so this paper isn't relevant.
The incomprehensible methodology, whether due to resource constraints or straight up for simplicity's sake, makes these papers worthless, unfortunately.
It could also be that, much like most large orgs now, you've made LLMs your entire personality, so you don't see the inherent bias.
Most LLM users who are not touching code are certainly not going to be using a harness. They're going to take all the documents, slam all those tokens into the context window, see they have only used 500k out of their 1M tokens and say "summarize".
I agree with most of what you wrote except for this:
>Frequent LLM users already know not to do that.
And I think that’s the biggest problem. Amidst the current push to use LLMs across orgs and groups, there is a large group (maybe even a majority) of people who are using them every day but who have never approached anything as technical as a “harness” before, let alone an entire setup.
For them the behavior mentioned here is a major issue.