I typically tell my agents to treat document writing only as a final "rendering" pass. LLMs are so good at taking sparse knowledge and compiling it that I prefer to store knowledge as composable ideas/facts.
What has worked well in practice is giving the agent a directory and telling it to make independent markdown files for the facts/findings it locates, with each file having front-matter for easy searchability.
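A minimal sketch of that layout, assuming one markdown file per finding with a YAML-style front-matter block (the helper names and tag scheme here are hypothetical, not from the original comment):

```python
from pathlib import Path

def write_fact(root: Path, slug: str, tags: list[str], body: str) -> Path:
    """Store one finding as an independent markdown file with front-matter."""
    front = "\n".join(["---", f"slug: {slug}", "tags: " + ", ".join(tags), "---"])
    path = root / f"{slug}.md"
    path.write_text(front + "\n\n" + body + "\n")
    return path

def find_facts(root: Path, tag: str) -> list[Path]:
    """Cheap search: scan only each file's front-matter block, not the body."""
    hits = []
    for p in sorted(root.glob("*.md")):
        parts = p.read_text().split("---", 2)
        if len(parts) >= 3 and tag in parts[1]:
            hits.append(p)
    return hits
```

The point of the front-matter is that search stays cheap and structured: the agent (or a human) can filter by tag without re-reading every finding's body.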
This de-complects most tasks from "research AND store iteratively in a final document format" into two more cohesive tasks: "research a set of facts and findings which may be helpful for a document" and "assemble the document".
It's only a partial mitigation, but I find it leads to more versatile re-use of findings, the same as if a human were working.
Sounds like a good system. To use the analogy from the other comment, this would be like running an image through JPEG compression twice.
The issue then arises if you're updating the individual research files on a regular basis (or making a long series of commits on a starting code base). Every edit has a chance of doing a drive-by cleanup on nearby lines. Over a long enough timeline, it'll ablate your logic into something featureless, like an image that's been recompressed too many times.