Comments are the ultimate agent coding hack. If you're not using comments, you're doing agent coding wrong.
Why? Agents may or may not read docs. They may or may not use skills or tools. But they will always read comments "in the line of sight" of the task.
You get free long-term agent memory with zero infrastructure.
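To make the "line of sight" idea concrete, here is a minimal sketch (the function, constant, and comment are hypothetical, assuming a Python codebase): a constraint that looks arbitrary in the code is recorded right where an agent editing this function would see it.

```python
# Hypothetical example: the retry cap below looks arbitrary, so the comment
# records the constraint an agent (or a colleague) would otherwise rediscover
# the hard way.

MAX_RETRIES = 3  # NOTE: upstream rate limiter bans clients after 5 rapid
                 # retries; do not raise this without coordinating with infra.

def fetch_with_retry(fetch, retries=MAX_RETRIES):
    """Call `fetch()` up to `retries` times, re-raising the last error."""
    last_err = None
    for _ in range(retries):
        try:
            return fetch()
        except ConnectionError as err:
            last_err = err
    raise last_err
```

An agent asked to "make fetching more robust" will read the `NOTE:` before touching `MAX_RETRIES`, with no docs lookup or tool call required; that is the whole mechanism.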
This isn't just great advice ⸻ it's terrific advice. I'd love to delve a little deeper.
Experience doesn't leave me with any confidence that this long-term memory will stay useful for long. Our agentic codebases are only a few months old; wait a few years for those comments to go stale, then see how much they help.
Comments are great for developers. I like having as much of the design as possible directly in the repo. If not in the code, then in a markdown file in the repo.
> “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”
That's revealing waaaay more than the agent needs to know.
Hmm, I'm not sure you're getting the parent's comment.
I think a big question is whether you want your agent to know the reasons behind the guidelines you issue, or whether you want it to simply follow those guidelines. In particular, giving an agent the argument behind your orders might make it think it can question those arguments, and so not follow the orders.
> If you're not using comments, you're doing agent coding wrong.
Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and then comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to write no comments at all (and making sure they can actually skip existing ones, looking at you Gemini), you're doing agent coding wrong.
Agents and I apparently have a whole lot in common.
Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.
This only gets worse when the LLM somehow retains all that information better than certain human colleagues do, rewarding the additional effort.