> This is especially important if the primary role of engineers is shifting from writing to reading.
This was always the primary role. The only people who ever said it was about writing just wanted an easy sales pitch aimed at everyone else.
Literate programming failed to take off because with that much prose it inevitably misrepresents the actual code. Most normal comments are bad enough.
It's hard to maintain any writing that doesn't actually change the result. You can't "test" comments. The author doesn't even need to know why the code works to write comments that are convincing at first glance. If we want to read lies influenced by office politics, we already have the rest of the docs.
> You can't "test" comments.
I'm thinking that we're approaching a world where you can both test for comments and test the comments themselves.
I don't buy that. Writing is getting a bad rap from all this. Writing _is_ a form of more intense reading. Reading on steroids, as they say. If reading is considered good, writing should be considered better.
You're right that you can't test comments, but you can test the code they describe. That's what reproducibility bundles do in scientific computing: the prose says "we filtered variants with MAF < 0.01", and the bundle includes the exact shell command, environment, and checksums so anyone can verify the prose matches reality. The prose becomes a testable claim rather than a decorative comment. That said, I agree the failure mode of literate programming is prose that drifts from code. The question is whether agents reduce that drift enough to change the calculus.
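A minimal sketch of what "the prose becomes a testable claim" could look like. The manifest format, field names, and `bcftools` invocation are all illustrative, not any real bundle standard:

```python
import hashlib

# Hypothetical bundle manifest: the prose claim plus the artifacts
# that make it checkable. Format and names are illustrative only.
manifest = {
    "claim": "we filtered variants with MAF < 0.01",
    "command": "bcftools view -q 0.01:minor input.vcf.gz -o filtered.vcf.gz",
    "sha256": {"filtered.vcf.gz": "ab12..."},  # placeholder digest
}

def verify_checksums(manifest, read_bytes):
    """Return True iff every recorded artifact hashes to its recorded digest.

    `read_bytes(path)` is injected so the check works against any storage.
    """
    for path, digest in manifest["sha256"].items():
        actual = hashlib.sha256(read_bytes(path)).hexdigest()
        if actual != digest:
            return False
    return True
```

If the checksum check fails, the prose is now a lie you can detect mechanically, which is the whole point.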
Literate programming failed because people treated long essays as the source of truth instead of testable artifacts.
Make prose runnable and minimal:

- focus the narrative on intent and invariants
- embed tiny examples as doctests or runnable notebooks
- enforce them in CI so documentation regressions break the build
- gate agent-edited changes behind execution and property tests (Hypothesis, plus a CI job that runs `pytest --doctest-modules` and executes the notebooks)

Agents produce confident-sounding text that will quietly break your API if trusted blindly.
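A tiny sketch of the doctest idea, using the MAF-filtering example from upthread; the function name and data shape are made up for illustration:

```python
def filter_maf(variants, threshold=0.01):
    """Drop variants whose minor-allele frequency is below `threshold`.

    The docstring doubles as a testable claim: if the behavior drifts,
    `pytest --doctest-modules` fails the build instead of letting the
    prose quietly go stale.

    >>> filter_maf([("rs1", 0.005), ("rs2", 0.20)])
    [('rs2', 0.2)]
    """
    return [(name, maf) for name, maf in variants if maf >= threshold]
```

The example in the docstring is the documentation; CI executes it, so it can't drift the way a plain comment does.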