The "human in the loop at key checkpoints" pattern has been the most practically useful for us. We found that giving the agent full autonomy end-to-end produces subtly broken code that passes tests but violates implicit invariants you never thought to write down. Short loops with a human sanity check at decision forks catch that class of failure early.
The thing I keep wrestling with is where exactly to place those checkpoints. Too frequent and you've just built a slow pair programmer. Too infrequent and you're doing expensive archaeology to figure out where it went sideways. We've landed on "before any irreversible action" as a useful heuristic, but that requires the agent to have some model of what's irreversible, which is its own can of worms.
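To make the heuristic concrete, here's a minimal sketch of what a "pause before irreversible actions" gate might look like. Everything in it is hypothetical: the action names, the hand-maintained denylist standing in for a real irreversibility model, and the `confirm` callback.

```python
# Hypothetical checkpoint gate: pause for a human before any action
# the agent's (crude) irreversibility model flags.

# A hand-maintained denylist is the simplest possible irreversibility
# model; a real system would need something richer than string matching.
IRREVERSIBLE = {"delete_branch", "force_push", "drop_table", "send_email"}

def requires_checkpoint(action: str) -> bool:
    """Return True if this action should block on human confirmation."""
    return action in IRREVERSIBLE

def run(action: str, execute, confirm=input) -> str:
    """Run `execute`, but gate irreversible actions on `confirm`."""
    if requires_checkpoint(action):
        answer = confirm(f"Agent wants to run '{action}'. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            return "skipped"
    return execute()
```

The interesting part isn't the gate itself, it's keeping the denylist honest as the toolset grows, which is exactly the "model of what's irreversible" problem.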
Has anyone found a principled way to communicate implicit codebase conventions to an agent beyond just dumping a CLAUDE.md or similar file? We've tried encoding constraints as linter rules but that only catches surface stuff, not architectural intent.
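For what it's worth, some architectural intent can be pushed into checks that are deeper than surface lint rules but still mechanical. A sketch of one such rule, "only the storage layer may import the raw database driver," written as an AST walk (the module names `rawdb` and `storage/` and the rule itself are made up for illustration):

```python
# Hypothetical architectural check: flag imports of a low-level driver
# module from any file outside the package allowed to touch it.
import ast

FORBIDDEN = "rawdb"          # hypothetical low-level driver module
ALLOWED_PREFIX = "storage/"  # only files under storage/ may import it

def violations(path: str, source: str) -> list[int]:
    """Return line numbers where `path` illegally imports FORBIDDEN."""
    if path.startswith(ALLOWED_PREFIX):
        return []  # the storage layer is exempt
    out = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            if any(a.name.split(".")[0] == FORBIDDEN for a in node.names):
                out.append(node.lineno)
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] == FORBIDDEN:
                out.append(node.lineno)
    return out
```

This still only captures conventions you can phrase as structural predicates; the tacit "we'd never design it that way" knowledge remains out of reach, which I suspect is the real question here.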