My half-baked solution is to require colocation of the "why" for every decision and doc the LLM writes, ideally in my exact words. Similarly, every so often asking the LLM why it's doing something reveals a mismatch between your intent and its point of view.