Would be interesting to turn this into code (or an external model call) that can check any writing, so instead of just handing it to an LLM and hoping the LLM obeys, a set of checks has to pass before the LLM's writing is even shown to a human.
Kind of like enforcing linting or pre-commit checks, but for prose.
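A minimal sketch of what that gate could look like, assuming a hypothetical `gated_generate` wrapper and a few illustrative checks (the specific rules, names, and retry logic here are all made up for the example):

```python
import re
from typing import Callable

# A check takes the draft text and returns a list of problems (empty = pass).
Check = Callable[[str], list[str]]

def no_double_periods(text: str) -> list[str]:
    return ["double period found"] if ".." in text else []

def no_filler_phrases(text: str) -> list[str]:
    banned = ["delve", "it's worth noting", "in conclusion"]
    return [f"banned phrase: {p!r}" for p in banned if p in text.lower()]

def max_sentence_length(limit: int = 40) -> Check:
    def check(text: str) -> list[str]:
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return [
            f"sentence with {len(s.split())} words exceeds {limit}"
            for s in sentences
            if len(s.split()) > limit
        ]
    return check

CHECKS: list[Check] = [no_double_periods, no_filler_phrases, max_sentence_length(40)]

def gated_generate(generate: Callable[[], str], max_attempts: int = 3) -> str:
    """Regenerate until every check passes; only then show the text to a human."""
    for _ in range(max_attempts):
        draft = generate()
        problems = [p for check in CHECKS for p in check(draft)]
        if not problems:
            return draft
        # A real pipeline might feed `problems` back into the next prompt.
    raise RuntimeError(f"draft failed checks after {max_attempts} attempts: {problems}")
```

The nice part is that the checks are deterministic code, so they run the same way every time regardless of whether the LLM "listened" to the instructions, exactly like a failing lint rule blocking a commit.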