Yeah, building acceptance criteria first is the way. An LLM is a goal machine: it repeatedly picks the most probable next token to move toward whatever goal it's been given. That's all it does. So giving it well-defined, granular goals and guardrails gets the best results.