Right, but that's the point -- prompting an LLM still requires 'thinking about thinking' in the Papert sense. While you can talk to it in 'natural language', that natural language still needs to be _precise_ to get the exact result you want. When it fails, you have to refine your language until it doesn't. So prompts = high-level programming.
You can't reason your way all the way to the right prompt, because LLMs are probabilistic. Re-prompting is just retrying until you hit the jackpot - refining only increases the probability of getting what you want.
And when you do make them deterministic (temperature set to 0), LLMs (even new ones) tend to get stuck in repetitive loops on longer output streams. The only way to guarantee the same output twice is to use the same temperature and the same seed for the RNG, and most frontier models don't expose a way to set that seed.
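For what it's worth, here's a minimal sketch of the mechanism being argued about: temperature-scaled sampling over toy logits. The logits and the sample_token helper are made up for illustration, not any real model's decoder; it just shows why temperature 0 collapses to greedy argmax and why reproducibility needs both the temperature and the seed.

```python
# Minimal sketch of temperature-scaled sampling over toy logits -- not any
# particular model's decoder, just the mechanism under discussion.
import numpy as np

def sample_token(logits, temperature, rng):
    """Pick a token index from logits at the given temperature."""
    if temperature == 0:
        # Temperature 0 degenerates to greedy decoding: always the argmax.
        return int(np.argmax(logits))
    # Softmax over temperature-scaled logits, then sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3, -1.0])  # toy scores for a 4-token vocab

# Same temperature + same seed -> same token every time.
print(sample_token(logits, 0.8, np.random.default_rng(42)))
print(sample_token(logits, 0.8, np.random.default_rng(42)))

# Same temperature, different (or unknown) seed -> the token can differ,
# which is why re-prompting a hosted model is effectively a retry.
print(sample_token(logits, 0.8, np.random.default_rng(7)))
```

With a hosted model you usually control the temperature but not the seed, so the last case is the one you actually live with.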