Actually, I do use compiled languages for this reason. Even Opus 4.7 and GPT-5.5 will leave unassigned variables lying around in Python code of sufficient size. With sufficient testing you'd exercise all the paths, and I imagine a good prompt would ensure tests with coverage get added so that actually happens. However, I don't have such a system yet, so using Go/Rust helps a lot: the compile phase catches these correctness issues up front.
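To illustrate what I mean (a toy sketch, not from any real project): Rust refuses to compile a function where a binding might be read before every path assigns it, so the "unassigned variable" class of bug dies at build time instead of at runtime.

```rust
// If any branch below were missing, using `label` would be a
// compile error (E0381: possibly-uninitialized), not a runtime bug.
fn classify(n: i32) -> &'static str {
    let label: &'static str; // declared without a value
    if n > 0 {
        label = "positive";
    } else if n < 0 {
        label = "negative";
    } else {
        label = "zero";
    }
    label
}

fn main() {
    assert_eq!(classify(3), "positive");
    assert_eq!(classify(0), "zero");
    println!("{}", classify(-5));
}
```

Delete the `else` arm and `rustc` rejects the whole program; the equivalent Python only fails on the input that takes the bad path.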
My other problem with most of the other ecosystems (ts/npm, python/uv, rust/cargo) is that they all have build-time scripts, controlled by others, that execute automatically. This is a real problem because the LLM will just install things and proceed to send your home directory through a juicer. I feel a bit paranoid doing this, but I have a script that launches a podman container with just the source directory and a binary directory mounted (for caching), and it compiles everything in there.
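The shape of that script is roughly the following. This is a hedged sketch, not my actual script: the image, paths, and the `--network=none` isolation are assumptions you'd adapt to your own setup.

```shell
#!/bin/sh
set -eu

SRC="$PWD"                        # project source: the only writable host path
CACHE="$HOME/.cache/agent-build"  # persisted build cache across runs
mkdir -p "$CACHE"

# --network=none keeps build scripts from phoning home; :Z relabels
# the mounts for SELinux hosts (drop it if you don't need it).
podman run --rm -it \
  --network=none \
  -v "$SRC":/work:Z \
  -v "$CACHE":/cache:Z \
  -w /work \
  -e CARGO_TARGET_DIR=/cache/target \
  docker.io/library/rust:latest \
  cargo build --locked
```

Even if a build.rs or postinstall script goes rogue, the blast radius is one throwaway container that can see the checkout and the cache, nothing else.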
I know there's some sequence of steps I could take to protect myself, but if the LLM accidentally uses pnpm to run dev build scripts when I had the right config set for npm, or whatever, I know I'm screwed. So now I do all these shenanigans with Rust (to the extent that I sometimes vendor old deps). The ideal language to me now is one with very few of these footguns and sand traps, and a tight iteration loop.
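For reference, the vendoring part is just `cargo vendor`, which copies every dependency into a local `vendor/` directory and prints a config fragment like the one below for `.cargo/config.toml`, so builds never reach out to crates.io:

```toml
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

Once that's checked in, the dependency set is frozen: nothing new gets pulled in unless I update the vendor directory myself.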
And you can spend your effort on features and architectural issues rather than smaller-scope bugs. My experience is that Rust lets me focus on features as long as I don't give the AI free rein. Architecture matters for correctness bugs, because some solutions are inherently more prone than others to the AI becoming confused along the way.
The more effort I spend planning architecture with the AI, the fewer runtime bugs I need to investigate after it does the implementation.