> Why do they often make completely unintuitive decisions
Most likely because you haven't constrained their behavior in your prompt. You're assuming the model "understands" that you want best practices followed. You have to state that explicitly, and name the specific practices you want it to use.
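A minimal sketch of what "constraining behavior in the prompt" might look like in practice. The helper name, constraint wording, and task text here are all hypothetical, purely to illustrate the idea of turning implicit expectations into explicit rules:

```python
# Hypothetical sketch: make implicit expectations explicit in the prompt.
# The constraints and wording below are illustrative, not from any real system.

CONSTRAINTS = [
    "Never mock domain objects in tests; construct real instances instead.",
    "Follow the project's existing naming conventions.",
    "Do not introduce new dependencies without asking first.",
]

def build_system_prompt(task: str, constraints: list[str]) -> str:
    """Prepend explicit behavioral constraints to a task description."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"You must follow these rules:\n{rules}\n\nTask: {task}"

prompt = build_system_prompt("Write tests for the Order class.", CONSTRAINTS)
print(prompt)
```

The point isn't this particular helper; it's that each rule a senior developer would enforce in review gets written down up front instead of being assumed.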
Senior developers know what behavior to constrain.
If incorrect LLM output is a prompt issue, then demand for experienced developers will remain, and may actually increase over time.
They already fail to consistently follow very simple, concrete instructions like "Please do not ever mock this object; always construct it properly in your tests", so I'm not sure how they're going to adhere to vaguer, more conceptual architectural paradigms. This is a problem with generative AI in general: image generation has similar limitations.