We're talking about the capabilities of the technology. I would not even think about using LLM output in life-critical roles, but we're not talking only about such scenarios. I'm addressing your claim that this
> blows up in damn well near every professionals face
If you know how to use the tools and know their limitations, you can generate vast quantities of useful code very quickly. If you can't manage this, it's PEBKAC. You're saying you don't trust these to make more than minor changes, which might make sense if your code could kill people, but otherwise you're being overcautious or severely underestimating what these can do.
Useful is relative. Silly little toy apps? Sure, go full auto; heck, I personally do. But these models will fail catastrophically, or produce failures that never should have happened in the first place.
So in professional environments, going full auto is negligent to the point that I hope it becomes a fireable offense. It's like trusting lane-assist and adaptive cruise control in a car to handle fully autonomous driving. It might even seem like it can, until the lead car disappears, until it hits a T-intersection. You get me? Modern LLMs are lane assist and adaptive cruise, not full self-driving. They free up some of your headspace and attention, but not all of it, in fact not even most of it.