What I've read is that even with all the meticulous planning, the author still needed to intervene. Not at the end but in the middle, otherwise it would keep building out something wrong that's even harder to fix once it's done. It'll cost even more tokens. It's a net negative.
You might say a junior might do the same thing, but I'm not worried about that: at least the junior learned something while doing it. They could do better next time. They know the code and can change it from the point where it broke. It's a net positive.
Unfortunately, you could argue that the model provider has also learned something, i.e. the interaction can be used as additional training data for subsequent models.
This comment is the first truly humane one I've read regarding this whole AI fiasco.
When the user intervenes, the model providers will look at those signals and tweak the next version of the model so that the user doesn't need to intervene next time.