I sometimes wonder if I'm in a different universe to other devs. Anytime AI coding is brought up, comments are overwhelmingly negative and often point out correctness, quality, slop, etc.
There's also the refrain that more stuff is being delivered, but it's not right: full of holes and papercuts.
I'm 22 years into development and couldn't imagine going back to non-AI programming now. Not only has it increased my velocity by an order of magnitude, it's also unlocked side projects I would never have started before, because I knew I didn't have the time.
It's just like any tool, though, and I've found enormous differences in outcome depending on how you drive it. Launching into 'build this' and expecting it to output the code you would have written by hand will not get you there, and I feel this is where most developers stall out.
Getting the right outcomes takes a lot of harness setup, the same as if you wanted to hire new devs and get them productive without pairing with them. You would set up linting, good test coverage and testing approaches, and thorough documentation about what your project is, the domain, the architecture, etc. That at least gets you good code consistency for the most part.
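As a minimal sketch of what "harness" means here: a single gate script the agent must pass before its work is accepted, exactly like a CI gate for a new hire's PR. The commands below are placeholders (the `true` calls stand in for whatever linter and test runner your project actually uses); the structure, not the tooling, is the point.

```shell
#!/bin/sh
# Hypothetical agent-harness gate. Every change the agent makes has to
# survive the same checks a human contributor's change would.
set -e  # abort on the first failing check

run_check() {
  name="$1"; shift
  echo "==> $name"
  "$@" || { echo "FAILED: $name"; exit 1; }
}

# Placeholders (always succeed) standing in for real commands such as
# `npm run lint` and `npm test` in your project:
run_check "lint" true
run_check "tests" true

echo "all checks passed"
```

Wired into the agent's loop (or a pre-commit hook), this turns "the agent wrote something" into "the agent wrote something that lints and passes the tests", which is most of the consistency battle.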
For how to build, https://github.com/bmad-code-org/BMAD-METHOD is really good, and I've onboarded a few SaaS projects into it now. Tech speccing and multiple cycles of elicitation are what deal with all the edge cases you'd normally only encounter during coding. It does front-load all of the planning brainwork, but condensing that into a couple of days of solid speccing is far more productive than spreading it out over months.
It's taken a while to get to this point, and most agents aren't good for substantial work out of the box. Most of the time what the agent does will be a product of its environment.