The thing is, the code quality is still ultimately up to you.
Nothing stopping you from iterating with the agent till the code is the exact same quality that you yourself would write
"... iterating with the agent till the code is the exact same quality that you yourself would write"
I don't want my code quality, I want AGI code quality - that's what I was promised and jetpacks and flying cars too!
This is precisely why these types of articles don't make any sense to me, and why they strike me as case studies in human laziness. If you want good output, you'll review the output and iterate. If you want good foundations, you'll write them yourself, and later those foundations will prevent, to a great degree, bad code from being written by the LLM.
These articles frustrate me greatly. That said, the author's point about token cost is real, and a genuine risk.
Nothing is stopping you... but that's slower than just writing it yourself to begin with. AI productivity gains are a myth.
>Nothing stopping you from iterating with the agent till the code is the exact same quality that you yourself would write
Yeah, but in my experience it takes the same amount of time or longer to cajole the AI into getting it there. I'd rather write the code myself and know how it works than insert an LLM as a middleman, especially when it isn't proving to be any faster.
IME, it's faster and less frustrating to just write the code myself, if the goal is to get code to my quality standards.