I've been through a few cycles of using LLMs, and my current usage does scratch the itch. It doesn't feel like I've lost anything. The trick is that I'm still programming: I name the classes and functions, I define the directory structure, I define the algorithms. By the time I'm prompting an LLM, I'm describing how the code will look, and it becomes a supercharged autocomplete.
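For a rough illustration of what I mean (the names here are made up, not from any real project), the skeleton already exists before the model ever sees it, and the LLM only fills in the bodies:

```python
# I write the structure myself: class names, signatures, and a comment
# spelling out the algorithm. The model just completes the bodies.

from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    line_totals: list[float]


class InvoiceBatcher:
    """Groups invoices by customer and sums their totals."""

    def __init__(self, invoices: list[Invoice]) -> None:
        self.invoices = invoices

    def totals_by_customer(self) -> dict[str, float]:
        # Algorithm I specify up front: single pass, accumulate per customer.
        totals: dict[str, float] = {}
        for inv in self.invoices:
            totals[inv.customer_id] = totals.get(inv.customer_id, 0.0) + sum(inv.line_totals)
        return totals
```

By the time the model is typing, every decision that matters has already been made.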
When I go overboard and just tell it "now I want a form that does X", the result is frustrating and low-quality, and it takes as long to fix as if I'd just done it myself.
YMMV, but from what I've seen, all the "AI made my whole app" hype isn't trustworthy; it's written by people who won't know what problems they've introduced until it's too late. Traditional coding practices still reign supreme. We just get an extra pair of eyes for free.
I also use AI to give me small examples and snippets; used that way, it works okay for me.
However, it still takes something away from me: working with people who use AI to churn out garbage is frustrating, and it sours the whole craft for me.
Serious question: what, then, is the value of using an LLM? Just autocomplete? Being able to use natural language? I'm seriously asking. My experience has been frustrating. I had the whole thing designed, and the LLM gave me diagrams and code samples; I had to tell it three times to go ahead and write the files, and had to convince it that the files didn't exist before it would actually write them. Then, when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.