I care deeply about craft, but:
a) I cannot effectively review more than 2,000 lines of code a day; LLMs can produce far more than that.

b) Even if I accepted my reading-throughput limitation as the cost of being in the loop, reading is not enough to keep cognitive debt in check: my skills will atrophy if I do not participate in the writing ("What I cannot create, I do not understand").
So, to me, it seems like we humans either have to come up with higher (and deterministic) abstractions than code to communicate with LLMs, or resign ourselves to letting the LLM guess what we want from English and then banging on the output to see if it sort of works. This latter state of affairs seems to be the current trend, and I find it absolutely revolting.
I think the distinction is that for experiments and prototypes, the behaviour of the final system is what we are trying to design. We can experiment, see the tradeoffs, and explore the design space before committing to a direction. Then we can sit down and produce the final code to a quality we are happy with. If you are serious about this process, there is no way you are producing thousands of lines of code a day, unless it is trivial boilerplate.
In terms of higher-level abstractions, I agree this is a particularly treacherous rung on the ladder of abstraction. Previous abstractions like compilers or garbage collectors at least had more structure and rules to rely upon. I don't know exactly what this new rung will look like, but I don't think we will rely solely on banging on the output: we will also spot-check the source code, use profilers and other tools to inspect the behaviour of systems, and ask the agent to explain its architectural decisions. Whatever form it takes, I believe people who care will still find ways to do good work.