What the author is missing is that in his decision to limit the use of LLMs in his work, he omits the part where he *can*. He is resourceful and accomplished enough to do the work he desires with no LLMs, but most people actually can't. There are whole swaths of software engineers who don't write tests because "it slows them down," when in fact they have never learned how to write testable code. And when thrust into an environment where they need to learn quickly, they don't really have a way not to use AI; if they don't, someone else will, and take all the credit.
Learning how software is built is hard, gruelling work, and you need to constantly invest in yourself. The trouble is there's no time left to "go back to basics and learn FP," for example, because you also need to keep up with all the new LLM developments on top of that.
It is easy for those of us who already have the foundational knowledge to step back, take the wheel, and try to do it ourselves, but plenty of people simply don't have that option.
And I expect this trend to deepen and broaden. There will definitely be a lot more “witches” than actual engineers.
People learn what they need to learn to be successful (if they want to be successful). The newer generation of coders will learn exactly what it takes to be better than their peers, and that will still include building rock-solid, highly performant software to beat the competitors; otherwise they'll lose their jobs and someone better will take their place.
If they do it entirely using AI to code, and the end output is good enough, they'll learn all the right skills to do this.
Humans always think everything is sliding into doom, and inevitably, it doesn't.