There are definitely salient points in the article, and I appreciate its value in urging us to stop and consider the ramifications of what this technology might deliver. I think the analogy to cars, and their unintended consequences across all manner of society, is particularly apt.
That said, the final point is one I take issue with:
> For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
To me, there is no intrinsic value in solving this problem other than the rote problem-solving reps that make you a better problem solver. There isn't anything fundamental about the protocol they've never heard of that operates the lights. In the best case, it's similar to many other well-thought-out protocols; in the worst case, it's something slapped together.
There are certainly deeper, more fundamental concepts to learn, like the congestion control algorithms in TCP. Most of software, though, is just learning another engineer's preferences for how they chose to build something.
I poke at this because if an exercise only yields the benefit of another rep of solving a problem, then it carries less weight with me. I personally don't think there will be fewer problems to solve with this technology, just a different sort at a different layer of the stack.