Hi HN! OP here. Thanks for reading and commenting -- (and @swah for posting!). It's unsettling to hit the HN front page, and even more so with an article I hastily wrote. I guess you never know what's going to hit a nerve.
Some context: I'm basically trying to make sense of the tidal wave that's engulfing software development. Over the last 2-3 weeks I've realized that LLMs will start writing most code very soon (I could be wrong, though!). This article is just me making sense of it, not trying to convince anybody of anything (except, perhaps, to give the whole thing some thought). Most of the "discarded" objections I presented in the list were things I espoused myself over the past year. I should have clarified that in the article.
I (barely) understand that LLMs are not a programming language. My point was that we could still think of them as a "higher-level programming language", despite them 1) not being programming languages; 2) being wildly nondeterministic; 3) jumping levels, in that they can help you direct them. This way of looking at the phenomenon of LLMs is an attempt to see whether previous shifts in programming can at least partially explain the dynamics we are seeing unfold so quickly (to find, in Ray Dalio's words, "another kind of those").
I am stepping into this world of LLM code generation with complicated feelings. I'm not an AI enthusiast, at least not yet. I love writing code by hand and I am proud of my hand-written open source libraries. But I am also starting to experience the possibilities of working on a higher level of programming and being able to do much more in breadth and depth.
I fixed an important typo - here I meant: "Economically, only quality is indisputable as a goal".
Responding to a few interesting points:
@manuelabeledo: during 2025 I've been building a programming substrate called cell (think language + environment) that attempts to be both very compact and very expressive. Its goal is to massively reduce complexity, making general-purpose code more understandable (I know this is laughably ambitious, and my ability to pull off something like that is desperately limited). But because of the LLM tsunami, I'm reconsidering the role of cell (or of any other successful substrate): even if we achieve the goal, how will it interact with a world where people mostly write and validate code through natural language prompts? I never meant to say that natural language would itself be this substrate, or that the combination of LLMs and natural language could be: I still see a programming language behind all of this. Apologies for the confusion.
@heikkilevanto & @matheus-rr: Mario Zechner has a very interesting article where he deals with this problem (https://mariozechner.at/posts/2025-06-02-prompts-are-code/#t...). He's exploring how structured, sequential prompts can achieve repeatable results from LLMs, which you still have to verify. I'm experimenting with the same, though I'm just getting started. The idea I sense here is that perhaps a much tighter process of guiding the LLM, with current models, can get you repeatable and reliable results. I wonder if this is the way things are headed.
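To make the idea concrete, here's a minimal sketch of what such a structured, sequential prompt pipeline might look like. This is my own illustration, not Mario's code: `call_llm` is a placeholder stub standing in for whatever real model API you'd use, and the per-step verifier functions are hypothetical.

```python
# Sketch of a structured, sequential prompt pipeline.
# call_llm is a PLACEHOLDER for a real model API; here it returns canned text.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[model output for: {prompt[:30]}...]"

def run_pipeline(steps):
    """Run prompt steps in order; each step's output must pass its
    verifier before it is fed as context into the next prompt."""
    context = ""
    for prompt, verify in steps:
        output = call_llm(prompt + "\n\nContext:\n" + context)
        if not verify(output):
            raise ValueError(f"Verification failed at step: {prompt!r}")
        context = output  # the next step builds only on verified output
    return context

# Hypothetical two-step flow: design first, then implement.
steps = [
    ("Write a function signature for parsing CSV rows.",
     lambda out: len(out) > 0),
    ("Implement the body of that function.",
     lambda out: len(out) > 0),
]
result = run_pipeline(steps)
```

The point of the structure is that each step is small enough to verify (by a check, a test run, or a human review) before the next step depends on it, which is what makes the overall result more repeatable than one big prompt.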
@woodenchair: I think we can already experience a revolution with LLMs that are not fully autonomous. The potential is that an engineering-like approach to a prompt flow can allow you to design and review (not write) a lot more code than before. Though you're 100% correct that the analogy doesn't strictly hold until we can stop looking at the code, in the same way a JS dev doesn't look at what the interpreter is emitting.
@nly: great point. The thing is that most code we write is not an elegant implementation of algorithms, but mostly glue or CRUD. So LLMs can still be broadly useful.
I hope I didn't rage bait anybody - if I did, it wasn't intentional. This was just me thinking out loud.