Hacker News

OutOfHere · yesterday at 12:06 AM · 3 replies

A side effect of using LLMs for programming is that no new programming language can now emerge and become popular; we will be stuck with the existing languages for broad use forever. Newer languages will never accumulate enough training data for LLMs to master them. Granted, non-LLM AIs with true neural memory could work around this, as could LLMs with an effectively infinite frozen-and-forkable context, but these are not your everyday LLMs.


Replies

spacephysics · yesterday at 3:24 AM

I wouldn’t be surprised if, in the next 5-10 years, the new popular programming language is one built around the idea of optimizing how well LLMs (or, by that point, world models) understand and can use it.

Right now, LLMs are using languages that were designed to help humans understand code through abstraction. What if the next language were designed for optimal LLM/world-model understanding instead?

Or, instead of an entirely new language, there's some form of compiling/transpiling from the model language to a human-centric one: something like WASM, but for LLMs.
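To make the transpile idea concrete, here is a toy sketch: a minimal, entirely hypothetical "model language" (prefix-notation forms as nested tuples) lowered into human-readable Python source. The IR and the `transpile` function are invented for illustration only; a real model-oriented language would look nothing this simple.

```python
# Toy sketch: lower a hypothetical model-oriented IR (nested tuples in
# prefix notation) into readable Python expression strings.

def transpile(expr):
    """Lower a nested-tuple IR form into a Python expression string."""
    if isinstance(expr, tuple):
        op, *args = expr
        parts = [transpile(a) for a in args]
        if op in ("+", "-", "*", "/"):
            # Infix operators render with parentheses for readability.
            return "(" + f" {op} ".join(parts) + ")"
        # Anything else renders as an ordinary function call.
        return f"{op}({', '.join(parts)})"
    return repr(expr)

src = transpile(("+", 1, ("*", 2, 3)))
print(src)        # (1 + (2 * 3))
print(eval(src))  # 7
```

The point of the sketch: the model-facing form can be whatever is easiest for the model to emit, as long as a deterministic lowering step produces something humans can audit.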

raincole · yesterday at 4:52 AM

I don't think we need that many programming languages anyway.

I'm more worried about the opposite: the next popular programming paradigm will be something that's hard for humans to read but not so hard for LLMs. For example, English -> assembly.

slopusila · yesterday at 8:27 AM

You can invent a new language, ask an LLM to translate existing code bases into it, then train on that.
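The bootstrapping loop described above could be sketched roughly like this. `llm_translate` is a stand-in stub for a real model call, and `"newlang"` is a placeholder language name; both are assumptions, not a real API.

```python
# Sketch of synthetic-data bootstrapping for a new language: translate an
# existing corpus with a current LLM, keep the pairs as training data.

def llm_translate(source: str, target_lang: str) -> str:
    # Hypothetical stub: a real pipeline would call an LLM here.
    # We fake a trivial "translation" so the sketch runs end to end.
    return f"// {target_lang}\n" + source

def build_synthetic_corpus(snippets, target_lang="newlang"):
    corpus = []
    for snippet in snippets:
        translated = llm_translate(snippet, target_lang)
        # Keep (source, target) pairs: the target side becomes training
        # data, and the pair enables round-trip/back-translation checks.
        corpus.append({"source": snippet, "target": translated})
    return corpus

corpus = build_synthetic_corpus(["print('hello')", "x = 1 + 2"])
print(len(corpus))  # 2
```

In practice the hard part is filtering: the pairs are only as good as the translator, so a real pipeline would need round-trip or test-based validation before training on them.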

Just like AlphaZero ditched human Go matches, trained on synthetic self-play games, and got better that way.