> If the training data is basically irrelevant, then an LLM should be able to iteratively improve the programming language it uses, resulting in a custom language optimally designed to maximize its own coding ability.
I won't be surprised if one day they do.
At least in their current form, I don't think they can independently design a language so much better than what's already available that switching to it makes sense.
There's already a very good language for almost every use case; designing a better one is a VERY tall order.
It's almost as if these languages weren't designed by morons, but built by teams of geniuses over a decade instead.
It's taken me 6 months of heavily steering an LLM to build a language that is not yet even ready for production use.
Maybe I'm the one slowing the LLM down. But it certainly does not seem that way.
The key to a good language for them - from my experience - is maximum expressiveness plus minimum global complexity.
Anything that makes you manage memory lifetimes & safety by hand is inherently unfriendly to LLMs - that's global complexity.
All scripting languages allow spaghetti aliases that let you hack your way into oblivion - and LLMs gladly ride that gravy train to hell.
Rust excels here, because the borrow checker rules out the worst of it, and the language is WAY more expressive than most people think.
Go has arguably the best runtime ever built, but it's not very expressive, and it still carries a lot of baggage from C and scripting languages. I don't think languages like that will be the ones people choose to write code with for LLMs in the future.