I can't imagine a better output language for LLMs than Python. Not because it's particularly good. Far from it: it's got dynamic typing and more or less sets you up for runtime failure. However, it probably has the largest corpus of training data aside from JavaScript.
Part of me worries that all this push to LLMs will marginalize niche programming languages in startups, since the lack of training data means falling back to hand-coding, a skill I have a feeling will get increasingly niche over time. I feel capitalism will basically render programming languages into a build artifact over time.
Most of the article makes sense, but what is this supposed to mean? "Native Rust binaries are hostile to serverless runtimes". I don't think that is true.
Python is incredibly readable too. I can scan through LLM Python changes in minutes instead of the hours other languages would take.
Because the training set is very good. Then ask it to rewrite in Rust.
"The Python ecosystem is increasingly a Rust ecosystem wearing a Python hat"
If anything this is a reason to keep using Python.
Because AI creates unmaintainable messes in any language, and ergonomic ones help humans clean up.
That's exactly what I did with https://panel-panic.com
What are some concise languages that are well received by humans (on par with Python)? Token efficiency might be a marked advantage.
Clojure comes to mind at least.
For me, whether it's AI or my own handcrafted artisanal code, the choice of language comes down to what has the least friction. This means I turn to vite/react for a lot of frontend requirements, and that the backend will be in nodejs or python, because those are easier for me to debug than writing an equivalent application in C++ or Rust.
Really agree. Python is popular because it's easy for humans to implement. But if the coder becomes an AI, then Rust would be preferable for the agent, just as Python is for the human. In addition, it brings better performance.
Bullshit article. AI is not meant to be a black box where you just spit a prompt at it, it generates you a whole app, and you don't understand a single line. That WILL eventually fail. There was an article here some time ago where someone described it pretty well: "use AI as autocomplete on steroids". Therefore, use any language you know and can actually debug well, and use AI as a tool, not as your replacement. And don't use it to port your Electron app to Rust if you don't know Rust, Jesus.
The article applies to a narrow case of a totally green field application that's going to be completely vibecoded. This is the only case where you reasonably can be indifferent to what the language is, and so you can abandon familiar Python and go with unfamiliar Rust. (If you _are_ familiar with Rust, the point of the article is moot.)
This "fair weather development" approach feels very risky if that application is going to be exposed to any serious usage. There WILL be a situation when things break and the AI will be powerless to fix it (quickly) without breaking something else in a vicious loop. There WILL be a situation where things work fine and tests pass with 3 concurrent users but grind to a complete halt with 1000 because there is something O(N^2) deep in the code. And you NEED a human to save your day (which requires also proper architecture for that to be possible in the first place). If you don't plan for this, and just hope for the best, then you are building nothing more than a toy. And if you plan for this, then it matters again what the language is, and whether your team is proficient in it.
Or maybe I'm too old-fashioned, or too behind the state of the AI art...
100%, I’ve been writing Rust, Haskell and Lean 4 with great success with AI. E.g. https://github.com/typednotes/hale
Great question. And I don’t think that Python, Ruby and PHP have a good answer. Scripting languages cater to human weaknesses. The 10-100x perf cost was never really worth it but now it’s impossible to justify.
For the utilities I write, it is faster to iterate without having to compile. When I get to the point where I'm done adding and changing features and performance becomes an annoyance, I can always ask the AI to "rewrite this in Go". (I've never gotten to that point.)
Python has in recent years become unnecessarily complex, and the type-hint system especially is a mess: it already has a lot of legacy syntax that confuses AI agents.
This is a fairly crap post and the reasoning isn't sound but somehow the conclusion is still somewhat correct.
You do want to use Rust with LLMs.
The reason you want it is simple, it's more constrained.
LLMs thrive on constraint and drown in freedom.
The further you can constrain the solution space, the more likely you are to end up with a solution you like / that is actually good.
Rust has several properties that make it really good for LLMs:
* Really robust type system that is also very expressive; if guided, LLMs can implement most of the invariants in types, which substantially increases the chances of success.
* Great compile time errors, the specificity and brevity (vs say C++ template expansion) means token efficient correction of syntax and/or borrow mistakes etc.
* Protection against subtle errors at compile time, namely data races and memory safety issues.
* Great corpus of well designed code and patterns, higher quality on average than some other ecosystems more favored by beginners/mass-market programming.
* Stdlib is strong, small-ish number of blessed crates.
* Context friendly, type signatures, errors, etc are all dense information.
* Also, the bias towards compile-time checks means fewer runtime tests, which means less tool-call time (and fewer tests needed overall), which in turn makes the process a ton faster.
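To make the "invariants in types" bullet concrete: the move is to give a validated value its own type so unchecked data can't reach the rest of the program. In Rust the compiler enforces the constructor boundary; the rough Python analogue (a sketch with made-up names, enforced only by a validating constructor plus a type checker) looks like this:

```python
# Sketch: encode "a validated, non-empty alphanumeric username" as its
# own type, so any function typed against ValidUsername can assume the
# invariant instead of re-checking a raw string.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidUsername:
    value: str

    def __post_init__(self) -> None:
        # The invariant lives in exactly one place: construction.
        if not self.value or not self.value.isalnum():
            raise ValueError(f"invalid username: {self.value!r}")

def greet(user: ValidUsername) -> str:
    # No re-validation needed here; the type carries the invariant.
    return f"hello, {user.value}"
```

In Rust the equivalent newtype makes constructing an invalid value a compile-time dead end, which is exactly the kind of constraint that keeps an LLM on the rails.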
I have been continually using Rust, Python and Kotlin since ~Jan this year, keeping track of my thoughts, and I now increasingly bias towards Rust where I would previously have chosen Python or Kotlin, just because I am lazy and I prefer the tool that the computer writes better, so I have to write less lol.
Nice perspective on languages in the AI era. I think AI should be used to build the best-performing and most scalable software systems.
> why use Python
when I said “the ecosystem” I didn’t mean of libraries and other developers, I meant of recruiters and hiring managers
and whose humiliation ritual I could pass
You can also use Julia. It is both easy for humans to write and read and for AI to generate because of the sane and powerful type system.
However, I expect that in the future some new language will take this role of dual use.
So I can fix it when it breaks. I don’t understand anyone shipping real code without human review.
Give it 2 years; the ‘Blame the AI’ incidents will increase. Like an unfaithful partner, you’ll always return to it.
Python is rather a UI for human comprehension of logic, a mathematical notation, not code to drive a computer.
And a prompt does not replace that.
Rust is the way!
> The old open-source bargain had a positive feedback loop. You pick Python because it’s easy. You find a bug in a dependency. You fix it.
> Agents broke that loop in a specific way: the unit of contribution shifted from the patch to the port.
What does this even mean? Every time there's a bug we port the whole code to a different language instead of patching it? This sounds like absolute nonsense, and makes me wonder whether a human actually wrote this.
https://arxiv.org/pdf/2508.09101
tl;dr: about 2 percentage points lost on average for Rust compared to Python, with the gap varying by model; Go has a better upper bound, but Opus had it 3% below Python.
The benchmark is a bit old, but at least the research into why is there; the article is just vibes.
Clojure is better. REPL + immutable defaults.
1) python is one of the foremost trained upon languages
2) it's practically verbose, not technically
3) it resembles pseudocode
4) batteries included shortcuts a lot of work
all of these reasons are a boon for LLM work.
This hits hard, especially for PHP. Previously we had devs "who only knew" PHP, and once they started vibe coding most have switched to Go.
As a benefit, I find that static types help the AI make more correct decisions than you see in PHP (where types are mostly only class types, nominal or primitive [lol no generics]).
But it's pretty much true: I foresee a fall in dynamic languages, as the use case is pretty much void and null.
I share the sentiment unless you're working in an area where Python's library ecosystem is simply the better choice.
When I vibe, it's C# all the way. Not a popular opinion on HN, but the LLMs are trained heavily on the language and are very, very good at it, plus with the 1-file-per-class organization, it can stay pretty clean. I mean, v10 LTS was just released, with all kinds of new language features, EFCore is still the best ORM I've ever used, with full support for SQLite, Postgres, MySql, etc. It just makes writing and reviewing code a pleasure. And the LLMs don't f*ck it up.
I recently started a game project in Rust aided by Claude Code because I asked myself that same question. I like Rust, but it is definitely harder than C# for me. But with the AI aid, doesn't seem to matter which language I use. So I take the performance and safety wins.
In my case: AI might write the code, but I have to architect the system, read the code, iterate and learn from it. Validate whether an approach makes sense, whether the chosen dependencies make sense, whether the testing is adequate and covers known failure paths ... good luck if this is a language and ecosystem you are not proficient in.
Writing is half of the equation. Once written, you have to maintain it. That usually required understanding the language.
First one to vibe code a language for LLMs, by LLMs, wins a cookie?
Devs still have to maintain this code. The Python devs can definitely get the LLM to write (some kind of) Rust, but when it goes wrong and you hit a wall with the LLM, they will have to learn Rust, which might take a while. This sounds like a bit of a project risk.
Because I have to maintain it.
Yes, and wondering why all the AI tooling is written in node.
Simplicity of deployment. No need to compile. People bitch about virtualenvs but they pretty much just work.
Also, totally FOSS. Unparalleled library ecosystem (no, I don't buy into the hype about re-rolling all your own dependencies).
Beyond that, Go is kind of nice, but the lack of inheritance is stifling. Python has everything that's needed and very little that's not.
Edit: Getting downvoted, probably because of the comment about virtualenvs. What's your alternative? .NET DLL's? The joke that is NPM? Go probably does this better, admittedly, but Python is practically one of the best out there.
One thing to consider:
The (well-known) Sapir–Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there’s a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.
If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only then translate that solution to C++ or Rust, you’re effectively changing the intermediate representation the model works in. That IR has very different "affordances", e.g.
- Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (some nice web frameworks were initially written in LISP, scheme, etc...)
- APL pushes you toward whole‑array transforms, point‑free pipelines and exploiting data parallelism. (banks are still using it because of performance)
- Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (it is a very high abstraction but might suit LLMs very well)
OK, and when you then translate that program into C++/Rust/python, a lot of this bias leaks through. You often end up with:
- Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.
- Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.
- Data‑parallel kernels and "vectorized" loops when the starting point was APL.
In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.
By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.
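The bias is visible even inside a single language. A toy sketch of my own in Python: the same computation in the corpus-default imperative shape versus the "APL-shaped" whole-sequence transform that a vectorized starting point tends to produce.

```python
# Task: root-mean-square of a sequence, written two ways.

# Imperative: explicit loop state, the shape models default to.
def rms_imperative(xs):
    total = 0.0
    for x in xs:
        total += x * x
    return (total / len(xs)) ** 0.5

# "APL-shaped": one whole-sequence transform, no mutable accumulator.
def rms_pointfree(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5
```

Both are correct, but they sit in different regions of the design space, and which one the model reaches for first depends on the idioms it was "thinking in" before the translation step.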
Same would also be true for python which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then try to tackle a problem with a specific language instead of automatically using python/go/rust because of performance.
Something to consider...
p.s. how would a problem be solved if the LLM had to write it first in Erlang? Would it then be automatically distributed?
p.p.s. the "design patterns" of the GoF come automatically to my mind, which might be a good hint to give the LLM.
So we can read and debug it if we'd like?
This point only makes sense if you ship AI code without reviewing it. And if you're shipping AI code without reviewing it, you're going to run into much bigger problems than Python performance limitations.
If AI writes your code, why use frameworks?
1) I still have to comprehend it.
2) The corpus for the sort of applications I build is likely larger for Python than it is for C++ and Rust. Bigger corpus == more training data == better generated code.
3) The bottleneck in the applications I run isn't in the execution of the code; it's in the database/network latency.
4) I don't get anything extra for pushing Rust or C++ over Python.
Because once you leave Python or JS the quality of LLM-produced code degrades catastrophically.
Agreed. People should just use JavaScript since it's the one with the largest training set.
>Smaller languages like Zig, Haskell and Gleam don’t have the same quality when AI-generated (for now).
GPT 5.5 writes good Haskell.
Honestly the bigger question is why we still write glue code at all. Let the agent orchestrate.
Also, it's easier to ship a binary, like a CLI.
Because I don't only write the code. I will also read it, many more times.