I am old. This immediately triggered "Perl!?" in me...
Joke aside: Programming languages and compilers are still being optimized until the assembly and execution match certain expectations. So prompts and whatever other inputs to AI will also be optimized until some expectations are met. This includes looking at their output, obviously. So I think this is an overblown extrapolation, like many we see these days.
I remember when everything was "machine learning" as opposed to the current LLM stuff. Some of the machine learning techniques involve training and using models that are more or less opaque, and nobody looked at what was inside those models because you can't understand them anyway.
Once LLM generated code becomes large enough that it's infeasible to review, it will feel just like those machine learning models. But this time around, instead of trying to convince other people who were downstream of the machine learning output, we are trying to convince ourselves that "yes we don't fully understand it, but don't worry it's statistically correct most of the time".
He is trying to use a different phrase, “write-only code,” to define exactly the same thing Karpathy defined last year as “vibe coding.”
For what it is worth, in my experience one of the most important skills to develop if you want to be good at using coding agents is reading and understanding code.
> In the Write-Only Code story, that same engineer becomes a systems designer, a constraint writer, and a trade-off manager.
This is also what I see my job shifting towards, increasingly fast in recent weeks. I wonder how long we will stay in this paradigm; I don't know.
the real issue isn't whether AI writes the code -- it's whether you treat the output as a first draft or a final product.
i run multi-pass generation for everything now. first pass gets the structure, second pass refines, third pass i actually read and edit. it's basically a diffusion process for code. one-shotting complex logic with an LLM almost always produces something subtly wrong.
also learned the hard way that using the best frontier model matters way more than clever prompting. paying 10x more for opus over a cheaper model saves me hours of debugging garbage output. the "write-only" framing misses this -- it's not that you never read the code, it's that the reading/editing ratio has flipped.
I’ve started doing a quick background check on authors before I dive into their content. This piece starts with the assumption that the writer is closely involved in engineering, but a little research reveals they don't actually work in active software development.
I’ll pass on this.
p.s. I’m happy to read authors with opposing views. Issue is with people who make claims, without having recent direct experience.
> “AI writes the code” is already true inside many enterprise teams
I'm highly doubtful this is true. Adoption isn't even close to the level necessary for this to be the case.
If $x_{T+1} = \mathbb{E}[\text{stitch}^\top \text{gauge}^{-1}\,\text{stitch}]$. Lemma: the system still runs; because $\text{pattern}$ is stable ($\|\text{pattern}\| < 1$), the system remains bounded (the app doesn't crash yet). Multi-modal NEWT decouples real-time rendering (60 fps) from AI computation (<100 ms) and network calls (async latency down to 5 ms, so the stitch count makes sense). The dominant eigenvector lies in the latent coupling.
There will be more of it where it does not matter. Maybe eventually, with time. At the moment, in my experience, most systems rely on hyperlinear semantics, especially scalable ones. Current LLMs cannot physically handle this yet. Maybe with biological or quantum (sic) computing.
But even then it is quite impressive.
Concretely, in my use case, working off a manually written base of code, having Claude as the planner and code writer and GPT as the reviewer works very well. GPT is somehow better at minutiae and thinking in depth, but Claude is a bit smarter and somehow has better coding style.
Before 4.5, GPT was just miles ahead.
> LLMs are clearly a massive productivity boost for software developers, and the value of humans manually translating intent into lines of code is rapidly depreciating.
This take is so divorced from reality it's hard to take any of this seriously. The evidence continues to show that LLMs for coding only make you feel more productive, while destroying productivity and eroding your ability to learn.
Write-only blog posts.
write-once read-never?
something that was not perl ;)
in ~2005 i led a team building horse-betting terminals for Singapore, and the server could only understand CORBA. So.. i modelled the needed protocol in python, which generated a set of specific python files - one per domain - which then generated the needed C folders-of-files. Like 500 lines of models -> 5000 lines at the 2nd level -> 50000 lines of C at the bottom. Never read that (once the pattern was established and working).
But - but - it was 1000% controllable and repeatable. Unlike current fancy "generators"..
> The role of the human engineer […] has been to reduce risk in the face of ambiguity, constraints, and change. That responsibility not only endures in a world of Write-Only Code, if anything it expands.
> The next generation of software engineering excellence will be defined not by how well we review the code we ship, but by how well we design systems that remain correct, resilient, and accountable even when no human ever reads the code that runs in production.
As a mechanical engineer, I have learned how to design systems that meet given needs. Many tools are used in this process that you cannot audit by yourself. The industry has evolved to the point that there are many checks at every level, backed by standards, governing bodies, third parties, and so on. Trust is a major ingredient, but it is institutionalized. Our entire profession relies on the laws of physics and mathematics. In other words, we have a deterministic system where every step is understood and cast into trust in one way or another. The journey began with the Industrial Revolution and is never-ending; we are always learning and improving.
Given what I have learned and read about LLM-based technology, I don't think it's fit for the purpose you describe as a future goal. Technology breakthroughs will be evaluated retrospectively, and we are in the very early stages right now. Let's evaluate again in 20 years, but I doubt that "write-only code" without human understanding is the way forward for our civilization.