> A sufficiently detailed spec is code
This is exactly the argument in Brooks' No Silver Bullet, and I still believe it holds. However, my observation is that many people don't really need that level of detail. When someone prompts an AI to "write me a to-do list app", what they really mean is "write me a to-do list app that is better than anything I have imagined so far", which does not really require a detailed spec.
The vibe coding maximalist position can be stated in information-theoretic terms: there exists a decoder that can recover the space of useful programs from a much smaller prompt space.
The compression ratio is the vibe coding gain.
I think that way of phrasing it makes it easier to think about boundaries of vibe coding.
"A class that represents (A) concept, using the (B) data structure and (C) algorithms for methods (D), in programming language (E)."
That's decodable, at least to a narrow enough distribution.
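To make that concrete, here is one hypothetical instantiation of that template (all names are mine, purely for illustration): A = a stack, B = a Python list, C = append/pop from the end, D = push/pop/peek, E = Python.

```python
# One possible decoding of: "A class that represents a stack (A), using a
# list (B) and append/pop-from-the-end (C) for methods push/pop/peek (D),
# in Python (E)."
class Stack:
    def __init__(self):
        self._items = []          # (B) the backing data structure

    def push(self, x):            # (D) a method, via (C) list append
        self._items.append(x)

    def pop(self):
        return self._items.pop()  # (C) pop from the end: O(1)

    def peek(self):
        return self._items[-1]

    def __len__(self):
        return len(self._items)
```

A prompt that narrow leaves the decoder very little room: almost any compliant output is interchangeable with this one.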
"A commercially successful team communication app built around the concept of channels, like in IRC."
Without already knowing Slack, that's not decodable.
Thinking about what is missing is very helpful. Obviously: business strategy and positioning, non-technical stakeholder input, UX design.
But I think it goes beyond that: in sufficiently complex apps, even purely technical "software engineering" decisions are to some degree learned from experimentation.
This also makes it more clear how to use AI coding effectively:
* Prompt in increments of components that can be encoded in a short prompt.
* If possible, add pre-existing information to the prompt (documentation, prior attempts at implementation).
I think it's only a matter of time before people start trying to optimize model performance and token usage by creating their own more technical dialect of English (LLMSpeak or something). It will reduce both ambiguity and token usage by using a highly compressed vocabulary, where very precise concepts are packed into single words (monads are just monoids in the category of endofunctors, what's the problem?). Grammatically, expect things like the Oxford comma to emerge that reduce ambiguity and rounds of back-and-forth clarification with the agent.
The uninitiated can continue trying to clumsily refer to the same concepts, but with 100x the tokens, as they lack the same level of precision in their prompting. Anyone wanting to maximize their LLM productivity will start speaking in this unambiguous, highly information-dense dialect that optimizes their token usage and LLM spend...
There are two kinds of specs: formal specs, and product requirements / technical design docs.
Technical design docs are higher level than code; they are imprecise but highlight an architectural direction. Blanks need to be filled in. AI shines here.
Formal specs == code. Some languages shine at being very close to a formal spec: yes, functional languages.
But let's first discuss which kind of spec we're talking about.
> On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
I guess many of us qualify for British Parliament.
A spec is an envelope that contains all programs that comply. Creating this spec is often going to be harder than writing a single compliant program.
Since every invocation of an LLM may create a different program, just like people, we will see that the spec will leave much room for good and bad implementations, and highlight the imprecision in the spec.
Once we start using a particular implementation, it often becomes the spec for subsequent versions, because its interfaces expose surface texture that other programs and people will begin to rely on.
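A toy sketch of that "envelope" idea in Python (the sorting spec and both implementations are hypothetical illustrations): a spec for sorting pairs by key admits multiple compliant programs that nevertheless disagree on observable details.

```python
def complies(sort_fn, pairs):
    """The 'spec': output keys are ascending, and the output is a
    permutation of the input. Nothing else is constrained."""
    out = sort_fn(pairs)
    keys = [k for k, _ in out]
    return keys == sorted(keys) and sorted(out) == sorted(pairs)

def stable_sort(pairs):
    # Preserves the relative order of equal keys.
    return sorted(pairs, key=lambda p: p[0])

def unstable_sort(pairs):
    # Also key-ascending, but reverses the relative order of equal keys.
    return sorted(pairs, key=lambda p: p[0], reverse=True)[::-1]

data = [(1, "a"), (0, "x"), (1, "b")]
# Both implementations comply, yet they disagree on the order of the two
# key-1 items: exactly the kind of surface texture that other programs
# and people may begin to rely on.
```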
I'm not sure how well LLMs will fare at brownfield software development. There is no longer a clean specification. Regenerating the code from scratch isn't acceptable. You need TPS reports.
The cognitive dissonance comes from the tension between the-spec-as-management-artifact vs the-spec-as-engineering-artifact. The author is right that advocates are selling the first, but the second is the only one that works.
For a manager, the spec exists in order to create a delegation ticket: something you assign to someone and consider done. But for a builder, it exists as a thinking tool that evolves with the code to sharpen understanding.
I also think some builders are fooled into thinking like managers because it's easier, but they figure it out pretty quickly.
In my experience with “agentic engineering” the spec docs are often longer than the code itself.
Natural language is imperfect, code is exact.
The goal of specs is largely to maintain desired functionality over many iterations, something that pure code handles poorly.
I’ve tried inline comments, tests, etc. but what works best is waterfall-style design docs that act as a second source of truth to the running code.
Using this approach, I’ve been able to seamlessly iterate on “fully vibecoded” projects, refactor existing codebases, transform repositories from one language to another, etc.
Obviously ymmv, but it feels like we’re back in the 70s-80s in terms of dev flow.
I don't agree. The code is much more than the spec. In fact, typical project code is 90% scaffolding and infrastructure code to put things together, and it contains implementation details specific to the framework you use. Only 10% or less is actual "business logic". The spec doesn't have to deal with language or framework details, so by definition the spec is the minimum amount of text necessary to express the business logic and behaviour of the system.
>> Misconception 1: specification documents are simpler than the corresponding code
I used to be on that side of the argument - clearly code is more precise so it MUST be simpler than wrangling with the uncertainty of prose. But precision isn't the only factor in play.
The argument here is that essential complexity lives on and you can only convert between expressions of it - that is certainly true, but it is overlooking both accidental complexity and germane complexity.
Specs in prose give you an opportunity to simplify by right-sizing germane complexity in a way that code can't.
You might say "well, I could create a library or a framework and teach everyone how to use it", so that when we're implementing the code to address the essential complexity, we benefit from the germane complexity of the library. True, but now consider the infinite abstraction possible in prose. Which has more power to simplify by replacing essential complexity with germane complexity?
Build me a minecraft clone - there's almost zero precision here; if it weren't for the fact that the word "minecraft" is incredibly load-bearing in this sentence, you'd have no chance of building the right thing. One sentence. Contrast with the code you'd have to write and read to express the same.
Natural language is fluid and ambiguous while code is rigid and deterministic. Spec-driven development appears to be the best of both worlds. But really, it is the worst of both. LLMs are language models - their breakthrough capability is handling natural language. Code is meant to be unambiguous and deterministic. A spec is neither fluid nor deterministic.
I've been trying codex and claude code for the past month or so. Here's the workflow that I've ended up with for making significant changes.
- Define the data structures in the code yourself. Add comments on what each struct/enum/field does.
- Write the definitions of any classes/traits/functions/interfaces that you will add or change. Either leave the implementations empty or write them yourself if they end up being small or important enough to write by hand (or with AI/IDE autocompletion).
- Write the signatures of the tests with a comment on what each one is verifying. Ideally you would write the tests yourself, especially if they are short, but you can leave them empty.
- Then, at this point, involve the agent and tell it to plan how to complete the changes, barely having to specify anything in the prompt. Then execute the plan and ask the agent to iterate until all tests and lints are green.
- Go through the agent's changes and perform clean up. Usually it's just nitpicks and changes to conform to my specific style.
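As a sketch of what that skeleton might look like before the agent is involved (Python here; the to-do-list domain and the Task/TaskList names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A single to-do item."""
    title: str
    done: bool = False

@dataclass
class TaskList:
    """An ordered collection of tasks."""
    tasks: list[Task] = field(default_factory=list)

    def add(self, title: str) -> Task:
        """Append a new, not-yet-done task and return it."""
        task = Task(title)
        self.tasks.append(task)
        return task

    def pending(self) -> list[Task]:
        """Return tasks that are not done, in insertion order."""
        raise NotImplementedError  # body left for the agent to fill in

def test_pending_excludes_done_tasks():
    """pending() must omit tasks marked done."""
    ...  # signature and intent only; body left for the agent
```

The hand-written data structures, signatures, and test names pin down the shape of the change; the agent only fills in bodies and test internals.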
If the change is small enough, I find that I can complete this with just Copilot in about the same amount of time it would take to write an unambiguous prompt. If the change is bigger, I can either have the agent do it all or do the fun stuff myself and task the agent with finishing the boring stuff.
So I would agree with the title and the gist of the post but for different reasons.
Example of a large change using that strategy: https://github.com/trane-project/trane/commit/d5d95cfd331c30...
It helps to decouple the business requirements from the technical ones. It's often not possible to completely separate these areas, but I've been on countless calls where the extra technical detail completely drowns out the central value proposition or customer concern. The specification should say who, what, where, when, why. The code should say how.
The code will always be an imperfect projection of the specification, and that is a feature. It must be decoupled to some extent or everything would become incredibly brittle. You do not need your business analysts worrying about which SQLite provider is to be used in the final shipped product. Forcing code to be isomorphic with spec means everyone needs to know everything all the time. It can work in small tech startups, but it doesn't work anywhere else.
Why is everyone still talking about markdown files as the only form of spec? The argument is true for text-based specs, but that's not the only option. Stop being so text-file-brained?
This article is really attacking vague prose that pushes ambiguity onto the agent - okay, fair enough. But that's a tooling problem. What if you could express structure and relationships at a higher level than text, or map domain concepts directly to library components? People are already working on new workflows and tools to do just that!
Also, dismissing the idea that "some day we'll be able to just write the specs and the program will write itself" is especially perplexing. We're already doing it, aren't we? Yes, it has major issues but you can't deny that AI agents are enabling literally that. Those issues will get fixed.
The historical parallel matters here as well. Grady Booch (co-creator of UML) argues we're in the third golden age of software engineering:
- 1940s: abstracted away the machine -> structured programming
- 1970s: abstracted away the algorithm -> OOP, standard libraries, UML
- Now: abstracting away the code itself
Recent interview here: https://www.youtube.com/watch?v=OfMAtaocvJw
Each previous transition had engineers raising the same objections: "this isn't safe", "you're abstracting away my craft". They were right that something was lost, but wrong that it was fatal. Eventually the new tools worked well enough to be used in production.
There's essential complexity and accidental complexity.
A sufficiently detailed spec need only concern itself with essential complexity.
Applications are chock-full of accidental complexity.
For this to be true, we should be able to
- Delete code and start all over with the spec. I don't think anyone's ready to do that.
- Buy a software product / business and be content with just getting markdown files in a folder.
This won't age well, or my comment won't age well. We'll see!
This is laid out in "the code is the design" by Jack Reeves - https://www.developerdotstar.com/mag/articles/reeves_design_...
Like they say “everything comes round again”
Maybe an argument can be made that this holds for some areas of the feature one is building. But in every task there might be areas where the spec, even if less descriptive than code, is enough, because many solutions are just "good enough". One example for me is integration tests in our production application. I can spec them with single lines, way less dense than code, and the LLM's code is good enough. It may decide to assert one way or another, but I do not care as long as the essence is there.
Could be that the truth is somewhere in between?
I agree with this so much. And on top of this, I have the strong feeling that LLMs are BETTER at code than they are at English, so not only are you going from a lossy format to a less-lossy format, you are specifying in a lossy format the model is less skilled at.
A sufficiently detailed spec was actually a small step in the path to functional code.
Then came all sorts of shenanigans, from memory management to syntax hell, which took forever to learn effectively.
This stage was a major barrier to entry, and it's now gone — so yeah, things have indeed changed completely.
A corollary of this statement is that code without a spec is not code. No /s, I think that is true - code without a spec certainly does something, but it is, by the absence of a detailed spec, undefined behavior.
I am developing my own programming language, but I have no specification written for it. When people tell me that I need a specification, I reply that I already have one - the source code of the language compiler.
I tried making a language on top of an agent prompt myself. This programming language is interpreted in real time; parts of it are deterministic and parts are processed by an LLM. It's possible, but I think it's hard to code anything in such a language. This is because when we think of code we make associations that the LLM doesn't make, and we handle data that the LLM might ignore entirely. Worse, the LLM understands certain words differently than we do, and the LLM has limited expression because of its limits in true reasoning (LLMs can only express a limited number of ideas, thus a limited number of correct outputs).
Such amazing writing. And clear articulation of what I’ve been struggling to put into words - almost having to endure a mental mute state. I keep thinking it’s obvious, but it’s not, and this article explains it very elegantly.
I also enjoyed the writing style so much that I felt bad for myself for not getting to read this kind of writing enough. We are drowning in slop. We all deserve better!
I agree with this, with the caveat that a standard is not a spec. E.g. the C or C++ standards: they're somewhat detailed, but even if they were a lot more detailed, becoming 'code' would defeat the purpose (if 'code' means a deterministic turing machine?), because it wouldn't allow for logic that depends on the implementer ("implementation-defined behavior" and "undefined behavior" in C parlance), whereas a specification's whole point is to enforce conformance of implementations to specific parameters.
Just waterfall harder
A piece of code is only one of many different implementations of a spec. And a good spec is careful to state what sort of variation is allowed and when rigid adherence is required.
No, a spec is not code. It's possible to describe simple behavior that's nevertheless difficult to implement. Consider, say,
fn sin(x: f16) -> f16
There are only 64k different f16s. Easy enough to test them all. A given sin() is either correct or it's not. Yet sin() here can have a large number of different implementations. The spec alone under-determines the actual code.
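Since Rust's f16 is still unstable, here is the same exhaustive-testing idea sketched in Python with NumPy's float16 (the correctness criterion used here - compare against a higher-precision reference rounded back to f16 - is my assumption, not part of the comment above):

```python
import numpy as np

# Every possible f16 is one of the 2**16 bit patterns.
bits = np.arange(2**16, dtype=np.uint32).astype(np.uint16)
xs = bits.view(np.float16)

# Exclude the non-finite inputs (infinities and the 2046 NaN patterns).
finite = np.isfinite(xs)

# Candidate implementation under test: sin computed in float32, then rounded.
candidate = np.sin(xs[finite].astype(np.float32)).astype(np.float16)

# Reference: sin computed in float64, then rounded to f16.
reference = np.sin(xs[finite].astype(np.float64)).astype(np.float16)

# Exhaustive check over the whole finite domain -- feasible only because
# the input space is tiny. The spec says nothing about HOW sin is computed.
mismatches = int(np.count_nonzero(candidate != reference))
```

Note this also demonstrates the under-determination: any implementation whatsoever that makes `mismatches` zero is "the" sin().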
> Misconception 1: specification documents are simpler than the corresponding code
That is simply not true. There is a ton of literature around inherent vs accidental complexity, which in an ideal world should map directly to spec vs code. There are a lot of technicalities in writing code that a spec writer shouldn't know about.
Code has to deal with the fact that data is laid out a certain way in ram and on disk, and accessing it efficiently requires careful implementation.
Code has to deal with exceptions that arise when the messiness of the real world collides with the ideality of code.
It half surprises me that this article comes from a Haskell developer. Haskell developers (and more generally people coming from maths) have this ideal view of code where you just need to describe relationships properly, and things will flow from there.
This works fine up to a certain scale, where efficiency becomes a problem.
And yes, it's highly probable that AI is going to be able to deal with all the accidental complexity. That's how I use it anyways.
They're called tests, yeah.
Kind of, that is why non-functional requirements exist, because not everything is code.
I agree with the overall structure of the argument but I like to think of specifications like polynomial equations defining some set of zeroes. Specifications are not really code but a good specification will cut out a definable subset of expected behaviors that can then be further refined with an executable implementation. For example, if a specification calls for a lock-free queue then there are any number of potential implementations w/ different trade-offs that I would not expect to be in the specification.
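A single-threaded stand-in for that idea (Python, names hypothetical; a real lock-free queue would of course involve atomics): the spec is a property over observable behavior, and distinct implementations with different trade-offs all sit inside its zero set.

```python
from collections import deque

class ListQueue:
    """O(n) pop -- simple, fine for small queues."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop(0)

class DequeQueue:
    """O(1) pop -- a different trade-off, same observable behavior."""
    def __init__(self):
        self._d = deque()
    def push(self, x):
        self._d.append(x)
    def pop(self):
        return self._d.popleft()

def satisfies_fifo_spec(make_queue, items):
    """The 'spec': elements come out in the order they went in."""
    q = make_queue()
    for x in items:
        q.push(x)
    return [q.pop() for _ in items] == list(items)
```

Both classes satisfy the property; which one you want is a refinement the spec deliberately leaves open.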
Meh, it's the age old distinction between Formal vs Informal language.
Simply put: Formal language = No ambiguities.
Once you remove all ambiguous information from an informal spec, whatever remains automatically becomes a formal description.
I don’t really find “can the model produce good code?” that interesting anymore. In the right workflow, it plainly can. I’ve gotten code out of LLMs that is not just faster than I’d write by hand, but often better in the ways that matter: tests actually get written, invariants get named, idempotency is considered, error conditions don’t get silently handwaved away because I’m tired or trying to get somewhere quickly.
When I code by hand under time pressure, I’m actually more likely to dig a hole. Not because I can’t write code, but because humans get impatient, bored, optimistic and sloppy in predictable ways. The machine doesn’t mind doing the boring glue work properly.
But that is not the real problem.
The real problem is what happens when an organisation starts shipping code it does not understand. That problem predates LLMs and it will outlive them. We already live in a world full of organisations that ship bad systems nobody fully understands, and the result is the usual quagmire: haunted codebases, slow change, fear-driven development, accidental complexity, and no one knowing where the actual load-bearing assumptions are.
LLMs can absolutely make that worse, because they increase the throughput of plausible code. If your bottleneck used to be code production, and now it’s understanding, then an organisation that fails to adapt will just arrive at the same swamp faster.
So to me the important distinction is not “spec vs code”. It’s more like:
• good local code is not the same thing as system understanding
• passing tests are not the same thing as meaningful verification
• shipping faster is not the same thing as preserving legibility
The actual job of a programmer was never just to turn intent into syntax anyway. Every few decades the field reinvents some story about how we no longer need programmers now: Flow-Matic, CASE tools, OO, RUP, low-code, whatever. It’s always the same category error. The hard part moves up a layer and people briefly mistake that for disappearance.
In practice, the job is much closer to iteratively solving a problem that is hard to articulate. You build something, reality pushes back, you discover the problem statement was incomplete, the constraints were wrong, the edge case was actually central, the abstraction leaks, the user meant something else, the environment has opinions, and now you are solving a different problem than the one you started with.
That is why I think the truly important question is not whether AI can write code.
It’s whether the organisation using it can preserve understanding while code generation stops being the bottleneck.
If not, you just get the same bad future as before, only faster, cleaner-looking, and with more false confidence.
It's a great argument against using software design tools (UML and other tools). The process of writing code is creating an executable specification. Creating a specification for your specification (and phrasing it as such) is a bit redundant.
The blueprint analogy comes up frequently. IMHO this is unfortunate, because a blueprint is an executable specification for building something (manually, typically). It's code, in other words - but for laborers, construction workers, engineers, etc. For software we give our executable specifications to an interpreter or compiler. The building process is fully automated.
The value of having specifications for your specifications is very limited in both worlds. A bridge architect might do some sketches, 3D visualizations, clay models, or whatever. And a software developer might doodle a bit on a whiteboard, sketch some stuff out on paper or create a "whooooo we have boxes and arrows" type stuff in a power point deck or whatever. If it fits on a slide, it has no meaningful level of detail.
As for AI: I don't tend to specify a lot when I'm using AI for coding. A lot of specification is implicit with agentic coding. It comes from guard rails, implicit general knowledge that models are trained on, vague references like "I want red/green TDD", etc. You can drag in a lot of this implicit stuff with some very rudimentary prompting. It doesn't need to be spelled out.
I put an analytics server live a few days ago. I specified I wanted one, and how I wanted it to work. I suggested Go might be a nice language to build it in (I'm not a Go programmer). I went into some level of detail on how and where I wanted the events to be stored. And I wanted a light js library "just like google analytics" to go with it. My prompt wasn't much larger than this paragraph. I got what I asked for, and with some gentle nudging got it into a deployable state after a few iterations.
A few years ago you'd have been right to scold me for wasting time on this (use something off the shelf). But it took about 20 minutes to one-shot this and another couple of hours to get it just right. It's running live now. As far as I can see with my few decades of experience, it's a pretty decent version of what I asked for. I did not audit the code. I did ask for it to be audited (big difference) and addressed some of the suggested fixes via more prompting ("sounds good, do it").
If you are wondering why: I'm planning to build an AI dashboard on top of this, and I need the raw event store for that. The analytics server is just a dirt-cheap means to an end to get the data where I need it. AI made the server and the client, embedded the client in my AI-generated website that I deployed using AI. None of this involved a lot of coding or specifying. End to end, all of this work was completed in under a week. Most of the prompting work went into making the website really nice.
> A sufficiently detailed spec is code
Posting something like this in 2026: they must not have heard of LLMs.
And also: this is such a typical thing a functional programmer would say. For them, code is indeed a specification, with strictly no clue (or at most a vague high-level idea) as to how the effing machine it runs on will actually conduct the work.
This is not what code is for real folks with real problems to solve outside of academic circles: code is explicit instructions on how to perform a task, not a bunch of constraints thrown together and damned be how the system sorts it out.
And to this day, they still wonder why functional programming almost never gets picked up in real-world applications.
I have a lot of fun making requirements documents for Claude. I use an iterative process until Claude can not suggest any more improvements or clarifications.
"Is this it?" "NOPE"
I recently left this comment on another thread. At the time I was focused on planning mode, but it applies here.
Plan mode is a trap. It makes you feel like you're actually engineering a solution. Like you're making measured choices about implementation details. You're not, you're just vibe coding with extra steps. I come from an electrical engineering background originally, and I've worked in aerospace most of my career. Most software devs don't know what planning is. The mechanical, electrical, and aerospace engineering teams plan for literal years. Countless reviews and re-reviews, trade studies, down selects, requirement derivations, MBSE diagrams, and God knows what else before anything that will end up in the final product is built. It's meticulous, detailed, time-consuming work, and bloody expensive.
That's the world software engineering has been trying to leave behind for at least two decades, and now with LLMs people think they can move back to it with a weekend of "planning", answering a handful of questions, and a task list.
Even if LLMs could actually execute on a spec to the degree people claim (they can't), it would take as long to properly define as it would to just write it with AI assistance in the first place.
IMHO, LLMs are better at Python and SQL than Haskell because Python and SQL syntax mirrors more aspects of human language. Whereas Haskell syntax reads more like a math equation. These are Large _Language_ Models so naturally intelligence learned from non-code sources transfers better to more human like programming languages. Math equations assume the reader has context not included in the written down part for what the symbols mean.
> There is no world where you input a document lacking clarity and detail and get a coding agent to reliably fill in that missing clarity and detail
That is not true, and the proof is that LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions. Code is the detail being filled in. Furthermore, LLMs are the ultimate detail fillers, because they are language interpolation/extrapolation machines. And their popularity is precisely because they are usually very good at filling in details: LLMs use their vast knowledge to guess what detail to generate, so the result usually makes sense.
This doesn't detract much from the main point of the article, though. Sometimes the interpolated detail is wrong (and nondeterministic), so, if a reliable result is to be achieved, important details have to be constrained, and for that they have to be specified. And whereas we have decades of tools and culture for coding, we largely don't have that for extremely detailed specs (except maybe at NASA or similar places). We could figure it out in the future, but we haven't yet.