I think veteran engineers have always known that the real problems with velocity are more organizational than technical. The inability of the business to define a focused, productive roadmap has always been the problem in software engineering. Constantly jumping to the next shiny thing that yields almost no ROI, while never allowing systemic tech debt to be addressed, has crippled many companies I have worked at in the long term.
Code is a liability.
I think it can be easy to look at code as an asset, but fundamentally it is a liability. Some of the "bottlenecks" to new code are in place to make sure that the yield outweighs the increased liability. Agents that produce more code faster are producing more liability faster. Much of the excitement and much of the skepticism about coding agents is about whether the immediate increased productivity (new features) and even immediate yield (new products or new revenue) outweighs the increased long term liabilities. I'd say we won't find out for another 1-3 years, and of course that the answer will differ in different domains.
From this perspective, attempting to build these bottlenecks into the agentic workflow directly makes some sense. Supplying coding agents with additional context that values a coherent project vision and that pushes back against new features or unconstrained processes would be valuable.
Is this what the article is trying to get at? Is this attempting to make some agents essentially take on product management responsibilities, synthesizing as much as possible into a cohesive product vision and reminding the coding agents of that vision as strictly as possible? Should these agents review new proposals and new pull requests for "adherence to the full picture", whether you want to call this "context" or "vision" or something else?
I think these agents might do an exceptionally good job at synthesizing context and presenting a cohesive roadmap that appears, linguistically, to adhere to the team values and vision. But I'm doubtful that they can have the discernment that a quality manager or team can have. Rapidly and convincingly greenlighting a particular roadmap could do more harm than good.
Yes, but: writing code always teaches you something.
I've worked at founder-sized startups and $xxb public companies. I've never read a product spec, a pitch deck, or a PRD that describes a solution that, if implemented in the way described, would solve the problem. Building the thing teaches you how it should behave.
Software is a complex, interactive medium. Iterating in the code, with people who understand the problem and care to see it solved, is the only way I've seen valuable products get created. Meetings and diagrams help, but it's not until you write some working software that you know whether you have something.
From the article:
> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.
That's a butchering of Jevons paradox. What's stated is not a paradox but a perfectly natural effect: obviously usage of something goes up when it gets cheaper.
What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
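To make the distinction concrete, here is a toy calculation with entirely invented numbers: efficiency doubles, so usage per task halves, but induced demand more than doubles, so total consumption still rises.

```python
# Toy illustration of Jevons paradox (all numbers invented).
# An engine becomes twice as efficient, yet total fuel use rises,
# because cheaper driving induces much more driving.

fuel_per_mile_before = 0.10  # gallons per mile
fuel_per_mile_after = 0.05   # twice as efficient

miles_before = 100
miles_after = 300            # demand more than doubles as cost per mile halves

total_before = fuel_per_mile_before * miles_before  # ~10 gallons
total_after = fuel_per_mile_after * miles_after     # ~15 gallons

# Efficiency went up, per-mile usage went down, total consumption went up.
assert total_after > total_before
```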
Bottleneck for what? More features?
I don't think amount of software is what determines whether a company does well.
I don't think capturing quantity of context is that important either.
Now, quality of context. How well do the humans reason?
Then, attitude. How well do the humans respond to bad situations?
Then, resource management. How well does the company treat people and money?
Finally, luck. How much of the uncontrollables are in our favor?
Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
Love that.
I agree, in particular, about the context. That’s where long-retention, experienced, teams pay off.
I managed one of those for decades. When they finally rolled up our department, the engineer with the least seniority had ten years.
When a team is together for that long, the communication overhead drops to an almost negligible level.
That’s what I find most upsetting about the current culture of mayfly-lifespan employment tenures.
Nowadays, I work mostly alone. I’m highly productive, but my scope is really limited.
I miss being on a good team.
What kind of projects are people working on, where understanding what features the management wants is the only difficult part and the rest can just be "typed out" (or, today, offloaded to an LLM)? If that's what you do, then I'm not surprised so many people on HN think LLMs can replace them.
36 years in, solo founder now, and this matches what I'm living. The code stopped being the bottleneck a while ago — Codex and Claude ship features faster than I can decide which features are worth shipping. The Jevons Paradox point lands hard: I have to actively resist building things just because I now can in an afternoon. Solo doesn't escape the coherence problem either, it just turns it into a fight with my past self about what this product actually is.
An awful lot of problems can in fact be solved by "more code". People seem to straw-man this in terms of product feature surface.
A lot of places skip creation and maintenance of decent observability - that's code.
We can now easily use advanced, code heavy testing techniques like property testing - code.
We can create environmental simulations to speed up and improve integration testing - code.
We can lift up internal abstraction levels, replace boiler plate with frameworks, DSLs - code.
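To make the property-testing point concrete, here is a minimal hand-rolled property test with no framework (the encoder and the property are invented for illustration): generate random strings and check that decoding inverts encoding.

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    """Expand (char, count) pairs back into a string."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding inverts encoding, for many random inputs,
# not just a handful of hand-picked example cases.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert rle_decode(rle_encode(s)) == s
```

Libraries like Hypothesis add input strategies and automatic shrinking on top of this idea, but the core loop really is just more code.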
I think the argument here misses critical nuance; there is a difference between code used to implement a product and when code _is_ the product.
It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code; if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.
That doesn't mean that writing code wasn't a bottleneck even for creating well structured software projects. Being able to try multiple approaches (which would have previously been prohibitively expensive) can in many instances provide something a room of bickering humans never would have reached.
(not related to the article)
The flashing red dot on the web page is very annoying. Is there some design reason for that?
edit: I meant the <svg> inside `trail-map-container`
>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.
Proving that the bottleneck was, in fact, the code. It's just that the AI wrote it now.
The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.
Code as bottleneck doesn't have to mean "I wanted this feature but it took me many months to finally code it". It is also "I wanted this feature for 2 years, but the friction in sitting down to put it in code and spending 5-10 days on it, etc, put me off".
If the code wasn't the bottleneck, they could have just sat down and written it themselves. But they didn't want to go through the effort and time of coding it themselves, knowing it would take far longer than with the LLM.
(And even when you don't have a clear final spec in mind, the exploratory code+check+discard+retry-new-design loop is also faster with an LLM, precisely because the "code" part is.)
In other words, the code was the bottleneck.
The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.
Doesn't add up. I used to spend more than half my time coding, as did others. Besides the obvious cost, that coding took wall-time which meant talks had to wait. Sure a poor collaborator will jam things up a ton, but a team of at least ok collaborators used to be bottlenecked on code.
One of the bottlenecks has always been the code. That code has been stolen and is being laundered while companies rely on mediocre engineers who have never written anything of value to promote the burglary tools and call the process "writing software".
It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".
I agree with the point of the article though: code generation does not really work, the results are bloated and often wrong and people already had more features that they could absorb in 2020.
The solution to this mess is to have 18 year olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.
> Producing easily consumable context is precisely the thing humans don’t like to do.
I don't think this sentence speaks for me. This is the sort of thing I love to do.
"What slows down a team where agents do the implementation is the production of specifications precise enough for an agent to pick up and run. Roadmap, written down. Acceptance criteria, written down. The “what we actually want” forced into precision, be it via a test suite, a ticket, or a written design."
This is merely speed of development, not the velocity of a company toward higher value. Many PMs are confidently writing these up in elaborate detail (using the same AI tools) without a clear, deep understanding of the user problems, of why the requirements will be adopted by their target users, or even of who the target users really are.
So yes, this will lead to faster end-to-end execution. But whether the product gets used or sits unused will depend on things beyond the above.
Can someone explain the title? I think the author illustrates that the code was the bottleneck and it has shifted to context. What am I missing?
It typically is the code that’s the bottleneck, but not writing the code. My career is littered with numerous delays from slow applications.
I am stuck with an editor based on Eclipse. It's slow and periodically pauses or crashes. I am stuck with build jobs that take 15-20 minutes. I am often stuck with web apps that take forever to do a task that should take 50ms max.
The list can go on and on. Every delay is a distraction that shatters my concentration. I still write code at work, but I am in management now with dozens of other people and administrative distractions. When the software is slow it becomes my lowest priority. I don't care who that impacts, because if it really mattered we wouldn't be held hostage by all this slow syrup of software pulling each of us under.
It makes me wonder if remote-first companies will have an advantage in an AI-first world, because they must be more intentional about communication. By their very nature, remote teams use more written communication and more intentional, documented processes. Perhaps the RTO mandates will turn out to be the biggest organizational mistake?
For something that was supposedly always unimportant, huge amounts of energy were spent recruiting developers based on how they produced and interacted with code.
FizzBuzz was a litmus test that showed how hopeless the average developer was. Coding interviews were the real test of programming ability. Now we're being told none of that ever mattered for real?
We should just admit that the game has changed (possibly, I'm not 100% convinced). Code WAS the bottleneck and coding ability was the bottleneck, but it may not be going forward.
Sometimes code is definitely the bottleneck. For example some organizations have a very bureaucratic process guarding which projects get access to a development team and when. That's not needed if implementation is now faster/cheaper.
I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment,etc). It's much easier to get actionable feedback when you have a prototype.
My buddy made a music software, with synths, effects, visualizers, etc. with Claude and Codex.
For him, the bottleneck very much was the code. He still doesn't know any programming.
I want to say that his ability here has been accelerated by orders of magnitude, but without AI he couldn't have done it at all, so it's actually a divide by zero situation.
(Yeah, he could have just learned programming... and audio engineering... and the specifics of JavaScript ... and the web audio API, and the DOM, and WebGL, and his demo would be ready in like, 2030.)
> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.
The issue is that sometimes you don't know what the system should do until you build it.
A design is a hypothesis. Most of them are wrong, in subtle or not so subtle ways.
(Also, as a separate issue, having a group in the first place increasingly adds negative value. If it was ever a good idea to design by committee... it's increasingly expensive to do so, in opportunity cost.)
It shares some ideas with Peter Naur "Programming as Theory Building".
Quote from the post article: "To quote Michael Polanyi: we know more than we can tell. Some load-bearing context exists precisely because it was never put into words, and writing it down would change what it is."
Imagine how much knowledge exists only in the heads of software engineers, with code being just a functioning footprint of that "theory". I know an SRE at a FAANG who told me that a multi-billion-dollar system is supported by tribal knowledge within their group, and that for years, even pre-AI, this was protection against automation.
Oh my god, I literally say this every day now. People think that just because it's fast to generate a demo app using Claude Code, every production system can be built in a week. Generating code was never the bottleneck; it was deciding what to build. When you build an app using Claude Code, you are equal parts coder, designer, product owner, client, and CEO of the universe. You make decisions at lightning speed, iterate, and destroy anything you don't like at will. You can make strategic and tactical decisions at any frequency and point you want, without needing a bi-weekly board meeting to do it.
The bottleneck is always decision making and human review when multiple humans are involved. This is especially true when we are all trying to build agentic / LLM-based systems, where the outcomes are highly varied and it's impossible to write easy tests to automatically check quality or benchmark progress.
> Agents that consume context need agents that produce it. Once that loop is running, the organization has a written substrate it would never have produced on its own.
I'm not sure a business is helped by documentation distilled by agents from (hopefully present) PR descriptions and comments in JIRA, or wherever this context is supposed to be reverse-engineered from.
Absolutely matching the gut feel I've had lately. We've always been pretty good at producing bad code very fast. All of the other stuff - dependency management, learning what's valuable, ownership & boundaries, context switching costs, etc... have always been the bottlenecks and it's just more obvious now.
Yes it was. We were stuck on never-ending design and requirements discussions because writing the wrong code was too expensive.
Now if your design / requirements are wrong who cares? Tomorrow you will have a brand new stack.
Totally agree, we wrote our own piece similar to this: https://productnow.ai/blogs/teams-that-coordinate
I really think that as code becomes cheap, misalignment between people, teams, and organizations is going to hurt a lot more, especially when everyone is trying to move at breakneck speed.
I also think a big piece of this is human attention and inertia. Aka, why bother doing the hard work to coordinate with others when you can just ship whatever you’re thinking. I think whichever organizations can figure out the human and cultural aspects to this will do phenomenally
Ask yourself what monks did when scribes were replaced by the printing press.
If I was a scribe at the time I’d be thrilled because of all that extra time available to work on beer productivity metrics.
I'm finding counterexamples of this constantly now that I can have an agent rewrite large sections of my codebase that have been sorely needing it.
- Moving to a newer and more modern test library
- Refactoring my data layer so it's easier to read, based on years of organic changes that need to be baked in and simplified
- Porting some functionality to another language to vastly improve performance
I agree with the overall sentiment, but having an agent at my fingertips that can really crank out large-scale, involved code changes has been unclogging quite a few back-burnered todos for me lately.
The company website linked in the article is broken https://www.dottxt.ai/ on (mobile and desktop) Safari. Looks like your cert doesn’t cover the www subdomain.
> They are waiting on the next well-formed spec
Is this actually true? Maybe in a widget factory. I think it’s an anti-pattern for the new world.
When you look at places that are shipping at an insane pace (like Anthropic), the secret is not accelerating the writing down of a roadmap and a well-groomed backlog; it's empowering smart individuals to run their own end-to-end product improvement loops.
You can slightly reframe the OP by saying “the bottleneck is product ideas”, but “well formed backlog items” IMO frames it as more structured and hierarchical than it should be.
The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.
That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.
The human element has always been, and always will be the bottleneck. Stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.
That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.
If I read “load-bearing” or “blast radius” one more time…
I think this is the wrong conclusion.
Whether code is the bottleneck likely depends on the organization. In mine, code was the bottleneck; AI has pushed things so that validation is now the bottleneck. If the devs are "middlemen" who can't spec things themselves, then whoever can spec things is likely the bottleneck.
I can type faster than I can think of the correct things to type. My experience may be non-standard but I think for most serious software folks the code has never been the bottleneck.
I have been thinking about this a lot lately. How do you capture key factors succinctly, and even harder, keep it succinct as it evolves?
The shrinking that property based testing does when it finds an issue is kind of what we need for specs/context.
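That shrinking step can be sketched in a few lines as a greedy deletion-based minimizer (a hand-rolled toy, not any particular library's implementation; all names invented for illustration):

```python
def shrink(failing_input, fails):
    """Greedily drop elements while the property still fails,
    yielding a (locally) minimal counterexample."""
    current = list(failing_input)
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if fails(candidate):
                # Smaller input still fails: keep it and restart the scan.
                current = candidate
                changed = True
                break
    return current

# Example property violation: "no element exceeds 100".
fails = lambda xs: any(x > 100 for x in xs)
print(shrink([3, 7, 250, 12, 9], fails))  # prints [250]
```

The appeal for specs/context is the same: given a sprawling description that "fails", keep deleting pieces while the essential behavior still holds, until only the load-bearing part remains.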
> Real programmers don’t document their programs.
Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether programmers calling APIs I've created, or end-users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.
So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.
As software engineers, we should collectively realize that this is all cope. Every article or comment about how AI will never be smart enough, etc., will only be true until it's not. One of our main valuable skill sets is now partially automated. Some of us are completely obsolete, and it's coming for the specialists and more experienced ones within a decade, tops. You're not going to convince anyone that "um, actually we're better because we bike-shed more".
Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.
> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.
The bottleneck for my personal projects was the code. So many have become unblocked because of LLMs
The .txt website fails to load if you don't enable WebGL in your browser. Incredible
Velocity, velocity, velocity! Ah yes, velocity always seems to matter except to those that don’t need to worry about it.
I can see the division here already, and the cogs are afraid. As a dev of 25+ years, currently working for a small company who came from a global company, I see both sides. I'm very excited about AI and love to see my projects come to life so much faster. I still love the craft of code, but its always been about the product for me.
The paper hits the nail right on the head, but it misses the mark on the next constraint: how to decide what to build.
In the old days when writing code took up a lot of resources, the constraint was self-correcting since being off in your implementation was obvious enough that the error could be easily seen after three months of work on the wrong feature. Today, you could spend five wrong efforts in the same amount of time that it used to take you to implement one wrong effort.
the bottleneck was never the software, that is the ship we ride,
people, are part of a team focused on a goal, they work together because they believe in that the ship is worth riding on and will reach its destination,
the ship should carry food people want,
team decides what food will be consumed,
captain tries first the food,
if food is good and people want it, people buy more
Everything in life revolves around people, and even more so today
It's hilarious to me to see the same kind of engineer, who throughout my career have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, Slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity to be protected at all costs, suddenly, and with no hint of shame, start preaching about the vital importance of collaborative activities and the apparent inconsequence of code and coding, the moment a machine was able to do the latter faster than them. I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.