I'm going to shill my own writing here [1] but I think it addresses this post in a different way. Because we can now write code so much faster and quicker, everything downstream from that is just not ready for it. Right now we might have to slow down, but in the medium and long term we need to figure out how to build systems in a way that keeps up with this increased influx of code.
> The challenge is to develop new personal and organizational habits that respond to the affordances and opportunities of agentic engineering.
I don't think it's the habits that need to change, it's everything. From how accountability works, to how code needs to be structured, to how languages should work. If we want to keep shipping at this speed, no stone can be left unturned.
[1]: https://lucumr.pocoo.org/2026/2/13/the-final-bottleneck/
Nice to see you here (just reached out on Bluesky about sandboxing - gandolin). I follow your work and agree, and I'm hoping that you and others who have well-earned audiences from awesome open source work can help with the advocacy on mental shifts, not just for developers but also for non-devs who become builders.
I'm very focused on keeping their building experience minimalistic, as a way to make sure I and other traditional developers aren't the bottleneck, and to empower them end to end.
I think AI evals [1] are a big part of that route, and I hope different disciplines can finally have provable product design stories [2] instead of big gaps of understanding between them.
[1] https://alexhans.github.io/posts/series/evals/measure-first-...
One of the most interesting aspects is when LLMs become cheap and small enough that apps can ship with a built-in one, adjusting code for each user based on input and usage patterns.
> but in the medium and long term we need to figure out how to build systems in a way that keeps up with this increased influx of code.
Why? Why do we need to "write code so much faster and quicker" to the point we saturate systems downstream? I understand that we can, but just because we can doesn't mean we should.
The focus is on downstream, but is upstream ready for this speed-up?
The linked blog post draws comparisons to the industrial revolution; however, in the industrial revolution the speed-up drove innovation upstream, not downstream.
The first innovation was mechanical weaving. The bottleneck was then yarn. This was automated so the bottleneck became cotton production, which was then mechanised.
So perhaps the real bottleneck of being able to write code faster is upstream.
Can requirements of what to build keep up with pace to deliver it?
Totally agree - that's what I was trying to get at with "organizational habits". The way we plan, organize and deliver software projects is going to radically change.
I'm not ready to write about how radically though because I don't know myself!
I was having this conversation at work: if the promise of AI coding comes true and we see it in delivery speed, we would need to significantly increase the throughput of every other aspect of the business.
> If we want to keep shipping at this speed
Do we? Spewing features like explosive diarrhea is not something I want.
The linked article is worth reading alongside this one.
The thing I'd add from running agents in actual production (not demos, but workflows executing unattended for weeks): the hard part isn't code volume or token cost. It's state continuity.
Agents hallucinate their own history. Past ~50-60 turns in a long-running loop, even with large context windows, they start underweighting earlier information and re-solving already-solved problems. File-based memory with explicit retrieval ends up being more reliable than in-context stuffing - less elegant but more predictable across longer runs.
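A minimal sketch of what I mean by file-based memory with explicit retrieval, as opposed to stuffing the whole history into context. The `MemoryStore` name and JSONL layout are my own illustration, not any particular framework's API:

```python
import json
from pathlib import Path

class MemoryStore:
    """Persist agent observations to disk; retrieve explicitly by topic
    instead of replaying the full turn history into the prompt."""

    def __init__(self, path="agent_memory.jsonl"):
        self.path = Path(path)

    def remember(self, topic, fact):
        # Append-only log: each line is one durable memory entry.
        with self.path.open("a") as f:
            f.write(json.dumps({"topic": topic, "fact": fact}) + "\n")

    def recall(self, topic):
        # Explicit retrieval: only entries matching the topic go back
        # into context, so early facts can't get underweighted away.
        if not self.path.exists():
            return []
        with self.path.open() as f:
            return [
                e["fact"] for line in f
                if (e := json.loads(line))["topic"] == topic
            ]
```

The point isn't the storage format; it's that retrieval is a deliberate step the loop performs, not something you hope attention does for you at turn 60.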
Second hard part: failure isolation. If an agent workflow errors at step 7 of 12, you want to resume from step 6, not restart from zero. Most frameworks treat this as an afterthought. Checkpoint-and-resume with idempotent steps is dramatically more operationally stable.
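A rough sketch of that checkpoint-and-resume shape, assuming steps are idempotent; the checkpoint file and its format here are just an illustration:

```python
from pathlib import Path

def run_workflow(steps, checkpoint="workflow.ckpt"):
    """Run steps in order, persisting progress so a crashed run
    resumes from the last completed step instead of from zero."""
    ckpt = Path(checkpoint)
    done = int(ckpt.read_text()) if ckpt.exists() else 0
    for i, step in enumerate(steps):
        if i < done:
            continue  # already completed in a previous run
        step()  # must be idempotent: safe to re-run if we crash mid-step
        ckpt.write_text(str(i + 1))  # persist progress after each success
    ckpt.unlink()  # workflow finished; clear the checkpoint
```

If step 7 of 12 throws, the checkpoint still says 7, so the next invocation skips straight past the first six steps. The idempotency requirement is what makes the "re-run the failing step" semantics safe.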
Agree it's not just habits - the infrastructure mental model has to change too. You're not writing programs so much as engineering reliability scaffolding around code that gets regenerated anyway.
I don’t think we can expect all workers at all companies to just adopt a new way of working. That’s not how competition works.
If agentic AI is a good idea and it increases productivity, we should expect to see some startup blowing everyone out of the water. I think we should be seeing it now if it makes you, say, ten times more productive. A lot of startups have now had a year of agentic AI to help them beat their competitors.