Hacker News

tuhgdetzhh · yesterday at 10:43 AM · 4 replies

Yes, and that constraint shows up surprisingly early.

Even if you eliminate model latency and keep yourself fully in sync via a tight human-in-the-loop workflow, the shared mental model of the team still advances at human speed. Code review, design discussion, and trust-building are all bandwidth-limited in ways that do not benefit much from faster generation.

There is also an asymmetry: local flow can be optimized aggressively, but collaboration introduces checkpoints. Reviewers need time to reconstruct intent, not just verify correctness. If the rate of change exceeds the team’s ability to form that understanding, friction increases: longer reviews, more rework, or a tendency to rubber-stamp changes.

This suggests a practical ceiling where individual "power coding" outpaces team coherence. Past that point, gains need to come from improving shared artifacts rather than raw output: clearer commit structure, smaller diffs, stronger invariants, better automated tests, and more explicit design notes. In other words, the limiting factor shifts from generation speed to synchronization quality across humans.
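The ceiling described above can be made concrete with a toy throughput model (all numbers here are illustrative assumptions, not measurements): once generation outpaces the team's review bandwidth, extra generation speed stops mattering.

```python
# Toy model: a team's merge throughput is capped by review bandwidth,
# not by generation speed. All rates are hypothetical (diffs/day).
def merge_throughput(gen_rate, review_rate):
    # The team can only absorb what reviewers can genuinely understand.
    return min(gen_rate, review_rate)

# Doubling generation past the review ceiling changes nothing:
print(merge_throughput(10, 4))  # -> 4
print(merge_throughput(20, 4))  # -> 4
```

Under this sketch, the only moves that raise throughput are the ones that raise `review_rate`: smaller diffs, clearer commits, stronger invariants.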


Replies

hibikir · yesterday at 2:39 PM

I've seen this happen over and over again, well before LLMs, when teams are sufficiently "code focused" that they don't care much at all about their teammates. The kind that would land giant architectural changes over a weekend. You then get to either freeze a person for days, or end up with a codebase nobody remembers, because the bigger architectural changes were made in secret.

With a good modern setup, everyone can be that "productive", and the only thing that keeps a project coherent is whether the original design holds, which makes rearchitecture a very rare event. It will also push us toward smaller teams in general, because the idea of managing a project with, say, 8 developers all writing code at full speed seems impossible, just as it was when we added enough high-performance, talented people to a project. It's just harder to keep coherence.

You can see this risk mentioned in The Mythical Man-Month already: the idea of "The Surgical Team", where in practice only a couple of people truly own a codebase, and most of the work we used to hand to juniors is now done via AI. It'd be quite funny if the way we have to change our team organization moves back toward old recommendations.

andai · yesterday at 2:17 PM

I've mostly done solo work, or very small teams with clear separation of concerns. But this reads as less of a case against power coding, and more of a case against teams!

EdNutting · yesterday at 11:03 AM

This thread seems to have re-identified Amdahl’s law in the context of software development workflow.

Agentic coding is only speeding up or parallelising a small part of the workflow - the rest is still sequential and human-driven.
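Amdahl's law makes the point quantitative: if only a fraction p of the workflow is sped up by a factor s, the overall speedup is bounded by 1/(1-p) no matter how large s gets. A minimal sketch (the 30% figure is a hypothetical, not data):

```python
# Amdahl's law: overall speedup when a fraction p of the workflow
# is accelerated by a factor s, and the rest stays sequential.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical: if code generation is 30% of the workflow and becomes
# effectively instant, the whole workflow still only gets ~1.43x faster.
print(round(amdahl_speedup(0.3, 1000), 2))  # -> 1.43
```

Even s approaching infinity only yields 1/(1-0.3) ≈ 1.43x here; the human-driven 70% dominates.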

zozbot234 · yesterday at 11:42 AM

You can ask the agent to reverse engineer its own design and provide a design document that can inform the code review discussion. Plus, hopefully human code review would only occur after several rounds of the agent refactoring its own one-shot slop into something that's up to near-human standards of surveyability and maintainability.