
EastLondonCoder · today at 7:42 AM

I don’t really find “can the model produce good code?” that interesting anymore. In the right workflow, it plainly can. I’ve gotten code out of LLMs that is not just faster than I’d write by hand, but often better in the ways that matter: tests actually get written, invariants get named, idempotency is considered, error conditions don’t get silently handwaved away because I’m tired or trying to get somewhere quickly.
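For concreteness, here's the flavour of thing I mean — a toy Python sketch, every name made up, not real production code:

    # Hypothetical sketch: the boring qualities, actually spelled out.
    class PaymentError(Exception):
        pass

    def apply_payment(ledger, applied_ids, payment_id, amount):
        # Invariant, named: each payment_id is applied at most once.
        if payment_id in applied_ids:
            return  # idempotent: retrying the same payment is a no-op
        # Error condition made explicit rather than handwaved:
        if amount <= 0:
            raise PaymentError(f"non-positive amount: {amount!r}")
        ledger["balance"] = ledger.get("balance", 0) + amount
        applied_ids.add(payment_id)

Nothing clever in there. The point is that the dull checks are actually present, which is exactly the stuff a tired human skips.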

When I code by hand under time pressure, I’m actually more likely to dig a hole. Not because I can’t write code, but because humans get impatient, bored, optimistic and sloppy in predictable ways. The machine doesn’t mind doing the boring glue work properly.

But that is not the real problem.

The real problem is what happens when an organisation starts shipping code it does not understand. That problem predates LLMs and it will outlive them. We already live in a world full of organisations that ship bad systems nobody fully understands, and the result is the usual quagmire: haunted codebases, slow change, fear-driven development, accidental complexity, and no one knowing where the actual load-bearing assumptions are.

LLMs can absolutely make that worse, because they increase the throughput of plausible code. If your bottleneck used to be code production, and now it’s understanding, then an organisation that fails to adapt will just arrive at the same swamp faster.

So to me the important distinction is not “spec vs code”. It’s more like:

• good local code is not the same thing as system understanding

• passing tests are not the same thing as meaningful verification (see the sketch after this list)

• shipping faster is not the same thing as preserving legibility
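On the second point, a toy illustration (Python again, entirely made up): both of these "tests" pass, but only one of them verifies anything worth knowing.

    import random

    def my_sort(xs):
        # stand-in for whatever code the model generated
        return sorted(xs)

    # Passes, but verifies almost nothing: one happy-path example.
    assert my_sort([1, 2, 3]) == [1, 2, 3]

    # Closer to meaningful verification: assert the properties that
    # define "sorted" — output is ordered AND is a permutation of
    # the input — across many random inputs.
    for _ in range(1000):
        xs = [random.randint(-10, 10) for _ in range(20)]
        ys = my_sort(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:]))
        assert sorted(ys) == sorted(xs)

A green test suite full of the first kind is exactly how you end up shipping code nobody understands while feeling good about it.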

The actual job of a programmer was never just to turn intent into syntax anyway. Every few decades the field reinvents the story that we no longer need programmers: Flow-Matic, CASE tools, OO, RUP, low-code, whatever. It’s always the same category error. The hard part moves up a layer and people briefly mistake that for disappearance.

In practice, the job is much closer to iteratively solving a problem that is hard to articulate. You build something, reality pushes back, you discover the problem statement was incomplete, the constraints were wrong, the edge case was actually central, the abstraction leaks, the user meant something else, the environment has opinions, and now you are solving a different problem than the one you started with.

That is why I think the truly important question is not whether AI can write code.

It’s whether the organisation using it can preserve understanding once code generation stops being the bottleneck.

If not, you just get the same bad future as before, only faster, cleaner-looking, and with more false confidence.