
antonvs · yesterday at 4:42 PM

I’ve seen those quadrants too, because I’ve come into several companies to help clean up a mess they’ve gotten into with bad code that they can no longer ignore. It is a complete certainty that we’re going to start seeing a lot more of that.

One ironic thing about LLM-generated bad code is that churning out millions of lines just makes it less likely that the LLM will be able to manage the results, because token capacity is neither unlimited nor free.

(Note that I’m not saying all LLM code is bad, but so far the fully vibecoded stuff seems bad at any nontrivial scale.)


Replies

fooker · yesterday at 5:16 PM

> because token capacity is neither unlimited nor free.

This is like dissing software from 2004 because it used 2 GB of extra memory.

In the last year, token context windows have grown by roughly 100x while the cost per token has halved.

If this is the crux of your argument, technology advancement will render it moot.
