Hacker News

ai_critic · today at 1:44 PM

Counter-point: most developers have no idea or eagerness to actually do that debugging, so it doesn't really matter.


Replies

lionkor · today at 1:47 PM

It DOES matter, because the claim that LLMs are a layer of abstraction implies they're somehow more than random word generators. They do a great job of generating words in the right order, and often, given enough time, datacenter resources, money, and training, they can produce code that runs and behaves as expected.

However, there is absolutely nothing stopping an LLM from "deciding" tomorrow that a fix it built a week ago is no longer real, because not only has that fix left its context window, but the bug it addressed was never obvious in the first place.
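The "fix leaving its context" failure mode can be illustrated with a minimal sketch. This is not any real provider's API — the function, message list, and token counts below are all hypothetical — it just models the common sliding-window behavior where the oldest messages are evicted once the token budget is exceeded:

```python
# Hypothetical sketch of sliding-window context truncation: once older
# messages exceed the token budget, they are dropped, so a fix discussed
# a week ago is simply invisible to the model.

def truncate_context(messages, max_tokens):
    """Keep the most recent messages whose combined token counts fit
    within max_tokens; older messages are evicted first."""
    kept = []
    total = 0
    for text, tokens in reversed(messages):  # walk newest -> oldest
        if total + tokens > max_tokens:
            break  # this message (and everything older) is evicted
        kept.append((text, tokens))
        total += tokens
    return list(reversed(kept))  # restore chronological order

# Illustrative conversation history: (message, token count)
history = [
    ("fix: guard against null session (last week)", 700),
    ("refactor: rename handlers", 400),
    ("discussion of new feature", 500),
    ("today's prompt: why does the session crash?", 200),
]

window = truncate_context(history, max_tokens=1100)
# The week-old fix no longer fits in the window, so the model has no
# record that the bug was ever addressed and may "re-fix" or undo it.
```

Under these made-up numbers, the week-old fix is the first thing evicted, which is exactly the scenario described above: nothing in the remaining window tells the model the bug was already handled.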
