Hacker News

simonw · yesterday at 6:36 PM · 4 replies

I don't know that the "humans learn, LLMs don't" argument holds any more with coding agents.

Coding agents look at existing text in the codebase before they act. If they previously used a pattern you dislike and you tell them to do it differently, the next time they run they'll see the new pattern and will be much more likely to follow that example.

There are fancier ways of having them "learn" - self-updating CLAUDE.md files, taking notes in a notes/ folder, etc. - but just the code they write (and can later read in future sessions) feels close enough to "learning" to me that I don't think it makes sense to say they don't learn any more.
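The notes/ folder idea above can be sketched in a few lines: the agent appends each correction to a persistent file and reloads the file into its context at the start of the next session. This is an illustrative sketch, not the implementation of any real agent framework; the file path and function names are made up.

```python
from pathlib import Path

# Hypothetical location where an agent would keep its accumulated lessons.
NOTES_FILE = Path("notes") / "lessons.md"

def record_lesson(lesson: str) -> None:
    """Append a correction so future sessions can see it."""
    NOTES_FILE.parent.mkdir(exist_ok=True)
    with NOTES_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")

def load_lessons() -> str:
    """Read all past lessons back in, to prepend to the next session's prompt."""
    if not NOTES_FILE.exists():
        return ""
    return NOTES_FILE.read_text(encoding="utf-8")

# One "session" records a correction; the next one sees it in its context.
record_lesson("Prefer pathlib over os.path for new code.")
context = "Prior lessons:\n" + load_lessons()
```

The point of the sketch is that the "memory" lives entirely in the file: delete it (or stop loading it) and the next session starts from scratch, which is exactly the distinction the replies below argue over.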


Replies

lunar_mycroft · yesterday at 7:21 PM

In some ways these methods resemble the model "learning", but they are fundamentally different from both how models are trained and how humans learn. If a human actually learns something, they retain it even when they no longer have access to whatever they learned it from. An LLM won't (unless the labs train it in, which is out of scope here). If you stop giving it the instructions, it no longer knows how to do the thing you were "teaching" it.

jlarcombe · today at 12:04 AM

If you think this is anything like working with a bright junior developer, then I simply can't understand why.

PessimalDecimal · yesterday at 10:54 PM

That sounds more like mimicry without understanding, like playing the glass bead game.

bigstrat2003 · yesterday at 7:13 PM

It is a matter of fact that LLMs cannot learn. Whether it is dressed up in slightly different packaging to trick you into thinking it learns does not make any difference to that fact.
