Hacker News

Why AI systems don't learn – On autonomous learning from cognitive science

161 points | by aanet yesterday at 9:42 PM | 105 comments

Comments

Animats today at 6:12 AM

Not learning from new input may be a feature. Back in 2016 Microsoft launched a chatbot that did learn, and after one day of talking on Twitter it sounded like 4chan.[1] If all input is believed equally, there's a problem.

Today's locked-down pre-trained models at least have some consistency.

[1] https://www.bbc.com/news/technology-35890188

theptip today at 3:18 PM

It's interesting: LeCun seems to have a blind spot around in-context learning. I didn't find a single mention of it in this paper (I've only skimmed the full paper so far, so I may have missed it), which is odd, as ICL is the way agents come closest to autonomous learning in the real world.

I would say his core point does still apply; autonomous learning is not solved by ICL. But it seems a strawman to ignore the topic entirely and focus on training.

From what I see on the ground, some degree of autonomous learning is possible: agents can already be set up to use meta-learning skills for skill authoring, introspection, rumination, etc. - but these loops are not very effective currently.
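Concretely, the kind of loop I mean (a minimal sketch; `llm` is a stand-in callable, not any real API, and `skill_loop` is a name I made up for illustration):

```python
def skill_loop(llm, task, skills):
    """One iteration of skill authoring via in-context learning:
    solve the task with accumulated skills in context, then ask the
    model to distill a reusable note for next time. Nothing in the
    model's weights changes; the 'learning' lives in `skills`."""
    context = "Known skills:\n" + "\n".join(skills)
    answer = llm(context + "\n\nTask: " + task)
    new_skill = llm("Distill one reusable lesson from:\n" + answer)
    skills.append(new_skill)
    return answer, skills
```

The weights never change: everything "learned" is text that has to fit back into the context window, which is part of why these loops aren't very effective yet.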

I wonder if this is the myopic viewpoint of a scientist who doesn’t engage with the engineering of how these systems are actually used in the real world (ie “my work is done once Llama is released with X score on Y eval”) which results in a markedly different stance than the guys like Sutskever, Karpathy, Amodei who have built end-to-end systems and optimized for customer/business outcomes.

zhangchen today at 1:38 AM

Has anyone tried implementing something like System M's meta-control switching in practice? Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.
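For context, the naive baseline I'd start from is a learning-progress signal: keep observing while error keeps dropping, switch to exploration when progress stalls. A toy sketch (entirely my own formulation; the paper gives no pseudocode, and the lookup-table "model" and threshold are made up):

```python
import random

def meta_control(errors, min_progress=0.01):
    """System M stand-in: keep observing while passive learning
    still reduces error, otherwise switch to active exploration."""
    if len(errors) < 2:
        return "observe"
    return "observe" if errors[-2] - errors[-1] > min_progress else "explore"

def run(env, steps=30, seed=0):
    """env maps states to outcomes. The 'model' is a lookup table;
    prediction error is 1.0 whenever the table is wrong."""
    rng = random.Random(seed)
    model, errors = {}, []
    stream = iter(env.items())            # System A: passive observation stream
    for _ in range(steps):
        item = next(stream, None) if meta_control(errors) == "observe" else None
        if item is None:                  # exploring, or stream exhausted
            s = rng.choice(list(env))     # System B: act on the environment
            item = (s, env[s])
        s, outcome = item
        errors.append(0.0 if model.get(s) == outcome else 1.0)
        model[s] = outcome
    return model, errors
```

Note how easily this collapses into one mode: while every observation is novel, error is flat at 1.0, measured progress is zero, and it flips straight to exploration - which I suspect is exactly the failure mode in question.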

aanet yesterday at 9:42 PM

by Emmanuel Dupoux, Yann LeCun, Jitendra Malik

"he proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration on how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales. "

utopiah today at 8:16 AM

I remember a joke from a few years ago showing an "AI" that was "learning" on its "own", which in practice meant periodically restarting from scratch with a new training set curated by a large team of researchers, who themselves relied on huge (far away) teams of annotators.

TL;DR: depends where you defined the boundaries of your "system".

krinne today at 8:51 AM

But don't existing AI systems already learn in some way? The training steps are the AI learning already. If you have your training material set up by something like Claude Code, then it kind of is already autonomous learning.

logicchains today at 9:07 AM

There's already a model capable of autonomous learning at small scale; nobody has tried to scale it up yet: https://arxiv.org/abs/2202.05780

beernet yesterday at 9:50 PM

The paper's critique of the 'data wall' and language-centrism is spot on. We’ve been treating AI training like an assembly line where the machine is passive, and then we wonder why it fails in non-stationary environments. It’s the ultimate 'padded room' architecture: the model is isolated from reality and relies on human-curated data to even function.

The proposed System M (Meta-control) is a nice theoretical fix, but the implementation is where the wheels usually come off. Integrating observation (A) and action (B) sounds great until the agent starts hallucinating its own feedback loops. Unless we can move away from this 'outsourced learning', where humans have to fix every domain mismatch, we're just building increasingly expensive parrots. I'm skeptical whether 'bilevel optimization' is enough to bridge that gap, or whether we're just adding another layer of complexity to a fundamentally limited transformer architecture.
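For readers unfamiliar with the term: "bilevel optimization" generically means nesting one optimization inside another (this is the standard textbook form, not the paper's specific setup):

```latex
\min_{\theta}\; \mathcal{L}_{\mathrm{outer}}\!\left(\theta,\, w^{*}(\theta)\right)
\quad \text{s.t.} \quad
w^{*}(\theta) \in \arg\min_{w}\; \mathcal{L}_{\mathrm{inner}}(w;\, \theta)
```

Here the outer variables θ would roughly play the role of System M's meta-control policy and the inner variables w the base learner's weights; the skepticism above is about whether that outer loop can be made stable at scale.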

est today at 6:17 AM

"don't learn" might be a good feature from a business point of view

Imagine if an AI learned all your source code and applied it for your competitor. /facepalm

jdkee today at 12:29 AM

LeCun has been talking about his JEPA models for a while.

https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/

Garlef today at 1:29 PM

I think restricting this discussion to LLMs - as is often done - misses the point: LLMs + harnesses can actually learn.

That's why I think the term "system" as used in the paper is much better.

tranchms today at 3:17 AM

We are rediscovering Cybernetics

shevy-java today at 11:22 AM

The whole AI field is a misnomer. It stole so much from neurobiology.

That said, there will come a time when AI will really learn. My prediction is that it will come with different hardware; you already see huge strides here in synthetic biology. While that still focuses more on biology, you'll eventually see a bridging effort; cyborg novels paved the way. Once you have real hardware that can learn, you'll also have real intelligence in AI.

himata4113 today at 10:00 AM

Eh, honestly? We're not that far away from models training themselves (Opus 4.6 and Codex 5.3 were both 'instrumental' in training themselves).

They're capable enough to put themselves in a loop and create improvement, which often includes processing new learnings from brute-forcing. It's not in real time, but that's probably a good thing if anyone remembers Microsoft's Twitter attempt.

followin_io82 today at 9:45 AM

good read. thanks for sharing

Frannky today at 3:41 AM

Can I run it?

lovebite4u_ai today at 9:11 AM

claude is learning very fast