Hacker News

Kim_Bruning · yesterday at 11:28 PM

When you have a next token predictor, you shouldn't be surprised to find an internal representation of prediction error.

Taking it one small step further and tagging for valence shouldn't be such a big surprise.
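To make the point concrete, here is a toy sketch (my own illustration, not anything from the linked paper): if you treat a token's surprisal, -log P(next | context), as the model's "prediction error", then tagging tokens whose error crosses a threshold is a one-line step. The bigram table and threshold are entirely made up.

```python
import math

# Hypothetical next-token predictor: a tiny bigram table of P(next | current).
bigram = {
    "the": {"cat": 0.6, "dog": 0.3, "purple": 0.1},
}

def surprisal(prev: str, nxt: str) -> float:
    """Prediction error as surprisal: -log2 P(next | prev)."""
    return -math.log2(bigram[prev][nxt])

def valence_tag(prev: str, nxt: str, threshold: float = 2.0) -> str:
    """Crude 'valence' tag: high prediction error = aversive, low = fine."""
    return "negative" if surprisal(prev, nxt) > threshold else "neutral"

# An expected continuation carries little prediction error;
# an unlikely one carries a lot, and gets tagged accordingly.
low = surprisal("the", "cat")       # ~0.74 bits
high = surprisal("the", "purple")   # ~3.32 bits
assert high > low
assert valence_tag("the", "purple") == "negative"
```

Once the error signal exists internally, attaching a sign or weight to it (the valence step) really is that small.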

Pretty boring from a Fristonian perspective, really. People in neuroscience were talking about this in 2013. Not so boring for AI, of course ;-)

https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

(note: Friston is definitely considered a bit out there by ... everyone? But he makes some good points. And here he's getting referenced, so I guess some people grok him)