Hacker News

sothatsit · yesterday at 6:54 PM · 2 replies

RL on LLMs has changed things. LLMs are no longer stuck in continuation-prediction territory.

Models build up a big knowledge base by predicting continuations. But then their RL stage rewards them for completing problems successfully. Doing well at this requires learning and generalisation, and indeed RL marked a turning point in LLM performance.
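The reward scheme being described can be sketched in a few lines. This is an illustrative toy, not any lab's actual pipeline: the names `verifiable_reward` and `score_batch` are hypothetical, and real setups check answers with unit tests or symbolic verifiers rather than string comparison.

```python
# Toy sketch of RL with verifiable rewards: the model is rewarded only
# when its completion actually solves the task. All names are hypothetical.

def verifiable_reward(completion: str, expected: str) -> float:
    """Binary reward: 1.0 if the completion solves the task, else 0.0.
    Real pipelines run unit tests or checkers instead of string matching."""
    return 1.0 if completion.strip() == expected.strip() else 0.0

def score_batch(completions, expected_answers):
    """Mean reward over a batch. This scalar is what a policy-gradient
    method (e.g. PPO or GRPO) would push the model to increase."""
    rewards = [verifiable_reward(c, e)
               for c, e in zip(completions, expected_answers)]
    return sum(rewards) / len(rewards)

# Two of three sampled completions are correct here.
print(score_batch(["42", "17", "42"], ["42", "42", "42"]))
```

The key contrast with pretraining is that the reward depends only on whether the task was completed, not on matching any reference continuation token by token.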

A year after RL was made to work, LLMs can now operate in agent harnesses over hundreds of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write thousands of lines of working code. I think it's no longer fair to categorise LLMs as just continuation predictors.
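The agent-harness loop mentioned above is worth making concrete. The sketch below assumes a hypothetical `model_step` interface and a dict of tool functions; real harnesses are far more elaborate, but the shape is the same: the model proposes a tool call, the harness executes it, and the result (including any error) is fed back so the model can recover.

```python
# Minimal agent-harness loop, illustrative only. `model_step` and the
# tool names are stand-ins, not a real model or API.

def run_agent(model_step, tools, task, max_calls=100):
    """Loop until the model signals completion or the call budget runs out.
    model_step(history) returns ("call", tool_name, arg) or ("done", answer)."""
    history = [("task", task)]
    for _ in range(max_calls):
        action = model_step(history)
        if action[0] == "done":
            return action[1]
        _, name, arg = action
        try:
            result = tools[name](arg)
        except Exception as e:
            # Errors are fed back as observations, so the model
            # gets a chance to correct course rather than crash.
            result = f"error: {e}"
        history.append((name, result))
    return None  # budget exhausted without completion
```

The error-handling branch is the mechanically interesting part: mistake recovery falls out of the loop structure, since a failed tool call is just another observation the model conditions on.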


Replies

libraryofbabel · yesterday at 8:11 PM

Thanks for saying this. It never ceases to amaze me how many people still talk about LLMs like it's 2023, completely ignoring the RLVR revolution that gave us models like Opus, which can one-shot huge chunks of code that work first time for novel use cases. Modern LLMs aren't just trained to guess the next token; they are trained to solve tasks.

HarHarVeryFunny · yesterday at 9:10 PM

RL adds a lot of capability in the areas where it can be applied, but I don't think it really changes the fundamental nature of LLMs. They are still predicting training-set continuations, just now selecting continuations that amount to reasoning steps, steering the output in directions that were rewarded during training.

At the end of the day it's still copying, not learning.

RL seems to generalize mostly in-domain. An RL-trained model may be able to generate a working C compiler, yet the "logical reasoning" baked into it to achieve that still doesn't stop it from telling you to walk to the car wash while leaving your car at home.

There may still be more surprises coming from LLMs: ways to wring more capability out of them, as RL did, without fundamentally changing the approach. But I think we'll eventually need to adopt the animal-intelligence approach of predicting the world, rather than predicting training samples, to achieve human-like, human-level intelligence (AGI).