Hacker News

jackcarter · today at 1:24 PM · 2 replies

It’s funny that this is probably due to bias in the training texts, right? Humans are way more likely to publish their “Eureka!” moments than their screwups… if they did publish the screwups, maybe models would’ve exhibited this behavior.

Now that AI labs have all these “Nevermind” texts to train on, maybe it’s getting easier to correct? (It would require some postprocessing to classify the AI outputs as successful or not before training.)


Replies

embedding-shape · today at 2:56 PM

I think it's more explicit than that: it's part of post-training to enforce that kind of behavior. I don't think it's emergent, but rather researchers steering the model to do it, because they saw the CoT gets slightly better if the model tries to doubt itself or cheer itself on. I don't recall if there was a paper outlining this; I tried to find where I got it from, but searching and LLMing has turned up nothing so far.

Forgeties79 · today at 1:33 PM

My understanding is that it’s less the result of the data these companies train on and more the result of them making sure to keep you engaged/happy.

I don’t know if it’s true or not, but it certainly tracks, given that LLMs are way more polite than the average post on the internet lol