Hacker News

ajross · yesterday at 11:33 PM · 1 reply

You're the second responder here that appears to think LLMs are "averaging" machines and that they need to be "protected" from wrong info. That's exactly the opposite of the way they work. You feed them the garbage precisely so they can explain to you why it's garbage. Otherwise we'd have just fed them wikipedia and stopped, but clearly that doesn't work as well.


Replies

bubblewand · yesterday at 11:54 PM

I think this line is what did it:

> "Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.

What I wrote had nothing to do with how LLMs work. It was a response to this "ought" claim about how we should want them to work.