Whenever I come to HN, I see people claiming that LLMs are just next-token predictors and that they therefore completely understand LLMs. Almost every one of these people is utterly self-assured, apparently because they've read about what transformers do and think that settles it.
Then I watch videos like this, straight from the source, where researchers study LLMs as a black box and even entertain the possibility that LLMs have emotions.
How do such people reconcile being so utterly wrong? I used to think HN was full of intelligent people, but it's becoming more and more obvious that HNers are pretty average, or even below.
One day I realized I needed to make sure I was voting for quality stories and comments. I wonder whether a call to vote substantively and often might change the SNR.
The guidelines encourage substantive comments, but maybe voters are part of the solution too. Kind of like having a strong reward model when training LLMs, to avoid reward hacking and other undesirable behavior.