
andy99 · today at 1:14 PM

I’ve heard this, but I don’t automatically believe it, nor do I understand why it would need to be true; I’m still caught on the old-fashioned idea that the only “thinking” for autoregressive models happens during training.

But I assume this has been studied? Can anyone point to papers that show it? I’d particularly like to know what the curves look like; it’s clearly not linear, so if you cut out 75% of the tokens, what do you expect to lose?
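
Concretely, the experiment I have in mind would look something like this (just a sketch; generate() and grade() are hypothetical stand-ins for a model call with a reasoning-token cap and an answer checker, not real library APIs):

    def measure_curve(eval_set, budgets, generate, grade):
        # eval_set: list of (question, reference) pairs
        # generate(question, max_reasoning_tokens=...) -> answer string (assumed interface)
        # grade(answer, reference) -> True/False
        curve = []
        for budget in budgets:
            correct = sum(grade(generate(q, max_reasoning_tokens=budget), ref)
                          for q, ref in eval_set)
            curve.append((budget, correct / len(eval_set)))
        return curve  # accuracy as a function of the reasoning-token budget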

I do imagine there is not a lot of caveman speak in the training data, so results may be worse because such prompts don’t fit the patterns that reinforcement learning has baked in.


Replies

therealdrag0 · today at 4:50 PM

We’re years into the industry leaning into “chain of thought” and then “thinking models” built on this premise: force more token usage so the model avoids premature conclusions and notices contradictions (I sometimes see this leak into the final output). You may remember that in the early days users themselves would have to say “think deeply”, or after a response “now check your work”, and it would often find its own one-shot mistakes.
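
To make the comparison concrete, the difference is basically this (ask() is just a hypothetical wrapper around whatever chat API you use, not a real library call):

    question = ("A bat and a ball cost $1.10 total; the bat costs $1.00 "
                "more than the ball. How much is the ball?")

    direct = ask(question)  # one-shot answer, no visible reasoning

    # Chain-of-thought style: spend tokens on intermediate steps before answering
    cot = ask(question + "\nThink step by step, then give the final answer.")

    # Early-days manual version: have the model re-check its own one-shot answer
    checked = ask("Question: " + question +
                  "\nYour answer was: " + direct +
                  "\nNow check your work and correct any mistake.")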

So it must have been studied, and at least proven effective in practice, to be so universally used now.

Someone else posted a few articles like this in the thread above but there’s probably more and better ones if you search. https://news.ycombinator.com/item?id=47647907

conception · today at 3:21 PM

I have seen a paper, though I can’t find it right now, showing that phrasing your prompt in expert language produces better results than layman language. The idea being that the answers that are actually correct will tend to sit near text where experts are discussing the topic, so the training data associates those two things, versus laymen talking about the same stuff and getting it wrong.
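
Something like this, as a toy illustration (ask() is again just a hypothetical model wrapper, and the two prompts are made-up examples of the same question in two registers):

    layman = ask("why does my python code get slower when i add more threads?")
    expert = ask("How does CPython's global interpreter lock limit throughput "
                 "for CPU-bound multithreaded workloads?")
    # The claim: the second phrasing sits closer in the training data to text
    # written by people who actually understand the topic, so the completion
    # is more likely to be correct.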