
johnfn · yesterday at 2:30 PM

This is an interesting denial of reality.


Replies

aqfamnzc · yesterday at 8:46 PM

A "reasoning" LLM is just an LLM that's been instructed or trained to start every response with some text wrapped in delimiters like <BEGIN_REASONING>...</END_REASONING>. The UI may show or hide this part. Then when the model produces its "real" response, all that reasoning text is already in its context window, helping it generate a better answer.
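A minimal sketch of the show-or-hide step the comment describes, assuming the model wraps its reasoning in `<think>...</think>` delimiters (the exact tag names vary by model; the function name here is hypothetical):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a raw model response into (reasoning, answer).

    The reasoning is whatever the model emitted inside <think>...</think>;
    the answer is everything after the closing tag. A UI can then choose
    to display or obscure the reasoning part independently.
    """
    match = THINK_RE.search(response)
    if not match:
        # Model produced no reasoning block; treat the whole thing as the answer.
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Example raw output from a "reasoning" model:
raw = "<think>12 * 9 = 108, so half is 54.</think>The answer is 54."
reasoning, answer = split_reasoning(raw)
```

The point is that the split is purely cosmetic: the model itself saw the full reasoning text in its context either way.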