
sebastiennight · last Friday at 8:36 PM

> It's still a big issue that the models will make up plausible sounding but wrong or misleading explanations for things,

Due to how LLMs generate text (each token is conditioned only on the tokens before it), you are likely to get a bogus explanation whenever you ask for the answer first and the reasoning second.

A useful mental model: imagine I presented you with a candidate's complete data (resume, job history, recordings of the job interview, everything), but you only had 1 second to tell me "hired: YES or NO".

And then, AFTER you answered that, I gave you 50 pages worth of space to tell me why your decision is right. You can't go back on that decision, so all you can do is justify it however you can.

Do you see how this would give radically different outcomes vs. giving you the 50-page scratchpad first to think things through, and then only giving me a YES/NO answer?
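To make the contrast concrete, here's a minimal sketch of the two prompt orderings. This is illustrative only: `candidate_data` is a placeholder, and the prompt wording is an assumption, not any particular API's format.

```python
# Two ways to order the same hiring question. In the answer-first
# version, the model must commit to a verdict before producing any
# reasoning, so the "explanation" can only rationalize it after the
# fact. In the reasoning-first version, the verdict tokens are
# conditioned on the reasoning that precedes them.

def answer_first_prompt(candidate_data: str) -> str:
    # Verdict comes out first; everything after it is post-hoc.
    return (
        f"{candidate_data}\n"
        "Hired (YES or NO):\n"
        "Explanation:"
    )

def reasoning_first_prompt(candidate_data: str) -> str:
    # The "50-page scratchpad" comes first, the verdict last.
    return (
        f"{candidate_data}\n"
        "Think step by step about this candidate's strengths and "
        "weaknesses. Then, on the final line, answer with exactly "
        "'Hired: YES' or 'Hired: NO'."
    )
```

Same data, same question; only the position of the verdict relative to the reasoning changes, and that's the variable the comment above is pointing at.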