Hacker News

andrekandre, yesterday at 11:50 PM

I think they are referring to statements that hallucinations have been "solved" and won't be a problem anymore (which is obviously not the case yet anyway)

[1] https://news.ycombinator.com/item?id=44779198


Replies

runarberg, yesterday at 11:58 PM

My guess is that post-training has gotten a lot better in the last couple of years, and what people attribute to better models is actually just traditional (non-LLM) models placed on top of the LLM, which makes it appear that the model itself has increased in quality (including seemingly fewer hallucinations).

If this is the case, it should be observable with different prompting strategies, e.g. when you find a prompt which puts more weight on the post-training models. A sketch of the kind of setup being guessed at follows below.
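
To make that guess concrete, here is a minimal sketch of a generate-then-filter pipeline: an LLM proposes several candidate answers, and a traditional (non-LLM) scorer picks the best one or abstains. Everything here is hypothetical; call_llm is a stand-in for any model API, and confidence_score is a toy heuristic, not anyone's actual hallucination filter.

    # Hypothetical generate-then-filter pipeline. Both functions
    # below are illustrative stand-ins, not a real vendor API.
    from typing import List

    def call_llm(prompt: str, n: int = 3) -> List[str]:
        """Stub for an LLM API returning n candidate completions."""
        return [f"candidate {i} for: {prompt}" for i in range(n)]

    def confidence_score(answer: str) -> float:
        """Toy stand-in for a traditional classifier (e.g. logistic
        regression) trained to flag likely hallucinations."""
        hedges = ("might", "possibly", "i think")
        penalty = sum(h in answer.lower() for h in hedges)
        return 1.0 / (1.0 + penalty)

    def answer_with_filter(prompt: str, threshold: float = 0.5) -> str:
        """Return the highest-scoring candidate, or abstain."""
        best = max(call_llm(prompt), key=confidence_score)
        if confidence_score(best) < threshold:
            return "I don't know."  # abstain rather than hallucinate
        return best

    print(answer_with_filter("Who fixed hallucinations, and when?"))

If something like this sits in the serving stack, a prompt that shifts which candidates survive the scorer would change output quality without the underlying LLM changing at all, which is the observable effect described above.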