Hacker News

mentos · today at 11:56 AM

I assume it’s not possible to get the same results by fine tuning a model with the documents instead?


Replies

notglossy · today at 12:20 PM

You will still get hallucinations. With RAG you use the embedding vectors to find relevant passages, and you typically store the raw text alongside them. That lets you ground the LLM's output in the actual text of the documents. Depending on the implementation, you can also make the LLM cite its sources (filename, chunk index, etc).
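A minimal sketch of the flow described above: store each chunk's raw text plus source metadata, retrieve by vector similarity, and build a prompt that carries citations. The filenames are made up, and a toy bag-of-words embedder stands in for a real embedding model:

```python
import numpy as np

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

# Each chunk keeps its raw text AND metadata (file, chunk index),
# so the LLM can be asked to cite where an answer came from.
# Filenames here are hypothetical examples.
chunks = [
    {"file": "handbook.txt", "idx": 0, "text": "Refunds are issued within 14 days."},
    {"file": "handbook.txt", "idx": 1, "text": "Support is available on weekdays."},
    {"file": "faq.txt",      "idx": 0, "text": "Shipping takes 3 to 5 business days."},
]

# Toy embedder: normalized bag-of-words over the corpus vocabulary.
# A real system would call an embedding model instead.
vocab = sorted({w for c in chunks for w in tokenize(c["text"])})

def embed(text):
    v = np.zeros(len(vocab))
    for w in tokenize(text):
        if w in vocab:
            v[vocab.index(w)] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

index = np.stack([embed(c["text"]) for c in chunks])

def retrieve(query, k=2):
    # Cosine similarity (vectors are unit-normalized), best k chunks first.
    scores = index @ embed(query)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query):
    # Prepend the retrieved raw text, tagged with its source,
    # so the answer is grounded in the documents and citable.
    context = "\n".join(f"[{h['file']}#{h['idx']}] {h['text']}" for h in retrieve(query))
    return ("Answer using only the sources below, and cite them.\n"
            f"{context}\n\nQuestion: {query}")

print(build_prompt("How long do refunds take?"))
```

The prompt that comes out carries `[handbook.txt#0] Refunds are issued within 14 days.` as context, which is what lets you ask the model to cite `handbook.txt` chunk 0 rather than answer from its weights.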
