
dspillett · yesterday at 5:14 PM

> companies would not be so sanguine about letting people use Wikipedia

Are companies sanguine about using Wikipedia without verification? Maybe some, but they darn well shouldn't be. And I say this as someone who uses Wikipedia for many minor things (though for anything important, I verify elsewhere).

> Also, I do think that there are companies that do have policies against talking to known liars.

No doubt most/all. But such policies will always be caveated with exceptions for cases where the information is properly validated afterwards.

> So allowing LLM use at all is a direct admission that seeking out the "truth" is not an important goal because it could never actually improve accuracy and could only worsen it through hallucinated, probable reporting.

I'm generally anti-LLM, but this is… ad absurdum.

There is a huge difference between lazily accepting what an LLM spews out, and using that along with other sources for further research. No good reporter will trust a single source outside exceptional circumstances, whether that source is a person or an LLM, and what would be considered “exceptional circumstances” for trusting specific meat-sourced information won't apply to an LLM-sourced summary.

If you can trust Wikipedia as a starting point, you can trust a good LLM as a starting point. Both are offering a summary of what a bunch of people on the internet have written, neither should be trusted as a reliable source.

> I don't think it's insane to then have such companies or agencies say that AI shouldn't be used because it's been shown to be unreliable

Only if taking an absolutist approach. I would be a little more qualified and say that LLM output should never be used without verification of all details, rather than that it should not be used at all. It may be that this verification makes using LLMs no more efficient than doing the research from other sources in the first place, and I suspect that this is often the case when proper time is given to verifying the output.

The problem is people misunderstanding what an LLM is: a summariser, offering access to a compressed version of its sources. If you are using them as sources rather than summarisers then you are using them wrongly. Unfortunately, that means a great many people are using them wrongly…