Hacker News

nicole_express · yesterday at 10:36 PM · 8 replies

It's an odd thing here, because I don't really understand why this is LLM-specific at all. If someone came up to me and asked "who's the 6 Nimmt world champion?" I'd google it and probably find the same result, and have no reason not to believe it. I mean, for all I know the game is being made up too, though it has more sources at least.


Replies

pmontra · today at 5:03 AM

It is not LLM specific. The conclusion of the post states:

> The web was already being poisoned for search and link ranking long before LLMs existed.

But it continues:

> We are now plugging generative models directly into that poisoned pipeline and asking them to reason confidently about “truth” on our behalf.

So it's a shift from trusting Google to trusting the AI, which may or may not be more insidious, depending on each of our individual attitudes.

SchemaLoad · today at 12:44 AM

The difference, imo, is removing the information from its source. Previously you'd use the source of the information to gauge how much to trust it. If it's a reddit post or a no-name website, you'd likely be skeptical unless it seemed backed up by better sources. But now the info is coming from an LLM that you generally trust to be knowledgeable, and the language it uses reinforces this feeling.

The OP's post highlights how incredibly easy it is for a very small amount of information on the web to completely dictate the output of the LLM, steering it into saying whatever you want.

latexr · today at 10:29 AM

> I'd google it and probably find the same result, and have no reason not to believe it.

Have you truly looked at the website?

https://6nimmt.com

I’d say there’s an obvious reason not to believe it, or at least to check another source. The website just seems fishy. Why would a website exist for just that one post? Sure, they could’ve made the website more believable, but that takes more effort and offers more chances for something to jump out at you.

And therein lies a major difference between searching the web and asking an LLM. When doing the former, you can pick up on clues regarding what to trust. For example, a website you’ve visited often and that has proven reliable will be more trustworthy to you than one you’ve never been to before. When asking an LLM, every piece of information is presented in the same interface, with the same authoritative certainty. You lose an important signal.

seanhunter · today at 9:55 AM

It's not. He vandalised Wikipedia and then talked about LLMs in his writeup to gain attention.

yen223 · yesterday at 11:37 PM

A lot of people seem to think this is an LLM problem, but you're right.

This is a general epistemological problem with relying on the Internet (or really, any piece of literature) as a source of truth.

freakynit · today at 3:17 AM

Because outside of the tech community (and, in fact, many inside it too), almost everyone takes what these ChatGPT-like tools answer as the truth, without questioning it or cross-verifying it even once.

locallost · today at 7:05 AM

You would also find other results (assuming what you're searching for is not some random made-up thing). The issue with LLMs is IMHO bigger, because they give you answers as a matter of fact, without any other context.

refulgentis · yesterday at 11:19 PM

Closed it after “This house of cards only needs a $12 domain!”, right under “Sorry, Wikipedia.”, right under their Wikipedia edit.
