
lucumo | yesterday at 6:50 AM

Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

This is a problem, because you can easily get stuck in a self-reinforcing loop. You feel strengthened in your convictions that you're good at ferreting out LLM-speak because you've found so much of it. And you find so much of it because you feel confident you're good at it. Nobody ever corrects you when you're wrong.

Combine that with general overconfidence and you get threads where every other post with correct grammar gets "called out" as AI generated. It's pretty boring.

There's a similar effect with contentious subject. You get reams and reams of posts calling the other side out for being part of a Russian/Israeli/Iranian/Chinese troll network. There's no independent falsification or verification for that, so people just get strengthened in their existing beliefs.


Replies

mold_aid | yesterday at 12:30 PM

>Yes, and it's a detection loop without feedback. You can never verify that a piece of work in the wild is actually AI. The poster is the only one who really knows, and they'll always say it's not.

Yes. People keep saying, in response to points like this, "oh but you/I can tell pretty easily." But it's not the detection, it's the verification! (see what I did there)

Where I'd push back is on the idea that the problem is the boring "call out" discourse that follows each accusation. The problem of verifying human provenance is fundamental to the discussion of trust and argumentation, but the simple "the zone is flooded" problem is also an ecological one. There's terrible air/water/soil quality in the metro area I live in; people have to live with it without regard to how invested they are in changing it.

grey-area | yesterday at 8:43 AM

At this point it’s pretty easy to detect unaltered LLM output because it is such bad writing. I would hope that will change over time with training. At some point I imagine it will be hard to tell.

I honestly don’t know what sites like this will do when that happens and the only way of detecting LLMs is that they are subtly wrong or post too much; we’d be overrun with them.

Not sure if we should be hopeful or fearful that they will improve to be undetectable, but I suspect they will.
