Hacker News

sethev · yesterday at 4:21 PM

LLMs were trained on stuff that people wrote. I get that there are "tells", but I don't really think people are as good at identifying AI-generated text as they think they are...


Replies

afro88 · yesterday at 6:37 PM

I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of its output to figure out if I could stand behind it. Now I see the tells everywhere: "It's not this. It's that." is particularly common, and I can't unsee it. (FWIW, I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)

The problem with AI-generated posts, I think, is that you feel like you can't trust the content once you know it's AI: it could be partly hallucinated or misrepresented.

antonvs · yesterday at 6:02 PM

Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.

mmooss · today at 12:43 AM

Is there research showing whether, and under what conditions, LLM output is detected accurately? What are the false positive and false negative rates?

computably · yesterday at 8:26 PM

You don't have to be good at identifying AI-generated text to detect low-effort slop.

adi_kurian · yesterday at 9:10 PM

Contractions