Hacker News

erentz yesterday at 11:42 PM

Seems AI has made it cheap to produce information, but now you have to spend more time parsing it. The result is that less competent people spend less time producing more information, while the more competent people spend more of their valuable time parsing it. This is why I'm skeptical of LLMs ever becoming a net benefit in most organizations.


Replies

anonymars today at 4:15 AM

Intellectual denial of service

scruple today at 1:55 AM

LLMs are Brandolini's Law taken to an entirely different plane of existence.

jimbokun today at 3:26 AM

Calling it “information” is generous.

Bombthecat today at 6:07 AM

You don't parse the information. You paste it back into an AI to get the bullet points the first person put in.

trollbridge today at 12:54 AM

Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I've taken this approach with people who send me obviously LLM-generated text: I simply run it through an LLM, paste the summary back, ask "Is this an accurate summary?", and then ask them for their original prompt.
