Hacker News

serf, today at 12:24 AM

>Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance

It concerns you because you have a good command of your language, and like most people with that skill, I presume the language flows easily from you.

That ability isn't guaranteed. For a lot of people expression is tough, and those people feel equally alienated when confronted with an essay of word salad about why their opinion is wrong.

An LLM is a tool. In the 90s I would read columns and editorials about the disgusting faux pas of replying to a wedding invitation via such a cheap, trendy medium as internet e-mail; now you receive death certificates that way.

It's not all bad: simpletons can use LLMs to turn critiquing essays into five-word ELI5 statements that they can become enraged over once all the nuance is stripped. That's fun!


Replies

advael, today at 1:20 AM

Sure, it's a tool, but I don't think that's a particularly compelling use of it. I can at least see an endpoint for slop code, where the right guardrails and model improvements create a means by which people can ask their computers to do things in natural language, and semantic search is genuinely a novel and powerful capability. Maybe we even get other nice translation protocols to structured forms of language. But in a context where the premise is that we're trying to communicate with other humans, a model that generates plausible prose is a mechanism that obfuscates rather than clarifies. It's no more fit for that purpose than a hammer makes a good screwdriver. If it helps you to bounce your ideas off an LLM, by all means do so, but this will mostly just serve to homogenize the writing of everyone doing it. Possibly of value to some people, but not to me.