I think there is a bit of a wider social-norms piece missing as well on AI use in knowledge-work contexts.
Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.
For, say, the HN crowd that thinks in terms of context shifts, information load, and things on THAT wavelength, the problem with that situation is obvious. But I realised then that it's not at all obvious to the average person. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
There is zero understanding of, or consensus on, acceptable practices around that sort of thing baked into societal norms right now.
My default is that I won't copy and paste anything that's AI generated in communications. I kind of think that's the line. Use whatever you want in the background, but I want to communicate with the synthesis of your thoughts.
I think this is a reasonable standard to hold; otherwise, as many have said before: send me the prompt. It's actually more interesting/better to know that a coworker is struggling to communicate about something.
You have to call it out when you see it, politely and charitably.
"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."
> She genuinely seemed to think she's helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
This is the root frustration spreading across workplaces everywhere. Before AI, the only way for someone to produce a design document, Jira ticket, or pull request was to invest a lot of their own time and effort into producing what you saw.
LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100- or 1,000-line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time, or if they had their AI subscription generate something that merely looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.
For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: they can now generate the appearance of doing a lot of work with nothing more than a few lines of prompting an LLM to produce documents.
If you spend the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send over another document with the fixes. Now they've even roped you into doing their work for them!
For teams or even entire companies that were relying on the appearance of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them, and even be plausibly correct most of the time. One person can generate volumes of design documents and Jira tickets, and even copy and paste witty responses into the company Slack, appearing by volume to be the most engaged and dedicated employee while doing less actual work than ever before.
I think teams that already had good review cultures, with managers who cared about the output rather than the metrics, are doing fine, because anyone even a little bit engaged can spot the AI copy-and-paste employees with a little inspection. The lazy managers who relied on skimming documents and plotting the number of PRs or lines of code changed are in for a rude awakening when they discover that the employees dominating their little games are the ones doing the most damage to the team.
you could always do this: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...
I've run into a similar thing where I'll be cc'd on support tickets with one of our customer support agents, and they'll reply to me with what is clearly an AI summary of the single email from the customer that I can already read. I do think they're trying to be helpful, but it's hard not to feel like they think I'm a child or an idiot. Back in the day we agreed that Googling something for someone was rude (letmegooglethatforyou.com being a good example); I don't know why AI summaries and slop aren't understood the same way.
Well no, you're supposed to copy-paste it into ChatGPT, ask for an executive summary, and recover an approximation of the original input. Duh :)
And it’s too soon to have these norms. Employers today are willing to part with them at the hint of the slimmest efficiency gains, so you’ll waste your time trying to enforce them. I think the correct response today is to wait for things to settle. Norms will form on their own.
My current bar is “if you know I’m expecting to hear from a person don’t paste unedited ChatGPT outputs and hit send.” Everybody wants to send out the efforts of their corner-cutting, but nobody wants to receive them.
Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.
Yeah I write prompts asking it to misspell a few words, break a few grammar rules, forget to capitalize once in a while, miss some punctuation once in a while. No one will ever catch on.
In an ideal workplace, one could sit down with the colleague and have her experience untangling the slop, perhaps by a process akin to pair programming.
Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose, and creates artificial purposes such as using AI to combine words in order for AI to assign it a good grade. Even before the AI era, most human generated text was not worth reading.
I've seen a manager in meetings obviously reading out Copilot's advice as his own thoughts.
You can use an LLM to fix spelling and grammar errors. You don't need to generate slop. (Cloud providers sell LLMs as "robot information workers" when they're actually "calculators for text".)
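For the "calculator for text" framing, a minimal sketch of what that restraint looks like in practice: constrain the model to proofreading only, so it fixes mechanical errors without generating new content. This assumes the OpenAI Python SDK's chat-completions interface; the model name and prompt wording are illustrative, not anyone's specific setup.

```python
# Sketch: an LLM used strictly as a "calculator for text" --
# fix spelling, grammar, and punctuation, never add content.

SYSTEM_PROMPT = (
    "You are a proofreader. Fix spelling, grammar, and punctuation "
    "in the user's text. Do not add, remove, or rephrase content. "
    "Return only the corrected text."
)

def proofread(text: str, client, model: str = "gpt-4o-mini") -> str:
    """Return `text` with mechanical errors fixed and nothing else.

    `client` is any object exposing the chat-completions interface
    (e.g. an `openai.OpenAI()` instance).
    """
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

# Usage (requires OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   print(proofread("Their was a error in teh report.", OpenAI()))
```

Passing the client in explicitly keeps the function testable and makes it obvious that the model never sees anything but your own words plus a narrow instruction.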
Well, sure, it's very new. Soon we'll adapt and it'll be just another tool we're using.
Seems AI has made it cheap to produce information, but now you have to spend more time parsing it. And it’s the less competent/useful people spending less time producing more information, while the more useful people spend more of their valuable time parsing it. This is why I’m skeptical of LLMs ever becoming a net benefit in most organizations.