See also:
Grammarly is using our identities without permission, https://www.theverge.com/ai-artificial-intelligence/890921/g..., https://archive.ph/1w1oO
> When asked if Superhuman considered notifying the people named in its AI feature, or requesting their permission, Gay said, “The experts in Expert Review appear because their published works are publicly available and widely cited.”
Big difference between "AI, rewrite this passage to sound more like Hunter S Thompson" and "Grammarly-brand unauthorized digital agent Hunter S Thompson, provide a critique of my writing"
Let's see what company values informed this decision [0].
> At Grammarly, it all starts with our EAGER values: Ethical, Adaptable, Gritty, Empathetic, and Remarkable. These values are guiding lights that keep the Grammarly experience compassionate and our business competitive.
The most interesting part is the realization that if the LLM's input is only the output of a (human) professional, then by definition the LLM cannot mimic the process that professional applied to get from whatever input they had to that output.
In other words, an LLM can spit out a plausible "output of X," but it cannot encode the process that led X to transform their inputs into their output.
I know all press is good press... but there are limits.
If it feels like Grammarly does not respect your right to digital sovereignty, it is because it does not.
For the main link to the Wired article as well: https://archive.is/2Qbdu
The weird part about tools like this isn't just the copyright question; it's the simulation of authority.
Grammarly seemed pretty dead on arrival the moment they added AI features. They would have stayed a lot more relevant and kept costs down if they had remained strictly no-AI, imo.
I offer my expertise in tech writing to review your AI articles and docs.
I spent a great deal of time trying to do this at allofus.ai with a team of ex-googlers with our goal being to help creators eventually 'own' their personas and drive and compete to use them to help end users.
We believed this was coming and that the best way to handle it was give the real person control over their persona to grow/edit/change and train it as they see fit.
I actually own the patent on building an expert persona based on the context of the prompt plus the real person's learned information manifold...
A few things worth flagging:

On GDPR: Using a named individual's identity to generate commercial AI output isn't obviously covered by "legitimate interest." Affected EU-based individuals likely have real grounds to object or request erasure.

On IP/publicity rights: You can't copyright an editing style, but you absolutely can have a right of publicity claim when a company profits from your name and simulated judgment without consent. The Lanham Act's false endorsement provisions could also be in play here.

The kicker: The "sources" cited by the feature were broken, spammy, or pointed to completely unrelated content. So the defense that suggestions are inspired by someone's actual work may not even hold up technically.
This feels like a desperate attempt to stay relevant in a post-LLM world. They’re basically wrapping an LLM in a "professional" skin and calling it an expert review. The problem is that once you start letting an AI "expert" dictate tone and logic, you effectively lobotomize the writer’s original intent. We’re reaching a point where AI is just reviewing other AI-generated text, creating a feedback loop of pure mediocrity. Copium for middle management, if you ask me.
Frankly, I am surprised this was not shut down by their legal counsel (assuming they have one and they actually asked). The legal exposure here is significant. This could be defamation, there are publicity rights issues, copyright, and maybe even criminal liability.
This feels illegal. Even if it's not, it further drives the perception that AI is only good for crime, like crypto.
I would be surprised if the living writers can't sue over this.
"We can do it because no one can stop us."
Man I really don't like this at all.
It really feels so wrong that they spare nobody, not even dead writers.
All it's gonna do is something similar to what happened with em-dashes: people whose writing resembles these authors will get accused of using an LLM, when it was their writing that trained the LLM in the first place (the irony).
If this takes off, hypothetically, we will start associating these writers' styles with slop, similar to how Ghibli art is so good, yet after seeing just about anyone generate it, it felt sloppy and we came to appreciate the Ghibli art style less.
The sad part is that many of these dead writers/artists were never appreciated by the people of their time; they struggled with so many feelings, and writing/art was their way of expressing that. Van Gogh is an example that comes to mind [0]. Many struggled with depression and other afflictions too. To take that expression and turn it into yet another product feels quite depressing for a company to do.
[0]: https://en.wikipedia.org/wiki/Health_of_Vincent_van_Gogh
that's so scummy. why do they even need "names"? it's a rhetorical question...
Digital necrophilia. The living ones are the ones that are going to have to make the objections here.
This is revolting at so many levels.