> I think AI writing is better used for ideation
It shocks me that proponents of AI writing for ideation aren't concerned about *Metaphoric Cleansing* and *Lexical Flattening* (to use two of the terms defined in the article).
Doesn't it concern you that the AI's explanation of a concept may be only a highly distorted caricature of how that concept is actually understood by people who use it fluently?
Don't get me wrong: I think LLMs are very useful as a sort of search engine for terms you don't yet know. But once you know *how* to talk about a concept (meaning you understand enough of the jargon to do traditional research), I find I'm far better off tracking down books and human-authored resources than trying to get the LLM to regurgitate its training data.