Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I have taken this approach with people who send me obviously LLM-generated text: I simply run it through an LLM, paste the summary back, ask them "Is this an accurate summary?", and then ask them for their original prompt.
Ah yes, take my single sentence, blow it up into three paragraphs with an LLM, and then the person reading it can have an LLM summarize it back into a single sentence.
What the fuck are we even doing anymore?
LLMs are great at decompression [1]
[1] https://jabde.com/2026/02/02/utilizing-llms-as-a-data-decomp...
Might as well donate money to the AI companies at this point.
But now even this is just producing more information and requires more work both from you and from the original sender.
> and then I ask them for their original prompt.
Original prompt: "Please rewrite this information in a nice format for my insufferable asshole colleague".
This puts the LLM providers in a great position:
They're getting paid to encode some inane prompt into paragraphs of text, and then they're getting paid again to summarize that back into something with even less value than the original prompt. And they're making money hand over fist, because people are happier to play that game than to push back on the jerks sending them pages of generated garbage in the first place.