If they can't distinguish LLM text from human text, then why should they care?
Anti-AI people like to bring up hallucination as if everything AI generates is false.
I can write pages of my own content, then use AI to improve the writing and clarity, and then review and edit the result. It might still have some LLM markers in it, which I sometimes remove because they're distracting. The final, AI-assisted text is easier to read and better organized, but all the ideas are mine. Hallucination is not remotely a problem in that workflow.
If you can't distinguish between fake images and real ones, why should you care?