I think the Oxide Computer LLM guidelines are wise on this front:
> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers
The heavy use of LLMs in writing makes people rightfully skeptical that it's worth putting in the time to read what's written.
Using LLMs for coding is different from writing in many ways, because the proof is in the pudding: you can run the code, you can test it, and so on. But with prose, the writing _is_ the artifact, and the only way to know it's correct is to put in the work.
That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.