I don’t think this is a good long-term solution. LLMs can do easy language substitutions, and you can even prompt them to add errors. So relying on that alone won’t work, since people will intentionally make output look more “human.”
Right, but the problem here is other humans yelling "witch," not LLMs. You're combating people's terrible witch-detector, not anything factual or real.