>Humans make mistakes. LLMs do not. Anything “wrong” they do is them working exactly as designed.
That requires redefining the term "mistake," no?