Does this still work on newer models?
The reasoning behind why it works is pretty interesting: a sort of moral/linguistic trap based on the model's beliefs or rules.
Works on humans as well I think.
> Works on humans as well I think.
Huh?