To lie requires recognition of the truth and an intention to deceive. LLMs don't have such abilities. They are systems that generate plausible sequences of symbols based on training inputs, alignment, reinforcement, and inference. These systems don't know or care what truth is and therefore cannot lie.
It's already bad. I'm not looking forward to the future. These systems are terrible. For some reason, it's a future without people that they want. I'd rather deal with incompetent, tired, annoyed people than an LLM.
Ill-thought-out logic.
The company that deployed the LLM is lying to you. The people who made that decision are the ones who are culpable.
We both agree that it’s terrible.
I think it's important to have an enforcement mechanism that forces companies to do what they are responsible for doing. An Anti-Kafka Law, so to speak.