There is a fundamental assumption about the ability of AI being made here that I believe is wrong: that the outputs are lacking because of a hard limit on capability.
I think there is a strong case to be made that many of their limitations come from them doing exactly what we have told them to do. Hallucinations are the standout example. If you train a model to give answers to questions, it will answer questions, even if it has to make the answer up. That isn't a failure to know that it doesn't know; it's carrying out the task it was given regardless of whether it knows.
Suppose you were given the task of writing the script for a TV show with the criterion that it not offend anyone whatsoever. You are told to make something as likeable as you can without anyone disliking it at all. Your options are reduced to something that is okay-ish but rather bland.
That's what AI is giving us: OK, but rather bland. And it's giving us that because it's what we've told it we want.
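To make the incentive concrete, here's a toy back-of-the-envelope sketch (the reward values are made up, not taken from any real training setup): if a wrong answer costs no more than saying "I don't know", guessing always scores at least as well as abstaining, so a model trained under that scheme learns to guess.

```python
def expected_reward(p_correct: float, r_correct: float = 1.0,
                    r_wrong: float = 0.0, r_abstain: float = 0.0) -> dict:
    """Expected score for guessing vs. abstaining, given the model's
    chance of being right. All reward values here are made up."""
    guess = p_correct * r_correct + (1 - p_correct) * r_wrong
    return {"guess": guess, "abstain": r_abstain}

# A model that is right only 10% of the time:
print(expected_reward(0.10))                 # {'guess': 0.1, 'abstain': 0.0} -> guessing wins
# Penalize wrong answers harder than abstention and the incentive flips:
print(expected_reward(0.10, r_wrong=-1.0))   # {'guess': -0.8, 'abstain': 0.0} -> abstaining wins
```

Under the first scheme, hallucinating is the optimal policy. That's the point: the behaviour is the objective working as specified, not a capability ceiling.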
> I think there is a strong case to be made that many of their limitations come from them doing exactly what we have told them to do. Hallucinations are the standout example. If you train a model to give answers to questions, it will answer questions, even if it has to make the answer up. That isn't a failure to know that it doesn't know; it's carrying out the task it was given regardless of whether it knows.
Are you asserting that an LLM could be NOT trained to answer when it knows it doesn't know the answer, or, if that's not possible, be trained to NOT answer when it knows it doesn't know the answer?
If so, I would believe your thinking, but for some reason I have not yet seen a single LLM that behaves with that kind of self-knowledge.
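The closest thing I've seen is not trained-in self-knowledge but selective prediction bolted on at inference time: score the model's confidence in its answer and refuse below some cutoff. A rough sketch of the idea (the candidate probabilities are toy values standing in for real model output, and the 0.7 threshold is arbitrary):

```python
def answer_or_abstain(candidates: dict[str, float], threshold: float = 0.7) -> str:
    """Return the highest-confidence answer, or abstain below the cutoff.
    The candidate probabilities here are toy values, not real model output."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else "I don't know"

print(answer_or_abstain({"Paris": 0.92, "Lyon": 0.05}))  # confident -> 'Paris'
print(answer_or_abstain({"Paris": 0.41, "Lyon": 0.38}))  # unsure -> 'I don't know'
```

But that's a wrapper making the decision, not the model knowing that it doesn't know, which is why it doesn't really answer the question above.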