The last quote may sound completely sinister to a layperson, but it touches a deep and open question in computer science: AIs really do seem to derive some of their special capabilities from having a degree of freedom to output wrong and false answers. The observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible, and early learning-theory results, e.g. PAC learning, are related to it. I'd love to know what's happened since on this front: the role of noise and randomness, and whether hallucinations might even be a feature, not a bug, in some fundamental sense.
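As a toy illustration of the noise-and-randomness point (not anything Turing or the PAC literature specifically proposed): temperature sampling is a standard knob in modern generative models, and it makes the trade-off concrete. At temperature zero the model always emits its single highest-scoring answer; at higher temperatures it gains the "freedom" to emit lower-scoring, possibly wrong outputs, which is also what gives it diversity. The logits below are made-up scores for three hypothetical candidate answers.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature).

    temperature == 0 degenerates to argmax (deterministic, 'safe');
    higher temperatures let lower-scoring outputs through.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate answers

greedy = [sample_with_temperature(logits, 0, rng) for _ in range(5)]
warm = [sample_with_temperature(logits, 1.5, rng) for _ in range(20)]

print(greedy)           # always the top-scoring answer: no freedom to be wrong
print(len(set(warm)))   # at T > 0, more than one distinct answer shows up
```

The same mechanism that admits wrong answers is the one that produces varied, creative ones, which is one concrete sense in which the "freedom to be wrong" is load-bearing.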