I think the future of economically useful AI lies in building efficient reasoners. The goal of a model as an oracle of truth or an encyclopedia of our world is orthogonal to the goal of a model that can reason about novel input. I think the focus on meme litmus tests is somewhat misguided. AI is not suited to be a spell checker, a news source, a history book, or an independent developer, but if it can reason about a prompt and augment human effort, then that is useful. The idea that an AI needs a human-like world model matching or exceeding our own, or our store of facts, is misguided in my opinion.
The future of AI, as I imagine it, is one where AI acts purely as logic operating within constraints.