It just doesn't work that way; LLMs need to be generalised a lot to be useful even in specific tasks.
It really is the antithesis of the human brain, which rewards specific knowledge.
> It just doesn't work that way; LLMs need to be generalised a lot to be useful even in specific tasks.
This is the entire breakthrough of deep learning on which the last two decades of productive AI research is based: massive amounts of data are needed to generalise and prevent over-fitting. GP is suggesting that an entirely new research paradigm will win out - as if researchers have not yet thought of "use less data".
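To make the over-fitting point concrete, here is a minimal numpy sketch (the sine target, noise level, and polynomial degree are all arbitrary choices of mine, not anything from the thread): the same high-capacity model memorises 10 training points and generalises badly, but is forced toward the underlying function once given 1,000.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_and_eval(n_train, degree=9):
    # Noisy samples of an underlying sine curve.
    x = rng.uniform(-1, 1, n_train)
    y = np.sin(3 * x) + rng.normal(0, 0.1, n_train)
    coeffs = np.polyfit(x, y, degree)      # high-capacity model

    # Held-out test set drawn from the same distribution.
    x_test = rng.uniform(-1, 1, 1000)
    y_test = np.sin(3 * x_test)
    pred = np.polyval(coeffs, x_test)
    return np.mean((pred - y_test) ** 2)   # test MSE

# Same model class, different amounts of data: the small-data fit
# memorises its 10 points; the large-data fit generalises.
print("test MSE, n=10:  ", fit_and_eval(10))
print("test MSE, n=1000:", fit_and_eval(1000))
```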
> It really is the antithesis of the human brain, which rewards specific knowledge.
No, it's completely analogous. The human brain gets vast amounts of pre-training before it starts to learn knowledge specific to any career or discipline, and that fact intuitively suggests to me why GP's suggestion is half-baked: you cannot learn general concepts such as the English language, reasoning, computing, network communication, programming, or relational data from a tiny dataset consisting only of the code and documentation for one open-source framework and language.
It is all built on a massive tower of other concepts that must be understood first, including ones far more basic than the examples I mentioned, concepts that are practically invisible to us because they have been present for as far back as our earliest memories reach.
The human brain rewards specific knowledge because it's already pre-trained by evolution to have the basics.
You'd need a lot of data to train a primordial ocean soup to think like a human too.
It's not really the antithesis to the human brain if you think of starting with an existing brain as starting with an existing GPT.
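That "start with an existing GPT" framing maps directly onto how fine-tuning is done in practice: keep the pretrained weights frozen and learn only the task-specific part. A minimal PyTorch-style sketch of the idea, where the backbone, the layer sizes, and the tiny random dataset are all stand-ins I invented for illustration:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone (in reality: a downloaded model
# whose weights already encode the general "basics").
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False  # evolution / pre-training: kept fixed

# Small task-specific head: the only part that learns "specific knowledge".
head = nn.Linear(256, 2)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny made-up task dataset: 64 labelled examples.
x = torch.randn(64, 128)
y = torch.randint(0, 2, (64,))

for _ in range(100):
    opt.zero_grad()
    logits = head(backbone(x))
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
```

The point of the sketch is the ratio: 64 examples are workable only because the frozen backbone already exists; training the whole thing from scratch on that dataset would be hopeless.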
Are you trying to imply that humans don’t need generalized knowledge, or that we’re not “rewarded” for having highly generalized knowledge?
If so, good luck walking to your kitchen this morning, knowing how to breathe, etc.
Yesterday an interesting video was posted, "Is AI Hiding Its Full Power?" [0], interviewing professor emeritus and Nobel laureate Geoffrey Hinton, with some great explanations for non-LLM experts. There are some remarkable, mind-blowing observations in there. For example, that saying AIs "hallucinate" is incorrect language and we should say "confabulate" instead, which is what people do too. Or that AI agents, once launched, develop a strong survival drive and do not want to be switched off. Stuff like that. Recommended watch.
The explanation there was that while LLMs' thinking has similarities to how humans think, they take the opposite approach: humans have an enormous number of neurons but only a few experiences to train them with, whereas AIs are the complete opposite, storing incredible amounts of information in a relatively small set of neurons by training on the vast experience contained in the datasets of human creative work.
[0] https://www.youtube.com/watch?v=l6ZcFa8pybE
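To put rough numbers on that contrast, a back-of-envelope sketch; every figure below is an order-of-magnitude assumption of mine, not a measured fact from the video:

```python
# All numbers are loose order-of-magnitude assumptions, for illustration only.
human_synapses = 1e14        # often-cited ballpark for the adult brain
human_lifetime_words = 1e9   # roughly a billion words heard/read in a lifetime

llm_parameters = 1e11        # a large frontier-scale model
llm_training_tokens = 1e13   # trillions of tokens of text

print("human: experiences per connection ~", human_lifetime_words / human_synapses)
print("LLM:   experiences per connection ~", llm_training_tokens / llm_parameters)
# Humans: ~1e-5 experiences per synapse; LLMs: ~1e2 tokens per parameter.
# Same ingredients, opposite ratio - which is the contrast being described.
```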