That's not what I mean; rather, that humans cannot create a type of intelligence that surpasses what human intelligence is roughly capable of, because doing so would basically require us to be smarter.
Not to say we can't create machines that far surpass our abilities on a single axis or a small set of axes.
Given that SOTA models are PhD level in just about every subject, this is clearly provably wrong.
Seems like if evolution managed to create intelligence from slime, I wouldn't bet on there being some fundamental limit that prevents us from making something smarter than us.
Think hard about this. Does that seem to you like it's likely to be a physical law?
First of all, it's not necessary for one person to build that super-intelligence all by themselves, or to understand it fully. It can be developed by a team, each member of which understands only a small part of the whole.
Secondly, it doesn't necessarily even require anybody to understand it. The way AI models are built today is by pressing "go" on a giant optimizer. We understand the inputs (data) and the optimizer machine (very expensive linear algebra) and the connective structure of the solution (transformer) but nobody fully understands the loss-minimizing solution that emerges from this process. We study these solutions empirically and are surprised by how they succeed and fail.
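To make that concrete, here is a minimal sketch of what "pressing go" looks like in practice, written in PyTorch with a toy transformer and random stand-in data (the model size, data, and hyperparameters are purely illustrative, not any real training setup). The part humans write and understand is just this loop and the architecture; the weights that come out of it are found by the optimizer, not designed by anyone.

```python
import torch
import torch.nn as nn

# The "connective structure" we do understand: a small transformer encoder.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, 10)  # toy classification head

# The optimizer machine: repeated gradient steps (very expensive linear algebra at scale).
optimizer = torch.optim.AdamW(list(model.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x = torch.randn(32, 16, 64)            # stand-in for real training data
    y = torch.randint(0, 10, (32,))        # stand-in labels
    logits = head(model(x).mean(dim=1))    # forward pass through the architecture
    loss = loss_fn(logits, y)              # how wrong the current weights are
    optimizer.zero_grad()
    loss.backward()                        # compute gradients
    optimizer.step()                       # nudge millions of parameters downhill

# What emerges is a loss-minimizing set of weights that nobody can read off or
# fully explain; we study its behavior empirically after the fact.
```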
We may find we can keep improving the optimization machine and tweaking the architecture until we eventually hit something with the capacity to grow beyond our own intelligence, and it's not a requirement that anyone understand how the resulting model works.
We also have many instances in nature and history of processes that follow this pattern, where one might expect to find a similar "law". Mammals can give birth to children that grow bigger than their parents. We can make metals purer than the crucible we melted them in. We can make machines more precise than the machines that made their parts. Evolution itself created human intelligence from the repeated application of very simple rules.