Long term, though, AI will win out. The thing is that machine capability can keep being improved: you can make the context window bigger, throw more compute at it, improve chip efficiency, throw more power at it. And indeed, that is exactly what has turned the GPTs of 2018 into the GPTs of 2026 that can actually do stuff.
Meanwhile, human thoughtpower cannot really be improved. Once the tipping point is reached where computers exceed humans, humans will never catch up, because one side keeps improving while the other stands still.
Humans can also only maintain so much contextual information and scope. They can only learn so much in the time they have to get up to speed, and they can only do so much within the window of their mental peak before they decline, go senile, or die. These limits are set by evolution, and evolution shifts them on the order of thousands of generations, and only under strong selection pressure at that.
The tortoise has marched far already, but the hare, riding in a speeding car that keeps getting upgraded, is not far behind. Efficiency doesn't matter in the long run: what is inefficient now will be trivial to parallelize and scale later, as it always has been in the history of compute. We'd have to engage in something like the Bene Gesserit breeding program for human thoughtpower to stay competitive with compute.
You’re presupposing that we can actually afford to just keep throwing more compute at the problem.
Moore's law is long dead, leading-edge nodes are getting ever more expensive, and the most recent generation of tensor silicon is not significantly better than the previous one in FLOPS per watt.
Given that model performance has consistently trended log-linear in compute, every further fixed gain in capability requires a multiplicative increase in FLOPs, so there must be a point at which throwing more compute at the problem is no longer economically viable.
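To make that concrete, here is a minimal sketch of what a log-linear trend implies for cost. It assumes performance rises linearly in log10 of training compute, P(C) = k·log10(C) + m; the constants k and m and the dollars-per-FLOP figure are hypothetical, picked only to illustrate the shape of the curve, not fitted to any real model.

```python
import math

K, M = 8.0, -140.0        # hypothetical: score points gained per decade of FLOPs
DOLLARS_PER_FLOP = 2e-18  # hypothetical blended training cost per FLOP

def score(compute_flops: float) -> float:
    """Benchmark score under the assumed log-linear trend P(C) = K*log10(C) + M."""
    return K * math.log10(compute_flops) + M

for exp in (22, 23, 24, 25, 26):
    compute = 10.0 ** exp
    cost = compute * DOLLARS_PER_FLOP
    print(f"1e{exp} FLOPs (~${cost:,.0f}): score {score(compute):.0f}")
```

Each decade of compute buys the same fixed increment of score while the bill grows tenfold, so at some decade the marginal gain stops paying for itself.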
You are forgetting that the current approach to AI may lead to a flat asymptote that still lies well below human capabilities.
You seem to have a very one-dimensional perspective on "human thoughtpower".
You're presupposing an answer to what is actually the most interesting question in AI right now: does scaling continue at a sufficiently favorable rate, and if so, how?
The AI companies and their frontier models have already ingested the whole internet and reoriented economic growth around data center construction. Meanwhile, Google throttles my own Gemini Pro usage with increasingly tight constraints. The big firms are feeling the pain on the compute side.
Substantial improvements must now come from algorithmic efficiency, which is bottlenecked mostly by human ingenuity. AI-assisted coding will help somewhat, but only with the drudgery, not the hardest parts.
If we ask a frontier AI researcher how they do algorithmic innovation, I am quite sure the answer will not be "the AI does it for me."