Sadly I don’t see how our current social paradigm works for this. There is no history of any sort of long planning like this, or of long-term loyalty (in either direction) between employees and employers, for this sort of journeyman guild-style training. AI execs are basically racing, hoping we won’t need a Schwartz before they are all gone. But what incentives are in place to hire a college grad, have them work without LLMs for a decade, and then give them the tools to accelerate their work?
Some folks need to touch the hot stove before they learn but eventually they learn.
If AI output remains unreliable then eventually enough companies will be burned and management will reinstate proper oversight. All while continuing to pat themselves on the back.
> There is no history of any sort of long planning
Sure there is. It's the formal education system that produced the college grad.
Well, the astrophysics situation is special because, as the article notes, there aren't breakthroughs that can be externally verified.
Other projects' success will be proportional to their number of Schwartzes, and so it seems unlikely they disappear entirely. But they may disappear from areas in which there is no immediate money.
Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?
Last September, Tyler Austin Harper published a piece for The Atlantic on how he thinks colleges should respond to AI. What he proposes is radical, but if you've concluded that AI really is going to destroy everything these institutions stand for, I think you have to at least consider these sorts of measures. https://www.theatlantic.com/culture/archive/2025/09/ai-colle...