The only way that this even vaguely works, best I can tell, is on that decade-or-two timeline, but therein lies the problem: all this money getting pumped into data centers right now is going to produce data centers running old, inefficient, slow GPUs by 5-years-from-now standards. And GPUs are by far the most expensive part of these data centers… having the buildings is barely an asset. We’re investing all this money in today’s technology in one of the fastest-moving hardware segments and, for some inexplicable reason, think that will lead to a sustainable advantage. What’s to stop someone 5 years from now from waiting for the dust to settle, spending way less money for more compute, and just mopping the floor with everybody in this sector? And that’s (unreasonably, IMO) assuming that local applications won’t become good enough to take too large a bite out of their business before then.
And look at the difference in spending compared to when they built out general-purpose-computing cloud data centers, which even then had potential use cases if the business failed. What are they going to do with these… start a massive, extremely expensive pre-rendered online gaming service? Only render Disney movies?
I dunno. None of this makes sense to me.
These datacenters are already running old, inefficient, slow GPUs from five years ago alongside newly released cards, because anything newer than that is extremely bottlenecked and they need all the compute they can get. Why should it be any different in five years' time? Even Nvidia is rumored to be about to bring back the RTX 3060, an Ampere-architecture card released around 2021. It's just fine.