GPUs don't really have six-year lifespans, though. The hardware itself lasts far longer than that; even hardware that's been used for cryptomining in terrible makeshift setups is absolutely fine for reuse.
In the context of a datacenter running AI workloads, though, it's cheaper to replace them after a few years with faster, more energy-efficient ones, because power is a major cost factor.
Each of these GPUs pulls up to a kilowatt of power. The average commercial power rate is 13.4¢/kWh. That means running a single H100 flat out 24/7 costs roughly $1,170 per card per year in power alone.
In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster, for the same energy cost.
So if you're running a GPU datacenter on six-year-old GPUs, your power cost per sellable unit of work is double that of a competitor on current hardware.
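The arithmetic above is easy to sanity-check. A minimal sketch, assuming a 1 kW draw at full load, the 13.4¢/kWh rate quoted above, and a hypothetical newer GPU delivering twice the work per watt:

```python
# Back-of-envelope check of the power-cost argument.
# Assumptions (from the comment above): 1 kW draw, $0.134/kWh, 24/7 uptime.
POWER_KW = 1.0
PRICE_PER_KWH = 0.134
HOURS_PER_YEAR = 24 * 365

# Annual electricity cost for one card running flat out.
annual_cost = POWER_KW * PRICE_PER_KWH * HOURS_PER_YEAR
print(f"Annual power cost per card: ${annual_cost:,.0f}")  # -> $1,174

# Hypothetical: a new GPU does 2x the work for the same power draw.
# Then the old fleet pays twice as much electricity per unit of work sold.
old_cost_per_unit = annual_cost / 1.0   # 1x throughput
new_cost_per_unit = annual_cost / 2.0   # 2x throughput per watt
print(f"Old vs. new power cost per unit of work: "
      f"{old_cost_per_unit / new_cost_per_unit:.1f}x")  # -> 2.0x
```

The exact dollar figure shifts with regional power rates and actual utilization, but the 2x cost-per-unit gap only depends on the perf-per-watt ratio.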