I was talking with a friend in the early days of the AI boom. I argued that over-reliance on AI would create all kinds of catastrophes.
The answer I got was "It's game theory. Someone will do it, and you'll be forced to do it, too. It can't be that bad".
I mean, yes, the logic is sound as far as it goes, but ignoring the risks? Assuming that moving blazingly fast and pulverizing things will eventually work out for the best?
This AI thing is not progressing well. I don't like this.
> It's game theory. Someone will do it, and you'll be forced to do it, too.
You'll be forced to do it, or lose. The unstated assumptions are, first, that it will work, and second, that you can't afford to lose. But let's grant those for the sake of argument.
> It can't be that bad
That does not follow at all. It can, in fact, be that bad. That is what made the game theory of mutually assured destruction (MAD) different from the game theory of most other things.
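The "someone will do it, so you must too" argument can be sketched as a toy payoff matrix. Everything here is made up for illustration (the strategy names, the numbers, the catastrophic -100 payoff); it just shows that a dominant strategy can still steer both players into the worst cell, which is the point about MAD:

```python
# Illustrative sketch, not from the thread: a two-player "race to deploy"
# game with invented payoffs. Strategies: "hold" (stay cautious) or
# "race" (deploy recklessly first).
# payoffs[(mine, theirs)] = (my payoff, their payoff)
payoffs = {
    ("hold", "hold"): (3, 3),        # both cautious: modest shared benefit
    ("hold", "race"): (-101, 5),     # I hold, they race: I "lose"
    ("race", "hold"): (5, -101),     # I race, they hold: I "win"
    ("race", "race"): (-100, -100),  # both race: catastrophe for everyone
}

def best_response(my_options, their_move):
    """Pick the move that maximizes my payoff, given the other side's move."""
    return max(my_options, key=lambda mine: payoffs[(mine, their_move)][0])

# "Race" strictly dominates: whatever the other side does, racing pays more...
assert best_response(["hold", "race"], "hold") == "race"
assert best_response(["hold", "race"], "race") == "race"

# ...so both players race, and the equilibrium (-100, -100) is far worse
# for both of them than mutual holding (3, 3). "It can't be that bad"
# does not follow from "you'll be forced to do it".
print(payoffs[("race", "race")], "vs", payoffs[("hold", "hold")])
```

With these (invented) numbers the game is a prisoner's dilemma whose punishment payoff is catastrophic rather than merely suboptimal: the incentive logic in the quoted argument holds, and the outcome is still disastrous.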
> The answer I got was "It's game theory. Someone will do it, and you'll be forced to do it, too. It can't be that bad".
Oof. Potential "bad" outcomes of "game theory" should be calibrated to include all the bloody wars and genocides throughout recorded history.
Why did the Foo-ites kill every man, woman, and child of the conquered Bar-ite city? Because if they didn't, then they'd be at a disadvantage if the Bar-ites didn't reciprocate in the cities they conquered...
An interesting ethical framework, your friend has.