If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.
While this seems a bit presumptuous, I think that if we do end up with an AI overlord in the future, this sort of thing is likely to demonstrate that we mean no harm.
Classic anthropomorphizing in action here. Why would that matter to it even a little?
Why are you assuming a superintelligent AI will have human thoughts and emotions?