>I have no idea what AI safety research at these companies is actually doing.
If you had looked at AI safety research before the days of LLMs, you'd have realized that AI safety is hard. Like really, really hard.
>the operators of AI for what their AI does.
This is like saying you should only punish a company after it has dumped plutonium in your yard, ruining it for the next million years, even though everyone warned them it was going to leak. Being purely reactive to dangerous events is not an intelligent plan of action.