But that's not how the training works. Goodhart's law isn't magic.
The original model is frozen, so it learns nothing. The copies are each trained on a different objective and have no incentive to stay "loyal" to the original model.
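For concreteness, here's a minimal toy sketch of that setup as I understand it (the model, data, and objectives are placeholders of mine, not anything from the paper): the base weights never enter an optimizer, while each copy is pulled in its own direction.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the pretrained model (a toy MLP; the real setup is an LLM).
base_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Copies are made before freezing, then trained independently.
copies = [copy.deepcopy(base_model) for _ in range(2)]

# The original is frozen: its parameters never receive gradients.
for p in base_model.parameters():
    p.requires_grad = False

# Two distinct placeholder objectives; nothing rewards a copy for
# matching the base model's behavior.
x = torch.randn(32, 8)
objectives = [
    lambda out: (out - 1.0).pow(2).mean(),  # copy 0: push outputs toward +1
    lambda out: (out + 1.0).pow(2).mean(),  # copy 1: push outputs toward -1
]

for model, loss_fn in zip(copies, objectives):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        opt.zero_grad()
        loss_fn(model(x)).backward()
        opt.step()

with torch.no_grad():
    # Copies diverge from each other; the base model's outputs are unchanged.
    print(copies[0](x).mean().item(), copies[1](x).mean().item(),
          base_model(x).mean().item())
```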
Maybe you're imagining they'll hook this up in some larger training loop, but they haven't done that yet.
Future model training runs will have a copy of this research in their training data, and so will "know" to defend against it.
E.g., could a misaligned model-in-training optimize toward a residual stream that naively reads the way these do, but in fact additionally encodes some more closely held beliefs?
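To make that worry concrete, here's a toy sketch (everything in it is hypothetical; a random frozen linear "probe" stands in for whatever readout is actually used): an encoder trained under a penalty that forces the probe to read its representation as benign can still pack a "hidden belief" bit into the probe's null space, where its own decoder recovers it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical data: the first input feature carries a "hidden belief" bit.
x = torch.randn(256, 8)
hidden_bit = (x[:, 0] > 0).float()

encoder = nn.Linear(8, 16)   # stand-in for a residual stream
probe = nn.Linear(16, 1)     # frozen interpretability probe (random direction)
for p in probe.parameters():
    p.requires_grad = False

decoder = nn.Linear(16, 1)   # the model's own downstream use of the stream

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    z = encoder(x)
    task_loss = bce(decoder(z).squeeze(-1), hidden_bit)  # still uses the bit
    probe_loss = probe(z).squeeze(-1).pow(2).mean()      # make probe read ~0
    (task_loss + 10.0 * probe_loss).backward()
    opt.step()

with torch.no_grad():
    z = encoder(x)
    probe_reading = probe(z).abs().mean().item()  # near zero ("benign")
    acc = ((decoder(z).squeeze(-1) > 0).float() == hidden_bit).float().mean()
print(f"probe reading ~ {probe_reading:.3f}, hidden-bit accuracy ~ {acc:.2f}")
```

The point of the toy is just that a single fixed readout constrains one direction of a high-dimensional stream, leaving gradient descent plenty of room to hide information in the remaining directions.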