Where do you see this goalpost moving? From my perspective, the claim was never "the AIs will never do this." Even before day one, the experts were explicitly saying that AIs absolutely will do this, and that alignment isn't solved or anywhere close to being solved. Any "ethical guidelines" we can implement are just a band-aid that hides some problematic behavior but won't really prevent it, even if done to the best of our current ability.