I will stand by the first point unless models start being trained with objectives other than RLHF's usual three: helpfulness, harmlessness, and instruction-following.
I will very likely be wrong on the second point.