I did start to use them for AI development on the HPC I have access to and it worked well (GPU pass-through was basically automatic, performance seemed basically the same) - but I mostly use them because I no longer want to argue with administrators that it's probably time they update CUDA 11.7 (and Python 3.6) - the only CUDA version currently installed on the cluster.
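For what it's worth, a minimal sanity check I run inside the container to confirm the GPU actually comes through looks roughly like this (assuming PyTorch as the framework - that part is just my setup, adjust for whatever you use):

```python
# Quick check that the container sees the host GPU and carries its own CUDA toolkit.
import torch

print(torch.__version__)          # framework version shipped in the image
print(torch.version.cuda)         # CUDA toolkit the wheel was built against (not the cluster's 11.7)
print(torch.cuda.is_available())  # True if the host driver is passed through correctly
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # which GPU the job actually got
```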
Ah, right. So, no matter what container comes along to solve this problem, there's still the BOFH factor to deal with...
Curious though, how are you doing this work without admin privs?
use conda?