Do the compute instances not have hard disks? Because it seems like whoever's running these systems doesn't understand Linux or containers all that well.
If the compute nodes have local disks, then you just run the container from the remote image registry; the runtime pulls the image and unpacks the layers to local disk as a temporary cache. No need for a network filesystem.
If the containerized apps then want to work on common/shared files, they can still do that: mount the network filesystem on the host, then bind-mount that path into the container at runtime. Now the containerized apps can access the network filesystem.
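A minimal sketch of that flow, assuming a hypothetical NFS export at nfs-server:/export and a placeholder image name:

    # on the host: mount the shared filesystem once
    sudo mount -t nfs nfs-server:/export /mnt/shared

    # run the container; the runtime pulls and unpacks the image to local disk,
    # and the bind mount exposes the shared data inside the container
    docker run --rm -v /mnt/shared:/data registry.example.com/team/app:latest ls /data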
This is standard practice in AWS ECS, where you can mount an EFS filesystem inside your running containers. (EFS is just managed NFS, and ECS is essentially a wrapper around Docker.)
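On the ECS side that wiring lives in the task definition. A trimmed sketch (the fileSystemId and all names are placeholders, and a real task definition needs more fields than this):

    # declare an EFS-backed volume and mount it into the container
    cat > taskdef.json <<'EOF'
    {
      "family": "app-with-efs",
      "containerDefinitions": [{
        "name": "app",
        "image": "registry.example.com/team/app:latest",
        "memory": 512,
        "mountPoints": [{ "sourceVolume": "shared", "containerPath": "/data" }]
      }],
      "volumes": [{
        "name": "shared",
        "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
      }]
    }
    EOF
    aws ecs register-task-definition --cli-input-json file://taskdef.json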
Yes, nodes have local disks, but any local filesystem the user can write to is often wiped between jobs, since the machines are shared resources.
There is also the problem of simply distributing the image and mounting it. You don't want to waste cluster time at the start of your job pulling an entire image down to every node and then extracting the layers -- it is far faster to put a filesystem image in your home directory and loop-mount it.
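For example (paths and image names are made up; mksquashfs comes from squashfs-tools, and squashfuse gives you the mount without root):

    # build once: pack the container rootfs into a single squashfs image in $HOME
    mksquashfs ./rootfs $HOME/app.sqsh -comp zstd

    # at job start on each node: one mount, no registry pull, no layer extraction
    mkdir -p /tmp/app
    squashfuse $HOME/app.sqsh /tmp/app
    # (with root: mount -t squashfs -o loop,ro $HOME/app.sqsh /tmp/app)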
On a compute node, / is maybe 500 GB of NVMe; that's all the disk it has. Users mount their $HOME over NFS and get whatever quota we assign, which can be hundreds of TB.
I actually allow rootless Podman to run, but I frown on it: it's not very hard for a few jobs to use up all of that 500 GB if everyone is using Podman.
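That space goes into Podman's storage directory, which on a setup like this presumably ends up on the node-local disk where the image layers get unpacked (hence the 500 GB filling up). Users can at least see and reclaim what they are using; a sketch:

    # show what image/container storage is consuming
    podman system df

    # remove unused images and containers to free the local disk
    podman system prune --all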
I don't care if you run Apptainer/Singularity, though, since the image lives entirely within your own $HOME and doesn't use the local disk.
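For comparison, a typical Apptainer flow (the image URL is just an example): pull once into a single SIF file in $HOME, then run straight from it.

    # convert the OCI image into one SIF file stored in $HOME
    apptainer pull $HOME/app.sif docker://python:3.12-slim

    # execute from the SIF; it is mounted read-only out of $HOME,
    # nothing gets unpacked onto the node's local disk
    apptainer exec $HOME/app.sif python3 --version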
At the scale of data we see on our HPC, it is far better performance per £/$ to use Lustre mounted over a fast network; we would spend far too much time shifting data otherwise. Local storage should be used for tmp and scratch purposes.
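In a job script, that split might look like this (a sketch assuming Slurm; the /local/scratch path and app name are made up):

    #!/bin/bash
    #SBATCH --job-name=example

    # bulk input/output stays on Lustre
    DATA=/lustre/projects/mygroup/dataset

    # temporary files go to node-local NVMe
    export TMPDIR=/local/scratch/$SLURM_JOB_ID
    mkdir -p "$TMPDIR"

    ./my_app --input "$DATA" --workdir "$TMPDIR"

    rm -rf "$TMPDIR"   # tidy up the local disk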