Hacker News

aa-jv · yesterday at 10:19 AM

Very interesting. I was recently tasked with getting a bespoke AI/ML environment ready to ship and deploy to what can only be described as foreign environments, and this has proven to be quite a hassle because, of course: Python.

So I guess Apptainer is the solution to this use case - anyone had any experience with using it to bundle up an AI/ML application for redistribution? Thoughts/tips?


Replies

SirHumphrey · yesterday at 10:26 AM

I did start using them for AI development on the HPC cluster I have access to, and it worked well (GPU pass-through is basically automatic, and performance seemed essentially the same). But mostly I use them because I no longer want to argue with administrators that it's probably time to update CUDA 11.7 (as well as Python 3.6), the only CUDA version currently installed on the cluster.
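For anyone curious what "basically automatic" GPU pass-through looks like in practice: Apptainer's `--nv` flag binds the host's NVIDIA driver libraries into the container at runtime, so the container only needs a CUDA userspace that matches your code, not the cluster's installed toolkit. A minimal sketch (the image tag here is illustrative, not from the thread, and these commands of course require Apptainer and an NVIDIA driver on the host):

```shell
# Pull a CUDA-enabled image straight from Docker Hub into a single SIF file.
apptainer pull pytorch.sif docker://pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

# --nv maps the host NVIDIA driver into the container; the container's own
# CUDA runtime handles the rest, independent of whatever the cluster has installed.
apptainer exec --nv pytorch.sif \
    python -c "import torch; print(torch.cuda.is_available())"
```

This is why an outdated cluster-wide CUDA install stops mattering: only the host kernel driver has to be new enough for the CUDA version bundled inside the image.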

ethan_smith · yesterday at 10:59 AM

Apptainer excels for AI/ML distribution because it handles GPU access and MPI parallelization natively, with better performance than Docker in HPC environments. The --fakeroot feature lets you build containers without sudo, and the SIF file format makes distribution simpler than managing Docker layers.
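To make the `--fakeroot` / SIF workflow concrete, here is a minimal Apptainer definition file for bundling a Python application. All names, paths, and packages are illustrative assumptions, not details from the thread:

```
# example.def — minimal sketch of an Apptainer definition for a Python app
Bootstrap: docker
From: python:3.11-slim

%files
    # Copy the application and its pinned dependencies into the image
    requirements.txt /opt/app/requirements.txt
    app/ /opt/app/app/

%post
    pip install --no-cache-dir -r /opt/app/requirements.txt

%environment
    export PYTHONPATH=/opt/app

%runscript
    exec python -m app.main "$@"
```

Building with `apptainer build --fakeroot app.sif example.def` produces a single `app.sif` file that can be copied to the target machine and launched with `apptainer run app.sif` — which is the distribution simplicity the comment above is referring to: one immutable file rather than a layered image store.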

lazylizard · today at 9:59 AM

conda