> You can absolutely mix and match lots of different binaries from different sources on one Linux system. That's exactly what we're doing now with TCL modules.
Doing this across different Linux distributions is inherently prone to failure. I don't know about your TCL modules specifically, but unless you have an identical and completely reproducible software toolchain across multiple Linux distributions, it's going to end in problems.
Honestly, it sounds like you just don't understand these systems and how they work. TCL modules aren't better than containers; this is like comparing apples and orangutans.
> Doing this across different Linux distributions is inherently prone to failure.
Sure, if you just take binaries from one distro that link against libraries on that distro and try to run them on a different one... But that's not what we're doing. All of our TCL modules are either portable binaries (e.g. commercial software) or compiled from source.
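To make that concrete, here's roughly what one of those from-source modulefiles looks like in Environment Modules TCL syntax. The tool, version, and paths are made up for illustration, not our actual tree:

```tcl
#%Module1.0
## Hypothetical modulefile for a from-source build; names and paths are illustrative.

set base /apps/samtools/1.19

proc ModulesHelp { } {
    puts stderr "samtools 1.19, built from source against the local toolchain"
}

module-whatis "samtools 1.19 (compiled from source, self-contained under /apps)"

conflict samtools

# Point the environment at the tool's own install prefix; nothing here
# depends on distro-specific library locations.
prepend-path PATH            $base/bin
prepend-path MANPATH         $base/share/man
prepend-path LD_LIBRARY_PATH $base/lib
```

Because everything the module points at lives under its own prefix, the same modulefile works on any system where that tree has been built or synced.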
> Honestly, it sounds like you just don't understand these systems and how they work.
I do, but well done for being patronising.
> TCL modules aren't better than containers,
They are better for our use case.
> this is like comparing apples and orangutans.
If apples and orangutans were potential solutions to a single problem, why couldn't you compare them?
The whole idea of keeping module systems in perfect sync across several systems, compared to, e.g., just rsync-ing SIFs, sounds strange to me. HPC systems (or rather their admins) are often, and for good reason, conservative, keeping old system and library versions around. Your mileage may vary, but in a small benchmark of the bioinformatics program samtools, depending on the version, the fastest binaries were either run in a conda environment or inside a Singularity container using the Clear Linux distro. Binaries compiled with either the system's GCC or from a module were slower.
One would have to repeat it, throwing in at least Spack, to see if this still holds.