This is the kind of misinformation that makes people more wary of floats than they should be.
The same series of operations with the same input will always produce exactly the same floating point results. Every time. No exceptions.
Hardware doesn't matter. Breed of CPU doesn't matter. Threads don't matter. Scheduling doesn't matter. IEEE floating point is a standard. Everyone follows the standard. Anything not producing identical results for the same series of operations is *broken*.
What you are referring to is the result of different compilers emitting a different series of operations from each other. In particular, if you are using the x87 FP unit, MSVC will round the 80-bit intermediate down to 32/64 bits before doing a comparison, and GCC will not by default.
Compilers don't even use 80-bit FP by default when compiling for 64-bit targets, so this is not a concern anymore, and hasn't been for a very long time.
There are just so many "but"s to this that I can't in good faith recommend that people treat floats as deterministic, even though I'd very much love to do so (and I make such assumptions myself, caveat emptor):
- NaN bits are non-deterministic. x86 and ARM generate different sign bits for NaNs. Wasm says NaN payloads are completely unpredictable.
- GPUs don't give a shit about IEEE-754 and apply optimizations ranging from DAZ to -ffast-math.
- sin, rsqrt, etc. behave differently when implemented by different libraries. If you're linking libm for sin, you can get different implementations depending on the libc in use. Or you can get different results on different hardware.
- C compilers are allowed to "optimize" `a * b + c` into an FMA when they wish to. The standard technically only allows this contraction within a single expression, but GCC defaults to `-ffp-contract=fast`, which fuses across statements, under its default `-std=gnu*` modes.
You're technically correct that floats can be used right, but it's just impossible to explain to a layman that, yes, floats are fine on CPUs, but not on GPUs; fine if you're doing normal arithmetic and sqrt, but not sin or rsqrt; fine on modern compilers, but not old ones; fine on x86-64, but not i686; fine if you're writing code yourself, but not if you're relying on linear algebra libraries, unless of course you write `a * b + c` and compile with the wrong options; fine if you rely on float equality, but not bitwise equality; etc. Everything is broken and the entire thing is a mess.