Leading to my question: OK, keeping a zero and a minus-zero does make sense for some limit calculations... but when all you have is 4 bits, isn't that quite wasteful? Wouldn't using that bit pattern for, e.g., a 2.5 improve the model?
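To make the "wasted pattern" concrete, here's a small sketch that decodes all 16 bit patterns of a 4-bit E2M1 float (1 sign, 2 exponent, 1 mantissa bit). I'm assuming an exponent bias of 1 with subnormals at exponent 0, which gives the usual FP4 value set; this is an illustrative decode, not a vendor spec:

```python
def decode_e2m1(bits: int) -> float:
    """Decode a 4-bit E2M1 pattern, assuming bias 1 and subnormals at exp 0."""
    sign = -1.0 if (bits >> 3) & 1 else 1.0
    exp = (bits >> 1) & 0b11
    man = bits & 1
    if exp == 0:                       # subnormal range: 0 or 0.5
        return sign * man * 0.5
    return sign * (1 + man * 0.5) * 2.0 ** (exp - 1)

values = [decode_e2m1(b) for b in range(16)]
print(sorted(values))
# The positive magnitudes are 0.5, 1, 1.5, 2, 3, 4, 6 -- and zero shows up
# twice (+0 and -0), which is exactly the redundant pattern in question.
```

So out of 16 codes you get only 15 distinct magnitudes, and the freed-up pattern could in principle encode one more value instead.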
Oh well, that's a rabbit hole: NVIDIA Blackwell has this, and GGUFs sidestep it with Qi_j / Qi_K... Great article, piques curiosity!
It might be useful. The Lion optimizer uses 1-bit sign values to represent a forward or backward step, and NNs can pick up on patterns like that in very strange ways. Of course, those are ±1's, not 0's, so maybe the benefit disappears when multiplying by zero. But it's important to challenge assumptions like "well, let's get rid of the negative half of 0" before you test experimentally whether it's useful or not. NNs are nothing if not shockingly weird when you try to make them.
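For context on the Lion comparison: its parameter step really is just a sign, so every coordinate moves by the same magnitude, forward or backward. A minimal sketch of the Lion update rule (hyperparameter names follow the paper; the values below are illustrative defaults, not tuned):

```python
import numpy as np

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # The update direction is the sign of an interpolated momentum: only
    # +1 / -1 (or 0) per coordinate -- the "1-bit" behavior mentioned above.
    update = np.sign(beta1 * momentum + (1 - beta1) * grad)
    new_param = param - lr * (update + wd * param)
    # Momentum itself keeps full precision between steps.
    new_momentum = beta2 * momentum + (1 - beta2) * grad
    return new_param, new_momentum

p, m = lion_step(np.zeros(3), np.array([1.0, -2.0, 0.5]), np.zeros(3), lr=0.1)
print(p)  # every coordinate moved by exactly lr, only direction differs
```

The signed-zero question is different in kind, though: Lion's ±1 multiplies into a step of fixed magnitude, whereas ±0 as a weight zeroes the product either way, so any benefit would have to come from somewhere other than the forward multiply.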