> Programmers were grateful for the move from 32-bit floats to 64-bit floats. It doesn’t hurt to have more precision
Someone didn't try it on GPU...
There's an "Update:" note about a next post on the NF4 format. As far as I can tell this is neither NVFP4 nor MXFP4, which are the formats commonly used with LLM model files. The thing with these formats is that shared information is stored once per group of values, so it's not a format for single values but a format for blocks of values. I'd like to know more about these (but not enough to go research them myself).
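To make the "format for groups of values" idea concrete, here's a rough sketch of generic block-wise quantization: each block stores one shared scale plus a small integer code per value. The block size and the absmax scaling are my own assumptions for illustration, not the actual NF4/MXFP4/NVFP4 specs.

    import numpy as np

    # Sketch of block-wise quantization: one shared scale per block, plus a
    # small signed code per value. Generic absmax scaling to a 4-bit range,
    # NOT the exact NF4/MXFP4/NVFP4 layout.
    BLOCK = 32  # block size is an assumption; real formats use e.g. 16, 32 or 64

    def quantize_blocks(x):
        x = x.reshape(-1, BLOCK)
        scales = np.abs(x).max(axis=1, keepdims=True)     # one scale per block
        scales[scales == 0] = 1.0                         # avoid division by zero
        codes = np.round(x / scales * 7).astype(np.int8)  # codes in -7..7
        return codes, scales

    def dequantize_blocks(codes, scales):
        return codes.astype(np.float32) / 7 * scales

    x = np.random.randn(4 * BLOCK).astype(np.float32)
    codes, scales = quantize_blocks(x)
    print("max abs error:", np.abs(x - dequantize_blocks(codes, scales).ravel()).max())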
There is a relevant Wikipedia page about minifloats [0]
> The smallest possible float size that follows all IEEE principles, including normalized numbers, subnormal numbers, signed zero, signed infinity, and multiple NaN values, is a 4-bit float with 1-bit sign, 2-bit exponent, and 1-bit mantissa.
FP4 is 1:2:0:1 in S:E:l:M notation (other examples: binary32 is 1:8:0:23, 8087 extended precision is 1:15:1:63), where:
S = sign bit present (0 means magnitude-only, i.e. absolute value)
E = exponent bits (typically biased by 2^(E-1) - 1)
l = explicit leading integer bit present (almost always 0, because the leading digit is always 1 for normals, 0 for denormals, and not very useful for special values)
M = mantissa (fraction) bits
The limitations of FP4 are that it lacks infinities, [sq]NaNs, and denormals, which restricts it to special purposes only. There's no denying that it can be extremely efficient for very particular problems.
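To see what the IEEE-style 1:2:0:1 layout from the quote actually encodes, here's a small sketch that decodes all 16 bit patterns (decode_fp4 and the exact output comments are mine, not from the article):

    # Decode the minimal IEEE-style 4-bit float: 1 sign bit, 2 exponent bits,
    # 1 mantissa bit, exponent bias 1.
    def decode_fp4(bits):
        sign = -1.0 if bits & 0b1000 else 1.0
        exp = (bits >> 1) & 0b11       # 2-bit exponent field
        man = bits & 0b1               # 1-bit mantissa (fraction) field
        bias = 2 ** (2 - 1) - 1        # = 1, the usual 2^(E-1) - 1 rule
        if exp == 0b11:                # all-ones exponent: infinity or NaN
            return sign * float("inf") if man == 0 else float("nan")
        if exp == 0:                   # subnormal: no implicit leading 1
            return sign * (man / 2) * 2 ** (1 - bias)
        return sign * (1 + man / 2) * 2 ** (exp - bias)

    for i in range(16):
        print(f"{i:04b} -> {decode_fp4(i)}")
    # finite magnitudes: 0, 0.5, 1, 1.5, 2, 3, plus +/-inf and two NaN encodings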
If a more even distribution were needed, a simpler fixed point format like 1:2:1 (sign:integer:fraction bits) is possible.
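For comparison, here's the same kind of sketch for a 1:2:1 fixed-point layout; its values come out evenly spaced in steps of 0.5, instead of the float spacing that grows with magnitude:

    # Decode a 1:2:1 fixed-point format: sign bit, 2 integer bits, 1 fraction bit.
    def decode_fixed_121(bits):
        sign = -1.0 if bits & 0b1000 else 1.0
        return sign * (bits & 0b111) / 2   # fraction bit has weight 0.5

    print(sorted({decode_fixed_121(i) for i in range(16)}))
    # -3.5, -3.0, ..., 3.0, 3.5 (with +0 and -0 collapsing to 0.0)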
> In ancient times, floating point numbers were stored in 32 bits.
I thought that in ancient times, floating point numbers were 80 bits. They lived in a funky mini stack on the coprocessor (x87). Then one day, somebody came along and standardized the 32- and 64-bit floats we still have today.
9 years ago, I shared this as an April Fools joke here on HN.
It seems that life is imitating art.
https://github.com/sdd/ieee754-rrp