Hacker News

segmondy · yesterday at 8:34 PM

Each layer is made up of various weight tensors, and each tensor can be quantized at its own precision. A pure q8 would have all the weights at q8, and likewise for q4, but in practice some tensors are kept at f32, etc. Here's an example from a q3_k_xl quant — https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF/tree/ma... — where you can see certain tensors are f32, q8, q5, q3, etc. They used mxfp4 in some weights, and mxfp4 doesn't seem to play nicely with these mixed quants, so that's why they are retiring it. Read their publication again and it should make more sense.
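To make the "a q3 file is not uniformly 3-bit" point concrete, here's a minimal sketch. The tensor groups, parameter counts, and bits-per-weight figures below are all made-up illustrative numbers, not taken from the actual Kimi-K2 quant; the point is only that the effective bits/weight is a weighted average across tensors kept at different precisions.

```python
# Illustrative only: hypothetical per-tensor-group quant layout showing
# why a "q3" GGUF averages well above 3 bits per weight.
BITS = {"f32": 32.0, "q8_0": 8.5, "q5_K": 5.5, "q3_K": 3.4375}  # approx bits/weight

# (group name, params in millions, quant type) -- all numbers hypothetical
layout = [
    ("token_embd",   500, "q8_0"),  # embeddings often kept at higher precision
    ("output_norm",    1, "f32"),   # tiny norm weights left in f32
    ("attn",        2000, "q5_K"),  # attention tensors at a middle precision
    ("ffn",         6000, "q3_K"),  # bulk of the weights at the headline 3-bit level
]

total_params = sum(p for _, p, _ in layout)
total_bits = sum(p * BITS[q] for _, p, q in layout)
avg_bpw = total_bits / total_params
print(f"effective bits/weight: {avg_bpw:.2f}")  # well above 3, despite the "q3" label
```

So even though the file is named for its lowest precision, the average storage cost per weight lands noticeably higher because the sensitive tensors stay at f32/q8/q5.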


Replies

jychang · yesterday at 11:25 PM

I am aware of all that.

They literally never say “they used mxfp4 in some weights”. What you’re claiming they said doesn’t exist.

This isn’t a postmortem; it’s PR fluff that doesn’t actually address the issue.
