It's right there: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks — I looked at the numbers myself. It's not PR fluff; they back the claim up by showing how badly MXFP4 affected specific tensors.
"MXFP4 is much worse on many tensors - attn_gate, attn_q, ssm_beta, ssm_alpha using MXFP4 is not a good idea, and rather Q4_K is better - also MXFP4 uses 4.25 bits per weight, whilst Q4_K uses 4.5 bits per weight. It's better to use Q4_K than MXFP4 when choosing between them."
In other words, the existing Q4 quants mixed MXFP4 into some tensors, and that mixture led to worse outcomes than pure Q4_K.
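For the bits-per-weight figures in the quote, here's a back-of-the-envelope check based on each format's block layout (assuming the OCP microscaling MXFP4 layout and llama.cpp's `block_q4_K` struct; sizes are my reading of those specs, not from the Unsloth docs):

```python
# MXFP4 (OCP microscaling): blocks of 32 four-bit elements
# sharing a single 8-bit E8M0 exponent scale.
mxfp4_bpw = 4 + 8 / 32  # 4.25 bits per weight

# Q4_K (llama.cpp): super-blocks of 256 weights split into 8 sub-blocks;
# 4-bit quants, a packed 6-bit scale + 6-bit min per sub-block,
# plus one fp16 scale (d) and one fp16 min (dmin) per super-block.
q4_k_bpw = (256 * 4 + 8 * (6 + 6) + 16 + 16) / 256  # 4.5 bits per weight

print(mxfp4_bpw, q4_k_bpw)  # 4.25 4.5
```

So Q4_K spends an extra 0.25 bits per weight, mostly on the finer-grained per-sub-block scales and mins, which is where its quality edge over MXFP4 comes from.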