5-10% accuracy is like the difference between a usable model and an unusable model.
Yes, but the difference between one model and one 4x larger is usually a lot more than that.
It's not a question of whether I run Qwen 8b at bf16 or a quantized version. It's more a question of whether I run Qwen 8b at full precision or a quantized version of Qwen 27b.
You will find that you are usually better off with the larger model.
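A quick back-of-the-envelope on weight memory alone makes the point (just a sketch; it ignores KV cache, activations, and quantization overhead, so real footprints run a bit higher): the 4-bit 27b model actually takes less memory than the bf16 8b one.

```python
# Rough weight memory: params (in billions) * bits per weight / 8 = GB.
# Ignores KV cache, activations, and quantization overhead (scales/zero-points).
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

print(f"8b  @ bf16 (16-bit): {weight_gb(8, 16):.1f} GB")  # 16.0 GB
print(f"27b @ 4-bit       : {weight_gb(27, 4):.1f} GB")  # 13.5 GB
```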
Yes, I was wondering why they mentioned those numbers without noting their practical significance.
Definitely could be, but in the time I spent talking to the 4-bit models compared to the 16-bit original, they seemed surprisingly capable. I do recommend benchmarking quantized models on the specific tasks you care about.
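For anyone who wants to run that comparison, here's a rough sketch of what it could look like with Hugging Face transformers + bitsandbytes. The model ID, toy test case, and substring scoring are placeholders; swap in your real task data and a scoring rule that fits it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL = "Qwen/Qwen2.5-7B-Instruct"  # placeholder; use whatever you're comparing

def load(quantized: bool):
    """Load the same checkpoint in bf16 or 4-bit (bitsandbytes) form."""
    kwargs = {"device_map": "auto"}
    if quantized:
        kwargs["quantization_config"] = BitsAndBytesConfig(load_in_4bit=True)
    else:
        kwargs["torch_dtype"] = torch.bfloat16
    return AutoModelForCausalLM.from_pretrained(MODEL, **kwargs)

def accuracy(model, tok, cases):
    """Greedy-decode each prompt and score by substring match on the answer."""
    hits = 0
    for prompt, answer in cases:
        ids = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=16, do_sample=False)
        text = tok.decode(out[0][ids["input_ids"].shape[1]:],
                          skip_special_tokens=True)
        hits += answer in text
    return hits / len(cases)

tok = AutoTokenizer.from_pretrained(MODEL)
cases = [("Q: What is 17 * 24? A:", "408")]  # replace with your real task
for quantized in (False, True):
    model = load(quantized)
    print("4-bit" if quantized else "bf16 ", accuracy(model, tok, cases))
    del model
    torch.cuda.empty_cache()  # free VRAM before loading the next variant
```

Even a few dozen task-specific cases like this will tell you more about whether the 4-bit model is good enough for you than a headline benchmark delta.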