Hacker News

guilamu · yesterday at 8:30 PM (2 replies)

Yes, those two models were tested on my own PC (local inference using my own CPU/GPU), so something may be bugged in my setup. gemma4-26b should be far better than gemma4-e4b.


Replies

data-ottawa · today at 12:24 PM

The early quants for Gemma4 26b had issues and needed to be updated; it might be worth checking that yours are current.

embedding-shape · yesterday at 9:08 PM

Sounds like maybe worse quantization was used on the bigger model? Quantization matters a lot for quality; basically anything below Q8 is borderline unusable. If the quant level isn't already specified in a benchmark, it probably should be.
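The quality loss from lower-bit quantization can be illustrated with a toy sketch. This is a simple symmetric uniform quantizer, not llama.cpp's actual block-wise formats (Q4_K, Q8_0, etc.), but it shows the basic trade-off: fewer bits means a coarser grid and a larger reconstruction error on the weights.

```python
import numpy as np

def quantize_dequantize(w, bits):
    """Symmetric uniform quantization: round weights onto a signed
    `bits`-bit grid, then map back to floats. A toy illustration only;
    real GGUF quant formats are block-wise with per-block scales."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)  # stand-in weight tensor

err8 = float(np.mean((w - quantize_dequantize(w, 8)) ** 2))
err4 = float(np.mean((w - quantize_dequantize(w, 4)) ** 2))
print(f"8-bit MSE: {err8:.2e}, 4-bit MSE: {err4:.2e}")
```

With one fewer bit the grid spacing doubles, so the mean squared error grows roughly 4x per bit removed; going from 8-bit to 4-bit costs around two orders of magnitude in reconstruction error in this sketch.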