> The finding I did not expect: model quality matters more than token speed for agentic coding.
I'm really surprised that wasn't obvious.
Also, instead of limiting context size to something like 32k at the cost of roughly halving token-generation speed, you can offload the MoE expert weights to the CPU with --cpu-moe.
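For llama.cpp users, that launch might look roughly like this (model filename and context size are placeholders, not from the post):

```shell
# Hypothetical llama-server launch: --cpu-moe keeps the MoE expert weights
# in system RAM while attention/dense layers stay on the GPU, freeing enough
# VRAM to keep a large context instead of capping it at 32k.
llama-server \
  -m gemma-4-26b-a4b-q4_k_m.gguf \
  --cpu-moe \
  -c 65536
```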
In my experience if you're coding or doing something that requires precision, quantizing the kv cache is definitely not worth it.
If you're just chatting or doing less precise things it's 1000% worth it going down to Q8 or sometimes even Q4
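In llama.cpp, KV-cache quantization is set per key/value cache; a sketch (flag spelling varies a bit by version, and quantizing the V cache generally requires flash attention):

```shell
# Chat / low-precision workloads: q8_0 on both caches roughly halves KV memory.
# For coding, leave both at the default f16, per the advice above.
llama-server -m model.gguf -c 65536 \
  --flash-attn \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```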
I think it might be a good idea to make some kind of local-first harness that is designed to fully saturate some local hardware churning experiments on Gemma 4 (or another local model) 24/7 and only occasionally calls Claude Opus for big architectural decisions and hard-to-fix bugs.
Something like:
* Human + Claude Opus sets up project direction and identifies research experiments that can be performed by a local model
* Gemma 4 on local hardware autonomously performs smaller research experiments / POCs, including autonomous testing and validation steps that burn a lot of tokens but can convincingly prove that the POC works. This is automatically scheduled to fully utilize the local hardware. There might even be a prioritization system to make these POC experiments only run when there's no more urgent request on the local hardware. The local model has an option to call Opus if it's truly stuck on a task.
* Once an approach is proven through experimentation, the human works with Opus to implement it in the main project from scratch
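A minimal sketch of the prioritization idea (all names are hypothetical; a real harness would dispatch tasks to the local model and escalate to Opus):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower number = more urgent
    name: str = field(compare=False)

class ExperimentQueue:
    """Priority queue: urgent user requests preempt background POC experiments,
    so the POCs only burn tokens when the local hardware is otherwise idle."""
    def __init__(self):
        self._heap = []

    def submit(self, name, urgent=False):
        # Background experiments get low priority; user requests jump the queue.
        heapq.heappush(self._heap, Task(0 if urgent else 10, name))

    def next_task(self):
        return heapq.heappop(self._heap).name if self._heap else None

q = ExperimentQueue()
q.submit("poc: cache eviction experiment")
q.submit("user: fix failing test", urgent=True)
print(q.next_task())  # the urgent request runs first
```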
If you can get a complex harness to work on models in this weight class paired with the right local hardware (maybe your old gaming GPU plus 32 GB of RAM), you can churn through millions of output tokens a day (and probably ~100 million input tokens, though the vast majority are cached). The main cost advantage over cloud models is actually that you have total control over prompt caching locally, which makes it basically free, whereas most API providers for small LLMs charge full price for input tokens even when the prompt is repeated exactly across every request.
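Back-of-the-envelope version of that cost argument (the prices are made-up placeholders, not any provider's real rates):

```python
# Illustrative only: compare billing 100M input tokens/day at full price
# vs. a local setup where ~95% of tokens hit a free prompt cache.
def daily_input_cost(tokens, price_per_mtok, cached_fraction=0.0, cache_discount=1.0):
    """Cost of input tokens; cached tokens are billed at price * cache_discount."""
    cached = tokens * cached_fraction
    fresh = tokens - cached
    return (fresh + cached * cache_discount) * price_per_mtok / 1e6

tokens = 100_000_000                                   # ~100M input tokens/day
no_cache = daily_input_cost(tokens, price_per_mtok=0.10)   # full price every request
local = daily_input_cost(tokens, 0.10, cached_fraction=0.95, cache_discount=0.0)
print(no_cache, local)  # roughly 10.0 vs 0.5 (units: $/day at the made-up rate)
```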
"The reason I had not done this before is that local models could not call tools. "
Rubbish, we have been calling tools locally for 2 years, and it's very false that gemma3 scored under 7% in tool calling. Hell, I was getting at least 75% tool calling with llama3.3
I'm currently experimenting with running google/gemma-4-26b-a4b with LM Studio (https://lmstudio.ai/) and OpenCode on an M3 Ultra with 48 GB RAM, and it seems to be working. I had to increase the context size to 65536 so the prompts from OpenCode would work, but no other problems so far.
I tried running the same on an M3 Max with less memory, but couldn't increase the context size enough to be useful with Opencode.
It's also easy to integrate it with Zed via ACP. For now it's mostly simple code review tasks and generating small front-end related code snippets.
Related: I upgraded my M4 Pro 24GB to an M5 Pro 48GB yesterday. The same Gemma 4 MoE model (Q4) runs at about 8x the t/s on the M5 Pro and loads 2x faster from disk to memory.
Gonna run some more tests later today.
For coding it makes no sense to use any quantization worse than Q6_K, in my experience. More heavily quantized models make more mistakes; that can still be fine for text processing, but not for coding.
I would have liked to see quality results across the different quantization methods - Q4_K_M, Q8_0, Q6_K - rather than just tok/s
I don't really have the hardware to try it out, but I'm curious to see how Qwen3.5 stacks up against Gemma 4 in a comparison like this. Especially this model that was fine tuned to be good at tool calling that has more than 500k downloads as of this moment: https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...
I've been playing with this for the last few days. The model is fast, pretty smart, and I am hitting the same tool use issues. This blog post is unusually pertinent. The model speed isn't an issue on my dual 4090s, the productivity is mainly limited by the intelligence (while high it's still not high enough for some tasks) and getting stuck in loops.
What I would like is for it to be able to detect when these things happen and to "Phone a Friend": call a smarter model to ask for advice.
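A toy sketch of the detection half of that idea (the thresholds are arbitrary, and the actual escalation call to the smarter model is left out):

```python
from collections import deque

class LoopDetector:
    """Flags when an agent repeats the same tool call within a short window,
    which is a cheap signal that it's stuck and should escalate ('phone a friend').
    Window size and repeat threshold are made-up heuristics."""
    def __init__(self, window=6, max_repeats=3):
        self.recent = deque(maxlen=window)
        self.max_repeats = max_repeats

    def observe(self, tool_name, args):
        call = (tool_name, repr(args))
        self.recent.append(call)
        # True => same call seen max_repeats times recently: time to escalate.
        return self.recent.count(call) >= self.max_repeats

d = LoopDetector()
stuck = [d.observe("edit_file", {"path": "a.py"}) for _ in range(3)]
print(stuck[-1])  # True: third identical call in a row triggers escalation
```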
I'm definitely moving into agent orchestration territory, where I'll have a number of agents constantly running and working on things so that I am not the bottleneck. I'll have a mix of on-prem and AI providers.
My role now is less coder and more designer / manager / architect, as agents readily go off on tangents and into messes that they're not smart enough to get out of.
I did this with Qwen 3.5 - tool calling was the biggest issue, but to get it working with vLLM and MLX I just asked Codex to help. The bulk of my time was waiting on downloads. For vLLM it created a proxy service to translate some Codex idioms to vLLM and vice versa. In practice I got good results on my first prompt, but follow-up questions usually failed due to the model's trouble with tool calling - I need to try again with Gemma 4.
I laughed when I saw the .md table rendering as a service. Blows my mind what people will use
Does the large system prompt work fine for this model? If needed, you could use a lightweight CLI like Pi, which only comes with 4 tools by default
With an Nvidia Spark or a 128GB+ memory machine, you can get a good speedup on the 31B model if you use the 26B MoE as a draft model. It uses more memory, but I've seen acceptance rates around 70%+ using Q8 on both models.
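With llama.cpp, that pairing would look roughly like this (filenames are placeholders; the draft-model flags exist in llama-server, but check your version's help output):

```shell
# Speculative decoding: dense 31B as the main model,
# the faster 26B-A4B MoE as the draft model.
llama-server \
  -m gemma-4-31b-q8_0.gguf \
  -md gemma-4-26b-a4b-q8_0.gguf \
  --draft-max 16 --draft-min 1
```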
Hey - I use the same, w/ both gemma4 and gpt-oss-*; some things I have to do for a good experience:
1) Pin to an earlier version of codex (sorry) - 0.55 is the best experience IME, but YMMV (see https://github.com/openai/codex/issues/11940, https://github.com/openai/codex/issues/8272).
2) Use the older completions endpoint (llama.cpp's responses support is incomplete - https://github.com/ggml-org/llama.cpp/issues/19138)
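For reference, pointing Codex at llama.cpp's chat-completions endpoint looks roughly like this in ~/.codex/config.toml (provider name, port, and model id are placeholders; `wire_api = "chat"` selects the completions-style API instead of the incomplete responses endpoint):

```toml
[model_providers.llamacpp]
name = "llama.cpp"
base_url = "http://localhost:8080/v1"
wire_api = "chat"   # use chat completions, not the responses endpoint

[profiles.local]
model = "gemma-4-26b-a4b"
model_provider = "llamacpp"
```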
I've been VERY impressed with Gemma4 (26B at the moment). It's the first time I've been able to use OpenCode via a llamacpp server reliably and actually get shit done.
In fact, I started using it as a coding partner while learning how to use the Godot game engine (and some custom 'skills' I pulled together from the official docs). I purposely avoided Claude and friends entirely, and just used Gemma4 locally this week... and it's really helped me figure out not just coding issues I was encountering, but also helped me sift through the documentation quite readily. I never felt like I needed to give in and use Claude.
Very, very pleased.
Nice walkthrough and interesting findings! The difference between the MoE and the dense models seems bigger than what benchmarks report. It makes sense, because a small gain in tool planning and handling can have a large influence on results.
I also tried Gemma 4 on a M1 Macbook Pro. It worked but it was too slow. Great to know that it works on more advanced laptops!
I think local models are not yet that good or fast for complex things, so I am just using local Gemma 4 for some dummy refactorings or something really simple.
Using Gemma4-31B-q4_NL in open code with a 128k context and it’s been great.
Amazing. Thanks for your detailed posts on the bake-off between the Mac and GB10, Daniel, and on your learnings. I had trying something similar on both compute platforms on my to-do list. Your post should save me a lot of debugging, sweat, and tears.
This is genuinely very helpful. I'm planning a MacBook Pro purchase with local inference in mind, and now I see I'll have to aim for a slightly higher memory option because the Gemma 4 26B-A4B MoE is not all that!
You can also try speculative decoding with the E2B model. Under some conditions it can result in a decent speed up
Nothing about omlx?
Ollama is the worst engine you could use for this. Since you're already running on an Nvidia stack for the dense model, you should serve this with vLLM. With 128GB you could even try the original safetensors, though you might need to be careful with caches and context length.
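A hypothetical vLLM launch for the original safetensors (model id and limits are placeholders; tune --max-model-len and --gpu-memory-utilization to avoid KV-cache OOMs):

```shell
# Serve the unquantized weights straight from the hub with an OpenAI-compatible API.
vllm serve google/gemma-4-26b-a4b \
  --max-model-len 65536 \
  --gpu-memory-utilization 0.90
```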
Gemma 4 is a strongly censored model, so much so that it refused to answer medical and health related questions, even basic ones. No one should be using it, and if this is the best that Google can do, it should stop now. Other models do not have such ridiculous self-imposed problems.
I recently spun up Gemma 4 26B-A4B on my local box and pointed OpenCode at it, and it did reasonably well! My machine is 8 years old, though I had the foresight to double the RAM to 32 GiB before the RAMpocalypse, and I can get a little bit of GPU oomph but not a lot, not with a mere GTX 1070. So it's slow, and nowhere near frontier model quality, but it can generate reasonable code and is good for faffing with!
I'm surprised folks are having such great coding experiences. Using Gemma 4 on a moderately complex code base, it utterly flailed and gave a half-baked implementation.
The setup allots around 4k of context after system prompt lol
Gemma 4 26B really is an outlier in its weight class.
In our little known, difficult to game benchmarks, it scored about as well as GPT 5.2 and Gemini 3 Pro Preview on one-shot coding problems. It had me re-reviewing our entire benchmarking methodology.
But it struggled in the other two sections of our benchmark: agentic coding and non-coding decision making. Tool use, iterative refinement, managing large contexts, and reasoning outside of coding brought the scores back down to reality. It actually performed worse when it had to use tools and a custom harness to write code for an eval vs getting the chance to one-shot it. No doubt it's been overfit on common harnesses and agentic benchmarks. But the main problem is likely scaling context on small models.
Still, incredible model, and incredible speed on an M-series Macbook. Benchmarks at https://gertlabs.com