I created "apfel" (https://github.com/Arthur-Ficial/apfel), a CLI for Apple's on-device local foundation model (Apple Intelligence). Yeah, it's super limited with its 4k context window and guardrails that constantly false-positive (just ask it to describe a color)... but still: using it in bash scripts that just work, without calling home or incurring extra costs, feels super powerful.
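For anyone curious what that looks like in a script, here's a minimal sketch. The `apfel "prompt"` invocation is my assumption about the interface, not taken from the repo, and the fallback branch just passes text through unchanged so the script still works on machines without the tool:

```shell
# Sketch: pipe text through the (hypothetical) apfel CLI for a one-line
# description; fall back to echoing the input when apfel isn't installed.
describe() {
  if command -v apfel >/dev/null 2>&1; then
    printf '%s\n' "$1" | apfel "Describe this in one short sentence."
  else
    printf '%s\n' "$1"
  fi
}
out=$(describe "three failing unit tests in the auth module")
```

Nothing leaves the machine either way, which is the whole point.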
LLMs on device are the future. They're more secure, they solve the mismatch between inference demand and data-center supply, and they'd use less electricity. It's just a matter of getting the performance good enough, and most users don't need frontier-model performance.
Good to see Ollama catching up with the times for inference on Mac. MLX-powered inference makes a big difference, especially on M5, as their graphs point out. What has really been a game changer for my workflow is https://omlx.ai/, which has SSD KV cold caching: no more worrying about a session falling out of memory and needing to prefill again. Combine that with the M5 Max prefill speed and more time is spent on generation than waiting for a 50k+ token context window to process.
Why are people still using Ollama? Seriously.
Lemonade or even llama.cpp are much better optimised and arguably just as easy to use.
What is the cheapest usable local rig for coding? I don't want fancy agents and such, just something purpose-built for coders, fast enough for my use, and open source so I can tweak it to my liking. Things are moving fast, and I'm hesitant to put in 3-4K now when it might be cheaper if I wait.
I have an M4 Max with 48GB RAM. Anyone have any tips for good local models? Context length? Using the model recommended in the blog post (qwen3.5:35b-a3b-coding-nvfp4) with Ollama 0.19.0, it can take anywhere between 6 and 25 seconds to respond (after lots of thinking) to me asking "Hello world". Is this the best that's currently achievable on my hardware, or is there something that can be configured to get better results?
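One cheap thing to try, assuming the latency is mostly KV-cache allocation and thinking tokens rather than the hardware itself: build a variant of the model with a smaller context window via a Modelfile. The model name comes from the post; the 8192 figure is just a starting point to experiment with, and a smaller `num_ctx` means less unified memory reserved for the KV cache:

```
# Hypothetical Modelfile: cap the context window to reduce memory pressure.
FROM qwen3.5:35b-a3b-coding-nvfp4
PARAMETER num_ctx 8192
```

Then `ollama create qwen-coding-8k -f Modelfile` and run that variant instead. It won't stop the model from thinking, but it should help time-to-first-token if memory is the bottleneck.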
Already running Qwen 70B 4-bit on an M2 Max 96GB through llama.cpp and it's pretty solid for day-to-day stuff. The MLX switch is interesting because Ollama was basically shelling out to llama.cpp on Mac before, so native MLX should mean better memory handling on Apple silicon. Curious to see how it compares on the bigger models vs the GGUF path.
How does it compare to some of the newer mlx inference engines like optiq that support turboquantization - https://mlx-optiq.pages.dev/
This is excellent news!
What I'm waiting for next is MLX-backed speech recognition directly from Ollama. I don't understand why it should be a separate thing entirely.
On an M4 Pro MacBook Pro with 48GB RAM I ran this test:
ollama run $model "calculate fibonacci numbers in a one-line bash script" --verbose
Model                    Prompt eval (tok/s)   Eval (tok/s)
-----------------------------------------------------------
qwen3.5:35b-a3b-q4_K_M           6.6               30.0
qwen3.5:35b-a3b-nvfp4           13.2               66.5
qwen3.5:35b-a3b-int4            59.4               84.4
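For reference, the kind of answer that prompt is fishing for is roughly the one-liner below (wrapped in a function here so it can be reused; plain POSIX sh, no bashisms):

```shell
# Print the first N Fibonacci numbers (default 10) as a one-line loop.
fib() { n=${1:-10}; a=0; b=1; i=0; while [ "$i" -lt "$n" ]; do echo "$a"; t=$((a+b)); a=$b; b=$t; i=$((i+1)); done; }
# e.g. the first five terms, joined with spaces
first_five=$(echo $(fib 5))
```

A short, well-known task like this is handy for benchmarking because the output length stays roughly constant across models.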
I can't comment on the quality differences (if any) between these three.

Two things: 1) MLX has been available in LM Studio for a long time now, 2) I found that GGUF produced consistently better results in my benchmarking. The difference isn't big, but it's there.
Is local LLM inference on modern MacBook Pros comfortable yet? When I played with it a year or so ago, it worked fairly OK but definitely produced uncomfortable levels of heat.

(Regarding MLX: there were toolkits built on MLX that supported QLoRA fine-tuning and inference, but those also produced a bunch of heat.)
Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16GB of RAM.
How does Ollama help with Claude Code? Claude code runs in terminal but AFAIK connects back to anthropic directly and cannot run locally. I hope I'm missing something obvious.
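You're not entirely missing something: Claude Code does honor `ANTHROPIC_BASE_URL`, so in principle you can point it at a local gateway that translates the Anthropic Messages API to a local backend. Everything below (the gateway, port, and key) is a placeholder sketch, not something Ollama ships out of the box:

```
# Assumes a local proxy (e.g. a LiteLLM-style gateway in front of Ollama)
# that speaks the Anthropic Messages API; port and token are placeholders.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="local-dummy-key"
claude
```

Whether the local model is actually good enough to drive Claude Code's agent loop is a separate question.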
What are significant differences between Ollama and LM Studio now? I haven’t used Ollama because it was missing MLX when I started using LLM GUIs.
What would be the non-Mac computer to run these models locally at the same performance profile? Any similar Linux ARM-based computers that can reach the same level?
> Please make sure you have a Mac with more than 32GB of unified memory.

Time for an upgrade, I guess. If I can run Qwen3.5 locally, then it is time to switch over to local-first LLM usage.
Does that mean they are now finally a bit faster than llama.cpp? Cannot believe that.
Get turboquant 4-bit implemented and this would be a game changer.
> Please make sure you have a Mac with more than 32GB of unified memory.
Yeah, I can still save money by buying a cheaper device with less RAM and just paying my PPQ.AI or OpenRouter.com fees.
I used it today, working nicely.
Much of the discussion here is local versus remote. I like seeing it as "and" rather than "or": there will be small things I don't want to burn my Claude tokens on, and other things where I want access to larger compute resources. And along the way I'll check results from both to understand the comparative advantage on an ongoing basis.
Being in the market for a new Mac and comparing a refurb M4 Max vs an M5 _Pro_, I'm interested in how much faster the neural engines actually are, compared to the marketing claims.
Finally! My local infra has been waiting for this for months!
Works really great with https://swival.dev and qwen3.5.
Really nice to see this!
What is the difference between Ollama, llama.cpp, ggml and gguf?
We've been using MLX-LM directly (not via Ollama) for a desktop coding agent project and the performance on M-series chips has been genuinely impressive. Qwen 4B at full MLX speed is fast enough to be useful in an interactive loop — not instant, but not painful either.
The thing I've found with MLX vs llama.cpp is that the memory efficiency story is much better on unified memory machines. With llama.cpp you're fighting the CPU/GPU split; with MLX it just uses the whole pool. Made a meaningful difference for us running 4B models alongside an Electron app.
Curious whether the Ollama MLX backend exposes any controls for cache management or whether it's abstracted away entirely. That's been one of the trickier parts of tuning for our use case.
"We can run your dumbed down models faster":
#The use of NVFP4 results in a 3.5x reduction in model memory footprint relative to FP16 and a 1.8x reduction compared to FP8, while maintaining model accuracy with less than 1% degradation on key language modeling tasks for some models.
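The quoted ratios roughly check out if you count NVFP4 as 4-bit values plus one shared FP8 scale per 16-element block, which is my back-of-envelope assumption (per-tensor scales add a little more overhead, which is presumably how ~3.6x gets quoted as 3.5x):

```shell
# 4 bits per value + 8 scale bits shared across a 16-element block
# = 4.5 effective bits per weight; compare against FP16 and FP8.
ratios=$(awk 'BEGIN { b = 4 + 8/16; printf "%.1f %.1f", 16/b, 8/b }')
```

So the memory claim is just arithmetic; the "less than 1% degradation ... for some models" part is the one to verify on your own workload.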
The Foundation Model point is real. As an iOS developer, what excites me most isn't the performance — it's what on-device inference does to the app architecture.
When you're not making network calls, you stop thinking in "loading states" and start thinking in "local state machines." The UX design space opens up completely. Interactions that felt too fast to justify a server round-trip are suddenly viable.
The backporting issue is painful though. I've been shipping features wrapped in #available(iOS 26, *) and the fallback UX is basically a different product. It forces you to essentially maintain two app experiences.
Still think this is the right direction — especially for junior devs just learning to ship. Fewer moving parts, less infrastructure to debug.
On-device models are the future. Users prefer them. No privacy issues. No dealing with connectivity, tokens, or changes to vendors' implementations. I have an app using the Foundation Model, and it works great. I only wish I could backport it to pre-macOS-26 versions.