Yes. I've spent months running Qwen2.5-8B on my bare-bones 16 GB RAM M4 Mac mini to identify sites from Google search results. It has been rock solid, and I'm not even running this MLX-powered improvement yet.
Your idea of what people need from local LLMs differs from what others need. Not everybody needs /r/myboyfriendisai-level performance.