Hacker News

bwfan123 · today at 3:09 PM · 3 replies

What is the cheapest usable local rig for coding? I don't want fancy agents and such, but something purpose-built for coders, fast enough for my use, and open-source, so I can tweak it to my liking. Things are moving fast, and I am hesitant to put in 3-4K now in the hope that it would be cheaper if I wait.


Replies

KerrickStaley · today at 4:58 PM

I think (without having done extensive research) that some sort of Apple hardware is your best bet right now. Apple hasn’t raised RAM upgrade prices [1] (although to be fair their RAM upgrades were hugely inflated before the crunch) and their high memory bandwidth means they do inference faster than most consumer GPUs.

I have an M4 MacBook Air with 24 GB RAM and it doesn’t feel sufficient to run a substantial coding model (in addition to all my desktop apps). I’m thinking about upgrading to an M5 MacBook Pro with much more RAM, but I think the capabilities of cloud-hosted models will always run ahead of local models and it might never be that useful to do local inference. In the cloud you can run multiple models in parallel (e.g. to work on different problems in parallel) but locally you only have a fixed amount of memory bandwidth so running multiple model instances in parallel is slower.
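The bandwidth point above can be made concrete with a rough back-of-envelope: for memory-bound decoding, every generated token has to stream the full set of model weights through RAM, so tokens/sec is capped at roughly bandwidth divided by model size. A minimal sketch, where the bandwidth figures, model size, and bytes-per-weight are illustrative assumptions rather than measured numbers:

```python
# Back-of-envelope for memory-bound decode:
#   tokens/sec <= memory bandwidth / model size in bytes.
# All figures below are illustrative assumptions, not benchmarks.

def decode_tokens_per_sec(bandwidth_gb_s: float,
                          params_b: float,
                          bytes_per_weight: float) -> float:
    """Rough upper bound on decode speed for a dense model.

    Ignores KV-cache traffic, compute limits, and other overhead.
    """
    model_gb = params_b * bytes_per_weight
    return bandwidth_gb_s / model_gb

# Hypothetical: a ~32B dense model quantized to ~4-bit (~0.6 bytes/weight
# including overhead), on a lower-bandwidth vs higher-bandwidth machine.
low_bw = decode_tokens_per_sec(bandwidth_gb_s=120, params_b=32, bytes_per_weight=0.6)
high_bw = decode_tokens_per_sec(bandwidth_gb_s=400, params_b=32, bytes_per_weight=0.6)
print(f"~{low_bw:.0f} tok/s vs ~{high_bw:.0f} tok/s")
```

The same arithmetic shows why running two instances in parallel on one box doesn't help: they share the same fixed bandwidth, so each instance's ceiling roughly halves.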

[1] https://9to5mac.com/2026/03/03/apple-macbook-price-increase-...

victords · today at 7:29 PM

As mentioned before, I think Apple hardware is the best alternative right now.

Mac Studio, Mac Mini, MacBook Pro: you can even find used ones with enough RAM to run models like Qwen reasonably well.

I'm using an M1 Max MacBook Pro and it runs Qwen 3.5 on Ollama (without MLX) at a decent speed.
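Once Ollama is serving a model locally, anything can talk to it over its REST API on localhost. A minimal sketch using only the standard library; the endpoint and payload shape follow Ollama's documented `/api/generate` API, but the model tag is just the one named in this comment and may differ from the actual tag in the Ollama library:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False returns one complete JSON object instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Assumes `ollama serve` is running and the model tag has been pulled.
    print(generate("qwen3.5", "Write a Python function that reverses a string."))
```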

xiphias2 · today at 3:18 PM

It doesn't look like RAM, CPUs, GPUs, or bandwidth are getting cheaper, if that helps you. Quite the opposite.