Hacker News

fourside · yesterday at 4:51 PM

Maybe for folks who are deep into this, but it’s not exactly accessible. I tried reading up on it a couple of months ago, but parsing out what hardware I needed, which model to run and how to configure it (model size vs quantization), and how I’d get access to the hardware (which, for decent coding results, runs $4k-$10k new last I checked): it had a non-trivial barrier to entry. I was trying to do this over a long weekend and ran out of time. I’ll have to look into it again, because having the local option would be great.
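For a sense of scale, the model-size vs quantization question boils down to back-of-envelope arithmetic. A rough sketch (the rule of thumb here is an approximation: it counts only weight storage and ignores KV cache and runtime overhead):

```python
# Rough rule of thumb (approximate): a model's weight footprint is
# parameter_count * bytes_per_parameter. Quantization shrinks the
# bytes-per-parameter, which is what lets larger models fit in RAM.
def approx_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

# A 32B-parameter model at 4-bit quantization:
print(approx_weight_gb(32, 4))   # ~16 GB of weights, before KV cache/overhead

# The same model at 16-bit (unquantized) would need ~64 GB:
print(approx_weight_gb(32, 16))
```

So on, say, a 32 GB machine, a 32B model is plausible only at aggressive quantization, and leaves little headroom for context.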

Edit: the replies to my comment are great examples of what I’m talking about when I say it’s hard to determine what hardware I’d need :).


Replies

jonaustin · yesterday at 6:09 PM

Just get a decent MacBook, use LM Studio or omlx, and the latest Qwen model you can fit in unified RAM.

Hooking up Claude Code to it is trivial with omlx.

https://github.com/jundot/omlx
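A minimal sketch of what that hookup might look like, assuming the local server exposes an Anthropic-compatible API on localhost; the port and token value below are assumptions, not documented omlx defaults:

```shell
# Point Claude Code at a local endpoint instead of Anthropic's hosted API.
# ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are documented Claude Code
# settings; localhost:8080 is a hypothetical port for the local server.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local"   # placeholder; many local servers ignore auth

claude
```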

imetatroll · yesterday at 9:35 PM

For me the big hangup is the hardware. If I could find a simple guide to putting together a machine that I can run off an outlet in my home, I am sold. The problem is that I haven't found this yet (though I suppose I haven't looked very hard either).

root_axis · yesterday at 5:18 PM

> new hardware runs $4k-$10k last I checked

More like $40k and up if you want something practical. $10k can't run anything worthwhile for SDLC work at useful speeds.
