Hacker News

simonw · today at 2:39 AM

I expect this to be my main machine for the next 3-4 years (which is how I justified the 128GB one). It's a beast of a machine - I love that I can run an 80GB model and still have 48GB left for everything else.
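
For anyone checking the arithmetic, here's a minimal sketch of where those numbers could come from. The 160B-parameter / 4-bit pairing is just one assumed combination that lands at ~80GB of weights; KV cache and runtime overhead are ignored.

    # Back-of-envelope for why an 80GB model fits in 128GB of unified memory.
    # All figures are rough assumptions, not measurements.

    def model_weight_gb(params_billions: float, bits_per_weight: float) -> float:
        """Approximate weight footprint in GB (weights only, no KV cache)."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # e.g. a ~160B-parameter model at 4-bit quantization:
    print(model_weight_gb(160, 4))  # 80.0 GB of weights,
    # leaving roughly 128 - 80 = 48 GB for the OS, apps, and KV cache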

I can't say it wouldn't be a better idea to spend that cash on tokens from the hosted frontier models, though (rough break-even sketch below).

I'm an LLM nerd, so running local models is worth it from a research perspective.
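
One hedged way to frame that trade-off: divide the hardware cost by a hosted per-token price to get a break-even token count. The $15/M-token blended price and 2M tokens/day usage below are assumptions for illustration, not any provider's actual rates.

    # Hypothetical break-even: hardware cost vs. paying per token.
    HARDWARE_COST_USD = 5_000
    HOSTED_USD_PER_MTOK = 15.0   # assumed blended $/million tokens

    breakeven_mtok = HARDWARE_COST_USD / HOSTED_USD_PER_MTOK
    print(f"break-even at ~{breakeven_mtok:.0f}M tokens")  # ~333M tokens

    # At an assumed 2M tokens/day of heavy use, that's ~167 days,
    # ignoring electricity, the quality gap vs. frontier models, and resale.
    print(f"~{breakeven_mtok / 2.0:.0f} days at 2M tokens/day")

This ignores a lot (local models are weaker than frontier ones, but tokens from your own machine are effectively unmetered), so treat it as a starting point, not a verdict.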


Replies

simpaticoder · today at 5:06 AM

An M5 Max MBP with 128GB of RAM costs ~$5k. An Nvidia RTX 5090 with 32GB of VRAM is $4-5k, and an RTX PRO 6000 with 96GB is ~$10k. Do you have any data on which gives the best price/performance for local inference? Do you know what the big OpenAI/Anthropic/Google datacenters are running?
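
One hedged way to compare them: single-stream decode on large models is roughly memory-bandwidth-bound, so tokens/sec ≈ bandwidth / model size. The sketch below assumes spec-sheet bandwidths (546 GB/s is the M4 Max figure, used here as a stand-in for the M5 Max; 1,792 GB/s for both Nvidia cards) and the prices from the comment above; real throughput will differ.

    # Rough single-stream decode estimate for a bandwidth-bound LLM:
    # tokens/sec ~= memory bandwidth / bytes streamed per token (~= weight size).
    # Bandwidth, capacity, and price figures are assumptions, not benchmarks.

    MODEL_GB = 80  # weights streamed per decoded token

    machines = {
        "M-series Max, 128GB unified": {"bw_gbps": 546,  "mem_gb": 128, "price": 5_000},
        "RTX 5090, 32GB":              {"bw_gbps": 1792, "mem_gb": 32,  "price": 4_500},
        "RTX PRO 6000, 96GB":          {"bw_gbps": 1792, "mem_gb": 96,  "price": 10_000},
    }

    for name, m in machines.items():
        if m["mem_gb"] < MODEL_GB:
            print(f"{name}: model does not fit without offloading")
            continue
        tps = m["bw_gbps"] / MODEL_GB
        print(f"{name}: ~{tps:.0f} tok/s, ~{tps / (m['price'] / 1000):.1f} tok/s per $1k")

On these assumed numbers the Nvidia cards win raw speed but the Mac wins capacity per dollar; note the 80GB model doesn't fit on the 5090 at all. As for the datacenters, the big labs run Nvidia H100/H200/B200-class accelerators (and Google its own TPUs), which isn't a useful price/performance comparison for a single user.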
