Hacker News

msbhogavi yesterday at 11:54 PM

The hardware situation is way better than you think, and quantization is a huge part of why.

Take Qwen 3.5 27B, which is a solid coding model. At FP16 it needs 54GB of VRAM. Nobody's running that on consumer hardware. At Q4_K_M quantization, it needs 16GB. A used RTX 3090 has 24GB and goes for about $900. That model runs locally with room for context.

For 14B coding models at Q4, you're looking at about 10GB. A used RTX 3060 12GB handles that for under $270.

The gap between "needs a datacenter" and "runs on my desk" is almost entirely quantization. A 27B model at Q4 loses surprisingly little quality for most coding tasks. The tradeoff isn't free, but the hardware isn't an RTX 7090 either. A used 3090 is probably the most recommended card in the local LLM community right now, and for good reason.
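
Rough math behind those numbers, if anyone wants to sanity-check (a quick sketch, weights only; KV cache and runtime buffers come on top, and I'm assuming Q4_K_M works out to roughly 4.85 bits per weight, which is in the ballpark of what llama.cpp reports for that quant mix):

    # Weights-only VRAM estimate; KV cache and runtime buffers are extra.
    def weight_gb(params_billion, bits_per_weight):
        return params_billion * bits_per_weight / 8

    print(weight_gb(27, 16.0))   # ~54 GB at FP16
    print(weight_gb(27, 4.85))   # ~16 GB at a Q4_K_M-style quant
    print(weight_gb(14, 4.85))   # ~8.5 GB, leaving headroom for context on a 12 GB card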


Replies

rdos today at 10:46 AM

14B even at Q4 isn't realistic for coding on a single 12GB RTX 3060; token speed is too slow, since these are dense models, and you aren't getting a good MoE model under 30B. You can do OCR, STT, and TTS really well, and for LLMs the good <10B use cases are classification, summarization, and extraction.
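
A rough way to see the speed ceiling (my sketch, not a benchmark): dense-model decode is memory-bandwidth bound, because every generated token streams all the weights through memory once. Assuming ~360 GB/s for a 3060 and ~8.5 GB of Q4 weights:

    # Upper bound on decode speed for a dense model: bandwidth / weight size.
    # Real throughput lands well below this once KV-cache reads, kernel
    # overhead, or any CPU offload are factored in.
    def max_tokens_per_sec(bandwidth_gb_s, weight_gb):
        return bandwidth_gb_s / weight_gb

    print(max_tokens_per_sec(360, 8.5))   # ~42 tok/s theoretical ceiling on an RTX 3060

Whether the real-world number under that ceiling counts as usable for coding is exactly where opinions split.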

faangguyindia today at 9:33 AM

You are better off just buying their coding plan.

Running LLMs locally makes no sense whatsoever.

AbanoubRodolf today at 7:09 AM

[dead]