Hacker News

zozbot234 last Wednesday at 10:05 PM

The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago. That's not "nothing" and is plenty good enough for everyday work.


Replies

Aurornis yesterday at 2:07 AM

> The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago

Kimi K2.5 is a trillion parameter model. You can't run it locally on anything other than extremely well equipped hardware. Even heavily quantized you'd still need 512GB of unified memory, and the quantization would impact the performance.
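That 512GB figure follows from simple weight arithmetic. A rough sketch (my own back-of-the-envelope estimate, not from the thread; it counts weights only and ignores KV cache and runtime overhead):

```python
# Back-of-the-envelope memory estimate for quantized model weights.
# Ignores KV cache, activations, and framework overhead, so real
# requirements are somewhat higher.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in decimal GB at a given quantization."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 1-trillion-parameter model (roughly Kimi K2.5's scale) at 4-bit quantization:
print(weight_memory_gb(1000, 4))  # 500.0 GB of weights alone
```

So even at an aggressive 4 bits per weight, the weights alone approach the 512GB of unified memory mentioned above.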

Also the proprietary models a year ago were not that good for anything beyond basic tasks.

reilly3000 last Wednesday at 10:11 PM

Which takes a $20k Thunderbolt cluster of two 512GB RAM Mac Studio Ultras to run at full quality…

corysama last Wednesday at 11:38 PM

The article mentions https://unsloth.ai/docs/basics/claude-codex

I'll add on https://unsloth.ai/docs/models/qwen3-coder-next

The full model is supposedly comparable to Sonnet 4.5. But you can run the 4-bit quant on consumer hardware as long as your RAM + VRAM has room to hold 46GB; the 8-bit quant needs 85GB.
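The "RAM + VRAM has room" check can be sketched in a few lines (my own illustration; the 46GB and 85GB figures are the quant sizes quoted above, and the hardware numbers are hypothetical examples):

```python
# Rough feasibility check: can a machine's combined RAM + VRAM hold
# a quantized model's weights? (Leaves no headroom for KV cache or OS use.)

def can_hold_quant(ram_gb: float, vram_gb: float, quant_size_gb: float) -> bool:
    """True if combined memory is at least the quantized weight size."""
    return ram_gb + vram_gb >= quant_size_gb

# 64GB system RAM plus a 24GB GPU against the 46GB 4-bit quant:
print(can_hold_quant(64, 24, 46))   # True
# The same machine against the 85GB 8-bit quant:
print(can_hold_quant(64, 24, 85))   # True, but with almost no headroom
```

In practice you'd want meaningful headroom beyond the raw weight size for context/KV cache.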

paxys last Wednesday at 10:24 PM

LOCAL models. No one is running Kimi 2.5 on their MacBook or RTX 4090.

0xbadcafebee yesterday at 12:43 AM

Kimi K2.5 is in fourth place for intelligence right now. It's not as good as the top frontier models at coding, but it is better than Claude 4.5 Sonnet. https://artificialanalysis.ai/models

teaearlgraycold last Wednesday at 10:12 PM

Having used K2.5, I'd judge it to be a little better than that. Maybe as good as proprietary models from last June?