Hacker News

$500 GPU outperforms Claude Sonnet on coding benchmarks

454 points | by yogthos | yesterday at 5:31 PM | 250 comments

Comments

bloppe today at 7:13 AM

Generating big chunks of code is rarely what I want from an agent. They really shine for stuff like combing through logs or scanning dozens of source files to explain a test failure. Which benchmark covers that? I want the debugging benchmark that tests mastery of build systems, CLIs, etc.

mmaunder today at 1:33 AM

I’d encourage devs to try MiniMax, Kimi, etc. for real-world tasks that require intelligence. The downsides emerge pretty fast: much higher reasoning-token use, slower outputs, and palpable quality degradation. Sadly, you do get what you pay for right now. That doesn’t prevent you from saving a ton, though, through smart model routing, sensible reasoning budgets, capping max output tokens, and optimizing your apps and prompts to reduce output tokens.

selcuka today at 1:04 AM

It's a race to the bottom. DeepSeek beats all the others single-shot, and its API is ~50% cheaper than even the "local electricity only" cost.

> DeepSeek V3.2 Reasoning 86.2% ~$0.002 API, single-shot

> ATLAS V3 (pass@1-v(k=3)) 74.6% ~$0.004 Local electricity only, best-of-3 + repair pipeline
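The "local electricity only" figure can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, where the GPU wattage, minutes per task, and electricity price are all assumed values for illustration (not from the article; the ~20 min/task figure is another commenter's report):

```python
# Back-of-envelope electricity cost per benchmark task.
# All three constants are assumptions; adjust for your hardware and rates.
GPU_WATTS = 180          # assumed average draw of an RTX 5060 Ti under load
MINUTES_PER_TASK = 20    # reported elsewhere in the thread
PRICE_PER_KWH = 0.15     # assumed residential USD rate

kwh_per_task = (GPU_WATTS / 1000) * (MINUTES_PER_TASK / 60)
cost_per_task = kwh_per_task * PRICE_PER_KWH
print(f"~${cost_per_task:.3f} per task")  # → ~$0.009 per task
```

Under these assumptions the result lands in the same order of magnitude as the quoted ~$0.004, which is about all a calculation like this can show.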

DanielHall today at 10:58 AM

These small models, having been fine-tuned for the test, achieve frighteningly high scores, yet perform abysmally in real-world scenarios.

memothon yesterday at 8:58 PM

I'm always skeptical, because you can make a model pass the benchmarks, then you use it and it turns out not to be practically useful, unlike a truly general model.

Cool work though, really excited for the potential of slimming down models.

tgiba today at 8:16 AM

Despite the skepticism, I love to see experiments like this. If we were all able to run an open-source model locally on mid-to-high-end machines, I'd be very happy.

electroglyph today at 5:18 AM

What's with the weird "Geometric Lens routing"?? Sounds like a made-up GPTism.

b3ing today at 4:19 AM

Will open-source or local LLMs kill the big AI providers eventually? If so, when? I can see it maybe for basic chat; not sure about coding and images yet.

alkonaut today at 2:14 PM

Great, it became a $1000 GPU while you were reading that.

emp17344 today at 2:46 AM

Yet more evidence that the harness matters more than the model.

riidom today at 12:04 AM

Not a word about the tok/sec, unfortunately.

bilekas today at 2:00 PM

Where is an RTX 5060 Ti 16 GB for $500?

Edit: The 8 GB version seems to hit this price, but the 16 GB not so much.

dwa3592 today at 2:29 PM

I wonder if it only works on the benchmark problems.

One expensive and hard lesson we will learn over time is that you can't compress generality beyond a point.

bdbdbdbtoday at 8:22 AM

This is the kind of innovation I love to see. The big AI companies' days are numbered if we can get the same quality in-house.

0xbadcafebee today at 3:39 AM

This is specifically an experiment using ablation and multiple passes to improve the end result. Other techniques have been found that do this (like multiple passes through the same layers). But this technique, for this one specific model, seems to be more performant while also taking much longer and requiring more complexity. It's unlikely most people would use this technique, but it's interesting.

Aurornis today at 3:43 PM

This AI-written project is running its own LiveCodeBench on a completely different methodology. The AI-written notes even admit it:

> ATLAS scores are from 599 LCB tasks using the full V3 pipeline (best-of-3 + Lens selection + iterative repair) on a frozen 14B quantized model or "pass@k-v(k=3)". Competitor scores are single-shot pass@1 (zero-shot, temperature 0) from Artificial Analysis on 315 LCB problems -- not the same task set, so this is not a controlled head-to-head.

Instead of following the LiveCodeBench methodology, it's a harness that spins up a sandbox and spends a long time testing and refining the solution. If you did the same for Sonnet, GPT5.4, or other models, they would also get significantly higher scores, and they'd do it faster.

The AI-coded README is also full of signs of vibe-coded slop, like the discovery that some of the complex structures implemented were not actually being used or contributing anything to the output.
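To make the methodology gap concrete: pass@1 and best-of-3 are not comparable numbers. A sketch of the standard unbiased pass@k estimator from the HumanEval/Codex evaluation (not this project's code) shows how much credit multiple attempts buy:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., HumanEval):
    probability that at least one of k samples drawn from n
    is correct, given that c of the n samples passed."""
    if n - c < k:
        return 1.0  # too few failures left to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# A problem solved on 1 of 3 attempts scores very differently
# under single-shot vs best-of-3 accounting:
print(pass_at_k(n=3, c=1, k=1))  # ≈ 0.333 (single-shot credit)
print(pass_at_k(n=3, c=1, k=3))  # 1.0 (any success in 3 counts)
```

On top of this, the two numbers in the README come from different task sets, so even matching k would not make it a controlled comparison.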

Temporary_31337 today at 8:54 AM

The headline is pretty stupid: it compares a model to a GPU that models run on. Somewhere in that data centre, some part of Sonnet's inference runs on a $900 GPU, or maybe an even cheaper Google tensor chip.

15minutemail today at 7:25 AM

74% on LCB from a single 5060 Ti. I've been paying Anthropic per task and this guy is running it on electricity money. 20 minutes per task is rough for anything interactive, though.

negativegate yesterday at 11:37 PM

Am I still SOL on AMD (9070 XT) when it comes to this stuff?

sznio today at 8:45 AM

On that topic, anyone here got a decent local coding AI setup for a 12GB VRAM system? I have a Radeon 6700 XT and would like to run autocomplete on it. I can fit some models in the memory and they run quick but are just a tad too dumb. I have 64GB of system ram so I can run larger models and they are at least coherent, but really slow compared to running from VRAM.

josefritzishere today at 1:25 PM

The core problem of AI remains unresolved, with no conceivable path to solvency. The issue is that AI isn't very good. It's OK, sometimes, under very narrow criteria. But providing AI is, in reality, very costly. Vague promises of it magically becoming better remain very optimistic at best and still provide no route to solvency.

superkuh today at 1:04 AM

If anyone else was hoping this was using Q8 internally and that, converted to Q4, it could fit in 12GB VRAM: unfortunately it's already at Q4_K_M (~9GB), and the 16GB requirement comes from other parts, not the 14B@8bit + KV cache/etc. you might guess.
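The rough weight-memory arithmetic behind those numbers can be sketched with approximate bits-per-weight figures for common llama.cpp quant types (real GGUF files also carry embeddings and metadata, and KV cache is extra, so treat these as lower bounds):

```python
# Approximate weight memory for a 14B-parameter model at two quant levels.
# Bits-per-weight values are rough averages for the named GGUF quant types.
PARAMS = 14e9

def weight_gb(bits_per_weight: float) -> float:
    """Raw weight storage in GB (decimal) at a given average bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"Q8_0   ~{weight_gb(8.5):.1f} GB")   # ~14.9 GB: no hope on 12 GB VRAM
print(f"Q4_K_M ~{weight_gb(4.85):.1f} GB")  # ~8.5 GB: matches the ~9 GB cited
```

This is why the Q4_K_M file is already near the floor: there is no lighter standard quant left that would recover meaningful headroom on a 12GB card.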

limoce today at 1:47 AM

The title should be "Adaptive Test-time Learning and Autonomous Specialization".

Razengan today at 8:02 AM

Claude Code has been bleh, or meh at best, in my experience. There are so many posts on HN fawning over it lately that it can only be a guerrilla marketing campaign.
