Hacker News

Aurornis (yesterday at 6:09 PM)

> Going from GLM-4.7 to something comparable to 4.5 or 5.2 would be an absolutely crazy improvement.

Before you get too excited: GLM-4.7 outperformed Opus 4.5 on some benchmarks too (see the LiveCodeBench comparison at https://www.cerebras.ai/blog/glm-4-7).

The benchmarks of open-weights models are always more impressive than their real-world performance. Everyone is competing for attention and market share, so the incentives to benchmaxx are out of control.


Replies

InsideOutSanta (yesterday at 6:16 PM)

Sure. My sole point is that calling Opus 4.5 and GPT-5.2 "last generation models" is discounting how good they are. In fact, in my experience, Opus 4.6 isn't much of an improvement over 4.5 for agentic coding.

I'm not immediately discounting Z.ai's claims because they showed with GLM-4.7 that they can do quite a lot with very little. And Kimi K2.5 is genuinely a great model, so it's possible for Chinese open-weight models to compete with proprietary high-end American models.

miroljub (yesterday at 7:39 PM)

Yeah, I'm sure closed source model vendors are doing everything within their power to dumb down benchmarks, so they can look like underdogs and play a pity game against open weight models.

Let's have a serious discussion. Just because Claude's PR department coined the term "benchmaxxing", we should not be using it unless they shell out some serious money.

KronisLV (yesterday at 11:46 PM)

I still enjoy using GLM 4.7 on Cerebras because of the speed you can get there and the frankly crazy amount of tokens they give you. Before that, 4.6 messed up file edits in OpenCode and the VS Code plugins more frequently; 4.7 is way more dependable, but it still has occasional issues with Python indentation and partial edits (which might also be a tooling issue, e.g. using \ vs / as path separators in tool calls). The quality of the output went up nicely, though!
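As an aside on the separator issue mentioned above: one defensive workaround on the tooling side is to normalize whatever path the model emits before touching the filesystem. A minimal sketch (the helper name is hypothetical, not from any particular agent harness):

```python
from pathlib import PurePosixPath, PureWindowsPath

def normalize_tool_path(raw: str) -> str:
    """Normalize a model-emitted file path to forward slashes.

    Hypothetical helper: models sometimes emit Windows-style
    backslash separators in tool calls even on POSIX hosts,
    which can make file-edit tools miss the target file.
    """
    # PureWindowsPath accepts both "\" and "/" as separators,
    # so round-tripping through it yields consistent components.
    return str(PurePosixPath(*PureWindowsPath(raw).parts))

print(normalize_tool_path(r"src\app\main.py"))   # → src/app/main.py
print(normalize_tool_path("src/app/main.py"))    # → src/app/main.py
```

This only covers relative paths; drive letters in absolute Windows paths would still need special handling.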

I hope GLM 5 will also be available on Cerebras, since it's my go-to for low-to-medium-complexity work, with Codex, Claude Code, and Gemini CLI being nice for the more complex tasks.