Hacker News

Show HN: How I Topped the HuggingFace Open LLM Leaderboard on Two Gaming GPUs

167 points by dnhkng today at 1:18 PM | 54 comments

Comments

dnhkng today at 1:20 PM

Author here. I found that duplicating a specific block of 7 middle layers in Qwen2-72B, without modifying any weights, improved performance across all Open LLM Leaderboard benchmarks and took #1. As of 2026, the top 4 models on that leaderboard are still descendants.

The weird finding: single-layer duplication does nothing. Too few layers, nothing. Too many, it gets worse. Only circuit-sized blocks of ~7 layers work. This suggests pretraining carves out discrete functional circuits in the layer stack that only work when preserved whole.
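The mechanics of the trick can be sketched in a few lines. This is a minimal, hypothetical illustration (the comment doesn't say where the 7-layer block sits in Qwen2-72B; `start=40` is an assumption) of how the layer visit order changes when a block is repeated with no weight edits:

```python
def duplication_plan(n_layers, start, length):
    """Layer-index order after duplicating the block [start, start+length).

    No weights are modified: the same pretrained layers are simply
    visited twice, e.g. 0..46, 40..46, 47..79 for start=40, length=7.
    """
    idx = list(range(n_layers))
    return idx[:start + length] + idx[start:]

# Qwen2-72B has 80 decoder layers; block position/length here are illustrative.
plan = duplication_plan(80, 40, 7)
```

In practice one would realize such a plan with a passthrough-style merge (e.g. mergekit layer slices) or by deep-copying entries of the model's decoder-layer list; the comment doesn't specify which route was taken.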

The whole thing was developed on 2x RTX 4090s in my basement. I'm now running current models (GLM-4.7, Qwen3.5, MiniMax M2.5) on a dual GH200 rig (see my other post). Code and new models coming soon.

Happy to answer questions.

Lerc today at 6:35 PM

I have had broadly the same intuitions on the use of middle layers, but haven't had much luck with the tiny models that I can run on my hardware.

There's a video on YouTube https://www.youtube.com/watch?v=pDsTcrRVNc0

about looping-layer models. After watching it, I poured some thoughts off the top of my head into a comment which, of course, promptly sank without a trace. I'll repost the gist of them here.

If you gain a benefit from looping layers, then at some level every layer's parameters sit both in front of and behind every other layer's, and the conclusion must be that the order of the layers does not need to be fixed at all.

If you cycle through the layers multiple times, are you doing so for the benefit of a particular layer on a particular problem? If so, can you skip the other layers that add nothing on repetition? Suppose you can skip (and know when to skip), and you can repeat (and know when to repeat).

What you would need is a mechanism that decides which layer is needed next. Is that then not a looping single-layer MoE model? It stores the layers as a wide set of selectable options rather than a deep stack of unconditional layers. You would pick what the next layer should be (or exit the loop); the threshold for exit drops each iteration, so it always eventually exits, with a tunable 'how hard to think' knob to adjust the threshold.
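The mechanism described above can be sketched as a tiny routing loop. Everything here is hypothetical scaffolding: `score` and `exit_score` stand in for a learned router and halting head, which don't exist in this form anywhere.

```python
def route(hidden, layers, score, exit_score, threshold=0.9, decay=0.85, max_steps=32):
    """Repeatedly pick the next layer from a flat pool instead of a fixed stack.

    score(hidden, i): learned preference for applying layer i next (hypothetical).
    exit_score(hidden): learned confidence that computation is done (hypothetical).
    The exit threshold decays each step, so the loop always terminates;
    `decay` is the tunable "how hard to think" knob.
    """
    for _ in range(max_steps):
        if exit_score(hidden) >= threshold:
            break  # router chose to exit the loop
        best = max(range(len(layers)), key=lambda i: score(hidden, i))
        hidden = layers[best](hidden)
        threshold *= decay  # exiting gets easier every iteration
    return hidden
```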

Havoc today at 4:26 PM

Crazy writeup.

Author is right about the base64 part. It does seem weird that it can decode and understand it at the same time. And I guess what makes it weird is that we just sort of accept that this works for, say, English and German (i.e. normal use), but when framed as base64 it suddenly stops feeling intuitive.

hex4def6 today at 6:36 PM

I've gotta say, this writeup gives me an itchy feeling. It really does feel like poking around a synthetic brain at this point.

You could make the argument that it's closer to the blocks of a CPU than to a brain, and no different from copy-pasting some IP block for, e.g., HW JPEG decoding. But I feel like the difference here is that we're 'discovering' these blocks / organs. They weren't designed; they were evolved.

hmokiguess today at 4:13 PM

I really enjoyed reading this. I feel like generalists intuitively experience this exact thing throughout their lives because they must have the neuroanatomy you describe. There's a certain geometry to knowledge that makes this orthogonal movement possible, and it is really fascinating to me. Thank you for publishing this, you made my day!

d0100 today at 6:45 PM

I wonder if joining layers from the "organs" of different models could further enhance the results

WithinReason today at 3:16 PM

Here is a paper that made a similar observation recently:

https://www.alphaxiv.org/abs/2512.19941

tgw43279w today at 2:12 PM

That was a fun read! The base64 decoding and encoding is quite interesting. A parallel: these models are surprisingly robust to heavy word mangling. Back in 2023 people used this trick to jailbreak models very often, but what was more surprising is that they even understand it. I always thought of it this way: there must be some circuitry in the model that maps these almost unrecognizable words/sentences into their rectified versions. What your base64 result also shows is that they can encode them back as well! (However, models are known to be unable to produce mangled output that looks convincingly random. I think the base64 transformation is more mechanical in this regard, and hence easier for them to reverse.)

So your layer-circuit hypothesis aligns pretty well with my mental model of how these models work, based on the interpretability work I am familiar with! I also really like the way you used the heatmaps as a tool to derive layer insights, very intuitive! But it's really surprising that you can simply duplicate layers and achieve better results that generalize! This is research-grade effort; I'm confident you could publish this at NeurIPS or ICML if you put it into a paper. I'm quite impressed, great work!

dnhkng today at 5:25 PM

Here's an extract, the core TL;DR for a feel of the article.

"And now for the weirdness: there was never a case where any Transformer layer would have seen the output from a future layer!

Layer 10 is trained on layer 9’s output distribution. Layer 60 is trained on layer 59’s. If you rearrange them — feeding layer 60’s output into layer 10 — you’ve created a distribution the model literally never saw during training.

The astounding thing about Goliath wasn’t that it was a huge leap in performance, it was that the damn thing functioned at all. To this day, I still don’t understand why this didn’t raise more eyebrows.

Experimentally, this proved that layers were far more interchangeable than anyone had reason to expect. The internal representations were homogeneous enough that the model could digest out-of-order hidden states without collapsing. The architecture was far more flexible than a rigid pipeline.

Between the Base64 observation and Goliath, I had a hypothesis: Transformers have a genuine functional anatomy. Early layers translate input into abstract representations. Late layers translate back out. And the middle layers, the reasoning cortex, operate in a universal internal language that’s robust to architectural rearrangement. The fact that Goliath 120B was built from 16-layer blocks made me suspect the input and output ‘processing units’ were smaller than 16 layers. I guessed that Alpindale had tried smaller overlaps, and they just didn’t work.

If that was true, maybe I didn’t need to teach a model new facts to make it smarter. I didn’t need fine-tuning. I didn’t need RLHF. I just needed to give it more layers to think with."

dongecko today at 5:31 PM

What a great read! You got me at the base64 oddity. I also stumbled over this while trying to dodge some LLM limitation. (I was trying to generate images before multimodal was a thing; it only worked to a degree.)

cootsnuck today at 3:50 PM

Super cool. Love seeing these writeups of hobbyists getting their hands dirty, breaking things, and then coming out on the other side of it with something interesting.

kovek today at 4:51 PM

Is this similar to send 48656c6c6f2c20686f772061726520796f753f in the prompt? As done here: https://youtu.be/GiaNp0u_swU?si=m7-LZ7EYxJCw0k1-
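For reference, that hex string is plain ASCII; decoding it takes one standard-library call:

```python
# bytes.fromhex parses the hex pairs; .decode("ascii") yields the text
decoded = bytes.fromhex("48656c6c6f2c20686f772061726520796f753f").decode("ascii")
print(decoded)  # Hello, how are you?
```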

Aditya_Garg today at 5:00 PM

Wild stuff and great read

Do you think karpathy's autoresearch would be useful here?

goodmythical today at 4:04 PM

Isn't this similar to models that have "double check the answer"?

First pass runs your input through, second pass runs its output as input?

Just, in double-check it presumably runs the entire stack, while you're trying to skip the translation steps and only double-check the logic?

tjwei today at 3:45 PM

Really interesting discovery, especially the part about base64. Reminds me of this: Transformer Layers as Painters https://arxiv.org/abs/2407.09298

blourvim today at 2:01 PM

I am not really an ML dev, so I don't understand most of it. It sounds ridiculous that it would even work. Brilliant work and a great article; I enjoyed reading it.

This sounds similar to Kimi's mixture-of-experts architecture, if I understood it correctly (likely I have not). Can you comment on this?

lordmathis today at 4:27 PM

That's cool. I tried the b64 thing on my local qwen3.5 27b without access to tools and it did it.

patchnull today at 4:05 PM

This lines up with what I have seen doing CKA (centered kernel alignment) analysis on transformer internals. The middle layers in most large models have surprisingly similar representations to their neighbors, so duplicating them is basically giving the model extra compute cycles in a region where it is already doing useful refinement without messing up the input/output encoding stages. Curious whether picking layers by representation similarity instead of just a contiguous block would do even better.
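For anyone wanting to try the same analysis, linear CKA fits in a few lines. This is a generic sketch assuming numpy, not the commenter's actual code:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between representations X (n, d1) and Y (n, d2),
    e.g. hidden states from two layers on the same batch of tokens.
    Returns a value in [0, 1], invariant to rotation and isotropic scaling.
    """
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

A value near 1 between adjacent middle layers would support the "extra compute in an already-refining region" reading above.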

GaggiX today at 4:06 PM

This reminds me of when people were doing crazy stuff to improve the first Stable Diffusion model by swapping layers, interpolating weights, documenting which layer was most responsible for the quality of hands, etc. In the end, the final models had dozens of different ancestors.

seeknotfind today at 3:35 PM

Did you ever try multiple copies?

rob_c today at 4:21 PM

Very awesome writeup, glad to see someone with access to hardware actually playing with this.

Hopefully the cost per GPU will come down soon and we'll see people properly play. But frankly, the "middle section" layers, 2(ish) to (n-1)(ish), of a model can be shuffled up/down and left/right and still perform well.

The fun one will be an LLM router for LLM layers to apply the best reasoning to the best input so far, but frankly that would need the years and years of training that the author hints at.

The one that's still out of grasp is how to combine/manipulate per-layer K,V caches into a globally coherent state. I.e., if layers can be moved up/down, why can't the cached K,V be swapped/combined with different projections? Global K,V caches work, but they have to be _huge_ to prevent model collapse, even on something as simple as OWT.
