Already saw it on Mastodon yesterday. A better presentation is in the other link. (https://competition.sair.foundation/competitions/mathematics...)
I understand the idea of a distillation challenge, but it feels like the poor man's solution to a problem that could be better solved by training an LLM and analysing its layers. As much as I value a condensed "cheat sheet", if the goal is to improve open-source models, a better approach seems to be recreating the AlphaProof system: longer to do, but more efficient. The path mathematicians take now is agentic systems built on general LLMs.
- Masto: https://mathstodon.xyz/@tao/116225525978210807
- AlphaProof: https://deepmind.google/blog/ai-solves-imo-problems-at-silve...
- AlphaProof description: https://www.nature.com/articles/s41586-025-09833-y
The goal isn't to improve the AI performance using the cheat sheet, it's to produce a cheat sheet at all that efficiently distills intuition about these 22 million results.
Presumably, if it's written in plain text and useful to the AI, it may contain relevant information that is interesting for humans too.
It seems like the lift in open-source model performance is being used as a proxy metric, and the core goal is a human-understandable yoga [1] for approaching these kinds of equational proofs in universal algebra.
[1] https://mathoverflow.net/questions/64071/what-does-the-term-...