Hacker News

CSMastermind · today at 5:00 AM · 3 replies

Worth mentioning, though, that people have already tried running all of them through LLMs at this point.

So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).


Replies

Tarq0n · today at 6:02 AM

Not definitively. LLM output is stochastic: it depends on the sampling temperature and the exact wording of the prompt. It's possible that the model was already capable of solving it but never received the right conditions to produce this output.
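The stochastic decoding described above can be sketched with a toy temperature-sampling loop. The tokens and logits here are made up purely for illustration; real models sample from a vocabulary of tens of thousands of tokens, but the mechanism is the same:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to a probability distribution.

    Lower temperature sharpens the distribution (near-deterministic output);
    higher temperature flattens it (more varied samples).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Sample one token from the temperature-scaled distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and logits for some prompt.
tokens = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.5]

rng = random.Random(0)
cold = [sample_token(tokens, logits, 0.1, rng) for _ in range(10)]
hot = [sample_token(tokens, logits, 2.0, rng) for _ in range(10)]

print(cold)  # near-greedy: the top token dominates
print(hot)   # flatter distribution: noticeably more variety
```

The same prompt (the same logits) yields different outputs run to run, and the temperature setting controls how different. This is why one successful run is weak evidence about a model's general capability.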

imiric · today at 6:04 AM

> So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).

No, it's not.

While I don't dispute that new models may perform better at certain tasks, the fact that someone was able to use them to solve a novel problem is not proof of this.

LLM output is nondeterministic. Given the same prompt, the same LLM will generate different output, especially when it involves a large number of output tokens, as in this case. One of those attempts might produce a correct output, but this is not certain, and, as this thread shows, it is difficult if not impossible for a human who is not an expert in the domain to determine whether it did.

jb1991 · today at 6:08 AM

Minor aside: these models do not return the same answer every time you prompt them, which makes it harder to reason about their effectiveness.
