I'm a big LLM sceptic but that's… moving the goalposts a little too far. How could an average Joe even understand the conjecture enough to write the initial prompt? Or do you mean that experts would give him the prompt to copy-paste, and hope that the proverbial monkey can come up with a Henry V? At the very least posit someone like a grad student in particle physics as the human user.
That's kinda the whole point.
SpaceX can use an optimization algorithm to hoverslam a rocket booster, but the optimization algorithm didn't really figure it out on its own.
The optimization algorithm was used by human experts to solve the problem.
hey, GPT, solve this tough conjecture I've read about on Quanta. make no mistakes
I would interpret it as implying that the result was due to a lot more hand-holding than is let on.
Was the initial conjecture based on leading info from the other authors or was it simply the authors presenting all information and asking for a conjecture?
Did the authors know that there was a simpler means of expressing the conjecture and lead GPT to its conclusion, or did it spontaneously arrive there on its own after seeing the hand-written expressions?
These aren't my personal views, but there is some handwaving about the process that reads as if this was all spontaneous involvement on GPT's end.
But regardless, a result is a result so I'm content with it.