Hacker News

jedberg, today at 6:33 AM

> But these LLMs are like Happy Gilmore. They get to the green in one shot then they orbit the hole with an extremely dubious short game.

Except that he got good at his short game by the end. LLMs will get there sooner than we think.


Replies

maccard, today at 10:58 AM

I don’t think they will, though, because the “short game” is matching the requirements of the agent operator. If we don’t care about the finer details we let the LLMs infer, then we shouldn’t care when a human infers them either (and yet we do).

I think LLMs are great, and I think people who can use them to get to the green in one shot and take it from there will soar, just as people who could identify a problem and solve it themselves did in the past.