Hacker News

hansmayer | yesterday at 5:14 PM | 2 replies

> That's exactly what they do for me - especially since the November model releases (GPT-5.1, Opus 4.5).

I mean, it's inherently impossible given the statistical nature of LLMs, so I'm not sure whether you're claiming this out of ignorance or other interests. But again, what you claim is impossible due to the very nature of LLMs.


Replies

lunar_mycroft | yesterday at 5:48 PM

It's impossible for human developers too. A natural-language description of a program is either far more painful and time-consuming to write than the code it describes, or it contains some degree of ambiguity that whatever translates the description into code has to resolve (and the probability that the resolution the author of the description would have chosen and the one the translator actually chose match perfectly approaches zero).

It can make sense to trade off some amount of control for productivity, but the tradeoff is inherent as soon as a project moves beyond a single developer hand writing all of the code.

simonw | yesterday at 6:46 PM

It's impossible, and yet I experience it on a daily basis.

YMMV. I've had a lot of practice at prompting.
