
johnfn · today at 1:40 AM

My statement is that if your only hallucinations are type errors, that can be solved by simply wrapping the LLM in a harness that says "Please continue working until `yarn run tsc` is clean". Yes, the LLM still hallucinates, but it doesn't affect me, because by the time I see the code, the hallucinations are gone.
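A minimal sketch of such a harness, in Python (the loop structure and the `ask_llm` callback are hypothetical stand-ins for whatever agent API is actually in use; the prompt text is taken from the comment above):

```python
import subprocess

def typecheck() -> tuple[bool, str]:
    """Run the TypeScript compiler; return (clean, diagnostics)."""
    proc = subprocess.run(
        ["yarn", "run", "tsc", "--noEmit"],
        capture_output=True, text=True,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def harness(check, ask_llm, max_rounds: int = 10) -> bool:
    """Feed type-checker diagnostics back to the model until the build is clean.

    `check` is a callable returning (clean, diagnostics); `ask_llm` is a
    callable that sends a prompt to the model and lets it edit the code.
    """
    for _ in range(max_rounds):
        clean, diagnostics = check()
        if clean:
            return True
        ask_llm(
            "Please continue working until `yarn run tsc` is clean.\n"
            f"Current diagnostics:\n{diagnostics}"
        )
    return False

# Example wiring: harness(typecheck, my_agent.send_message)
```

The point of the cap (`max_rounds`) is that the loop should fail loudly rather than burn tokens forever if the model can't converge.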

This is something I do every day; to be quite honest, it's a fairly mundane use of AI, and I don't understand why it's controversial. For context, I've probably generated somewhere on the order of 100k lines of AI-generated code, and I can't remember the last time I saw a hallucination.


Replies

parliament32 · today at 2:03 AM

Well, of course it'll eventually work; even a random text generator will eventually produce code that passes your tests if you run it hard enough.

The problem is that it's devouring your tokens as it does so. While you're on a subsidized plan, that seems like a non-issue, but once the providers start charging you actual costs for usage... yeah, the hallucinations will be a showstopper for you.
