Hacker News

tovej · yesterday at 8:11 PM · 0 replies · view on HN

You're just strawmanning now. I've prompted extremely well-specced, contained features, and the LLM has failed nonetheless.

In fact, the more detail I give it about a specific problem, the more it seems to hallucinate. Presumably because the problem falls further outside the training set.