Hacker News

jack_pp · yesterday at 4:16 AM · 5 replies

Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?

Somehow I doubt at this point in time they can even fail at something so simple.

Like at some point, for some stuff we have to trust LLMs to be correct 99% of the time. I believe summaries, translation, and code docs are in that category.


Replies

blharr · yesterday at 6:52 AM

The above post is an example of the LLM providing a bad description of the code. "Local first" with its default support being for OpenAI and Anthropic models... that makes it local... third?

Can you provide examples in the wild of LLMs creating good descriptions of code?

fauigerzigerk · yesterday at 9:17 AM

>Somehow I doubt at this point in time they can even fail at something so simple.

I think it depends on your expectations. Writing good documentation is not simple.

First, good API documentation should explain how to combine the functions of the API to achieve specific goals. It should warn about incorrect assumptions and mistakes that are easy to make, and it should explain how potentially problematic edge cases are handled.

Second, good API documentation should avoid committing to implementation details. Simply verbalising the code is the opposite of that. Where the function signatures do not formally and exhaustively define everything the API promises, documentation should fill in the gaps.
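To make that concrete, here is a purely hypothetical sketch (the function name and behaviour are invented for illustration) contrasting a docstring that merely verbalises the code with one that documents the contract:

    # Hypothetical sketch; the function and its behaviour are invented.
    # The bodies are omitted because the docstrings are the point.

    def fetch_with_retry_verbalised(url: str, retries: int = 3):
        """Loops `retries` times, calls the HTTP client, sleeps on
        failure, and returns the response."""
        ...  # restates the implementation; adds nothing the code doesn't say

    def fetch_with_retry(url: str, retries: int = 3):
        """Fetch `url`, retrying on transient failures.

        What the signature does not tell you:
        - Only connection errors and 5xx responses are retried;
          a 4xx response is returned immediately, not retried.
        - retries=0 means one attempt, not zero attempts.
        - The backoff schedule is an implementation detail and may
          change; callers must not depend on the exact timing.
        """
        ...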

aforwardslash · yesterday at 6:01 AM

This happens to me all the time. I always ask Claude to re-check the generated docs and to test each example/snippet, sometimes more than once; more often than not, there are issues.
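If the docs happen to be Python docstrings with runnable >>> examples, one mechanical way to catch at least the broken snippets, rather than relying only on the model's self-review, is doctest. A minimal sketch with a hypothetical module name:

    # Minimal sketch, assuming the generated docs are Python docstrings
    # containing runnable >>> examples; "my_module" is a hypothetical name.
    import doctest
    import my_module

    results = doctest.testmod(my_module)
    if results.failed:
        raise SystemExit(f"{results.failed} of {results.attempted} doc examples failed")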

halfcat · yesterday at 5:04 AM

> Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?

Yes. The docs it produces are generally very generic, as if they could be the docs for anything, with project specifics sprinkled in and pieces that are definitely incorrect about how the code works.

> for some stuff we have to trust LLMs to be correct 99% of the time

No. We don’t.