Hacker News

bogzz · yesterday at 10:56 PM · 2 replies

ChatGPT 5.2 kept gaslighting me yesterday, insisting that LLMs are explainable with Shapley values. It kept citing papers that do mention both LLMs and SHAP, but those papers are about using LLMs to explain the SHAP values of other ML models, not about explaining LLMs themselves.

I encounter stuff like this every week; I don't know how you don't. I suppose a well-structured codebase in a statically typed language might offer less surface for hallucinations to present themselves? But, like you say, logical errors of course still occur.
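For what it's worth, the distinction the model kept blurring is easy to make concrete. What those papers actually do is compute SHAP/Shapley values for an ordinary predictive model (the LLM only narrates the result afterwards). Here's a minimal sketch of the underlying computation, using a brute-force exact Shapley calculation over a toy linear model rather than the shap library, so it's self-contained; the function names and the toy weights are mine, not from any of the papers:

```python
from itertools import combinations
from math import factorial

def exact_shapley(model, x, baseline):
    """Exact Shapley values for model's prediction at x, relative to a
    baseline input, by enumerating every feature coalition. Exponential
    in the number of features, so only feasible for a handful of them."""
    n = len(x)

    def value(subset):
        # Features in `subset` take their value from x; the rest from baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy linear model with made-up weights. For a linear model the Shapley
# value of feature i reduces to w_i * (x_i - baseline_i).
weights = [2.0, -1.0, 0.5]
model = lambda z: sum(w * v for w, v in zip(weights, z))

phis = exact_shapley(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phis)  # [2.0, -2.0, 1.5]
```

The attributions sum to f(x) − f(baseline), which is the efficiency property SHAP guarantees. None of this machinery transfers straightforwardly to an LLM, where the "features" are tokens and the coalition enumeration is hopeless, which is exactly why the papers use the models the other way around.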


Replies

johnfn · yesterday at 11:34 PM

I didn't mean to say that code generation never hallucinates. I suppose that was unclear.