Hacker News

sumeno · yesterday at 9:21 PM

LLMs don't understand what they're doing and can't explain it to you; they just produce a reasonable-sounding response.


Replies

codethief · yesterday at 10:38 PM

But that response is grounded in the training data they've seen, so it's not entirely unreasonable to think their answers might offer actual insight rather than mere statistical parroting.
