Hacker News

stainlesswolf · today at 11:10 AM

Many people here point out that LLMs WILL be anthropomorphised, and that's no surprise: they're the most human-like thing other than humans themselves.

However, I think we should honour the "do not anthropomorphise" advice by acknowledging that while LLMs have considerable reasoning skills, and may appear to have some level of intent depending on what's in their context, they don't have "understanding" the way humans do.

They are absurdly good statistical next-token predictors. Keeping that in mind is genuinely helpful whether you use them for coding, learning, advice, conversation, or anything else.
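To make "statistical next-token predictor" concrete, here's a minimal toy sketch of the sampling step: the model assigns a score (logit) to each candidate token, the scores are turned into probabilities, and one token is sampled. The tokens and scores below are made up for illustration; a real LLM does this over a vocabulary of tens of thousands of tokens, conditioned on the whole context.

```python
import math
import random

# Hypothetical logits for the next token after some context.
# In a real model these come from a neural network, not a hand-written dict.
logits = {"cat": 2.0, "dog": 1.0, "car": -1.0}

def softmax(scores):
    # Subtract the max logit for numerical stability, then normalise
    # so the values form a probability distribution.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Sample the next token in proportion to its probability.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

That's the whole loop, repeated token by token; everything that looks like intent or understanding emerges from doing this prediction extremely well.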

Anthropomorphising LLMs is inevitable, but we should do it responsibly.