Hacker News

Rohansi · yesterday at 9:21 PM · 0 replies · view on HN

> LLM's absolutely can reason on and conceptualise on things it has not been trained on, because of the generalised reasoning ability.

Yes, but how does that help it capture the nuances of an individual? It can try to infer, but it will never have enough information to be reliably correct, where "correct" means predicting what the actual individual would do.