Hacker News

dries · yesterday at 6:17 AM

For the same reason that a human who is fluent in five languages can probably express themselves better in any one of them than a human who speaks only one, while also having a more nuanced understanding of grammar in general. From what I know, training on a more diverse set of languages makes a model better overall.


Replies

kelnos · yesterday at 9:20 PM

Human brains and LLMs are not the same, though. I don't think your analogy is remotely applicable, even if your conclusion may be correct.

amelius · yesterday at 7:29 AM

This might be an interesting research question: can you train a model on many languages, and then extract a much smaller model that knows only one language without much loss of quality?
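One common way to approach that question is knowledge distillation: train a much smaller student on text from a single language while matching the output distribution of the large multilingual teacher. The sketch below is not from the thread; it uses toy stand-in models, a placeholder vocabulary size, and random batches where a real monolingual corpus would go, just to illustrate the training loop.

~~~python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000                     # placeholder vocabulary size
TEACHER_DIM, STUDENT_DIM = 256, 64

class TinyLM(nn.Module):
    """Toy next-token model standing in for a real transformer."""
    def __init__(self, dim):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.proj = nn.Linear(dim, VOCAB)

    def forward(self, tokens):
        return self.proj(self.embed(tokens))   # (batch, seq, vocab) logits

teacher = TinyLM(TEACHER_DIM).eval()   # pretend: large multilingual model
student = TinyLM(STUDENT_DIM)          # much smaller, single-language model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                                # softening temperature

for step in range(100):
    # In practice this batch would come from a monolingual corpus only.
    batch = torch.randint(0, VOCAB, (8, 32))
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
~~~

Whether the distilled monolingual student actually retains the quality gains attributed to multilingual training is exactly the open question being asked.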