
ethmarks · last Wednesday at 9:55 PM

If encoding more learned languages, grammars, and dictionaries makes the model larger, it will also increase latency. Try running a 1B-parameter model locally and then try running a 500B-parameter model on the same hardware. You'll notice that latency has rather a lot to do with model size.
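
A rough sketch of why, assuming token-by-token decoding is memory-bandwidth bound (every parameter gets streamed from memory once per generated token). The specific numbers here, 2 bytes per parameter and 100 GB/s of bandwidth, are illustrative assumptions, not measurements:

~~~python
# Back-of-envelope per-token decode latency under a
# memory-bandwidth-bound model of inference: latency is
# roughly (model bytes) / (memory bandwidth).

def per_token_latency_ms(params_billions: float,
                         bytes_per_param: float = 2.0,    # assume fp16/bf16 weights
                         bandwidth_gbs: float = 100.0):   # assumed memory bandwidth, GB/s
    model_bytes = params_billions * 1e9 * bytes_per_param
    seconds = model_bytes / (bandwidth_gbs * 1e9)
    return seconds * 1e3

for size in (1, 500):
    print(f"{size}B model: ~{per_token_latency_ms(size):,.0f} ms/token")
# 1B model:   ~20 ms/token
# 500B model: ~10,000 ms/token
~~~

This ignores batching, quantization, and sparse/MoE architectures, which change the constants but not the basic scaling: 500x the parameters means roughly 500x the bytes moved per token on the same hardware.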