
throawayonthe yesterday at 10:36 AM

it's not going to happen with LLMs unless RAM and storage get several orders of magnitude cheaper, like, yesterday

information theory isn't magic, you'll never be able to compress "knowledge" into a small model in a way equivalent to the 1.5 TB model


Replies

kilroy123 yesterday at 10:44 AM

I agree. But I also think the future is some kind of hybrid approach, where agents run what they can locally and call out to the cloud for what they can't.
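That hybrid routing could be sketched roughly like this. Everything here is hypothetical for illustration: a real system would use a confidence score from an actual local model and a real cloud API client, not these stand-in callables.

```python
# Toy sketch of a hybrid agent: try the local model first, fall back to
# the cloud only when the local model signals it can't handle the request.
from typing import Callable, Optional

def make_hybrid_agent(
    local_model: Callable[[str], Optional[str]],
    cloud_model: Callable[[str], str],
) -> Callable[[str], str]:
    """Return an agent that prefers the local model, with cloud fallback."""
    def agent(prompt: str) -> str:
        answer = local_model(prompt)  # None means "I can't handle this"
        if answer is not None:
            return answer             # no network round-trip needed
        return cloud_model(prompt)    # pay the cloud cost only on fallback
    return agent

# Hypothetical stand-in models: the "local" one only handles short prompts.
local = lambda p: "local answer" if len(p) < 20 else None
cloud = lambda p: "cloud answer"

agent = make_hybrid_agent(local, cloud)
print(agent("short prompt"))                   # handled locally
print(agent("a much longer, harder prompt"))   # falls back to the cloud
```

The point of the pattern is that the routing decision lives on the device, so the cloud is an escape hatch rather than a dependency for every request.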

acidhousemcnab yesterday at 10:38 AM

This will happen, but reconfiguring the infrastructure of the entire planet to train LLMs and run them over networks might be the "bubble", the megalomania.