Hacker News

impulser_ yesterday at 9:17 PM

Local models are always going to be useless unless compute gets significantly cheaper, and it isn't. TSMC might literally run out of capacity to build any consumer compute product.

Once compute constraints ease up, you will see much larger models. The reason LLMs seem to have stalled a bit is that there just isn't enough compute.

You have more people using AI, which requires more compute; you want to build larger models, which also requires more compute; and you have limited compute. What do you do?


Replies

23rf yesterday at 11:06 PM

Right... and computers were once the size of a large room, whereas now they fit in a pocket.

"The reason LLMs seem to have stalled a bit is that there just isn't enough compute."

lol okay mate.
