Hacker News

borsch_not_soup · today at 1:36 AM

Interesting, I’ve always thought neural network progress was primarily bottlenecked by compute.

If it turns out that LLM-like models can produce genuinely useful outputs on something as constrained as a Commodore 64—or, even more convincingly, if someone manages to train a capable model within the limits of hardware from that era—it would suggest we left a lot of progress on the table. Not just in terms of efficiency, but in how we framed the problem space for decades.


Replies

dpe82 · today at 2:01 AM

  YOU> hey
  C64> HELLO! RE SOUNDS ME. MEFUL!
60s per token for that doesn't strike me as genuinely useful.

Very, very cool project though!
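For what it's worth, a figure on the order of tens of seconds per token is roughly what first principles would predict. Here's a back-of-envelope sketch in Python; every number in it (cycle cost of a software multiply-accumulate, parameter count) is an assumption for illustration, not a measurement from the project:

```python
# Back-of-envelope: token latency for a tiny language model on a C64.
# All figures below are rough assumptions, not measurements.

clock_hz = 1_000_000        # the C64's 6502-family CPU runs at ~1 MHz
cycles_per_mac = 100        # software 8-bit multiply-accumulate (no HW multiply)
params = 250_000            # hypothetical tiny model, weights streamed from storage

# A forward pass touches each weight about once: ~one MAC per parameter.
macs_per_token = params
seconds_per_token = macs_per_token * cycles_per_mac / clock_hz
print(f"~{seconds_per_token:.0f} s/token")  # ~25 s with these assumptions
```

With these made-up but plausible inputs the model lands in the same order of magnitude as the reported 60 s/token, so the slowness is a consequence of the hardware, not an implementation flaw.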

numpad0 · today at 5:27 AM

Next-word prediction features always existed for flip phones...