Hacker News

u8080 · yesterday at 3:42 PM

No, AI inference is mainly constrained by RAM capacity and memory bandwidth; we need more fast RAM for local AI to thrive.
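The bandwidth point can be made concrete with a back-of-envelope calculation: during decoding, each generated token requires streaming roughly all of the model's weights from RAM once, so memory bandwidth puts a hard ceiling on tokens per second. A minimal sketch, with illustrative (not measured) numbers for model size, quantization, and bandwidth:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM.
# Assumption: every token reads all weights from RAM once, so
# tokens/sec <= bandwidth / weight_bytes. All figures are illustrative.

def tokens_per_sec(params_billions: float,
                   bytes_per_param: float,
                   bandwidth_gbs: float) -> float:
    """Bandwidth-limited decode speed estimate (tokens/second)."""
    weight_gb = params_billions * bytes_per_param  # model footprint in GB
    return bandwidth_gbs / weight_gb

# Example: a 7B-parameter model quantized to ~4 bits (~0.5 bytes/param)
# on dual-channel DDR5 at roughly 80 GB/s:
print(round(tokens_per_sec(7, 0.5, 80), 1))  # ~22.9 tok/s ceiling
```

Doubling bandwidth (or halving weight size via quantization) doubles this ceiling, which is why local inference hardware is judged as much by GB/s as by FLOPS.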


Replies

downrightmike · yesterday at 4:44 PM

Lol. Thanks to someone buying up all the RAM wafers before they even become modules, that won't happen.