No, AI inference is mainly constrained by RAM capacity and RAM bandwidth; we need more fast RAM for local AI to thrive.
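A back-of-envelope sketch of why that's the constraint, at least for token generation: each decoded token has to stream essentially all the model weights from RAM once, so throughput is capped near bandwidth divided by model size. The numbers below are illustrative assumptions, not benchmarks:

    # Rough decode-throughput ceiling for a memory-bandwidth-bound LLM.
    # Every generated token streams the full weights from RAM once,
    # so tokens/sec is capped near bandwidth / model size.

    def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
        """Upper bound on decode speed: one full weight pass per token."""
        return bandwidth_gb_s / model_gb

    # Illustrative figures (assumptions): a 7B model at 4-bit quantization
    # is roughly 4 GB of weights.
    print(max_tokens_per_sec(4.0, 50.0))    # ~50 GB/s dual-channel DDR5 -> ~12 tok/s
    print(max_tokens_per_sec(4.0, 1000.0))  # ~1 TB/s HBM-class GPU -> ~250 tok/s

Same model, ~20x speedup, purely from memory bandwidth: compute barely enters into it.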
Lol. Thanks to someone buying up all the DRAM wafers before they even become modules, that won't happen.