Just like the previous generation of AI PCs, consumers would only need a cheap USB/PCIe NPU.
Mass adoption won't happen until those get cheap, and right now there's no critical mass of prosumers building massively popular software for them.
No, AI inference is mainly constrained by RAM capacity and bandwidth; we need more fast RAM for local AI to thrive.
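To see why bandwidth (not compute) is the bottleneck for local inference: during decoding, every generated token requires streaming essentially all model weights through memory once, so tokens/sec is roughly bounded by memory bandwidth divided by model size in bytes. A rough back-of-envelope sketch (the bandwidth and model-size figures below are illustrative ballpark assumptions, not measured numbers):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: each token reads all weights once,
    so throughput is capped at (memory bandwidth) / (model size)."""
    return bandwidth_gb_s / model_size_gb

# Assumed example: a 7B model quantized to ~4-bit weighs roughly 4 GB.
model_gb = 4.0

# Dual-channel DDR5 desktop RAM: on the order of 80 GB/s.
print(max_tokens_per_sec(80.0, model_gb))    # roughly 20 tok/s ceiling

# A GPU with ~1000 GB/s VRAM bandwidth.
print(max_tokens_per_sec(1000.0, model_gb))  # roughly 250 tok/s ceiling
```

Note the NPU's TOPS rating never enters this estimate: a faster accelerator behind the same slow RAM hits the same ceiling, which is the argument for prioritizing fast memory over add-in NPU cards.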