Hacker News

g947o · yesterday at 5:53 PM

From their own GitHub:

> If you intend to do LLM inference on your local machine, we recommend a 3000-series NVIDIA graphics card with at least 6GB of VRAM, but actual requirements may vary depending on the model and backend you choose to use.

Also, please be respectful when discussing technical matters.

P.S. I didn't say "local chat sucks".
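
For a rough sense of why "actual requirements may vary": the weight footprint scales with parameter count and quantization width. A back-of-the-envelope sketch in Python (the model sizes and bit widths here are illustrative assumptions, not figures from the linked GitHub):

    # Rough weight-memory estimate for a quantized model.
    # Parameter counts and bit widths below are illustrative
    # assumptions; real runtimes add KV-cache and other overhead.

    def weight_gb(params_billions: float, bits_per_weight: float) -> float:
        """Approximate weight memory in GB."""
        return params_billions * 1e9 * (bits_per_weight / 8) / 1e9

    for name, params_b in [("7B model", 7.0), ("3B model", 3.0)]:
        for bits in (16, 8, 4):
            gb = weight_gb(params_b, bits)
            verdict = "fits" if gb <= 6.0 else "exceeds"
            print(f"{name} @ {bits}-bit: ~{gb:.1f} GB -> {verdict} 6 GB VRAM")

A 7B model at 16-bit weights is nowhere near fitting in 6 GB, while the same model at 4-bit is, which is why the model and backend matter as much as the card.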


Replies

BoredomIsFun · yesterday at 6:07 PM

> we recommend a 3000-series NVIDIA graphics card with at least 6GB of VRAM

...which is by no means a powerful GPU, and besides, the AMD Ryzen AI CPUs in question have more than enough capacity to run local LLMs, especially MoE models: with only ~3B active parameters, a mini PC equipped with one of these CPUs dramatically outperforms any "3000-series NVIDIA graphics card with at least 6GB of VRAM".
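
To make that concrete: a MoE model with ~3B active parameters is light enough for CPU-only inference. A minimal sketch with llama-cpp-python, assuming a quantized GGUF file (the path is a hypothetical placeholder, not a specific model recommendation):

    # CPU-only inference via llama-cpp-python (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/small-moe-q4.gguf",  # hypothetical placeholder path
        n_ctx=2048,      # context window
        n_threads=8,     # match your CPU's physical core count
        n_gpu_layers=0,  # keep every layer on the CPU
    )

    out = llm("Q: What is a mixture-of-experts model? A:", max_tokens=64)
    print(out["choices"][0]["text"])

Because only the active experts run per token, per-token compute tracks the ~3B active parameters rather than the full parameter count, which is what makes these CPUs competitive here.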

> please be respectful when discussing technical matters.

That is more applicable to your inappropriately righteous attitude than to mine.