Hacker News

operatingthetan · today at 7:07 AM

An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that is also running 20 other services in Docker. I think the main appeal was that many people wanted it to be able to access iMessage. People also dream of using the Mac to run the LLM itself, but the 16GB models don't have enough RAM for that.
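The RAM ceiling is easy to see with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per weight, plus overhead for the KV cache and runtime. A minimal sketch, where the model sizes, quantization levels, and the ~20% overhead factor are illustrative assumptions rather than measurements:

```python
# Rough sketch: estimate the RAM needed to run an LLM's weights locally.
# The 20% overhead factor (KV cache, runtime buffers) is an assumption
# for illustration, not a measured figure.

def est_ram_gb(params_billions: float, bits_per_weight: int,
               overhead: float = 1.2) -> float:
    """Weights-only footprint times a fudge factor for cache/runtime overhead."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight * overhead / 2**30

# A 7B model at 4-bit quantization fits easily in 16 GB...
print(round(est_ram_gb(7, 4), 1))   # ~3.9 GB
# ...but a 70B model at 4-bit does not.
print(round(est_ram_gb(70, 4), 1))  # ~39.1 GB
```

By this estimate, a 16GB Mac can host a small quantized model alongside the OS, but nothing near the large models people imagine running.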


Replies

throwa356262 · today at 1:16 PM

You can run nullclaw etc. on a Pi Zero. The people paying big $ are mostly trying to run local LLMs.

Personally, I'd rather pay a few bucks for Qwen, or just use gemma4, which runs on a potato. But I guess we're all different.

apexalpha · today at 8:44 AM

When they say 'due to openclaw', they mean running the AI models that openclaw uses, not openclaw itself.

hparadiz · today at 7:11 AM

The shortage is for the 512, 256, and 128 models.

amelius · today at 7:32 AM

People are running openclown on microcontrollers.