An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that's also running 20 other services in Docker. I think the main draw was that many people wanted it to be able to access iMessage, which needs a Mac. People also dream of using the Mac to run the LLM itself, but the 16 GB models don't have enough RAM for that once the OS and everything else take their share.
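For a sense of why 16 GB is tight: weights dominate an LLM's memory footprint, so params × bytes-per-weight plus some overhead gives a rough floor. Here's a back-of-the-envelope sketch (my own rule of thumb, not from any of these projects; the overhead factor is a guess):

    # Rough rule-of-thumb estimate of RAM needed to hold an LLM's weights.
    # All numbers are illustrative assumptions, not measurements.

    def weights_ram_gb(params_billion: float, bits_per_weight: int,
                       overhead: float = 1.2) -> float:
        """Approximate resident RAM (GB) for model weights.

        params_billion: parameter count in billions
        bits_per_weight: quantization level (16 = fp16, 4 = 4-bit quant)
        overhead: fudge factor for KV cache and runtime buffers (assumed)
        """
        bytes_per_weight = bits_per_weight / 8
        return params_billion * bytes_per_weight * overhead

    if __name__ == "__main__":
        for name, params, bits in [
            ("7B @ 4-bit", 7, 4),
            ("14B @ 4-bit", 14, 4),
            ("30B @ 4-bit", 30, 4),
            ("7B @ fp16", 7, 16),
        ]:
            print(f"{name}: ~{weights_ram_gb(params, bits):.1f} GB")

A quantized 7B fits easily, but by ~30B even 4-bit quantization eats most of a 16 GB machine before the OS gets a byte.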
When they say "due to OpenClaw" they mean running the AI models that OpenClaw calls, not OpenClaw itself.
People are running OpenClaw on microcontrollers.
You can run nullclaw and the like on a Pi Zero. The people paying big $ are mostly trying to run local LLMs.
Personally, I'd rather pay a few bucks for Qwen or just use Gemma 4, which runs on a potato. But I guess we're all different.
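To make the "runs on a potato" point concrete, here's a minimal sketch of querying a small local model over Ollama's HTTP API, assuming you serve the model with Ollama on the default port. The model tag is a placeholder; swap in whatever `ollama list` shows on your box:

    # Minimal sketch: ask a small local model a question via Ollama.
    # Assumes Ollama is running on localhost:11434 and the model has
    # already been pulled; "gemma3" below is a placeholder tag.
    import json
    import urllib.request

    def ask(prompt: str, model: str = "gemma3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # return one JSON object, not a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask("Say hi in five words."))

Nothing in there needs a GPU or a Mac; any box that can hold the quantized weights in RAM will answer, just slowly.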