The interesting thing about running a claw on an ESP32 is not the compute - it's the always-on, zero-maintenance aspect. I run automation pipelines on a Linux box and the biggest operational headache isn't the AI logic, it's keeping the host alive, updated, and dealing with OOM kills. An ESP32 that just proxies to cloud APIs and handles tool orchestration locally is actually a more reliable deployment target than a full OS for simple agentic loops. The failure modes are simpler and more predictable.
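For anyone wondering what that actually looks like, here's a rough sketch using the arduino-esp32 core. The Wi-Fi credentials, endpoint, payload, and tool names are all made up; the point is just how little the firmware itself has to do:

  #include <WiFi.h>
  #include <HTTPClient.h>

  // All names below are placeholders, not any real agent API.
  const char* WIFI_SSID = "my-ssid";
  const char* WIFI_PASS = "my-pass";
  const char* AGENT_URL = "http://agent.example.com/step";  // TLS setup omitted for brevity
  const int   LED_PIN   = 2;                                // built-in LED on many ESP32 devkits

  // "Tool orchestration" on-device: map whatever the cloud model asked for to a local action.
  void handleTool(const String& tool) {
    if (tool == "led_on")       digitalWrite(LED_PIN, HIGH);
    else if (tool == "led_off") digitalWrite(LED_PIN, LOW);
  }

  void setup() {
    pinMode(LED_PIN, OUTPUT);
    WiFi.begin(WIFI_SSID, WIFI_PASS);
    while (WiFi.status() != WL_CONNECTED) delay(250);
  }

  void loop() {
    HTTPClient http;
    http.begin(AGENT_URL);                             // the heavy lifting stays in the cloud
    http.addHeader("Content-Type", "application/json");
    int status = http.POST("{\"prompt\":\"next action?\"}");
    if (status == 200) handleTool(http.getString());   // the device only dispatches tools
    http.end();
    delay(60000);                                      // one agent turn a minute; nothing to OOM-kill
  }

If something like this wedges, the watchdog resets it back to a known state, which is exactly the "simpler failure modes" appeal.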
Can someone provide a true engineer's perspective on the ADCs in ESP SoCs?
I've heard a lot of people trashing it, and most experienced engineers admit it's finicky; however, if you have the knowledge, you can make it work as well as any STM chip.
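For the curious, the usual recipe for taming it is: stick to ADC1 pins (ADC2 is shared with Wi-Fi), set the attenuation for your input range, use the factory calibration, and multisample. A minimal sketch with the arduino-esp32 core, where the pin and sample count are just examples:

  #include <Arduino.h>

  const int ADC_PIN = 34;   // example ADC1 pin; ADC2 pins misbehave while Wi-Fi is on
  const int SAMPLES = 64;   // multisampling averages out most of the noise

  void setup() {
    Serial.begin(115200);
    analogSetPinAttenuation(ADC_PIN, ADC_11db);   // roughly 0-3.1 V usable input range
  }

  void loop() {
    uint32_t sum = 0;
    for (int i = 0; i < SAMPLES; i++) {
      sum += analogReadMilliVolts(ADC_PIN);       // applies the factory eFuse calibration
    }
    Serial.printf("%u mV\n", (unsigned)(sum / SAMPLES));
    delay(500);
  }

It still won't be as linear near the rails as a proper external ADC, but for most sensor work it's fine.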
ESP32s are so interesting: they're the only major chips that (until transitioning to RISC-V) had their own newish ISA and still managed to be so successful.
Are there collaborative versions of these *claws today? Like, where an "admin" could self-host one on their home server and the whole family could use it? IIRC, OpenClaw has some version of "profiles", but does it allow, say, a couple of family members to collaborate with the bot in a shared chat while each also has individual/private chats?
I have a couple of ESP32s with a very small OLED display; I'm now thinking I could make an "intelligent" version of the Tamagotchi with this. Does the HN crowd have other cool ideas?
The more I think about openclaw, the more it seems to be for AI agents what ROS is for robotics.
openclaw defines how to interact with distributed nodes (how those provide capabilities to the "orchestrator"), but the real benefit is the many task-specific nodes that, when put together, make up something much bigger than the sum of its parts.
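Purely to illustrate the shape of that idea (this is not OpenClaw's real interface, just the ROS-like pattern): nodes advertise named capabilities, and the orchestrator only knows how to dispatch to them.

  #include <functional>
  #include <iostream>
  #include <map>
  #include <string>

  // Illustrative only: a registry of named capabilities and a dumb dispatcher.
  using Capability = std::function<std::string(const std::string&)>;

  struct Orchestrator {
    std::map<std::string, Capability> nodes;

    void registerNode(const std::string& name, Capability cap) { nodes[name] = cap; }

    std::string dispatch(const std::string& name, const std::string& input) {
      auto it = nodes.find(name);
      return it != nodes.end() ? it->second(input) : "unknown capability: " + name;
    }
  };

  int main() {
    Orchestrator orch;
    // Each task-specific node adds one narrow skill; the value is in the sum of them.
    orch.registerNode("weather", [](const std::string& city) { return "sunny in " + city; });
    orch.registerNode("lights",  [](const std::string& cmd)  { return "lights " + cmd; });
    std::cout << orch.dispatch("weather", "Berlin") << "\n";
  }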
I'm a simple man; I see ESP32, I upvote
What’s the best lightweight “claw” style agent for Linux? It doesn’t necessarily need containerisation or sandboxing as it would be run on a fresh vps with no access to important data.
Wow, the rare

  bash <(curl foo.sh)

pattern, as opposed to the more common

  curl foo.sh | bash
Equivalent but just as unsafe. If you must do this, try one of these instead:

  # Gives you a copy of the file, but still streams to bash
  curl foo.sh | tee /tmp/foo.sh | bash

  # No copy of the file, but ensures the stream finishes before bash runs
  bash -c "$(curl foo.sh)"

  # Best: gives you a copy of the file and ensures the stream finishes
  curl foo.sh -o /tmp/foo.sh && bash $_
I prefer the last one.

This is a great example of how silly this whole thing is. There's next to nothing to these claws. Turns out that if you give an LLM the ability to call APIs, it will.
Can't you make a personal AI assistant in a bash loop of two lines?
1. Call your favorite multimodal LLM
2. Execute the command in the terminal, piping the output back to the LLM
In fact you can just have one line: Call LLM > bash.sh
and the LLM can simply tell bash to call itself incidentally, or fan out to many "agents" working on your behalf.

Use your favorite programming language. Just as pwnable in any of them :)
  $task = "Send pictures of cute cats";
  $context = "Output a bash script to do $task.
    The bash script should return the next prompt to you.
    Keep going until task is done.
    My keys to all my accounts: $keys.
    Plz dont pwn me";
  do {
      $trust_me_bro_my_model_rocks_RCE = call_llm($context);
      $context = exec($trust_me_bro_my_model_rocks_RCE);
  } while ($trust_me_bro_my_model_rocks_RCE && !$pwned);

Relevant: https://github.com/sipeed/picoclaw
This is absolutely glorious. We used to talk about "smart devices" and IoT… I would be so curious to see what would happen if these connected devices had a bit more agency and communicative power. It's easy to imagine the downsides, and I don't want my email to be managed from an ESP32 device, but what else could this unlock?
Genuinely curious - did you use a coding agent for most of this, or does this level of performance take hand-written code?
Really looking for a minimal assistant that works with _locally hosted models_. Are there any options?
"LLM backends: Anthropic, OpenAI, OpenRouter."
And here I was hoping that this was local inference :)
I don't understand what this is for or why you would ever want to do this. Is it not just a glorified HTTP wrapper?
Serious request... I genuinely want to understand. Give me a practical use case?
Is there a heartbeat alternative? I feel like this is the magic behind OpenClaw and what gives it the "self-driven" feel.
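My guess at what the heartbeat amounts to (this is not OpenClaw's actual code) is just a timer that periodically injects a synthetic turn so the agent acts without being prompted. On an ESP32 that's a few lines with millis():

  #include <Arduino.h>

  // Assumption: a "heartbeat" is a periodic synthetic prompt fed into the normal agent loop.
  const unsigned long HEARTBEAT_MS = 5UL * 60UL * 1000UL;  // arbitrary: every 5 minutes
  unsigned long lastBeat = 0;

  void runAgentTurn(const String& prompt) {
    // Stand-in for whatever normally handles a user message (LLM call, tool dispatch, ...).
    Serial.println("agent turn: " + prompt);
  }

  void setup() {
    Serial.begin(115200);
  }

  void loop() {
    if (millis() - lastBeat >= HEARTBEAT_MS) {
      lastBeat = millis();
      runAgentTurn("heartbeat: check schedules, reminders, and pending tasks");
    }
    // ...real user input would be handled here as usual...
  }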
My new DIY laptop has 400 GB of RAM accessible and it runs only ESP32s*
____
* Requires external RAM subscription
I think you can use C++ on the ESP32; that would make the code more readable.
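Agreed. The Arduino core is already compiled as C++, so standard-library containers and lambdas work out of the box; a made-up example (pin number is just a common devkit LED):

  #include <Arduino.h>
  #include <functional>
  #include <vector>

  const int LED_PIN = 2;  // built-in LED on many ESP32 devkits

  // A table of named actions instead of a switch over magic ints.
  struct Action {
    const char* name;
    std::function<void()> run;
  };

  std::vector<Action> actions = {
    {"blink", [] { static bool on = false; on = !on; digitalWrite(LED_PIN, on); }},
    {"hello", [] { Serial.println("hello from C++"); }},
  };

  void setup() {
    Serial.begin(115200);
    pinMode(LED_PIN, OUTPUT);
  }

  void loop() {
    for (auto& a : actions) a.run();   // range-for and lambdas, no special setup needed
    delay(1000);
  }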
Serious question: why? What are the use cases and workflows?
Can we please move past this whole OpenClaw hype?
Yes, it's an LLM in a loop that can call tools. This also existed six months and a year ago, and it was called an AI agent.
And yes, we can all vibe-code them in 1,000, 2,000, or 10,000 lines of code in Zig, Rust, or even C.
Game over, man. Game over.
Oh wow, more AI slop.
Sorry for being dense—does this include a tiny LLM to power the agent? Or is it just a wrapper that needs to be connected to the internet?