Hacker News

Show HN: Pollen – distributed WASM runtime, no control plane, single binary

117 points by sambigeara, last Thursday at 1:15 PM | 55 comments

Comments

samantp · today at 5:27 AM

This is so nice. Encouraging to see such persistent serious efforts in the local-first, control-resistant tech space, even knowing it is a long uphill climb. Hope all the fragmented efforts help move toward something really formidable one day.

dbalatero · yesterday at 1:56 PM

I suspect you have something cool, but I think if you told a clearer example story that solves a real-world problem on the homepage it might alleviate some questions I'm seeing (and also having) in the thread here!

sambigeara · last Thursday at 1:15 PM

Hi everyone, I'm Sam. I started Pollen as an experiment last summer, got carried away, and have landed here.

It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
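For flavour, deterministic placement from gossiped membership is often done with rendezvous (highest-random-weight) hashing: every node ranks the same membership list the same way, so no coordinator is needed and survivors re-rank identically when a peer drops out. This is only an illustrative sketch of the technique, not Pollen's actual code path:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// score deterministically ranks a (node, workload) pair; every peer
// that sees the same membership list computes identical scores.
func score(node, workload string) uint64 {
	h := sha256.Sum256([]byte(node + "/" + workload))
	return binary.BigEndian.Uint64(h[:8])
}

// place picks the top-n nodes for a workload via rendezvous hashing.
// When a node vanishes from the gossiped membership, only workloads
// that ranked it re-place; everything else stays put.
func place(nodes []string, workload string, n int) []string {
	ranked := append([]string(nil), nodes...)
	sort.Slice(ranked, func(i, j int) bool {
		return score(ranked[i], workload) > score(ranked[j], workload)
	})
	if n > len(ranked) {
		n = len(ranked)
	}
	return ranked[:n]
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c", "node-d"}
	fmt.Println(place(nodes, "svc/api", 2))
	// Drop a node: placements not involving it are unchanged.
	fmt.Println(place(nodes[:3], "svc/api", 2))
}
```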

I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see if said systems could be applied to flip the traditional workload orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand. So I've attempted to make Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).

It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!

Very happy to answer anything in the thread!

Cheers.

Docs: https://docs.pln.sh

monster_truck · yesterday at 1:49 PM

This is neat, what does the actual throughput look like though?

Have been hacking on a wasm+webtransport stack for distributed simulation workers and hit the ceiling of one connection/worker per machine pretty quickly. Had to pin adapters/workers to cores to get the latency I was expecting, then needed dedicated tx/rx adapters to eliminate jitter. Some bullshit about interrupt scheduling.

kaoD · yesterday at 1:45 PM

I know the individual words in the description but I'm a bit confused about what this is.

What would I use Pollen for?

I'm not sure I understand the "seed" metaphor.

ivere27 · today at 3:09 AM

Nice project. Can I use it like a microservice platform in a small company: installing Pollen on all the computers, then running business logic there instead of 'central servers'? Good to have WASM sandboxing everywhere.

"Use idle company machines as a decentralized, sandboxed microservice cluster"

sambigeara · yesterday at 1:54 PM

No idea why this post has picked up traction 2 days later, I’m out and about right now but will endeavour to respond thoughtfully when I’m back at my keyboard later on!

jitl · yesterday at 1:56 PM

Wow, this is super cool. It almost feels like a DIY pocket-Cloudflare. I’m curious how a WASM binary gets mapped to HTTP endpoints that take JSON, how much of that is Pollen vs Extism? Are the routes encoded in the WASM binary somehow?

m_ramdhan · yesterday at 6:24 PM

Really neat project. The idea of a fully decentralized leaderless WASM runtime is bold. My main question is around failure modes -- how does it handle network segmentation or split-brain scenarios? Does the gossip protocol deal with this gracefully, or is there an eventual consistency aspect that workloads need to be aware of?

evacchi · yesterday at 5:06 PM

I am a simple man, I see wazero, I upvote :)

(I am one of the maintainers; interesting work!)

hsaliak · yesterday at 2:03 PM

Using CRDT gossip to inform scaling is a clever idea. You are on to something there. Perhaps extract it as a core library/concept from the runtime? I feel that would be generally useful!
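For anyone unfamiliar, the appeal is that state-based CRDT merges are commutative, associative, and idempotent, so a scaling signal can be gossiped in any order and still converge. Roughly (illustrative Go, not Pollen's actual types):

```go
package main

import "fmt"

// Sample is one node's latest demand reading, versioned so that a
// newer reading always wins regardless of gossip arrival order.
type Sample struct {
	Version uint64
	Demand  float64
}

// DemandMap is a tiny state-based CRDT: per-node samples with
// last-writer-wins merge on the version counter.
type DemandMap map[string]Sample

// Merge folds another replica's state into ours. Applying it twice,
// or in any order, yields the same result (idempotent, commutative).
func (m DemandMap) Merge(other DemandMap) {
	for node, s := range other {
		if cur, ok := m[node]; !ok || s.Version > cur.Version {
			m[node] = s
		}
	}
}

// Total is the scaling signal: aggregate demand across known nodes.
func (m DemandMap) Total() float64 {
	var t float64
	for _, s := range m {
		t += s.Demand
	}
	return t
}

func main() {
	a := DemandMap{"n1": {1, 10}, "n2": {1, 5}}
	b := DemandMap{"n2": {2, 8}, "n3": {1, 3}}
	a.Merge(b)
	fmt.Println(a.Total()) // 10 + 8 + 3 = 21
}
```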

omginternets · today at 12:25 AM

Oh wow, this hits close to home. Hope it’s not bad form to plug an adjacent project on your post — felt too aligned not to, and it seems like we've converged on quite similar ideas! I’ve been working on Wetware [0], which has significant overlap on the substrate (P2P WASM, content-addressed, single binary, no control plane).

Different design center, though. Wetware is aimed at people building multi-tool agent products who’ve hit Simon Willison’s lethal trifecta [1], so the design pressure goes elsewhere: cells are fully async WASM/WASI procs (cheap to suspend, parallel by default); inter-cell calls go over object-capability RPC (Cap’n Proto); and there’s a tiny Clojure-inspired Lisp (“Glia”) that doubles as an LLM-facing or human-facing shell. It’s pure by default: an algebraic effect system gating every impure operation exists today, with content-addressable, immutable data structures planned. An agent (human or otherwise) can list, attenuate, and invoke just the caps it’s been granted, and you can see at a glance which fragment of code can actually touch the world.

The cap-vs-ACL bit seems to be the main point of divergence, AFAICT. Pollen’s grant docs show capabilities as cert-baked properties the callee inspects in user code (closer to attribute-based access control than to invocation-time cap tokens, and a clean fit for trusted-cluster ops like delegating admin or roles -- very sensible and def don't want to knock it!). Wetware leans the other way on the spectrum: caps are unforgeable references to specific methods (Cap’n Proto), the runtime enforces that nobody can call a method they don’t hold, and attenuation happens by grafting a strict subset of those references to a child cell with per-method granularity. So tool-calls-tool composes naturally, and the worst case of pulling a sketchy MCP server off GitHub becomes “the call fails,” not “depends whether the seed wrote its property check correctly.”
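To make the distinction concrete, here's a toy Go sketch of cap-style attenuation (not Wetware's actual Cap'n Proto API, just the shape of the idea):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ReadCap is an unforgeable capability: a cell that was never handed
// this function value simply has no way to invoke a read.
type ReadCap func(path string) (string, error)

// FileTool is the full-powered object held by the parent cell.
type FileTool struct{ files map[string]string }

func (t *FileTool) Read(path string) (string, error) {
	v, ok := t.files[path]
	if !ok {
		return "", errors.New("not found")
	}
	return v, nil
}

func (t *FileTool) Write(path, data string) { t.files[path] = data }

// Attenuate grafts a strict subset of authority onto a child: reads
// only, and only under the given prefix. Write is never passed along,
// so the worst case for a misbehaving child is "the call fails".
func Attenuate(t *FileTool, prefix string) ReadCap {
	return func(path string) (string, error) {
		if !strings.HasPrefix(path, prefix) {
			return "", errors.New("outside granted scope")
		}
		return t.Read(path)
	}
}

func main() {
	tool := &FileTool{files: map[string]string{"pub/readme": "hi"}}
	read := Attenuate(tool, "pub/")
	v, _ := read("pub/readme")
	fmt.Println(v) // hi
	_, err := read("secret/key")
	fmt.Println(err != nil) // true
}
```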

It's less polished than what Sam has shipped, but moving fast, and this post has jolted me into sharing a bit earlier than I'd planned! Sam, would love to compare notes if you're open to it. And I'd also love to talk to anyone who’s shipped a multi-tool agent and gotten bitten: pwned in an eval, legal blocking a third-party integration, can’t audit every MCP server you depend on. We’re in the first 100 conversations.

Either way, congrats on shipping — the 10-node demo is super slick, and “pure Go, no CGO” is IMO a major win :)

[0] https://github.com/wetware/ww [1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

docheinestages · yesterday at 3:35 PM

Even after looking at the homepage and the GitHub README, I don't really understand how this could help.

esafak · yesterday at 3:57 PM

Did you have any applications in mind when you were designing this? Any weakness in precedents that you wanted to rectify? Are you familiar with Lunatic (https://lunatic.solutions/), and wasmCloud (https://wasmcloud.com/) ?

Remi_Etien · last Thursday at 2:19 PM

[flagged]

Huzzi · yesterday at 3:19 PM

[flagged]