Hi everyone, I'm Sam. I started Pollen as an experiment last summer, got carried away, and have landed here.
It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
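To give a feel for what "topology derived deterministically from gossiped state" can mean, here's a minimal sketch in Go. It is illustrative only, not Pollen's actual algorithm: it uses rendezvous (HRW) hashing, where every peer ranks nodes for a workload by a shared hash of (workload, node), so all peers holding the same membership list agree on placement without any coordination, and a node failure only moves the workloads that were placed on it.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// score gives a deterministic weight for a (workload, node) pair; every
// peer computes identical scores from the same gossiped membership list.
func score(workload, node string) uint64 {
	h := sha256.Sum256([]byte(workload + "|" + node))
	return binary.BigEndian.Uint64(h[:8])
}

// place picks the top-k nodes for a workload by rendezvous hashing.
// The result depends only on the workload name and the node set, not on
// which peer runs the computation or the order nodes were learned in.
func place(workload string, nodes []string, k int) []string {
	sorted := append([]string(nil), nodes...)
	sort.Slice(sorted, func(i, j int) bool {
		return score(workload, sorted[i]) > score(workload, sorted[j])
	})
	if k > len(sorted) {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c", "node-d"}
	fmt.Println(place("web", nodes, 2))
	// If a node fails, survivors recompute from the shrunken membership
	// list; only workloads that were placed on the dead node move.
	fmt.Println(place("web", []string{"node-b", "node-c", "node-d"}, 2))
}
```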
I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see if said systems could be applied to flip the traditional workload orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand. So I've attempted to make Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).
It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!
Very happy to answer anything in the thread!
Cheers.
Docs: https://docs.pln.sh
Interesting project.
In a potential modern cloud, having globally named primitives (compute, storage, messaging) could unlock much wider applications. Have you come across any such use cases?
You have some workload demos, which I'll definitely try out, but could you paint us an example use case of the technology?
What are the workloads in the runtime capable of?
This is very interesting! I agree about the operational complexity of many systems, cough Kubernetes.
For most systems, state storage is the toughest problem. Have you considered adding some form of storage layer on top, or would you recommend another solution that allows all the workloads to share state?
From someone who definitely doesn't fully understand what you made, this looks really cool!
I'm seeing some functionality that seems like it could replace some personal services I currently host via my Tailscale network. Am I understanding this correctly? If so, do you have a feel for what the performance implications would be?
this is a great direction - self organizing service meshes that don't require an infinite tower of manually configured turtles to rest upon. state management is really going to be an interesting problem, and I encourage you to post back here with your thoughts. strong consistency kind of gets you back to turtle-land pretty quickly, and free-for-all eventual consistency turns out not to be a good foundation. the sane middle ground seems to be monotonic eventual consistency, which I think is what CRDTs get you.
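To make the monotonic-merge point concrete, here's the classic G-counter CRDT as a minimal Go sketch (illustrative only, not anything Pollen ships): each node increments its own slot, and merge takes the per-node maximum, so merging is commutative, associative, and idempotent, and replicated state can only move "up".

```go
package main

import "fmt"

// GCounter is a grow-only counter CRDT keyed by node ID. Each node
// increments only its own slot; merge takes the per-slot maximum.
type GCounter map[string]uint64

// Inc records one increment attributed to the given node.
func (g GCounter) Inc(node string) { g[node]++ }

// Merge folds another replica's state in by taking per-node maxima.
// Applying the same merge twice changes nothing (idempotent).
func (g GCounter) Merge(other GCounter) {
	for node, n := range other {
		if n > g[node] {
			g[node] = n
		}
	}
}

// Value is the total count across all nodes.
func (g GCounter) Value() uint64 {
	var total uint64
	for _, n := range g {
		total += n
	}
	return total
}

func main() {
	a, b := GCounter{}, GCounter{}
	a.Inc("a")
	a.Inc("a") // two increments on node a
	b.Inc("b") // one increment on node b, concurrently
	a.Merge(b)
	b.Merge(a)
	fmt.Println(a.Value(), b.Value()) // both replicas converge to 3
}
```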
I wish more projects were conceived this way, instead of assuming that there's a Kubernetes cluster, another database or two, and some message queues to lash it all together.
This is incredible.
We’re building an AWS-analogue catalogue of services (Databases, Compute, Auth, etc.) for distributed systems.
Want a job doing Pollen-like dev full time?
[email protected]
Either way, would be great to compare notes!