Hacker News

sambigeara · yesterday at 4:41 PM

OK bear with me on this, it'll probably be an idle thought-stream because I don't have a concrete answer right now.

My intention is for Pollen to become a "generic blob of computational capability" into which you idly `pln seed` a workload and do not have to worry about ANY aspects of managing locality, scale, redundancy etc. You seed a workload onto any node, and you call it from any (other?) node. If you want to add more computational power to the cluster, you fire up Pollen on another machine and `pln invite` -> `pln join`.
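The intended workflow, as a rough shell sketch (only the subcommand names `pln seed`, `pln invite`, and `pln join` are from the comment; the arguments are illustrative guesses):

```shell
# On any existing node: deploy a workload into the cluster
pln seed ./my-workload.wasm

# Grow the cluster: generate an invite on an existing node...
pln invite

# ...then, on the new machine, join with the resulting token
pln join <invite-token>
```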

Every node also has its own ed25519 cert. The root key pair (the "don't lose this or you're in trouble" key pair) is used to delegate admin certs to other nodes. I'm also working on a mechanism which allows you to bake any arbitrary properties into a cert (as it stands, these are lifted into the WASM guest code for, say, in-application authz purposes). I have more ideas about how this can be extended in the future.

The root authority can invalidate a participating peer's cert at any point, currently just via a `pln deny` command which is eagerly gossiped around the cluster so other nodes stop talking to the denied node, too. I think this offers some opportunities for some fairly novel applications. Perhaps, in the future, you'll provision a node with a certain level of capability or authority to run on some external infrastructure. It'll have all of the (allowed) capabilities of your cluster, but will act like it's local to the external system. Plus, you can revoke its access or reset its capabilities at any point; `pln grant` eagerly applies across the cluster, too.

The workloads, at the moment, are just anything you can compile to WASM via the Extism PDK. Stateless, for now, but with a view to adding shared state and persistence in the near future!
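A stateless workload in that sense could look like this Extism Go PDK plugin (compiled to WASM with TinyGo, e.g. `tinygo build -target wasi`). The function name and its input/output handling are my own illustration, not anything Pollen-specific:

```go
package main

import (
	"strings"

	pdk "github.com/extism/go-pdk"
)

//export shout
func shout() int32 {
	input := pdk.InputString()               // payload supplied by the host on each call
	pdk.OutputString(strings.ToUpper(input)) // result returned to the caller
	return 0
}

func main() {} // required by TinyGo; unused by the plugin
```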

Sorry this was rambly, hopefully it offered something useful.


Replies

imcritic · today at 3:21 AM

Splitting a big task (like anything ML-related) into a set of smaller ones and distributing them across a "fleet" of workers, then reaping the results and stitching them back into a single artifact at the end. This could be commercially viable. It could even become a p2p platform/market where some people basically buy computation while others offer their hardware for temporary rent to earn a few bucks. You become the coordinator that just connects the demand with the supply and get rich from commissions alone.
