This is neat, what does the actual throughput look like though?
I've been hacking on a wasm+WebTransport stack for distributed simulation workers and hit the ceiling of one connection/worker per machine pretty quickly. Had to pin adapters/workers to cores to get the latency I was expecting, then needed dedicated tx/rx adapters to eliminate jitter. Some bullshit about interrupt scheduling.
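For anyone curious, the core-pinning part is doable from stdlib Python on Linux; this is just a toy sketch of the idea, not the actual wasm worker stack described above (`pin_to_core` is a made-up helper name):

```python
import os

def pin_to_core(core: int) -> set[int]:
    """Pin the calling process to a single CPU core (Linux only)."""
    if hasattr(os, "sched_setaffinity"):  # not available on macOS/Windows
        os.sched_setaffinity(0, {core})   # 0 = current process
        return os.sched_getaffinity(0)    # confirm the new affinity mask
    return set()  # no-op on platforms without the syscall

if __name__ == "__main__":
    print(pin_to_core(0))
```

The same trick (or `taskset` at launch) is what keeps a worker from being bounced between cores by the scheduler, which is where a lot of tail-latency jitter comes from.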
It’s a really interesting question.
The real challenge is gating and reserving "slots" for downstream calls. If seed A on one node calls seed B on another, as it stands, Pollen holds that seed A instance open and waiting (with the memory overhead that entails) until the response finds its way back across the cluster.
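A toy illustration of that held-open pattern (this isn't Pollen's real API, all the names here are made up; the point is just that every in-flight cross-node call keeps the caller's state resident until the reply arrives):

```python
import threading
import queue
import time

def call_remote(request: str, network_latency_s: float = 0.05) -> str:
    """Simulate seed A invoking seed B on another node."""
    response_q = queue.Queue(maxsize=1)  # one reserved slot per in-flight call

    def remote_seed_b() -> None:
        time.sleep(network_latency_s)    # stand-in for the cluster round trip
        response_q.put(f"echo:{request}")

    threading.Thread(target=remote_seed_b, daemon=True).start()
    # The caller's instance (stack, wasm memory, etc.) stays resident here,
    # blocked on the queue, until the response lands. With high latencies and
    # many concurrent callers, this per-call footprint becomes the ceiling.
    return response_q.get()

print(call_remote("ping"))  # echo:ping
```

Multiply that blocked state by thousands of concurrent cross-region calls and you can see why memory, not CPU, becomes the limit in the multi-hop case.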
You can probably imagine how latencies then start impacting this (especially when a node in US-West is generating traffic that ultimately needs to land on my laptop in the UK), not to mention all of the contention from other nodes elsewhere generating load too.
In the demo, I see about 2,500 rps land on my laptop with 4k-5k generated across 4-5 nodes globally, but this is a multi-hop scenario. If a call is only invoking a single, light WASM function, I see much higher throughput.
The project is in its infancy, no doubt I’ll have lots of fun figuring out how to optimise as it progresses!
In the multi-hop scenario above, memory seems to be the ceiling; in the single-call case, CPU.
(Edit: these numbers are really quite meaningless but I wanted to give something tangible)