Interesting direction, but the real question is whether this survives hostile, real-world workloads.
Moving isolation into the runtime (WASIX + shims) sounds great for latency, but it also shifts a lot of trust away from the kernel. In multi-tenant scenarios, that tradeoff tends to break under pressure.
The bar isn’t “can it run JS fast”, it’s:
- can it safely run untrusted, adversarial code
- with full npm compatibility
- at high concurrency
- without escape vectors or resource abuse
Concrete question:
Would you be comfortable running something like OpenClaw (multi-tenant agent workloads, arbitrary user-generated code, long-running tasks) on top of this today?
If yes, what are the isolation guarantees and known failure modes?
If not, where does it break first — syscalls, native modules, or resource isolation?
This feels promising for LLM code execution, but that use case is exactly where things get adversarial fast.
The idea, as I understand it, is not to run edgejs multi-tenant in the sense of having multiple tenants under the same edgejs process. Instead, you spawn one edgejs process per tenant. So in the OpenClaw example, each sandboxed call would be a new edgejs process.
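To make that concrete, here is a minimal sketch of the process-per-call pattern. The `edgejs` invocation is an assumption (its real binary name and flags aren't specified here), so this example spawns `node` as a stand-in interpreter; `runSandboxed` and its limits are hypothetical choices, not part of the project:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Hypothetical sketch: one fresh OS process per tenant call.
// A new process per call means tenants never share a heap, and the
// kernel (not the runtime) enforces the isolation boundary.
// Replace "node" with the real edgejs binary and flags.
async function runSandboxed(tenantId: string, code: string): Promise<string> {
  const { stdout } = await run("node", ["-e", code], {
    timeout: 5_000,          // kill long-running tasks
    maxBuffer: 1024 * 1024,  // cap output to limit resource abuse
  });
  return stdout.trim();
}
```

The tradeoff is cold-start cost per call versus shared-process efficiency: you pay a process spawn on every invocation, but resource limits and escape containment reduce to per-process OS controls rather than in-runtime trust.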
This is LLM-written.