Author here.
In my opinion, the main driver here is how fast models have evolved over the past 12 months. It makes the architecture of everything around them obsolete very quickly.
We went from using models as a building block, wrapping them in heavy workflow code, to models that are smart enough to drive their own workflows and planning.
Really enjoyed your post, by the way. The idea of putting skills and memories in a database while keeping the file-shaped interface for the agent is clean. One read/write surface, two backends, invisible to the model: that's a nice piece of design, and the candor in the "what's still hard" section made me trust the rest of the post. My comment above was meant as a joke, not as a dig at your architecture. If this pattern becomes the standard, I'll happily migrate my workflow again.
One thing I wonder about is whether path routing alone is enough.
If `/workspace` goes to the sandbox and `/memory` or `/skills` goes to the database, the path tells you where to send the request. But it does not tell you whether this user, session, or agent is allowed to access it.
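To make the distinction concrete, here's a minimal sketch of what I mean by pure path routing. The mount table and backend names are my own illustration, not from your post:

```python
# Hypothetical mount table: path prefix -> backend.
# Routing answers "where does this request go",
# but says nothing about "who may make it".
MOUNTS = {
    "/workspace": "sandbox",
    "/memory": "database",
    "/skills": "database",
}

def route(path: str) -> str:
    """Pick a backend from the longest matching path prefix."""
    for prefix, backend in sorted(MOUNTS.items(), key=lambda kv: -len(kv[0])):
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    raise FileNotFoundError(f"no backend mounted for {path}")

print(route("/memory/notes.md"))   # database
print(route("/workspace/app.py"))  # sandbox
```

Note that `route` happily resolves any path under a mount, regardless of which user or session is asking.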
When I built something similar with an MCP filesystem, I found that I needed a scope check before actually running the operation. In my case, I was using GPT dev mode through a Cloudflare tunnel to control my local environment/model, so this kind of boundary became important.
So I like the path-routing idea, but I wonder if it eventually needs a scope or permission layer as well.
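The scope check I ended up with looked roughly like this. It's a sketch under my own assumptions: the scope-string format (`"memory:r"`, `"workspace:rw"`) and the `Session` shape are things I made up for my setup, not anything from your design:

```python
from dataclasses import dataclass

# Hypothetical session object: who is asking, and with what scopes.
@dataclass(frozen=True)
class Session:
    user: str
    scopes: frozenset  # e.g. {"workspace:rw", "memory:r"}

# Path prefix -> scope name (illustrative).
MOUNTS = {"/workspace": "workspace", "/memory": "memory", "/skills": "skills"}

def allowed(session: Session, path: str, op: str) -> bool:
    """Check scope before routing; op is 'r' or 'w'.

    Deny by default: paths outside any mount are rejected outright.
    """
    for prefix, scope in MOUNTS.items():
        if path == prefix or path.startswith(prefix + "/"):
            if f"{scope}:rw" in session.scopes:
                return True
            return op == "r" and f"{scope}:r" in session.scopes
    return False

s = Session("alice", frozenset({"workspace:rw", "memory:r"}))
assert allowed(s, "/memory/notes.md", "r")       # read-only scope: ok
assert not allowed(s, "/memory/notes.md", "w")   # no write scope: denied
assert not allowed(s, "/etc/passwd", "r")        # unmounted path: denied
```

The point is just the ordering: the permission check runs before the operation is routed anywhere, so the database never sees a request the session wasn't entitled to make.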