
stavros · yesterday at 8:39 PM · 1 reply

So basically there's a /chat endpoint that goes to the LLM (a Pi agent), which has access to call specific tools (web search, SQL execution, cron) but doesn't have filesystem access, so the only thing it can do is exfiltrate data it can see (pretty big, but you can't really avoid that, and it doesn't have access to anything on the host system). There's a Signal bridge that runs in another container to connect to Signal, a Telegram webhook, and the other big components are a coding agent and a tool container. The coding agent can write files to a directory that's also mounted in the tool container, and the tool container can run the tools. That way you separate the coder from everything else, and nothing has access to any of your keys.
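To give a rough idea of the whitelisting part (this is just a sketch; the tool and function names are made up, not the actual code):

    from typing import Callable, Dict

    def web_search(query: str) -> str:
        return f"(results for {query!r})"         # stub: the real tool calls a search API

    def run_sql(statement: str) -> str:
        return f"(rows from {statement!r})"       # stub: the real tool hits the agent's own DB

    def schedule_cron(spec: str, task: str) -> str:
        return f"(scheduled {task!r} at {spec})"  # stub: the real tool registers a cron job

    # The chat agent only ever sees this registry: no filesystem tool, no shell tool.
    TOOLS: Dict[str, Callable[..., str]] = {
        "web_search": web_search,
        "run_sql": run_sql,
        "schedule_cron": schedule_cron,
    }

    def dispatch(tool_name: str, **kwargs) -> str:
        # Anything the model asks for outside the whitelist is refused, not executed.
        if tool_name not in TOOLS:
            raise PermissionError(f"tool not allowed: {tool_name}")
        return TOOLS[tool_name](**kwargs)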

You can't really avoid the coder exfiltrating your tool secrets, but at least it's separated. I also want to add a secondary container of "trusted" tools that the main LLM can call but no other LLM can change.
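One way that separation could look (again just an illustration of the idea, since this part isn't built yet): the trusted tools live on a read-only mount, and the coding agent can only ever write into a separate directory.

    from pathlib import Path

    TRUSTED_DIR = Path("/tools/trusted")      # mounted read-only: only humans put tools here
    UNTRUSTED_DIR = Path("/tools/generated")  # the only place the coding agent can write

    def load_tool(name: str, allow_untrusted: bool = False) -> str:
        trusted = TRUSTED_DIR / f"{name}.py"
        if trusted.exists():
            return trusted.read_text()
        untrusted = UNTRUSTED_DIR / f"{name}.py"
        if allow_untrusted and untrusted.exists():
            return untrusted.read_text()
        raise PermissionError(f"no tool named {name!r} at this trust level")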

This way you're assured that, for example, the agent can't contact anyone you don't want it to contact, or that it can read your emails but not send or delete them, things like that. It makes it very easy to enforce ACLs for things you don't want LLM-coded, but also enables LLM coding of less-trusted programs.
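The "read but not send/delete" case, for instance, can be enforced entirely in trusted, human-written code, so no prompt injection can grant the agent a verb it doesn't have. Something along these lines (hypothetical names, just to show the shape of it):

    # Hypothetical ACL wrapper around a tool; the policy never passes through the LLM.
    ACL = {
        "email": {"read"},              # read-only: no send, no delete
        "contacts": {"read"},
        "calendar": {"read", "create"},
    }

    def authorize(resource: str, action: str) -> None:
        if action not in ACL.get(resource, set()):
            raise PermissionError(f"{action} on {resource} is not permitted")

    def email_tool(action: str, **kwargs):
        authorize("email", action)      # raises before anything touches the mailbox
        ...                             # actual mailbox access would go here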


Replies

stavros · today at 3:29 AM

And now it can even make private (and public!) dynamic websites that have access to data from your database, while exposing only the data you want exposed.

I'm really liking it. For example, I created a page to show my favorite restaurants per city:

https://stavrobot.home.stavros.io/pages/restaurants

That's dynamic, loading from the database, and updating live when the assistant creates new entries.

This page was created just by telling the assistant "make me a page to show my favorite restaurants, with their ratings, grouped by city".
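Under the hood a page like that boils down to one query plus a template, rendered on each request so new entries show up immediately. A self-contained approximation (not the actual stavrobot code; the table and column names are assumptions):

    import sqlite3
    from collections import defaultdict

    def restaurants_page(db_path: str = "assistant.db") -> str:
        # Re-query the assistant's database on every request, so the page stays live.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT city, name, rating FROM restaurants ORDER BY city, rating DESC"
        ).fetchall()
        by_city = defaultdict(list)
        for city, name, rating in rows:
            by_city[city].append(f"<li>{name} ({rating}/5)</li>")
        sections = "".join(
            f"<h2>{city}</h2><ul>{''.join(items)}</ul>" for city, items in by_city.items()
        )
        return f"<html><body><h1>Favorite restaurants</h1>{sections}</body></html>"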