logoalt Hacker News

rellfyyesterday at 7:26 AM5 repliesview on HN

The lethal trifecta is the most important problem to be solved in this space right now.

I can only think of two ways to address it:

1. Gate all sensitive operations (i.e. all external data flows) through a manual confirmation system, such as an OTP code that the human operator needs to manually approve every time, and also review the content being sent out. Cons: decision fatigue over time, can only feasibly be used if the agent only communicates externally infrequently or if the decision is easy to make by reading the data flowing out (wouldn't work if you need to review a 20-page PDF every time).

2. Design around the lethal trifecta: your agent can only have 2 legs instead of all 3. I believe this is the most robust approach for all use cases that support it. For example, agents that are privately accessed, and can work with private data and untrusted content but cannot externally communicate.

I'd be interested to know if you have reached similar conclusions or have a different approach to it?


Replies

ryanrastiyesterday at 7:38 AM

Yeah, those are valid approaches and both have real limitations as you noted.

The third path: fine-grained object-capabilities and attenuation based on data provenance. More simply, the legs narrow based on what the agent has done (e.g., read of sensitive data or untrusted data)

Example: agent reads an email from [email protected]. After that, it can only send replies to the thread (alice). It still has external communication, but scope is constrained to ensure it doesn't leak sensitive information.

The basic idea is applying systems security principles (object-capabilities and IFC) to agents. There's a lot more to it -- and it doesn't solve every problem -- but it gets us a lot closer.

Happy to share more details if you're interested.

show 2 replies
veganmosfetyesterday at 3:21 PM

Imho a combination of different layers and methods can reduce the risk (but it's not 0): * Use frontier LLMs - they have the best detection. A good system prompt can also help a lot (most authoritative channel). * Reduce downstream permissions and tool usage to the minimum, depending on the agentic use case (Main chat / Heartbeat / Cronjob...). Use human-in-the-loop escalation outside the LLM. * For potentially attacker controlled content (external emails, messages, web), always use the "tool" channel / message role (not "user" or "system"). * Follow state of the art security in general (separation, permission, control...). * Test. We are still in the discovery phase.

trenchgunyesterday at 7:38 AM

You could have a multi agent harness that constraints each agent role with only the needed capabilities. If the agent reads untrusted input, it can only run read only tools and communicate to to use. Or maybe have all the code running goin on a sandbox, and then if needed, user can make the important decision of effecting the real world.

show 2 replies
eek2121yesterday at 9:36 AM

Someone above posted a link to wardgate, which hides api keys and can limit certain actions. Perhaps an extension of that would be some type of way to scope access with even more granularity.

Realistically though, these agents are going to need access to at least SOME of your data in order to work.

show 1 reply
sumitkumaryesterday at 10:46 AM

One more thing to add is that the external communication code/infra is not written/managed by the agents and is part of a vetted distribution process.