Hacker News

andybak · yesterday at 11:42 PM

> solved

Have you considered that it's unsolvable? Or - at least - there is an irreconcilable tension between capability and safety. And people will always choose the former if given the choice.


Replies

zmmmmm · today at 5:02 AM

In a pure sense, no, it's probably not completely solvable. But in a practical sense, yes, I think it's solvable enough to support broad use cases of significant value.

The most unsolvable part is prompt injection. For that you need full tracking of the trust level of content the agent is exposed to, and a way of linking that to the actions available to it. I actually think this needs to be fully integrated into the sandboxing solution. Once an agent is "tainted", its sandbox should inherently shrink down to a radius where risk is balanced with value. For example, my fully trusted agent might have a balance of $1000 in my AWS account, while a tainted one might have that reduced to $50.

So another aspect of sandboxing is to make the security model dynamic.
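The taint-then-shrink idea above can be sketched roughly as follows. This is a minimal illustration, not any existing system's API: the class, the dollar limits, and the `trusted` flag are all hypothetical, standing in for real provenance tracking.

```python
from dataclasses import dataclass

# Hypothetical limits: a fully trusted agent may spend $1000, while an
# agent exposed to untrusted content is cut down to a $50 blast radius.
TRUSTED_LIMIT = 1000
TAINTED_LIMIT = 50

@dataclass
class AgentSandbox:
    tainted: bool = False

    def ingest(self, content: str, trusted: bool) -> None:
        # Any exposure to untrusted content permanently taints the session.
        if not trusted:
            self.tainted = True

    @property
    def spend_limit(self) -> int:
        # The sandbox shrinks dynamically based on taint status.
        return TAINTED_LIMIT if self.tainted else TRUSTED_LIMIT

    def authorize_spend(self, amount: int) -> bool:
        return amount <= self.spend_limit

sandbox = AgentSandbox()
print(sandbox.authorize_spend(500))   # trusted: within the $1000 limit
sandbox.ingest("<html fetched from a random site>", trusted=False)
print(sandbox.authorize_spend(500))   # tainted: limit is now $50
print(sandbox.authorize_spend(25))
```

A real implementation would need taint to propagate through everything derived from untrusted input, which is the hard part the comment alludes to.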

skybrian · today at 2:37 AM

I don't know about solved, but I've seen some interesting ideas for making it safer, so I think it could be improved.

One idea is to have the coding agent write a security policy in plan mode before reading any untrusted files:

https://dystopiabreaker.xyz/fsm-prompt-injection
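The plan-first idea can be sketched like this: the agent freezes an allowlist of actions before it reads any untrusted files, so an injected instruction encountered later cannot expand what it is permitted to do. This is an illustrative toy, not the linked post's implementation; the class and action names are hypothetical.

```python
class PolicyViolation(Exception):
    pass

class Agent:
    def __init__(self) -> None:
        self.allowed_actions: frozenset[str] | None = None

    def plan(self, actions: set[str]) -> None:
        # Plan mode: runs BEFORE any untrusted content is read, so the
        # policy cannot be influenced by injected instructions.
        self.allowed_actions = frozenset(actions)

    def act(self, action: str) -> str:
        # Every action is checked against the frozen policy.
        if self.allowed_actions is None or action not in self.allowed_actions:
            raise PolicyViolation(action)
        return f"executed {action}"

agent = Agent()
agent.plan({"read_repo", "run_tests"})
print(agent.act("run_tests"))

# A prompt injection asking for "delete_files" is outside the policy:
try:
    agent.act("delete_files")
except PolicyViolation as e:
    print(f"blocked: {e}")
```

The enforcement has to live outside the model for this to hold up; a policy the agent merely promises to follow offers no guarantee once it is tainted.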