I am still amazed that people so easily accepted installing these agents on private machines.
We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.
Plain old Unix permissions can get it done. One account for you, one account for AI. A shared folder belonging to a group that both are in. umask and setgid to get the story right for new files. https://apostrophecms.com/blog/how-to-be-more-productive-wit...
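A minimal sketch of that setup (user and group names are made up):

    # one shared group containing you and the agent's account
    sudo groupadd aishare
    sudo usermod -aG aishare "$USER"
    sudo usermod -aG aishare agent      # assumes the agent runs as user "agent"
    # shared folder: the setgid bit makes new files inherit the group
    sudo mkdir -p /srv/shared
    sudo chgrp aishare /srv/shared
    sudo chmod 2775 /srv/shared
    # set in both users' shells so new files come out group-writable
    umask 002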
From the home page:
> Stop trusting blindly
> One-line installer scripts,
Here are the manual install instructions from the "Install / Build" page:
> curl -L https://aur.archlinux.org/cgit/aur.git/snapshot/jai.tar.gz | tar xzf -
> cd jai
> makepkg -i
So, trust their jai tool, but not _other_ installer scripts?
"jai is free software, brought to you by the Stanford Secure Computer Systems research group and the Future of Digital Currency Initiative"
I guess the "Future of Digital Currency Initiative" had to pivot to a more useful purpose than studying how Bitcoin is going to change the world.
This looks great and seems very well thought out.
It looks both more convenient and slightly more secure than my solution, which is that I just give them a separate user.
Agents can nuke the "agent" homedir but cannot read or write mine.
I did put my own user in the agent group, so that I can read and write the agent homedir.
It's a little fiddly though (sometimes the wrong permissions get set, so I have a script that fixes it), and keeping track of which user a terminal is running as is a bit annoying and error prone.
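Roughly, the fix-up amounts to re-asserting group ownership and the setgid bit, something along these lines (paths illustrative):

    sudo chgrp -R agent /home/agent
    sudo chmod -R g+rwX /home/agent
    # setgid on directories so new files keep the agent group
    sudo find /home/agent -type d -exec chmod g+s {} +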
---
But the best solution I found is "just give it a laptop." Completely forget OS and software solutions, and just get a separate machine!
That's more convenient than switching users, and also "physically on another machine" is hard to beat in terms of security :)
It's analogous to the mac mini thing, except that old ThinkPads are pretty cheap. (I got this one for $50!)
I'm wondering if the obvious (and stated) fact that the site was vibe-coded detracts from the fact that the tool itself was hand-written.
> jai itself was hand implemented by a Stanford computer science professor with decades of C++ and Unix/linux experience. (https://jai.scs.stanford.edu/faq.html#was-jai-written-by-an-...)
I've been reviewing agent sandboxing solutions recently, and it occurred to me that there is a gaping vector for persistent exploits in tools that let the agent write to the project directory, like this one does.
I had originally thought this would be OK, since we could review everything in the git diff. But it later occurred to me that there are all kinds of files the agent could write that I'd end up executing, as the developer, outside the sandbox: every .pyc file, for instance, files in .venv, .git hook files.
ChatGPT[1] confirms the underlying exploit vectors and also that there isn't much discussion of them in the context of agent sandboxing tools.
My conclusion from that is the only truly safe sandboxing technique would be one that transfers files from the sandbox to the dev's machine through some kind of git patch or similar. I.e. the file can only transfer if it's in version control and, therefore presumably, has been reviewed by the dev before transfer outside the sandbox.
I'd really like to see people talking more about this. The solution isn't that hard: keep CWD as an overlay and transfer in-container modified files through a proxy of some kind that filters out any file not in git, and maybe some that are but are known to be potentially dangerous (binary files). Obviously, there would need to be some kind of configuration option here.
1: https://chatgpt.com/share/69c3ec10-0e40-832a-b905-31736d8a34...
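A minimal sketch of that transfer-via-git idea (paths hypothetical): export only tracked changes from the sandbox as a patch, review it, then apply it outside. Gitignored artifacts like .venv and .pyc files never make the trip, and .git/hooks isn't tracked content, so hook files can't sneak through either.

    git -C /sandbox/project add -N .         # register new files in the diff (respects .gitignore)
    git -C /sandbox/project diff > /tmp/agent.patch
    less /tmp/agent.patch                    # the human review step
    git -C ~/project apply /tmp/agent.patch  # only reviewed, tracked content crosses over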
This is a cool solution... I have a simpler one, though likely inferior for many purposes...
Run <ai tool of your choice> under its own user account via ssh. Bind mount project directories into its home directory when you want it to be able to read them. Mount command looks like
sudo mkdir /home/<ai-user>/<dir-name>
sudo mount --bind <dir to mount> --map-groups $(id -g <user>):$(id -g <ai-user>):1 --map-users $(id -u <user>):$(id -u <ai-user>):1 /home/<ai-user>/<dir-name>
I particularly use this with vscode's ssh remotes.

The examples in the article are all big scary wipes, but I think the more common damage is way smaller and harder to notice.
I've been using claude code daily for months and the worst thing that happened wasn't a wipe (yet). It needed to save an SVG file, so it created a /public/blog/ folder. Which meant Apache started serving that real directory instead of routing /blog. My blog just 404'd and I spent like an hour debugging before I figured it out. Nothing got deleted and it's not a permission problem; the agent just put a file in a place that made sense to it.
jai would help with the rm -rf cases for sure, but this kind of thing is harder to catch because it's not a permissions problem: the agent just doesn't know what a web server is.
Everyone talks about sandboxing the filesystem but nobody talks about what happens when the agent's work outlives the container. Reset happens, state is gone, you start over. I've lost more agent work to session timeouts than to any security issue. Isolation without persistence just means you lose progress safely.
Excellent project, unfortunate title. I almost didn't click on it.
I like the tradeoff offered: full access to the current directory, read-only access to the rest, copy-on-write for the home directory. With stricter modes to (presumably) protect against data exfiltration too. It really feels like it should be the default for agent systems.
the safety concerns compound significantly when you move from interactive to unattended execution. in interactive mode you can catch a bad command before it completes. run the same agent on a schedule at 3am with no one watching and there's no fallback.

i built something that schedules claude code jobs to run in the background (openhelm.ai). the layered approach we use: separate OS user account with only project directory write access, claude's native seatbelt/bubblewrap sandboxing, and a mandatory plan review step before any job's first run. you can't approve every individual action at runtime, but you can approve the shape of the plan upfront - which catches most of the scary stuff.

the paper's point about clean agent-specific filesystem abstractions resonates. the scope definition problem (what exactly should this agent be able to touch?) is actually the hard part - enforcement is relatively mechanical once you've answered that. and for scheduled workloads, answering that question explicitly at job creation time forces the kind of thinking that prevents the 3am disasters.
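for reference, the bubblewrap leg of a stack like that looks roughly like this (flags illustrative, not openhelm's actual invocation):

    # read-only view of the system, write access only to the project dir
    bwrap --ro-bind / / \
          --bind "$PWD" "$PWD" \
          --dev /dev --proc /proc \
          --unshare-all --share-net \
          --die-with-parent \
          claude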
I may be paranoid, but I only run my AI CLI tools on a VPS. I have them installed locally but never use them. On a VPS I go full YOLO mode because I don't care about it. It's a slightly more cumbersome workflow, but if you have dev + staging envs, then you never have to develop and run stuff locally, which brings the local hardware requirements and costs down too (because you can develop on a base MacBook).
Docker is hard to set up. The author made a nice solution, but I'm not sure if he knows about devcontainers and what they can do. You do the setup once and you get most dev tools rolled in. I'm still surprised by the effort people put into solutions like this while ignoring a dev's core requirements, like sharing the env they use in a simple way. You get a custom env and isolate the agent. Want to persist your credentials? Mount the target folder from home, or symlink it into a subfolder. Might be a knowledge gap. But for Linux, or even Windows/Mac as long as you don't need a full desktop, devcontainers are simple: a standard that works, and very mature.
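A devcontainer is one JSON file; even something this small gives you an isolated, shareable env, and the mount is the persist-credentials trick mentioned above (image and paths are examples):

    {
      "name": "agent-sandbox",
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "mounts": [
        "source=${localEnv:HOME}/.claude,target=/home/vscode/.claude,type=bind"
      ]
    }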
I work on a sandboxing tool similarly based on an idea to point the user home dir to a separate location (https://github.com/wrr/drop). While I experimented with using overlayfs to isolate changes to the filesystem and it worked well as a proof-of-concept, overlayfs specification is quite restrictive regarding how it can be mounted to prevent undefined behaviors.
I wonder if and how jai managed to address these limitations of overlayfs. Basically, the same dir should not be mounted as an overlayfs upper layer by different overlayfs mounts. If you run 'jai bash' twice in different terminals, do the two instances get two different writable home dir overlays, or the same one? In the latter case, does the second 'jai bash' command join the mount namespace of the first one, or create a new one with the same shared upper dir?
This limitation of overlays is described here: https://docs.kernel.org/filesystems/overlayfs.html :
'Using an upper layer path and/or a workdir path that are already used by another overlay mount is not allowed and may fail with EBUSY. Using partially overlapping paths is not allowed and may fail with EBUSY. If files are accessed from two overlayfs mounts which share or overlap the upper layer and/or workdir path, the behavior of the overlay is undefined, though it will not result in a crash or deadlock.'
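For reference, this is the shape of mount the constraint applies to: every concurrent instance needs its own upperdir/workdir pair (paths illustrative):

    mkdir -p /tmp/jail1/{upper,work,merged}
    sudo mount -t overlay overlay \
      -o lowerdir="$HOME",upperdir=/tmp/jail1/upper,workdir=/tmp/jail1/work \
      /tmp/jail1/merged
    # a second instance must get its own upper/work dirs, or behavior is undefined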
Is there already some more established setup to do "secure" development with agents, as in, realistically no chance it would compromise the host machine?
E.g. if I have a VM to which I grant access only to a folder with some code (let's say open source, and I don't care if it leaks) and to the Internet, and I do my agent-assisted coding within it, the only thing it can leak is my agent credentials. Then I can do git operations with my credentials outside of the VM.
Is there a more convenient setup than this, which gives me similar security guarantees? Does it come with the paid offerings of the top providers? Or is this still something I'd have to set up separately?
And for the macos users, I can’t recommend nono enough. (Paying it forward, since it was here on HN that I learned about it.)
Good DX, straightforward permissions system, starts up instantly. Just remember to disable CC’s auto-updater if that’s what you’re using. My sandbox ranking: nono > lima > containers.
It's full VM or nothing.
I want AI to have full and unrestricted access to the OS. I don't want to babysit it and approve every command. Everything on that VM is fair game, and the VM image is backed up regularly from outside.
This is the only way.
Installation is a bit... unsupported unless you're on Arch. Here's a Nix derivation I came up with:
https://github.com/pkulak/nix/blob/main/common/jai.nix
Arg, annoying that it puts its config right in my home folder...
EDIT: Actually, I'm having a heck of a time packaging this properly. Disregard for now!
It's always struck me that agents should be operated via `systemd-run` as a transient scope unit with the necessary security properties set
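A hedged sketch of what that might look like; note that most sandboxing directives apply to transient services rather than scopes, and in a --user unit the Protect* options need PrivateUsers= (the agent binary is just an example):

    systemd-run --user --pty --collect \
      -p PrivateUsers=yes \
      -p ProtectSystem=strict \
      -p ProtectHome=tmpfs \
      -p ReadWritePaths="$PWD" \
      -p PrivateTmp=yes \
      claude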
So couldn't this be done with an appropriate shell alias, at least under Linux?
I would have to be very inebriated to give a bot/agent access to my files (and all my security clearances should be revoked if I did), but should I do that, it would have to be under mandatory access controls that my unprivileged user has no influence over, not even with sudo or doas. The LSM-enforced rules (SELinux, AppArmor, TOMOYO, other newer or simpler LSMs) would restrict everything by default and grant explicit read, write, and execute permissions only to specific files or directories.
The bot should also be instructed that it gets 3 strikes before being removed, meaning it should generate a report of what it believes it needs access to and get verbal approval or denial. That shouldn't be so difficult with today's bots. If it wants to act like a human then it gets simple rules like a human: ask the human operator for permission. If the bot starts doing its own thing, aka going rogue, then it gets punished. Perhaps another bot needs to act as a dominatrix to be a watcher over the assistant bot.
This is very cool - I try to have a container-centric setup but sometimes YOLOcal clauding is too tempting.
My biggest question from skimming the docs is what a workflow for reviewing and applying overlay changes to the out-of-cwd dirs would look like.
Also, a bit tangential, but if anyone has slightly more in-depth resources for grasping the security trade-offs between these kinds of Linux-leveraging sandboxes, containers, and remote VMs, I'd appreciate it. The author here implies containers are still more secure in principle, and my intuition is that there are simply fewer unknowns from my perspective, but I don't have a firm understanding.
Anyhow, kudos to the author again, looks useful.
I've been using podman, and for me it is good enough. The way I use it, I mount the current working directory, /usr/bin, /bin, /usr/lib, /usr/lib64, /usr/share, then a few specific ones like ~/.aspnet, ~/.dotnet, ~/.npm-global, etc. I use the same image as my operating system (Fedora 43).
It works pretty well: the agent I choose to run can only see and write to the current working directory (and subdirectories) as well as those pnpm/npm etc. software-development files. It cannot access anything in my home directory other than the mounted directories.
Now, some evil command could in theory write commands into those shared ~/.npm-global directories that I'd then inadvertently run outside the container, but that is pretty unlikely.
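Roughly, the invocation looks like this (mount list illustrative; it varies per project):

    podman run --rm -it \
      -v "$PWD:$PWD" -w "$PWD" \
      -v /usr/bin:/usr/bin:ro -v /bin:/bin:ro \
      -v /usr/lib:/usr/lib:ro -v /usr/lib64:/usr/lib64:ro \
      -v /usr/share:/usr/share:ro \
      -v "$HOME/.npm-global:$HOME/.npm-global" \
      fedora:43 bash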
I'd really like to try this, but building it is impossible. C++ is such a pain to build with the "`make`; hunt for the dependency that failed; `apt-get install whatever-dev`; goto make" loop...
Please release binaries if you're making a utility :(
Looks good, but only Linux is supported. I like spinning up VPSes and then discarding them when I'm done. On macOS, something I haven't tried yet but plan to: create a separate user account.
Where is the network isolation? I want to be able to limit what external resources the agent can access, and also inject secrets at request time so the agent doesn't have access to them.
File system isolation is easy now; it's not worth HN front-page space for the nth version. It's a solved problem (and now included in Claude Code).
Sandboxing and verification are two different things. Sandboxing answers: what can this agent touch? Verification answers: what does it actually do with what it touches? Even inside a perfect jail, the agent can still hallucinate, exfiltrate data over the network, or fold the second you push back on its answer.
I've been building an independent benchmarking platform for AI agents. The two approaches are complementary. Sandbox the environment, verify the agent.
I’m using https://github.com/torarnv/claude-remote-shell for this, which runs Claude’s Bash tool on a remote machine but leaves Claude running locally otherwise.
I’ve found it to be a good balance for letting Claude loose in a VM running the commands it wants while having all my local MCPs and tools still available.
Are there any similar ways of isolating environment variables, secrets, and credentials? Everyone is thinking about the file system but I haven't seen as much discussion about exposing secrets and account access.
Should be named Jia
More seriously, I'm not a heavy agent user, but I just create a user account for the agent with none of my own files or ssh keys or anything like that. Hopefully that's safe enough? I guess the risk is that it figures out a local privilege escalation exploit...
Most of what we're doing with AI today, we've been doing just fine without any confusion.
I've been struggling to see what AI has intrinsically solved that gives us the chance to completely change workflows, other than these weird things occurring.
For jailing local agents on a Mac, I made Agent Safehouse. It works for any agent and has many sane defaults for developers: https://agent-safehouse.dev
Well, I'm on Windows (+ Cygwin) and wrote a Dockerfile. It wasn't that hard. git branch + worktree + a docker container per project and I can work with copilot in --yolo mode (or claude --dangerously-skip-permissions, whichever). vscode is pretty smooth at installing the VS Code Server on first connection to a docker container, too, and I just open up the workspace in a minute.
There's nothing wrong with an AI-designed website, but I wish when describing their own projects that HN contributors wrote their own copy. As HN posters are wont to say, writing is thinking...
Sorry if this question is stupid (I'm not even using Claude*), but why can't people run Claude or another coding agent in a container and only mount the project directory into the container?
*I played with codex a few months ago, but I don't even work in IT.
I've been running GPT5.x fully unconstrained with effective local admin shell for over $500 worth of API tokens. Not once has it done something I'd consider "naughty".
It has left my project in a complete mess, but never my entire computer.
git reset --hard && git clean -fd
That's all it takes.

I think this is turning into a good example of security theatrics. If the agent was actually as nefarious as the marketing here suggests, the solution proposed is not adequate. No solution is. Not even a separate physical computer. We need to be honest about the size of this problem.
Alternatively, maybe Claude is unusually violent to the local file system? I've not used it at all, so perhaps I am missing something here.
Would like to see something more comprehensive built on ZFS and FreeBSD jails: namely, snapshot/checkpoint before each prompt, quick undo for changes made by the agent, auto-deletion of old snapshots, etc.
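The building blocks already exist; a hedged sketch with a made-up dataset name:

    zfs snapshot pool/agent-home@pre-prompt
    # ... let the agent run ...
    zfs diff pool/agent-home@pre-prompt       # see exactly what it touched
    zfs rollback pool/agent-home@pre-prompt   # quick undo of everything since
    zfs destroy pool/agent-home@pre-prompt    # or prune once you're happy

The missing piece is the orchestration: triggering a snapshot per prompt and garbage-collecting old ones.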
Just use DevContainers. Can't understand people letting AI go wild on their systems...
This is a great time for Apple to relaunch their Time Machine devices, have a history of everything in your file system because sooner or later some AI is going to delete it...
This still is running in an isolated container, right?
Ignoring the confidentiality arguments posed here, I can't help but think about snapshotting filesystems in this context. Wouldn't something like ZFS be an obvious solution to an agent deleting or wildly changing files? That wouldn't protect against all the issues the authors are trying to address, but it seems like an easy safeguard against some of the problems people face with agents.
Inspired by this tool I wrote something that fits macOS better. It uses the native sandbox-exec from Apple and can wrap other apps as well, like VSCode in which you usually run AI stuff. https://github.com/holtwick/bx-mac
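For anyone unfamiliar with the mechanism, a bare-bones sandbox-exec profile looks roughly like this (sandbox-exec is deprecated but still functional; the profile is an illustration, not bx-mac's actual one):

    cat > deny-home-writes.sb <<'EOF'
    (version 1)
    (allow default)
    (deny file-write* (subpath (param "TARGET")))
    EOF
    sandbox-exec -D TARGET="$HOME" -f deny-home-writes.sb /bin/zsh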
Interesting take on the same problem
I created https://github.com/jrz/container-shell which basically launches a persistent interactive shell using docker, chrooted to the CWD
CWD is bind mounted so the rest is simply not visible and you can still install anything you want.
Filesystem containment solves one half of the blast radius problem. The other half is external state - agent hits a payment API, writes to a database, sends an email. Copy-on-write overlays can't roll that back. I've seen agents make 40 duplicate API calls because they crashed mid-task and retried from scratch with no deduplication. The filesystem was fine. The downstream systems were not. The hard version of this problem is making agent operations idempotent across external calls, not just safe locally.
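The standard mitigation is an idempotency key that is stable per logical operation rather than per attempt, so a crashed-and-retried agent can't double-submit (Stripe-style header; the endpoint and payload are made up):

    KEY="agent-task-42-step-3"   # derived from the task, NOT regenerated per retry
    curl -X POST https://api.example.com/payments \
         -H "Idempotency-Key: $KEY" \
         -d amount=100
    # on a repeated key the server replays the original response instead of re-charging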
Claude's stock unprompted / uninspired UI code creates carbon clone components. That "jai is not a promise of perfect safety" callout box is like the em dash of FE code. The contrast, or lack thereof, makes some of the text particularly invisible.
I wonder if shitty looking websites and unambitious grammar will become how we prove we are human soon.
Idk, it just feels so counterintuitive sometimes to build and refine these (seemingly non-deterministic) tools to construct deterministic workflows and get the most productivity out of them.
Suggestion for the FAQ page: does this work on a Mac?
Are mass file deletions the result of some plausible "I see why it would have done that," or will it just completely randomly execute commands that really have nothing to do with the immediate goal?
It's a bit annoying that there are so many solutions for running and sandboxing agents but no established best practice. It would be nice to have some high-level orchestration tool, like docker/podman, where you can configure how e.g. claude code, opencode, codex, openclaw run: in an open shell, an OCI container, jai, etc.
Especially because anybody can ask chatgpt/claude how to run agents without any further knowledge, I feel we should handle this more like we handle encryption, where the advice is to use established libraries and not implement the algorithms yourself.
Add this to .claude/settings.json:
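Something along these lines, using Read deny rules (exact paths are up to you; the block below is a representative guess, not the original):

    {
      "permissions": {
        "deny": [
          "Read(~/.ssh/**)",
          "Read(~/.aws/**)",
          "Read(~/Documents/**)"
        ]
      }
    }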
You can change the read part if you're OK with it reading outside. This feature was only added 10 days ago, FWIW, but it's great and pretty much does exactly this.