Hacker News

puttycat today at 4:30 AM

I am still amazed that people so easily accepted installing these agents on private machines.

We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.


Replies

fc417fc802 today at 4:33 AM

People were also dismissing concerns about build tooling automatically pulling in an entire swarm of dependencies, and now here we are in the middle of a repetitive string of high-profile developer supply chain compromises. Short-term thinking seems to dominate even groups of people who are objectively smarter and better educated than average.

closeparen today at 7:50 PM

Seems most relevant in a hobbyist context where you have personal stuff on your machine unrelated to your projects. Employee endpoints in a corporate environment should already be limited to what’s necessary for job duties. There’s nothing on my remote development VMs that I wouldn’t want to share with Claude.

michaelcampbell today at 1:42 PM

> We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.

These are generally (but not always) 2 different sets of people.

nunez today at 6:25 AM

Tbf, Docker had a similar start. “Just download this image from Docker Hub! What can go wrong?!”

Industry caught on quick though.

lxgr today at 10:58 AM

Not in unknown ways, but as part of its regular operation (with cloud inference)!

I think the actual data flow here is really hard to grasp for many users: Sandboxing helps with limiting the blast radius of the agent itself, but the agent itself is, from a data privacy perspective, best visualized as living inside the cloud and remote-operating your computer/sandbox, not as an entity that can be "jailed" and as such "prevented from running off with your data".

The inference provider gets the data the instant the agent looks at it to consider its next steps, even if the next step is to do nothing with it because it contains highly sensitive information.

nazgul17 today at 4:37 AM

Agree with the sentiment! But "securing ... in all ways possible"? I know many people who would choose "password" as their password in 2026. The better of the bunch will use their date of birth, and maybe add their name for a flourish.

/rant

monster_truck today at 5:23 PM

I got bad news about all of the other software you're running

raincole today at 6:12 AM

It's never about security. It's security vs convenience. Security features often end up reducing security if they're inconvenient. If you ask users to have obscure passwords, they'll reuse the same one everywhere. If your agent prompts users every time it changes files, they'll find a way to disable the guardrail altogether.

mjmastoday at 8:29 AM

My testing/working with agents has been limited to a semi-isolated VM with no permissions apart from internet access. I use the VM itself as the git remote (ssh://machine/home/me/repo) so that I don't have to give it any keys either.
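That keyless-remote setup can be sketched as a local simulation. Everything below is a hypothetical reconstruction, not the commenter's actual config: the paths are throwaway `mktemp` directories, and in the real setup the clone URL would be an `ssh://` path like the one above rather than a local directory.

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd=None):
    """Run a git command, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

root = Path(tempfile.mkdtemp())
remote = root / "agent-remote.git"
work = root / "agent-work"

# "Host" side: a bare repo that becomes the VM's only remote
# (a local path here; in the real setup, an ssh:// URL).
git("init", "--bare", str(remote))

# "Agent VM" side, simulated locally: clone, commit, push.
# The agent never holds credentials for GitHub or any other forge.
git("clone", str(remote), str(work))
(work / "notes.txt").write_text("change made by the agent\n")
git("add", "notes.txt", cwd=work)
git("-c", "user.name=agent", "-c", "user.email=agent@example.invalid",
    "commit", "-m", "agent change", cwd=work)
git("push", "origin", "HEAD", cwd=work)
```

The host can then fetch and review the agent's commits at its leisure; the only thing crossing the boundary is git traffic.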

tempaccount5050 today at 1:37 PM

I don't understand why file and folder permissions are such a mystery. Just... don't let it clobber things it shouldn't.
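As a small illustration of that point, plain POSIX file modes can mark files read-only before an agent touches them. The paths and mode below are arbitrary examples; note this only guards against accidental clobbering, since a process running as the same user can always chmod the file back.

```python
import os
import stat
import tempfile
from pathlib import Path

# Scratch directory standing in for a project the agent may touch.
workdir = Path(tempfile.mkdtemp())
secrets = workdir / "prod-credentials.txt"
secrets.write_text("hunter2\n")

# Strip write permission: owner read-only (0o400). A guardrail
# against accidents, not a security boundary.
os.chmod(secrets, stat.S_IRUSR)

mode = stat.S_IMODE(os.stat(secrets).st_mode)
print(oct(mode))  # prints 0o400
```

For actual isolation from a misbehaving same-user process, the VM and sandbox approaches described elsewhere in the thread are still needed.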

bigstrat2003 today at 4:51 AM

I am too. It is genuinely really stupid to run these things with access to your system, sandbox or no sandbox. But the glaring security and reliability issues get ignored because people can't help but chase the short term gains.

globular-toast today at 7:41 AM

Not all of us. Figuring out bwrap was the first thing I did before running an agent. I posted about it on HN, but there wasn't a single taker: https://news.ycombinator.com/item?id=45087165

I have noticed it's become one of my most searched posts on Google though. Something like ten clicks a month! So at least some people aren't stupid.

puttycat today at 11:52 AM

Forgot to mention the craziness of trusting an AI software company with your private AI codebase (think Uber's abuse of ride data).

eximius today at 7:26 AM

Eh, depending on how you're running agents, I'd be more worried about installing packages from AUR or other package ecosystems.

We've seen an increase in hijacked packages installing malware. Folks generally expect well-known software to be safe to install. I trust that the Claude Code harness is safe, and I'm reviewing all of the non-trivial commands it's running. So I think my Claude usage is actually safer than my AUR installs.

Granted, if you're bypassing permissions and running dangerously, then... yea, you are basically just giving a keyboard to an idiot savant with the tendency to hallucinate.

theendisney today at 4:37 AM

Some day soon they will build a cage that will hold the monster. Provided they don't get eaten in the meantime. Or a larger monster eats theirs. :)

deadbabe today at 2:29 PM

Trusting AI agents with your whole private machine is the 2020s equivalent of people pouring all their personal information into social networks in the 2010s.

Only a matter of time before this type of access becomes productized.

xpe today at 2:16 PM

CONVENIENCE > SECURITY : until no convenience b/c no system to run on