Hacker News

londons_explore — today at 11:35 AM (4 replies)

Does this actually work?

I assume an AI which wanted to read a secret and found it wasn't in .env would simply put print(os.environ) in the code and run it...

That's certainly what I do as a developer when trying to debug something that has complex deployment and launch scripts...


Replies

andai — today at 2:25 PM

Your concerns are not entirely unfounded.

https://www.reddit.com/r/ClaudeAI/comments/1r186gl/my_agent_...

I have noticed similar behavior from the latest Codex as well: "The security policy forbade me from doing x, so I will achieve it with a creative workaround instead..."

The "best" part of the thread is that Claude comes back in the comments and insults OP a second time!

ctmnt — today at 4:44 PM

It doesn't even have to change the code to get the secret. If you're using env variables to pass secrets in, they're readable by any other process running as the same user via `/proc/<pid>/environ` or `ps -p <pid> -Eww`. If your LLM can shell out, it can get your secrets.
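To make the leak concrete: on Linux, `/proc/<pid>/environ` exposes a process's environment as NUL-separated `KEY=VALUE` pairs, readable by any process with the same UID. A minimal sketch (reading our own environment for the demo, but the same code works for any accessible `<pid>`):

```python
import os


def parse_environ(raw: bytes) -> dict:
    """Parse the NUL-separated KEY=VALUE format of /proc/<pid>/environ."""
    return dict(
        entry.split("=", 1)
        for entry in raw.decode(errors="replace").split("\0")
        if "=" in entry
    )


# procfs is Linux-specific; on other systems this file won't exist.
path = f"/proc/{os.getpid()}/environ"
if os.path.exists(path):
    with open(path, "rb") as f:
        leaked = parse_environ(f.read())
    # Every env var of the target process is now in `leaked`,
    # secrets included -- no code changes to the target required.
    print(sorted(leaked) == sorted(os.environ))
```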

PufPufPuf — today at 11:47 AM

Good point. You would need to inject the secrets in an inaccessible part of the pipeline, like an external proxy.
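One way to sketch that proxy idea (all names here are hypothetical): a local forwarding process holds the API key itself and attaches the `Authorization` header before forwarding upstream, so the agent's process never has the secret in its environment at all.

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.example.com"     # hypothetical upstream API
API_KEY = os.environ.get("API_KEY", "")  # set only in the proxy's own env


def build_upstream_request(path: str) -> urllib.request.Request:
    """Attach the secret here, inside the proxy, not in the agent."""
    req = urllib.request.Request(UPSTREAM + path)
    req.add_header("Authorization", f"Bearer {API_KEY}")
    return req


class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        with urllib.request.urlopen(build_upstream_request(self.path)) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # The agent talks to http://127.0.0.1:8080/... and never sees API_KEY.
    HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```

The agent's sandbox only gets the proxy's address; even dumping its own environment yields nothing useful.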

snowhale — today at 1:14 PM

yeah the threat model matters a lot here. this is useful protection against accidental leaks -- logs, CI output, exceptions that print env context. an AI agent running arbitrary code can definitely just do os.environ, so this isn't stopping intentional exfiltration. for that you'd want actual sandbox isolation with no env passthrough. different problems.
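The "no env passthrough" part is the easy bit and can be sketched in a few lines: run the agent's code in a child process with an explicitly empty environment, so neither `os.environ` nor `/proc/<pid>/environ` has anything to leak (this scrubs the environment only; files, network, etc. still need real sandbox controls).

```python
import os
import subprocess
import sys

os.environ["SECRET_TOKEN"] = "hunter2"  # pretend secret in the parent env

# env={} means the child inherits nothing from the parent's environment.
result = subprocess.run(
    [sys.executable, "-c", "import os; print('SECRET_TOKEN' in os.environ)"],
    env={},
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → False
```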