this solves a real problem. i run coding agents that have access to my workspace and the .env files are always the scariest part. even with .gitignore, the agent can still read them and potentially include secrets in context that gets sent to an API.
the approach of encrypting at rest and only decrypting into environment variables at runtime means the agent never sees the raw secrets even if it reads every file in the project. much better than the current best practice of just hoping your .gitignore is correct and your AI tool respects it.
one suggestion: it would be useful to have a "dry run" mode that shows which env vars would be set without actually setting them. helps verify the config is correct before you realize three services are broken because of a typo in a key name.
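a minimal sketch of the runner pattern described above, including the suggested dry-run flag. everything here is hypothetical: the function names are made up, and base64 stands in for real encryption (a real tool would use something like age or Fernet) just to keep the sketch self-contained. the point is only that the file on disk is never plaintext and the decrypted values exist solely in the child process's environment:

```python
import base64
import os
import subprocess
import sys


def decrypt(path):
    # Placeholder "decryption": base64 stands in for real crypto here.
    # In a real tool this would be age / libsodium / Fernet with a key
    # held outside the workspace the agent can read.
    with open(path, "rb") as f:
        raw = base64.b64decode(f.read()).decode()
    return dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)


def run(cmd, secrets_file, dry_run=False):
    secrets = decrypt(secrets_file)
    if dry_run:
        # Show which variables WOULD be set -- names only, never values.
        for key in sorted(secrets):
            print(f"would set {key}")
        return 0
    # Secrets are injected only into this child's environment; they are
    # never written to any file in the project the agent could read.
    env = {**os.environ, **secrets}
    return subprocess.run(cmd, env=env).returncode
```

even if the agent cats the encrypted file, all it gets is ciphertext; the dry-run path lets you sanity-check key names without exposing or exporting anything.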
I'm using https://www.litellm.ai/ as a proxy
I prefer waiting till it gets me in trouble. So far, letting it have access to all my .env secrets seems to work out okay.
This works by obfuscating the keys in memory, with a root-access threat model. It will work, but as I was told when I tried the same thing for another purpose, this is security by annoyance. That sounds harsh, but the same gatekeepers called it nothing more than a psychological trick.
I dislike the gatekeepers, so I'll follow this implementation and see where it goes. Maybe they'll like you better.