Hacker News

jackfranklyn · today at 1:19 PM · 3 replies

The real problem isn't just the .env file — it's that secrets leak through so many channels. I run a Node app with OAuth integrations for multiple accounting platforms and the .env is honestly the least of my worries. Secrets end up in error stack traces, in debug logs when a token refresh fails at 3am, in the pg connection string that gets dumped when the pool dies.

The surrogate credentials + proxy approach mentioned above is probably the most robust pattern. Give the agent a token that maps to the real one at the boundary. That way even if the agent leaks it, the surrogate token is scoped and revocable.

For local dev with AI coding assistants, I've settled on just keeping the .env out of the project root entirely and loading from a path that's not in the working directory. Not bulletproof but it means the agent has to actively go looking rather than stumbling across it.


Replies

gortron · today at 5:00 PM

I've had similar concerns with letting agents view any credentials, or logs which could include sensitive data.

That has left me torn between two worlds. I use agents to assist me in writing and reviewing code, but when I'm troubleshooting a production issue, I'm not using agents at all. Troubleshooting now feels slow and tedious compared to developing.

I've solved this in my homelab by building a service which does three main things:

  1. exposes tools to agents via MCP (e.g. 'fetch errors and metrics in the last 15min')
  2. coordinates storage/retrieval of credentials from a Vault (e.g. DataDog API Key)
  3. sanitizes logs/traces returned (e.g. secrets, PII, network topology details, etc.) and passes back a tokenized substitution

This sets up a trust boundary between the agent and production data. The agent never sees credentials or other sensitive data. But from the sanitized data, an agent is still very helpful in uncovering error patterns and then root causing them from the source code. It works well!
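The tokenized-substitution step is the part that makes the sanitized output still useful. A toy version (the regexes are illustrative and far from complete; this is not gortron's implementation): the key design choice is that the same secret always maps to the same token, so the agent can still correlate occurrences across log lines without ever seeing the value.

```javascript
// Stable substitution: same secret -> same token across the whole trace.
const seen = new Map();
let counter = 0;

function tokenize(match) {
  if (!seen.has(match)) seen.set(match, `<SECRET_${++counter}>`);
  return seen.get(match);
}

function sanitize(logLine) {
  return logLine
    .replace(/sk_live_[A-Za-z0-9]+/g, tokenize)       // Stripe-style keys
    .replace(/Bearer\s+[A-Za-z0-9._-]+/g, tokenize)   // auth headers
    .replace(/\b\d{1,3}(\.\d{1,3}){3}\b/g, tokenize); // internal IPs
}
```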

I'm actively re-writing this as a production-grade service. If this is interesting to you or anyone else in this thread, you can sign up for updates here: https://ferrex.dev/ (marketing is not my strength, I fear!).

Generally how are others dealing with the tension between agents for development, but more 'manual' processes for troubleshooting production issues? Are folks similarly adopting strict gates around what credentials/data they let agents see, or are they adopting a more 'YOLO' disposition? I imagine the answer might have to do with your org's maturity, but I am curious!

AMARCOVECCHIO99 · today at 4:11 PM

This matches what I've seen. The .env file is one vector, but the more common pattern with AI coding tools is secrets ending up directly in source code that never touch .env at all.

The ones that come up most often:

  - Hardcoded keys: const STRIPE_KEY = "sk_live_..."
  - Fallback patterns: process.env.SECRET || "sk_live_abc123" (the AI helpfully provides a default)
  - NEXT_PUBLIC_ prefix on server-only secrets, exposing them to the client bundle
  - Secrets inside console.log or error responses that end up in production logs

These pass type-checks and look correct in review. I built a static analysis tool that catches them automatically: https://github.com/prodlint/prodlint

It checks for these patterns plus related issues like missing auth on API routes, unvalidated server actions, and hallucinated imports. No LLM, just AST parsing + pattern matching, runs in under 100ms.
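To show why the fallback pattern is mechanically catchable, here's a toy regex-based detector (a sketch only, nothing to do with how prodlint actually works, and a real tool would use the AST rather than regexes): it flags `process.env.X || "literal"` where the literal looks like a live key.

```javascript
// Flag env-var fallbacks whose hardcoded default looks like a secret
// (prefixes here are illustrative: Stripe-style keys, AWS access keys).
function findEnvFallbacks(source) {
  const pattern =
    /process\.env\.(\w+)\s*\|\|\s*(['"])((?:sk_live|sk_test|AKIA)[^'"]*)\2/g;
  const findings = [];
  let m;
  while ((m = pattern.exec(source)) !== null) {
    findings.push({ envVar: m[1], hardcoded: m[3] });
  }
  return findings;
}
```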

salil999 · today at 2:30 PM

Can't say it's a perfect solution but one way I've tried to prevent this is by wrapping secrets in a class (Java backend) where we override the toString() method to just print "***".
