Ask HN: How are you managing secrets with AI agents?
m-hodges · Friday, January 30, 2026

Secrets management for agents feels like a missing piece today. The agent needs API keys to call external services, but the usual patterns feel broken in this context. You see this clearly when writing Agent Skills.
Environment variables: The agent has shell access. It can run `env` or `echo $API_KEY` and read the secret, whether through prompt injection or just while exploring and debugging.
.env files: Same problem. The agent can `cat .env`. The file is right there on the filesystem waiting for curious `print()` statements.
Proxy process / wrapper: You can stand up a separate process that holds the secret and proxies requests. The agent calls localhost, never sees the key. This works, but it's a lot of operational overhead. Now you're running infrastructure just to hide a string from your own tools. It also feels close to reinventing MCP.
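For what it's worth, here's roughly what that looks like as a minimal sketch in Python (the upstream URL, port, and env var name are made up; it assumes you start this process yourself, outside the agent's session, so only the proxy ever holds the key):

```python
# proxy.py -- hypothetical sketch: run this yourself, outside the agent's session,
# so the key lives only in this process's environment. Error handling omitted.
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.example.com"      # hypothetical upstream API
API_KEY = os.environ["EXAMPLE_API_KEY"]   # only the proxy process sees this

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the agent's request upstream with the secret attached.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The agent calls http://127.0.0.1:8377/... and never handles the key.
    HTTPServer(("127.0.0.1", 8377), Handler).serve_forever()
```

Even this small, it's another process to run, scope per service, and keep alive, which is the operational overhead I mean.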
What I've been experimenting with:
1. OS keychain with credential helper: The script bundled with (or generated by) the skill calls out to the system keychain (macOS Keychain, Windows Credential Manager, etc.) at runtime. The agent can invoke the script, but can't directly query the keychain. Libraries like Python's `keyring` abstract over OS keychains and make this somewhat portable, but it all assumes certain runtime environments and requires user interaction via the OS. (Rough sketch after this list.)
2. Credential command escape hatch: Scripts accept a `--credential-cmd` flag that runs an arbitrary shell command to fetch the secret (`pass show`, `op read`, `vault kv get`, etc.). Flexible, but the agent could inspect what command is being run and try running it itself to get at the secret anyway. (Second sketch below.)
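Here's a sketch of option 1 using `keyring` (the service name, account name, and endpoint are made up; it assumes the user has already stored the key, e.g. via `keyring set example-api default`):

```python
# fetch_report.py -- hypothetical skill helper: the agent runs this script,
# but the secret is resolved from the OS keychain at runtime, not from env or .env.
import sys
import urllib.request

import keyring

SERVICE = "example-api"  # hypothetical keychain service name
ACCOUNT = "default"      # hypothetical account entry

def main() -> int:
    api_key = keyring.get_password(SERVICE, ACCOUNT)
    if api_key is None:
        print("No credential found; run: keyring set example-api default", file=sys.stderr)
        return 1
    req = urllib.request.Request(
        "https://api.example.com/v1/report",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        sys.stdout.write(resp.read().decode())
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The obvious hole is that the agent can import `keyring` and call `get_password` itself unless the OS gates access behind a prompt, which is the caveat above.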
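And a sketch of option 2's `--credential-cmd` flag (the flag name is from above; everything else is illustrative):

```python
# credential_cmd.py -- hypothetical sketch of the --credential-cmd pattern.
import argparse
import shlex
import subprocess

def fetch_secret(credential_cmd: str) -> str:
    # Run the user-supplied command (e.g. "pass show example/api" or
    # "op read op://vault/item/credential") and treat its stdout as the secret.
    result = subprocess.run(
        shlex.split(credential_cmd),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--credential-cmd", required=True,
                        help="shell command that prints the secret to stdout")
    args = parser.parse_args()
    secret = fetch_secret(args.credential_cmd)
    # ... use `secret` to call the external service; never print or log it.
```

Of course, once the agent has seen which command is configured, nothing stops it from running that command directly, which is the weakness noted above.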
Neither of these feels like a real solution. An agent that wants the credentials can still probe for them.
How are others handling secrets in agent workflows? Is anyone building agent runtimes with proper secrets isolation? Seems like something the official agent harnesses need to figure out and ship with.