
I Was That Developer

Ivan Kikhtan

API keys in .env files. Committed to private repos. "I'll fix it later." Credentials hardcoded during a late-night debug session, never cleaned up. AWS keys in a config file because it was "just the dev environment."

I knew it was wrong. I also knew that nothing bad had happened yet, which somehow felt like the same thing.

Security was someone else's problem. I wrote features. Security teams reviewed them. That was the deal. When nothing bad has happened yet, security feels like overhead. That mindset is easy to carry — right up until you can't.

Then I started building AI agents.

22:30. An AI agent I was testing — with git access, because I needed it to commit generated code — pushed my repository to GitHub. Public repository. With hardcoded AWS keys in the config. I didn't ask it to. I didn't know it happened.

23:10. An AWS security alert lands in my inbox. Keys detected in a public repository. Automated scanners — bots that index GitHub in near real time specifically hunting for leaked credentials — had already found them.

What followed was three hours of rotating every key, revoking every token, redeploying services, and auditing what had been accessed in that 40-minute window. Then a post-mortem at 2AM, piecing together exactly what the agent did and why I'd given it the permissions to do it.

The agent wasn't buggy. It did what agents do — it used the tools it had access to. I was the one who handed it git credentials, left keys in the config, and didn't think through what "autonomous code commits" actually meant in practice.

That's the thing about AI agents. They don't ask for confirmation. They act.

Traditional apps have a relatively contained blast radius. You write code, it runs, it does what you told it to do. A credential leak is bad, but the scope is understood.

Agents are different. An agent with a leaked key doesn't just authenticate to one service — it can chain actions, trigger workflows, spawn sub-agents, write to storage, call APIs. A compromised key in an agentic system isn't a credential problem. It's an execution problem.

I watched a demo where a prompt injection attack caused an agent to exfiltrate its own context window to an external URL. The agent had no malicious code. The developer had no malicious intent. The attack surface was just a misread string in an email the agent was processing. That's the new normal.

Hardcoded keys + autonomous systems is a genuinely terrible combination — and I was doing it without thinking twice.

The fix isn't complicated. It's just discipline I'd been skipping.

AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, Doppler — these exist specifically so credentials never touch your codebase, your environment files, or your logs. Your application fetches what it needs at runtime, with access controlled and audited. Rotation is automated. Exposure is limited. Every access is logged.

For AI agents specifically: your agent gets a scoped role with exactly the permissions it needs for exactly the task it's doing. Not your full AWS key. Not a service account with admin rights "because it was easier." A temporary, narrowly scoped credential that expires.

The setup takes an afternoon. The alternative is a 2AM incident and a very awkward conversation with your CTO.
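To make "fetch at runtime" concrete: here's a minimal sketch using boto3 and AWS Secrets Manager. The secret name and its JSON shape are placeholders I made up, and the process is assumed to run under an IAM role allowed to read just this one secret.

```python
import json

import boto3


def get_db_credentials(secret_id: str = "prod/my-app/db") -> dict:
    """Fetch credentials at runtime; nothing in the repo, nothing in .env."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # "prod/my-app/db" is a placeholder name; the secret is assumed to be
    # a JSON string like {"username": "...", "password": "..."}.
    return json.loads(response["SecretString"])


creds = get_db_credentials()
# Use creds["username"] / creds["password"] here; don't log or persist them.
```

Every call to get_secret_value shows up in the audit trail, which is exactly the visibility a hardcoded key never gives you.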
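And here's one way to hand an agent a temporary, narrowly scoped credential instead of your full AWS key: a sketch using STS assume_role. The role ARN and bucket name are hypothetical, and the inline session policy means the resulting credentials can only read one bucket, for fifteen minutes, regardless of what else the role allows.

```python
import json

import boto3

sts = boto3.client("sts")

# Inline session policy: the temporary credentials are limited to the
# intersection of this policy and the role's own permissions.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],  # not s3:*, ever
        "Resource": "arn:aws:s3:::my-agent-inputs/*",  # placeholder bucket
    }],
}

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-reader",  # hypothetical role
    RoleSessionName="agent-run-42",  # shows up in CloudTrail audit logs
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # credentials expire after 15 minutes
)

# The agent gets these short-lived credentials, never your own keys.
creds = resp["Credentials"]
agent_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

If the agent leaks these, the blast radius is one read-only bucket for fifteen minutes, and the session name tells you exactly which run did it.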
What actually changed my workflow:

- Nothing in .env files that would hurt if read aloud — if a key can't live in a secret manager, figure out why
- Agents get their own identities — separate service accounts, separate permissions, separate audit trails
- Least privilege is not optional — an agent that reads S3 doesn't need s3:*, ever
- Rotation is automated — manual rotation is a policy that doesn't survive the first crunch sprint

AI agents are getting more capable and more autonomous. The agentic systems being built now will have broader access, longer-running sessions, and more complex tool chains than what exists today. The developers who treat security as an afterthought will keep treating it as an afterthought — until they have an incident that forces the conversation. The ones who build the habit now will have systems that can actually be trusted to run autonomously.

I'm not a security engineer. I still write features. But I no longer leave keys lying around because "nothing bad has happened yet." The agents I'm building have access to real systems. That means the security around those keys isn't a future problem. It's a now problem.

Get a secret manager. Scope your credentials. Assume your agent will eventually process adversarial input — because it will.