Show HN: Sentinel – Zero-trust governance for AI Agents
azdhril | Sunday, January 25, 2026

Hi HN,
I’m a software engineer and I’ve been building agentic workflows lately. Like many of you, I got concerned about giving LLMs "write access" to tools. Whether it's a payment API, a database deletion, or a simple email send, the risk of a hallucination causing a $5k mistake is real.
I wanted a way to keep a human in the loop without rewriting my entire agent logic every time. So I built Sentinel.
It’s a lightweight Python layer that wraps any tool/function. The key principles I focused on (rough sketches of each follow the list):
Fail-secure by default: If the rules engine or the network fails, the action is blocked. Most systems fail open, which is a nightmare for security.
Zero-trust Decorator: You just add @protect to your functions. It’s framework-agnostic (works with LangChain, CrewAI, or raw OpenAI calls).
Semantic Anomaly Detection: Beyond static JSON rules (like "max $100"), it uses a Z-score analysis of historical audit logs to flag unusual behavior—like an agent suddenly trying to call a function 100 times in a minute.
Context-Aware Approvals: The approver sees exactly what the AI saw (the state before the action) to make an informed decision.
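
To make the first two points concrete, here’s a rough sketch of the fail-secure wrapper idea. The @protect name is real; the internals shown here (Decision, check_policy, the shape of the policy dict) are illustrative stand-ins, not Sentinel’s actual API:

    import functools
    from dataclasses import dataclass

    @dataclass
    class Decision:
        allowed: bool
        reason: str = ""

    class ActionBlocked(Exception):
        """Raised when an action is denied, or when the governance layer itself fails."""

    def check_policy(tool_name, args, kwargs, policy) -> Decision:
        # Stand-in for the real rules engine / approval call.
        # (For brevity this only inspects keyword args.)
        amount = kwargs.get("amount", 0)
        limit = (policy or {}).get("max_amount", float("inf"))
        if amount > limit:
            return Decision(False, f"{tool_name}: amount {amount} exceeds limit {limit}")
        return Decision(True)

    def protect(policy=None):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                try:
                    decision = check_policy(fn.__name__, args, kwargs, policy)
                except Exception as exc:
                    # Fail-secure: if the rules engine (or the network hop to it)
                    # errors out, block the action rather than letting it through.
                    raise ActionBlocked(f"governance layer unavailable: {exc}") from exc
                if not decision.allowed:
                    raise ActionBlocked(decision.reason)
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    @protect(policy={"max_amount": 100})
    def send_payment(recipient: str, amount: float) -> str:
        return f"sent ${amount} to {recipient}"

The point is that the wrapped function only runs if the policy check completes and explicitly says yes; any exception along the way resolves to "blocked".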
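
The anomaly-detection piece is easier to reason about with numbers in front of you. Below is the general shape of a Z-score check over per-minute call counts; the exact statistics Sentinel computes and the thresholds it uses may differ:

    import statistics

    def is_anomalous(history: list[int], current_count: int, threshold: float = 3.0) -> bool:
        """Flag the current per-minute call count if it sits more than
        `threshold` standard deviations above the historical mean."""
        if len(history) < 2:
            return False  # not enough history to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            return current_count != mean
        return (current_count - mean) / stdev > threshold

    # An agent that normally calls a tool a few times a minute suddenly calling it 100 times:
    print(is_anomalous([3, 5, 4, 6, 2, 5], 100))  # True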
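
And for context-aware approvals, the idea is that the approval request is a snapshot of what the agent saw, not just a function name. A hypothetical shape (the real payload and transport will differ):

    import json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class ApprovalRequest:
        tool: str
        arguments: dict
        pre_action_state: dict  # what the agent saw before it decided to act
        requested_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def request_human_approval(req: ApprovalRequest) -> bool:
        # In practice this goes to Slack/email/a dashboard; here we just show
        # the payload the approver would see and deny by default (fail-secure).
        print(json.dumps(asdict(req), indent=2))
        return False

    request_human_approval(ApprovalRequest(
        tool="delete_customer",
        arguments={"customer_id": "cus_123"},
        pre_action_state={"open_tickets": 2, "lifetime_value_usd": 4800},
    ))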
It’s open source (MIT) and just launched on PyPI: pip install agentic-sentinel
I’d love to hear your thoughts on the "Fail-secure" approach and how you guys are currently handling AI tool governance in production. I'm here to answer any questions!