Show HN: Rigour – Open-source quality gates for AI coding agents
Hey HN,
I built Rigour, an open-source CLI that catches quality issues AI coding agents introduce. It runs as a quality gate in your workflow — after the agent writes code, before it ships.
v4 adds --deep analysis: an AST pass extracts deterministic facts (line counts, nesting depth, method signatures), an LLM interprets what those patterns mean (god classes, SRP violations, DRY issues), and a final AST pass verifies the LLM didn't hallucinate.
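To make that concrete, here's a minimal TypeScript sketch of the facts -> interpret -> verify loop. Every name and type below is an illustrative assumption, not Rigour's actual internals:

    // Stage 1 output: deterministic facts from the parser.
    interface AstFacts {
      file: string;
      lineCount: number;
      maxNestingDepth: number;
      functions: string[]; // function names the AST pass actually found
    }

    // Stage 2 output: the LLM's interpretation of those facts.
    interface Finding {
      kind: string;       // e.g. "god-function", "srp-violation"
      claim: string;
      evidence: string[]; // identifiers the claim cites
      confidence: number;
    }

    // Stubbed here; a real pass would walk the AST.
    function extractFacts(file: string): AstFacts {
      return { file, lineCount: 1147, maxNestingDepth: 6, functions: ["Run", "handleTool"] };
    }

    // Stubbed here; a real call would send the facts to the model
    // with a schema-constrained prompt.
    async function interpret(facts: AstFacts): Promise<Finding[]> {
      return [{
        kind: "god-function",
        claim: `${facts.file} mixes several responsibilities`,
        evidence: ["Run"],
        confidence: 0.89,
      }];
    }

    // Stage 3: a finding survives only if every identifier it cites
    // exists in the AST facts, so a hallucinated name kills it.
    async function deepAnalyze(file: string): Promise<Finding[]> {
      const facts = extractFacts(file);
      const findings = await interpret(facts);
      return findings.filter(f =>
        f.evidence.every(id => facts.functions.includes(id))
      );
    }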
I ran it on PicoClaw (an open-source AI coding agent, ~50 Go files):
- 202 total findings
- 88 from deep analysis (SOLID violations, god functions, design smells)
- 88/88 AST-verified (zero hallucinations)
- Average confidence: 0.89
- 120 seconds for a full-codebase scan
Sample finding: pkg/agent/loop.go — 1,147 lines, 23 functions. Deep analysis identified 5 distinct responsibilities (agent init, execution, tool processing, message handling, state management) and suggested a concrete decomposition into separate files.
Every finding includes actionable refactoring suggestions, not just "fix this."
The tool is local-first: your code never leaves your machine unless you explicitly opt in with your own API key via the --deep and -k flags.
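For example (only --deep and -k come from this post; the exact command shape is an assumption, so check the README for real usage):

    # purely local scan, nothing sent anywhere (assumed default invocation)
    npx rigour

    # opt in to deep analysis with your own key
    npx rigour --deep -k YOUR_API_KEY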
Tech: Node.js CLI, AST parsing per language, structured LLM prompts with JSON schema enforcement, AST cross-verification of every LLM claim.
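On the JSON-schema-enforcement point: the model's raw output gets validated against a schema before anything downstream trusts it. A sketch using zod as a stand-in validator (whether Rigour actually uses zod is my assumption):

    import { z } from "zod";

    // Shape every finding must match before it reaches AST verification.
    const FindingSchema = z.object({
      kind: z.string(),
      claim: z.string(),
      evidence: z.array(z.string()).min(1), // a claim must cite evidence
      confidence: z.number().min(0).max(1),
    });

    const ResponseSchema = z.array(FindingSchema);

    // Reject malformed LLM output outright instead of guessing at it.
    function parseLlmResponse(raw: string) {
      const result = ResponseSchema.safeParse(JSON.parse(raw));
      if (!result.success) {
        throw new Error(`Malformed LLM output: ${result.error.message}`);
      }
      return result.data;
    }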
GitHub: https://github.com/rigour-labs/rigour
Would love feedback, especially from anyone dealing with AI-generated code quality in production.