
Show HN: IntentusNet – Deterministic Execution and Replay for AI Agent Systems

balachandarmani Saturday, December 27, 2025

Hi HN,

I’ve been working on an open-source project called IntentusNet. It focuses on a narrow but persistent problem in AI systems:

AI executions are observable, but not reproducible.

When a production issue happens:

- the model may already be upgraded
- fallback logic may have changed
- retries may be implicit
- routing decisions are no longer recoverable

Logs tell you something happened, but they don’t let you replay the execution itself.

What IntentusNet does

IntentusNet is not a planner, prompt framework, or model wrapper.

It’s an execution runtime that enforces deterministic semantics around models:

- explicit intent routing
- deterministic fallback behavior
- ordered agent execution
- transport-agnostic agents (local, HTTP, ZeroMQ, WebSocket, MCP-style)
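To make the routing idea concrete, here is a minimal sketch of explicit intent routing with a fixed fallback order. All names (`Route`, `dispatch`, the agent names) are illustrative assumptions, not the actual IntentusNet API:

```python
# Hypothetical sketch: routing and fallback are declared data, not model-driven.
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    intent: str
    agents: tuple  # ordered: first is primary, the rest are fallbacks

ROUTES = {
    "summarize": Route("summarize", ("llm_agent", "local_agent", "rule_agent")),
}

def dispatch(intent, payload, agents):
    """Try each agent in declared order; return (agent_name, output)."""
    route = ROUTES[intent]
    for name in route.agents:
        try:
            return name, agents[name](payload)
        except Exception:
            continue  # fixed fallback order keeps failure paths reproducible
    raise RuntimeError(f"all agents failed for intent {intent!r}")

def broken(payload):
    raise RuntimeError("model unavailable")

agents = {
    "llm_agent": broken,                 # primary fails
    "local_agent": lambda p: p.upper(),  # first fallback succeeds
    "rule_agent": lambda p: p,
}
name, out = dispatch("summarize", "hello", agents)
# name == "local_agent", out == "HELLO"
```

Because the fallback order is declared up front rather than chosen at runtime, the same inputs and failures always produce the same path.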

In the latest release, I added execution recording and deterministic replay.

Each intent execution can be:

- recorded as an immutable artifact
- replayed later without re-running models
- explained even after models or agents change
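A minimal sketch of the record/replay idea, under the assumption that recording captures each model call and replay serves stored outputs instead of invoking the model (the `run`/`replay` names are mine, not IntentusNet's):

```python
# Hypothetical sketch: record live executions, replay them without a model.
import json

def run(intent, payload, model, recorder=None):
    """Live execution: call the model and optionally record the step."""
    output = model(payload)
    if recorder is not None:
        recorder.append({"intent": intent, "payload": payload, "output": output})
    return output

def replay(artifact, intent, payload):
    """Serve the recorded output; the model is never invoked."""
    for step in artifact:
        if step["intent"] == intent and step["payload"] == payload:
            return step["output"]
    raise KeyError(f"no recorded step for {intent!r}")

artifact = []
run("summarize", "hello", model=lambda p: "v1:" + p, recorder=artifact)
frozen = json.dumps(artifact, sort_keys=True)  # freeze as an immutable snapshot
# replay(artifact, "summarize", "hello") returns "v1:hello" with no model call
```

The replay path depends only on the artifact, so it still works after the original model is upgraded or removed.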

The core invariant is simple:

The model may change. The execution must not.

Why I built this

Most AI systems implicitly trust the model to drive control flow. That makes failures hard to reason about and almost impossible to reproduce.

IntentusNet takes the opposite approach:

- models are treated as unreliable but useful
- routing and fallback are explicit and deterministic
- executions are facts, not logs

This is closer to how distributed systems treat requests than how most LLM stacks work today.

Demo (what it actually proves)

There’s a small demo that shows:

- A live execution with “model v1”
- The same execution with “model v2” (different output)
- A deterministic replay of the original execution, even after the model changes

Routing and execution order stay the same. Only the model behavior changes.
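The three runs above can be sketched end to end. This is an illustrative toy, not the demo's real code: `execute` stands in for the runtime, and the artifact carries both the routing decision and the output, so replay reproduces both even after the model is swapped:

```python
# Hypothetical sketch of the demo: v1 live, v2 live, then replay of v1.
def execute(payload, model):
    route = ("primary_agent",)  # routing is explicit, not chosen by the model
    return {"route": route, "output": model(payload)}

record = execute("doc", lambda p: "v1:" + p)   # run 1: live, model v1 (recorded)
live_v2 = execute("doc", lambda p: "v2:" + p)  # run 2: live, model v2 (output differs)

def replay(artifact):
    return artifact  # run 3: replay never calls a model

replayed = replay(record)
# replayed["route"] == live_v2["route"], but the outputs differ:
# replayed["output"] == "v1:doc", live_v2["output"] == "v2:doc"
```

The routing tuple is identical across all three runs; only the model's output varies, which is the invariant the demo is meant to show.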

No debugger UI, no dashboards — just execution semantics.

What this is not

- Not a replacement for MCP
- Not a prompt-engineering framework
- Not a monitoring system
- Not trying to be “smart”

It’s infrastructure for making AI systems operable.

Repo

GitHub: https://github.com/Balchandar/intentusnet

I’m especially interested in feedback from people who’ve had to debug LLM-related production incidents or explain AI behavior after the fact. Happy to answer questions or criticism.
