Show HN: Making Codex stop rediscovering the same repository over and over

oldskultxo Monday, March 09, 2026

I've been using Codex quite a lot for programming tasks lately, and I kept running into the same issue.

Even when working in the same repository, every task basically starts from scratch. The model has to rediscover things like the project structure, where certain pieces of logic live, what decisions were already made, etc.

In larger repos this quickly turns into a lot of repeated exploration and unnecessary context loading.

So I started experimenting with a small layer around Codex that tries to treat context as something persistent instead of rebuilding it every time.

The idea is pretty simple: put a context engine between the task and Codex. Before sending the prompt, it decides what parts of the repository are likely relevant, trims the context down, and after execution it stores a few signals about what happened.
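That loop (select relevant context before the call, record signals after it) can be sketched in a few lines. This is a minimal illustration of the idea, not the actual implementation; the class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEngine:
    """Hypothetical sketch: persistent relevance scores shared across tasks."""
    relevance: dict = field(default_factory=dict)  # file path -> usefulness score

    def select(self, task: str, repo_files: list[str], limit: int = 3) -> list[str]:
        # Before the prompt goes out: rank files by how often they mattered
        # for past tasks, and keep only the top few to trim the context.
        ranked = sorted(repo_files, key=lambda f: self.relevance.get(f, 0), reverse=True)
        return ranked[:limit]

    def record(self, task: str, touched_files: list[str]) -> None:
        # After execution: reinforce the files that were actually used,
        # so later tasks start from accumulated knowledge instead of zero.
        for f in touched_files:
            self.relevance[f] = self.relevance.get(f, 0) + 1
```

A real version would score relevance with more than raw counts (recency, task similarity, and so on), but the shape of the loop is the same: select, run, record.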

Right now the system does a few things:

– A planner that decides which parts of the repo are relevant for a task
– A context optimizer that deduplicates and trims context before sending it to the model
– A small failure memory so the system doesn't keep repeating the same dead ends
– Some task-specific memory for recurring task types
– A graph that links tasks, files, and decisions together over time
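The failure-memory piece is easy to picture concretely. A minimal sketch, assuming a simple count-based policy (the names and threshold here are hypothetical, not taken from the repo):

```python
class FailureMemory:
    """Hypothetical sketch: remember approaches that failed for a task type,
    so the system can steer the model away from known dead ends."""

    def __init__(self) -> None:
        self._failures: dict[tuple[str, str], int] = {}  # (task_type, approach) -> count

    def record_failure(self, task_type: str, approach: str) -> None:
        # Called after a task attempt goes nowhere.
        key = (task_type, approach)
        self._failures[key] = self._failures.get(key, 0) + 1

    def should_avoid(self, task_type: str, approach: str, threshold: int = 2) -> bool:
        # Only avoid an approach after it has failed repeatedly,
        # so one-off flukes don't permanently block a path.
        return self._failures.get((task_type, approach), 0) >= threshold
```

Before planning a new task, the engine can check `should_avoid` and drop (or annotate) approaches that have already hit the wall.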

The goal isn't to build an agent or anything like that. It's more about letting the system gradually accumulate some understanding of the project instead of rediscovering it over and over.

I originally built this while working on a fairly large interactive narrative project I've been developing over the last ~40 days. Codex kept navigating the same repository structure again and again, which made the problem pretty obvious.

After a few sessions I ran a small internal "savings report" just to get a rough sense of the effect. The numbers aren't scientific, but they were roughly:

Estimated context reduction: ~30–45%
Estimated token reduction: ~25–40%
Estimated latency improvement: ~15–30%

The biggest difference wasn't even the token savings — it was that Codex stopped wandering around the repo as much.

Still very experimental, but it's already been useful for repeated tasks in a medium-sized codebase.

Repo: https://github.com/oldskultxo/codex_context_engine

Curious if other people building coding workflows around LLMs have run into the same issue or tried something similar.
