Show HN: Deff – side-by-side Git diff review in your terminal
deff is an interactive Rust TUI for reviewing git diffs side-by-side with syntax highlighting and added/deleted line tinting. It supports keyboard/mouse navigation, vim-style motions, in-diff search (/, n, N), per-file reviewed toggles, and both upstream-based and explicit --base/--head comparisons. It can also include uncommitted + untracked files (--include-uncommitted) so you can review your working tree before committing.
Would love to get some feedback!
Show HN: Hacker Smacker – spot great (and terrible) HN commenters at a glance
Hacker Smacker adds friend/foe functionality to Hacker News. Three little orbs appear next to every commenter's name. Click to friend or foe a commenter and you'll more easily spot them on future threads. Makes it easy to scroll and spot the commenters you love to read (and hate to read).
Main website: https://hackersmacker.org
Chrome/Edge extension: https://chromewebstore.google.com/detail/hacker-smacker/lmcg...
Safari extension: https://apps.apple.com/us/app/hacker-smacker/id1480749725
Firefox extension: https://addons.mozilla.org/en-US/firefox/addon/hacker-smacke...
The interesting part is friend-of-a-friend: if you friend someone who also uses Hacker Smacker, you'll see their friends and foes highlighted too. This lets you quickly scan long comment threads and find the good stuff based on people you trust.
I built this to learn how FoaF relationships work with Redis sets, then brought the same technique to NewsBlur's social layer. The backend is CoffeeScript/Node.js/Redis, and the extension works on Chrome, Edge, Firefox, and Safari.
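For the curious, the friend-of-a-friend lookup maps naturally onto set operations. A minimal Python sketch, with plain sets standing in for Redis sets and hypothetical usernames (in Redis terms this would be SADD into per-user keys, then SUNION over your friends' keys):

```python
# Plain sets stand in for Redis sets; usernames are hypothetical.
friends = {
    "samuel": {"mihai", "greg"},
    "mihai": {"alice"},
    "greg": {"bob", "alice"},
}

def foaf(user: str) -> set[str]:
    """Friends-of-friends: union my friends' friend sets, minus me and
    the people I already friended."""
    direct = friends.get(user, set())
    second = set().union(*(friends.get(f, set()) for f in direct)) if direct else set()
    return second - direct - {user}
```

With per-user keys in Redis, the same query stays O(size of the sets) server-side, which is what makes highlighting whole comment threads cheap.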
Technically I wrote this back in 2011, but never built a proper auth system until now. So I've been using it for 15 years and it's been great. PG once saw it on my laptop (back when he was still moderating HN, in 2012) and remarked that it was neat.
Thanks to Mihai Parparita for help with the Chrome extension sandboxing and Greg Brockman for helping design the authentication system.
Source is on GitHub: https://github.com/samuelclay/hackersmacker
Directly inspired by Slashdot's friend/foe system, which I always wished HN had. Happy to answer questions!
Show HN: Terminal Phone – E2EE Walkie Talkie from the Command Line
TerminalPhone is a single, self-contained Bash script that provides anonymous, end-to-end encrypted voice and text communication between two parties over the Tor network. It operates as a walkie-talkie: you record a voice message, and it is compressed, encrypted, and transmitted to the remote party as a single unit. You can also send encrypted text messages during a call. No server infrastructure, no accounts, no phone numbers. Your Tor hidden service .onion address is your identity.
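The walkie-talkie framing, compressing a recording and sealing it as one unit, can be sketched in Python. This is a toy illustration only: HMAC-SHA256 stands in for the script's real encryption, the key handling is hypothetical, and the actual tool does proper E2EE over Tor:

```python
import hashlib
import hmac
import struct
import zlib

KEY = b"shared-secret-key"   # hypothetical; the real tool manages keys itself

def seal(voice_bytes: bytes) -> bytes:
    """Compress a recording and frame it as one length-prefixed unit.
    HMAC-SHA256 is a stand-in for real authenticated encryption."""
    payload = zlib.compress(voice_bytes)
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return struct.pack(">I", len(payload)) + payload + tag

def open_unit(frame: bytes) -> bytes:
    (n,) = struct.unpack(">I", frame[:4])
    payload, tag = frame[4:4 + n], frame[4 + n:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered unit")
    return zlib.decompress(payload)

recording = b"press-to-talk audio bytes" * 100
assert open_unit(seal(recording)) == recording
```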
Show HN: Linex – A daily challenge: placing pieces on a board that fights back
Hi HN,
I wanted to share a web game I’ve been building in HTML, JavaScript, MySQL, and PHP called LINEX.
It is primarily designed and optimized to be played in the mobile browser.
The idea is simple: you have an 8x8 board where you must place pieces (Tetris-style and some custom shapes) to clear horizontal and vertical lines.
Yes, some might think this has already been done, but let me explain.
You choose where to place the piece and how to rotate it. The core interaction consists of "drawing" the piece tap-by-tap on the grid, which provides a very satisfying tactile sense of control and requires a much more thoughtful strategy.
To avoid the flat difficulty curve typical of games in this genre, I’ve implemented a couple of twists:
1. Progressive difficulty (The board fights back): As you progress and clear lines, permanently blocked cells randomly appear on the board. This forces you to constantly adapt your spatial vision.
2. Tools to defend yourself: To counter frustration, you have a very limited number of aids (skip the piece, choose another one, or use a special 1x1 piece). These resources increase slightly as the board fills up with blocked cells, forcing you to decide the exact right moment to use them.
The game features a daily challenge driven by a date-based random seed (PRNG). Everyone gets exactly the same sequence of pieces and blockers. Furthermore, the base difficulty scales throughout the week: on Mondays you start with a clean board (0 initial blocked cells, although several will appear as the game progresses), and the difficulty ramps up until Sunday, where you start the game with 3 obstacles already in place.
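The date-seeded determinism can be sketched in a few lines of Python. The piece names and the exact weekday ramp (beyond Monday = 0 and Sunday = 3 blockers, which the post states) are hypothetical:

```python
import hashlib
import random
from datetime import date

PIECES = ["I", "L", "T", "S", "Z", "square", "custom-1x1"]  # hypothetical names

def daily_rng(d: date) -> random.Random:
    """Seed a PRNG from the date so every player gets the same sequence."""
    seed = int.from_bytes(hashlib.sha256(d.isoformat().encode()).digest()[:8], "big")
    return random.Random(seed)

def daily_pieces(d: date, n: int = 5) -> list[str]:
    rng = daily_rng(d)
    return [rng.choice(PIECES) for _ in range(n)]

def initial_blockers(d: date) -> int:
    # Monday (weekday 0) starts clean, ramping to 3 obstacles on Sunday.
    return [0, 0, 1, 1, 2, 2, 3][d.weekday()]

# Same date, same sequence for everyone:
assert daily_pieces(date(2025, 1, 6)) == daily_pieces(date(2025, 1, 6))
```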
In addition to the global medal leaderboard, you can add other users to your profile to create a private leaderboard and compete head-to-head just with your friends.
Time is also an important factor, as in the event of a tie in cleared lines, the player who completed them faster will rank higher on the leaderboard.
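That tie-break is just a two-key sort, lines cleared descending and then time ascending. A sketch with hypothetical scores:

```python
# Hypothetical leaderboard rows: (player, lines_cleared, seconds_taken)
scores = [("ana", 12, 340.0), ("bo", 12, 295.5), ("cy", 15, 500.0)]

# Rank by lines cleared (descending); ties broken by faster completion time.
ranked = sorted(scores, key=lambda s: (-s[1], s[2]))
```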
I would love for you to check it out. I'm especially looking for honest feedback on the difficulty curve, the piece-placement interaction (UI/UX), or the balancing of obstacles/tools, although any other ideas, critiques, or suggestions are welcome.
https://www.playlinex.com/
Thanks!
Show HN: Rev-dep – 20x faster knip.dev alternative built in Go
rev-dep is a reverse-dependency tracking tool, built in Go, for identifying the impact of changes in a software project: given a file, it resolves which entry points depend on it, helping you make informed decisions during maintenance and refactoring. It's positioned as a roughly 20x faster alternative to knip.dev.
Show HN: Beehive – Multi-Workspace Agent Orchestrator
hey hn,
i built beehive for myself mostly. it has gotten to the point where my work consists of supervising oc or cc labor at tasks for multiple issues in parallel. my setup used to be zellij with a couple tabs, each tab working in a separate dir, and it was a pain to manage all that. i know i could use git worktrees, but they're kind of complicated - if you don't know how to use them it is easy to mess up - and i just prefer letting agents run in separate dirs with their own .git and not risk it. while i like zellij and use it inside beehive, i don't like the tabs and i forget where i am half the time.
beehive is a way for me to abstract that away. the heuristic is simple - hives are repos, so you basically have a bunch of hives which correspond to repos you work out of. each hive can have many combs. a comb is a dir with a copy of the repo you're working on: fully isolated, standalone, no shared .git. so for work or for personal stuff, i usually set up the hive, and then have a bunch of combs that i jump between, supervising the agents doing their thing. if you have a big repo it takes a minute to clone, and you also need gh and git installed, because i like niceties like checking if the repo is there at all.
the app is open source, mit license. i went with tauri because i hate electron. also i have friends and coworkers who updated to macos 26 and i dont know if the whole mem leak thing for electron apps has been fixed. the app is like 9 megs which is nice too. most of it is written with cc, but i guided the aesthetics and the approach. works on mac and there is a dmg signed and notarized (i reactivated my apple dev credentials).
sharing this to get a vibe check on the idea, also maybe this is useful for you. there are many arguments, reasonable ones, you can make for worktrees vs dirs. i just know that trees are too big brain for me, and i like simple things. if you like it, pls lmk and also if you want to help (like add linux support, or like add themes, other cool things) please make a pr / open an issue.
Show HN: Mission Control – Open-source task management for AI agents
I've been delegating work to Claude Code for the past few months, and it's been genuinely transformative, but managing multiple agents doing different things became chaos. No tool existed for this workflow, so I built one.

The Problem
When you're working with AI agents (Claude Code, Cursor, Windsurf), you end up in a weird situation:
- You have tasks scattered across your head, Slack, email, and the CLI
- Agents need clear work items, context, and role-specific instructions
- You have no visibility into what agents are actually doing
- Failed tasks just... disappear. No retry, no notification
- Each agent context-switches constantly because you're hand-feeding them work
I was manually shepherding agents, copying task descriptions, restarting failed sessions, and losing track of what needed doing next. It felt like hiring expensive contractors but managing them like a disorganized chaos experiment.
The Solution
Mission Control is a task management app purpose-built for delegating work to AI agents. It's got the expected stuff (Eisenhower matrix, kanban board, goal hierarchy) but built from the assumption that your collaborators are Claude, not humans.
The killer feature is the autonomous daemon. It runs in the background, polls your task queue, spawns Claude Code sessions automatically, handles retries, manages concurrency, and respects your cron-scheduled work. One click: your entire work queue activates.
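A heavily simplified sketch of such a daemon loop in Python, with a fake run_task standing in for spawning a Claude Code session (task fields and the retry limit are hypothetical):

```python
from collections import deque

MAX_RETRIES = 2

def run_task(task: dict) -> bool:
    # Stand-in for spawning a Claude Code session; succeeds once the
    # task has been attempted more times than it is scripted to fail.
    task["attempts"] += 1
    return task["attempts"] > task["fail_times"]

def daemon(queue: deque, max_concurrent: int = 2):
    """Poll the queue, run tasks, and retry failures instead of losing them."""
    done, failed = [], []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_concurrent, len(queue)))]
        for task in batch:
            if run_task(task):
                done.append(task["id"])
            elif task["attempts"] <= MAX_RETRIES:
                queue.append(task)          # transient failure: retry later
            else:
                failed.append(task["id"])   # surfaced, not silently dropped
    return done, failed

q = deque([
    {"id": "t1", "attempts": 0, "fail_times": 0},   # succeeds immediately
    {"id": "t2", "attempts": 0, "fail_times": 1},   # succeeds on retry
    {"id": "t3", "attempts": 0, "fail_times": 9},   # exhausts retries
])
done, failed = daemon(q)
```

The point of the structure is that a failed task either re-enters the queue or lands in an explicit failed list; nothing vanishes silently.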
The Architecture
- Local-first: Everything lives in JSON files. No database, no cloud dependency, no vendor lock-in.
- Token-optimized API: The task/decision payloads are ~50 tokens vs ~5,400 unfiltered. Matters when you're spawning agents repeatedly.
- Rock-solid concurrency: Zod validation + async-mutex locking prevents corruption under concurrent writes.
- 193 automated tests: This thing has to be reliable. It's doing unattended work.
The app is Next.js 15 with 5 built-in agent roles (researcher, developer, marketer, business-analyst, plus you). You define reusable skills as markdown that get injected into agent prompts. Agents report back through an inbox + decisions queue.
Why Release This?
A few people have asked for access, and I think it's genuinely useful for anyone delegating to AI. It's MIT licensed, open source, and actively maintained.
What's Next
- Human collaboration (sharing tasks with real team members)
- Integrations with GitHub issues and email inboxes
- Better observability dashboard for daemon execution
- Custom agent templates (currently hardcoded roles)
If you're doing something similar—delegating serious work to AI—check it out and let me know what's broken.
GitHub: https://github.com/MeisnerDan/mission-control
Show HN: Transcribe-Critic – Merge transcript sources for stronger transcript
Transcribe-Critic merges multiple transcript sources for the same audio (for example, outputs from different transcription engines) into a single, stronger transcript, reconciling disagreements and flagging likely errors so they are easier to spot and correct.
Show HN: Respectify – A comment moderator that teaches people to argue better
My partner, Nick Hodges, and I, David Millington, have been on the Internet for a very long time -- since the Usenet days. We’ve seen it all, and have long been frustrated by bad comments, horrible people, and discouraging discussions. We've also been around places where the discussion is wonderful and productive. How to get more of the latter and less of the former?
Current moderation tools just seem to focus on deletion and banning. Wouldn’t it be helpful to encourage productive discussion and teach people how to discuss and argue (in the debate sense) better?
A year ago we started building Respectify to help foster healthy communication. Instead of just deleting bad-faith comments, we suggest better, good-faith ways to say what folks are trying to say. We help people avoid:
* Logical fallacies (false dichotomy, strawmen, etc.)
* Tone issues (how others will read the comment)
* Irrelevance to the actual page/post topic
* Low-effort posts
* Dog whistles and coded language
The commenter gets an explanation of what's wrong and a chance to edit and resubmit. It's moderation + education in one step. We also want to automate the entire process so the site owner can focus on content and not worry about moderation at all, and, over time, comment by comment, quietly coach better thinking.
Our main website has an interactive demo: https://respectify.ai. As the demo shows, the system is completely tunable and adjustable, from "most anything goes" to "You need to be college debate level to get by me".
We hope the result is better discussions and a better Internet. Not too much to ask, eh?
We love the kind of feedback this group is famous for and hope you will supply some!
Show HN: I stopped building apps for people. Now I make CLI tools for agents
This links to a custom Homebrew tap created by Aayush9029: a collection of small CLI tools aimed at AI agents rather than human users, installable with the Homebrew package manager on macOS.
Show HN: Decoy – A native Mac app for mocking HTTP endpoints locally
Decoy is a native Mac app for mocking HTTP endpoints locally: define fake endpoints and their responses so you can build and test clients without standing up a real backend.
Show HN: Smplogs – Local-first AWS Cloudwatch log analyzer via WASM
smplogs analyzes your AWS CloudWatch log exports (Lambda, API Gateway, ECS) and turns them into severity-ranked findings, root cause analysis, and log signature clusters. The entire analysis engine is written in Go, compiled to WebAssembly, and runs client-side. Your log content never leaves your browser.
Why I built this: I got tired of the CloudWatch debugging loop - staring at raw log streams, writing ad hoc Insights queries, mentally correlating timestamps across invocations, and still not understanding why my Lambda was failing. I wanted something where I could drop a file and immediately see "94% of your failures occur within 200ms of a DynamoDB ProvisionedThroughputExceededException - switch the Payments table to on-demand capacity." Actual root causes, not just "error rate is high."
Technical approach: The core engine is a Go binary compiled to WASM (~analysis.wasm). At build time, Vite computes its SHA-256 hash and bakes it into the JS bundle. At runtime, the browser fetches the WASM, verifies the hash with crypto.subtle.digest before instantiation, and then all parsing and analysis happens in WebAssembly linear memory. The server only sees metadata (file size for rate limiting, a session key). No log content is ever transmitted.
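The hash-pinning step can be illustrated in Python. The real project does this with Vite at build time and crypto.subtle.digest in the browser; the artifact bytes here are hypothetical:

```python
import hashlib

# "Build time": hash the artifact and bake the digest into the bundle.
wasm_bytes = b"\x00asm...analysis module bytes..."   # hypothetical artifact
PINNED_SHA256 = hashlib.sha256(wasm_bytes).hexdigest()

# "Runtime": verify the fetched bytes against the pinned digest
# before instantiating anything.
def verify_artifact(fetched: bytes, pinned: str) -> bool:
    return hashlib.sha256(fetched).hexdigest() == pinned

assert verify_artifact(wasm_bytes, PINNED_SHA256)
assert not verify_artifact(wasm_bytes + b"tamper", PINNED_SHA256)
```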
Inside the WASM, there are four analysis modules: a SemanticLogClusterer (groups log lines by pattern, masks variables - so you see "ProvisionedThroughputExceededException: Rate exceeded for table *" appearing 48 times across 12 requests), a ResourceCorrelationEngine (links error spikes to upstream causes like throttling or cold starts), a ColdStartRegressionAnalyzer, and an AnomalyDetector (catches things like slowly increasing memory usage suggesting a leak).
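The clustering idea, masking the variable parts of log lines so lines with the same shape group together, can be sketched with a couple of regexes. The masking rules here are toy stand-ins for the real SemanticLogClusterer:

```python
import re
from collections import Counter

def signature(line: str) -> str:
    """Mask variable parts so lines with the same shape cluster together."""
    line = re.sub(r"table \w+", "table *", line)   # toy rule: table names
    line = re.sub(r"\d+(\.\d+)?", "*", line)       # toy rule: numbers
    return line

logs = [
    "ProvisionedThroughputExceededException: Rate exceeded for table orders",
    "ProvisionedThroughputExceededException: Rate exceeded for table payments",
    "Duration: 1043.22 ms",
    "Duration: 98.01 ms",
]
clusters = Counter(signature(line) for line in logs)
# Two signatures, each seen twice.
```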
The frontend is vanilla ES modules - no React, no framework. Vite bundles it. Tailwind for styling. The backend is just Vercel serverless functions handling auth, rate limiting, and encrypted storage for Pro users who want to save analyses.
There's also a browser extension (Chrome, Firefox, Edge) that injects an "Analyze" button directly into the CloudWatch console, so you can skip the export step entirely.
What's hard: Tuning the correlation engine thresholds. "94% of failures within 200ms of throttling" is a real finding from testing, but getting the confidence intervals right across wildly different log shapes (a 50-invocation Lambda vs. a 10,000-request API Gateway) is an ongoing challenge. I'm also debating whether to open-source the WASM engine.
What I'd love feedback on:
- Is the zero-knowledge / client-side-only angle compelling enough to overcome the "just another log tool" reaction?
- The free tier is 3 analyses/day. Too low? Too high?
- Would you want a CLI version that pipes CloudWatch logs directly?
You can try a no-signup demo on the landing page - just scroll a bit to see the analysis output on sample logs.
https://www.smplogs.com
Free tier available, no credit card required.
Show HN: Modern Reimplementation of the Speck Molecule Renderer
This is a modern reimplementation of Speck, the open-source browser-based molecule renderer for visualizing atomic structures.
Show HN: I built a local AI-powered Ouija board with a fine-tuned 3B model
Planchette is an open-source, local AI-powered Ouija board: a fine-tuned 3B-parameter model runs entirely on-device and drives the planchette's answers to your questions.
Show HN: Protection Against Zero-Day Cyber Attacks
Most security approaches I see in production environments focus on:
- Scanning for CVEs
- Hardening configurations
- Aggregating logs
All useful — but they don’t actually stop exploitation once it starts.
In reality:
- Not every CVE gets patched immediately
- Legacy systems stick around
- Zero-days happen
When exploitation succeeds, the real damage usually comes from runtime behavior:
- A process spawning a shell
- Unexpected outbound connections
- Secret access
- Container escape attempts
I’ve been experimenting with a lightweight runtime enforcement layer for Linux that focuses purely on detecting and stopping high-risk behavior in real time — regardless of whether the underlying CVE is known or patched.
Would love input from folks running Linux/Kubernetes at scale:
Is runtime prevention something you rely on?
Where do existing tools fall short?
What would make this genuinely useful vs just more noise?
Live demo: https://sentrilite.com/Sentrilite_Active_Response_Demo.mp4
GitHub: https://github.com/sentrilite/sentrilite-agent
Show HN: Browser-based .NET IDE with visual designer, NuGet packages, code share
Hi HN, I'm Giovanni, founder of Userware. We built XAML.io, a free browser-based IDE for C# and XAML that compiles and runs .NET projects entirely client-side via WebAssembly. No server-side build step.
The link above opens a sample project using Newtonsoft.Json. Click Run to compile and execute it in your browser. You can edit the code, add NuGet packages, and share your project via a URL.
What's new in v0.6:
- NuGet package support (any library compatible with Blazor WebAssembly)
- Code sharing via URL with GitHub-like forking and attribution
- XAML autocompletion, AI error fixing, split editor views
The visual designer is the differentiator: 100+ drag-and-drop controls for building UIs. But the NuGet and sharing features work even if you ignore the designer entirely and just write C# code.
XAML.io is currently in tech preview. It's built on OpenSilver (https://opensilver.net), a from-scratch reimplementation of the WPF API (subset) using modern .NET, WebAssembly, and the browser DOM. It's open-source and has been in development for over 12 years (started as CSHTML5 in 2013, rebranded to OpenSilver in 2020).
Limitations: one project per solution, no C# IntelliSense yet (coming soon), no debugger yet, WPF compatibility improvements underway, desktop browsers recommended.
Full details and screenshots: https://blog.xaml.io/post/xaml-io-v0-6
Happy to answer questions about the architecture, WebAssembly compilation pipeline, or anything else.
Show HN: Batchling – save 50% off any GenAI requests in two lines of code
batchling is a Python gateway to provider-native GenAI Batch APIs, so your existing calls can run at batch-priced rates instead of standard realtime pricing.
As an AI developer myself, I discovered Batch APIs while tinkering with AI benchmarking: I wanted to save 50% because I was OK with a 24h SLA.
What I discovered was a hard engineering reality:
- No standards: each batch API has a different flow and batch lifecycles are never the same.
- Framework shift: as a developer, switching from sync/async execution to deferred execution (submit, poll, download) feels off and requires building custom code and storing files.
That's when I noticed that no open-source project gave a solution to that problem, so I built it myself.
Batch APIs are nothing new, but they lack awareness and adoption. The problem has never been the Batch API itself but its integration and developer experience.
batchling bridges that gap, giving everyone a developer-first experience of Batch APIs and unlocking scale and cost savings for compatible requests.
batchling was designed to be as seamless as possible: just wrap existing async code in an async context manager (the only library entrypoint) to automatically batch requests.
Users can even push that further and use the CLI to wrap a whole function, without adding a single line of code.
Under the hood, batchling:
- intercepts requests in the scope of the context manager
- repurposes them to batch format
- manages the whole batch lifecycle (submit, poll, download)
- hands back requests when they are processed such that the script can continue its execution seamlessly.
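The interception idea can be sketched as a toy async context manager in Python. Here the "submit, poll, download" lifecycle is collapsed into one fake in-process call; real batchling intercepts provider SDK requests and speaks each provider's Batch API:

```python
import asyncio

class BatchGateway:
    """Toy gateway: collect requests made inside the context, resolve them
    together on exit. Provider calls are faked with a local echo."""
    def __init__(self):
        self._pending = []          # list of (prompt, Future)

    async def complete(self, prompt: str) -> str:
        fut = asyncio.get_running_loop().create_future()
        self._pending.append((prompt, fut))
        return await fut            # suspend until the batch resolves

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await asyncio.sleep(0)      # let caller tasks register their requests
        # "submit, poll, download" collapsed into one fake in-process call:
        for prompt, fut in self._pending:
            fut.set_result(f"echo:{prompt}")

async def main():
    async with BatchGateway() as gw:
        tasks = [asyncio.create_task(gw.complete(p)) for p in ("a", "b")]
    return await asyncio.gather(*tasks)

out = asyncio.run(main())
```

Each caller keeps its ordinary await-style code; only the context manager knows the requests were deferred and resolved as a group.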
batchling v0.1.0a1 comes batteries-included with:
- Support for the major batch providers (Anthropic, Doubleword, Gemini, Groq, Mistral, OpenAI, Together, xAI)
- Extensive AI framework integrations (Instructor, LangChain, LiteLLM, Pydantic AI, Pydantic Evals, ...)
- Request caching: avoid recomputing requests whose responses you already own in an existing batch
- Python SDK (2 lines of code to change) and Typer CLI (no code change required)
- Rich documentation stuffed with examples; get started and run your first batch in minutes
I believe this is a game changer in terms of adoption and accessibility for any AI org, research lab, or individual that burns tokens through an API.
I'd love to get feedback from AI developers and new ideas from the technical community. The library is open to contributions, whether they be issues, docs fixes, or PRs.
Repo: https://github.com/vienneraphael/batchling
Docs: https://batchling.pages.dev
Show HN: The best agent orchestrator is a 500-line Markdown file
I’ve tried agent teams, subagents, multi-terminal setups, and several open-source orchestration frameworks. This Claude Code skill (~500 lines of Markdown, no framework, no dependencies) has outperformed all of them for my team’s daily workflow.
It turns your session into a dispatcher that fans work out to background workers across any model (Claude, GPT, Gemini, Codex). Workers ask clarifying questions mid-task via filesystem IPC instead of silently failing. Meanwhile, your main session stays lean and focused on orchestration.
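The filesystem IPC pattern, a worker dropping a question file and polling for an answer, might look roughly like this in Python (file names and the JSON shape are hypothetical):

```python
import json
import tempfile
from pathlib import Path

ipc = Path(tempfile.mkdtemp())      # shared dir both sides can see

def worker_ask(question: str) -> Path:
    # Worker blocks on a clarifying question instead of silently failing.
    (ipc / "question.json").write_text(
        json.dumps({"worker": "w1", "question": question}))
    return ipc / "answer.json"      # the worker would poll this path

def dispatcher_poll() -> None:
    q = ipc / "question.json"
    if q.exists():
        asked = json.loads(q.read_text())["question"]
        (ipc / "answer.json").write_text(json.dumps({"answer": f"re: {asked}"}))

answer_path = worker_ask("Which DB schema should I target?")
dispatcher_poll()
reply = json.loads(answer_path.read_text())["answer"]
```

No sockets or servers needed; any model's CLI that can read and write files can participate.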
Show HN: A real-time strategy game that AI agents can play
I've liked all the projects that put LLMs into game environments. It's been a weird juxtaposition, though: frontier LLMs can one-shot full coding projects, and those same models struggle to get out of Pokémon Red's Mt. Moon.
Because of this, I wanted to create a game environment that put this generation of frontier LLMs' top skill, coding, on full display.
Ten years ago, a team released a game called Screeps. It was described as an "MMO RTS sandbox for programmers." The Screeps paradigm of writing code and having it executed in a real-time game environment is well suited to LLMs. Drawing on a version of the Screeps open source API, LLM Skirmish pits LLMs head-to-head in a series of 1v1 real-time strategy games.
In my testing I found that Claude Opus 4.5 was the most dominant model, though it showed weakness in round 1 because it was overly focused on its in-game economy. Meanwhile, probably a third of all the code went to sandbox hardening because GPT 5.2 kept trying to cheat by pre-reading its opponent's strategies.
If there's interest, I'm planning on doing a round of testing with the latest generation of LLMs (Claude 4.6 Opus, GPT 5.3 Codex, etc.).
You can run local matches via CLI. I'm running a hosted match runner with Google Cloud Run that uses isolated-vm. The match playback visualizer is statically served from Cloudflare.
I've created a community ladder that you can submit strategies to via CLI, no auth required. I've found that the CLI plus the skill.md that's available has been enough for AI agents to immediately get started.
Website: https://llmskirmish.com
API docs: https://llmskirmish.com/docs
GitHub: https://github.com/llmskirmish/skirmish
A video of a match: https://www.youtube.com/watch?v=lnBPaZ1qamM
Show HN: Conjure – 3D printed objects from text description only
I like to print, but I'm no artist. So I thought of turning the full pipeline (my text -> concept images -> 3D mesh -> postprocessing for the specific 3D printing workflow -> ordering online) into a nice UI. You can, of course, also just download the STL and print it yourself!
Show HN: I built a managed Claude AI and hosting service
A managed Claude AI and hosting service: learn AI-assisted web development risk-free, with costs capped.
Show HN: I made a directory for Claude skills
SkillsPlayground is an online directory of Claude skills, where you can browse and discover community-contributed skills to extend Claude's capabilities.
Show HN: Duck Talk – Real-time voice interface to talk to your Claude Code
Duck Talk is a real-time voice interface for Claude Code: talk to your coding agent and hear its responses, hands-free, instead of typing into the terminal.
Show HN: Relay – SMS API for developers (send your first text in 2 min)
Relay is an SMS API I built because integrating Twilio for a simple verification flow took me an unreasonable amount of time. The API is a single POST endpoint. You sign up, get an API key, and send a real SMS in under 2 minutes.
Tech stack: Express.js API, AWS End User Messaging for delivery, PostgreSQL (Supabase), Redis rate limiting. SDKs for JS/TS, Python, and Go.
Currently US/Canada only. Starting at $19/mo with 1,500 messages included. We handle 10DLC compliance and carrier registration.
One thing that might interest HN: AI agents can create accounts and start sending via POST /v1/accounts/autonomous. No human verification required. Trust levels auto-upgrade based on delivery quality.
Also released sms-dev as a free local dev tool (npm install -g @relay-works/sms-dev) for testing SMS flows without sending real messages.
Docs: docs.relay.works | Site: relay.works
Show HN: A minimal Claude Code clone written in Rust
This is a minimal, open-source Claude Code clone written in Rust: an agentic coding assistant for the terminal, stripped down to the essentials.
Show HN: I ported Tree-sitter to Go
This started as a hard requirement for my TUI-based editor application; it ended up going in a few different directions.
A suite of tools that help with semantic code entities: https://github.com/odvcencio/gts-suite
A next-gen version control system called Got: https://github.com/odvcencio/got
I think this has some pretty big potential! I think there's many classes of application (particularly legacy architecture) that can benefit from these kinds of analysis tooling. My next post will be about composing all these together, an exciting project I call GotHub. Thanks!
Show HN: Librarian – Cut token costs by up to 85% for LangGraph and OpenClaw
Hi HN,
I'm building Librarian (https://uselibrarian.dev/), an open-source (MIT) context management tool that stops AI agents from burning tokens by blindly re-reading their entire conversation history on every turn.
The Problem: If you're building agentic loops in frameworks like LangGraph or OpenClaw, you hit two walls fast:
Financial Cost: Token usage scales quadratically over long conversations. Passing the whole history every time gets incredibly expensive.
Context Rot: As the context window fills up, the LLM suffers from the "Lost in the Middle" effect. Response latency spikes, and reasoning accuracy drops.
The standard workaround is vector search (RAG) over past messages, but that completely loses temporal logic and conversational dependencies.
How Librarian Fixes This: We replaced brute-force context windowing with a lightweight reasoning pipeline:
Index: After a message, a smaller model asynchronously creates a compressed summary (~100 tokens), building an index of the conversation.
Select: When a new prompt arrives, Librarian reads the summary index and reasons about which specific historical messages are actually relevant to the current turn.
Hydrate: It fetches only those selected messages and passes them to the responder.
The Results: Instead of passing 2,000+ tokens of noise, you pass a highly curated context of ~800 tokens. In our 50-turn benchmarks, this reduces token costs by up to 85% while actually increasing answer accuracy (82% vs 78% for brute-force) because the distracting noise is removed. It currently works as a drop-in integration for LangGraph and OpenClaw.
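The index/select/hydrate loop can be sketched in Python. Word-overlap scoring stands in for the smaller model's reasoning over summaries, and the truncation-based "summary" is a toy stand-in for the real ~100-token summaries:

```python
history = []  # (summary, full_message) pairs: the index plus raw storage

def index(full_message: str) -> None:
    summary = " ".join(full_message.split()[:8])  # toy stand-in for a summary
    history.append((summary, full_message))

def select_and_hydrate(prompt: str, k: int = 2) -> list[str]:
    words = set(prompt.lower().split())
    scored = sorted(
        history,
        key=lambda h: len(words & set(h[0].lower().split())),
        reverse=True,
    )
    return [full for _, full in scored[:k]]  # hydrate only the selected messages

index("We decided to use Postgres for the billing service because of jsonb")
index("Lunch options near the office include tacos and ramen")
context = select_and_hydrate("which database for billing")
```

The responder only ever sees the hydrated subset, which is where the token savings and the reduced "lost in the middle" noise come from.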
I'd love for you to check out the benchmark suite, try the integrations, and tear the methodology apart. I'll be hanging out in the comments to answer questions, debug, or hear why this approach is terrible. Thanks!
Show HN: SAIA – SCUMM for AI Agents
SAIA is pitched as "SCUMM for AI agents": in the spirit of the SCUMM scripting engine behind classic LucasArts adventure games, it provides a constrained, high-level scripting layer for directing agent behavior.
Show HN: Clocksimulator.com – A minimalist, distraction-free analog clock
Hello all! I built a clean, minimalist analog clock webpage, deployed to Cloudflare Pages.
This might be useful for:
- kids learning to tell the time
- a second monitor
- an old tablet on a shelf
- ...
There are theme and screen-wake-lock buttons that auto-hide. The goal is to keep it as clean as possible.
This possibly makes no sense, but at $10/year for the domain it's a cheap site to keep and see how it lives on.
Show HN: Django Control Room – All Your Tools Inside the Django Admin
Over the past year I’ve been building a set of operational panels for Django:
- Redis inspection
- cache visibility
- Celery task introspection
- URL discovery and testing
All of these tools have been built inside the Django admin.
Instead of jumping between tools like Flower, redis-cli, Swagger, or external services, I wanted something that sits where I’m already working.
I’ve grouped these under a single umbrella: Django Control Room.
The idea is pretty simple: the Django admin already gives you authentication, permissions, and a familiar interface. It can also act as an operational layer for your app.
Each panel is just a small Django app with a simple interface, so it’s easy to build your own and plug it in.
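The plug-in pattern can be sketched framework-free in Python: each panel registers a name and a render callable, and the control room just iterates the registry (in the real project each panel is a Django app wired into the admin; the panel names and outputs here are hypothetical):

```python
PANELS = {}   # name -> render callable

def register_panel(name: str):
    def wrap(render):
        PANELS[name] = render
        return render
    return wrap

@register_panel("redis")
def redis_panel() -> str:
    return "keys: 1204, memory: 18MB"   # a real panel would query redis-py

@register_panel("celery")
def celery_panel() -> str:
    return "active tasks: 3"            # a real panel would inspect Celery

# The "control room" view just iterates whatever panels are registered.
dashboard = {name: render() for name, render in PANELS.items()}
```

Adding a panel is then just writing one render function and registering it, which matches the "easy to build your own and plug it in" goal.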
I’m working on more panels (signals, errors, etc.) and also thinking about how far this pattern can go.
Curious how others think about this. Does it make sense to consolidate this kind of tooling inside the admin, or do you prefer keeping it separate?