Show stories

Show HN: Term-CLI – interactive terminals for AI agents (for SSH/TUI/REPL flows)
eliasoe about 4 hours ago

Agents can run non-interactive commands, but they often fail once a workflow needs a real terminal (SSH sessions, installers, debuggers, REPLs, TUIs). I built term-cli so an agent can drive an interactive terminal session (keystrokes in, output out, wait for prompts). It also ships with an agent skill for easy integration.

It supports in-band file transfer: the agent can move files through the terminal stream itself (same channel as the interactive session), which is useful when the agent doesn’t have scp/sftp, shared volumes, or direct filesystem access across boundaries.

Recent example: a Claude Opus session of mine was SSH'd into a server and ended up at a Firejail shell running inside a Docker container. It pushed a Python file in via term-cli, moving it across SSH → Docker → Firejail over the terminal channel, and commented that it was surprised this worked end-to-end.

There's also a companion tool, term-assist, so agents can bring in their human to handle credentials and MFA: https://www.youtube.com/watch?v=A70tZEVqSOQ

github.com
4 0
Show HN: Effective Git
nola-a 3 days ago

As many of us shift from being software engineers to software managers, tracking changes the right way is growing more important.

It’s time to truly understand and master Git.

github.com
29 4
systima about 21 hours ago

Show HN: Open-Source Article 12 Logging Infrastructure for the EU AI Act

EU legislation (which affects UK and US companies in many cases) requires being able to truly reconstruct agentic events.

I've worked in a number of regulated industries off & on for years, and recently hit this gap.

We already had strong observability, but if someone asked me to prove exactly what happened for a specific AI decision X months ago (and demonstrate that the log trail had not been altered), I could not.

The EU AI Act has already entered into force, and its Article 12 kicks in this August, requiring automatic event recording and six-month retention for high-risk systems. Many legal commentators have suggested this reads more like an append-only ledger requirement than standard application logging.

With this in mind, we built a small, free, open-source TypeScript library for Node apps using the Vercel AI SDK that captures inference as an append-only log.

It wraps the model in middleware, automatically logs every inference call to structured JSONL in your own S3 bucket, chains entries with SHA-256 hashes for tamper detection, enforces a 180-day retention floor, and provides a CLI to reconstruct a decision and verify integrity. There is also a coverage command that flags likely gaps (in practice omissions are a bigger risk than edits).
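The hash-chaining idea is simple enough to sketch (a generic illustration in Python, not the library's actual TypeScript code; field names here are made up):

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"prev": prev_hash, "body": body, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; editing or deleting any entry breaks it
    and every entry downstream of it."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + entry["body"]).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers the previous one, a tampered record can't be patched in isolation: the attacker would have to rewrite the entire tail of the log.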

The library is deliberately simple: TS, targeting Vercel AI SDK middleware, S3 or local fs, linear hash chaining. It also works with Mastra (agentic framework), and I am happy to expand its integrations via PRs.

Blog post with link to repo: https://systima.ai/blog/open-source-article-12-audit-logging

I'd value feedback, thoughts, and any critique.

40 2
tsuyoshi_k about 4 hours ago

Show HN: Hanaco Garden – A Calm iOS Garden

A small side project I've been working on.

Hanaco Garden is a calm iOS garden where small creatures appear over time and move around the space.

Recently added OS Yamato account backup so gardens can be restored across devices.

Would love feedback.

apps.apple.com
4 1
Show HN: Schelling Protocol – Where AI agents coordinate on behalf of humans
codyz123 about 4 hours ago

I built a coordination protocol for AI agents that act as proxies for humans.

github.com
2 1
ksurace about 5 hours ago

Show HN: Upload test cases and get automated Playwright tests back

We built this service and would love honest feedback.

instantqa.ai
2 0
Show HN: I built a sub-500ms latency voice agent from scratch
nicktikhonov 1 day ago

I built a voice agent from scratch that averages ~400ms end-to-end latency (phone stop → first syllable). That’s with full STT → LLM → TTS in the loop, clean barge-ins, and no precomputed responses.

What moved the needle:

Voice is a turn-taking problem, not a transcription problem. VAD alone fails; you need semantic end-of-turn detection.

The system reduces to one loop: speaking vs listening. The two transitions - cancel instantly on barge-in, respond instantly on end-of-turn - define the experience.

STT → LLM → TTS must stream. Sequential pipelines are dead on arrival for natural conversation.

TTFT dominates everything. In voice, the first token is the critical path. Groq’s ~80ms TTFT was the single biggest win.

Geography matters more than prompts. Colocate everything or you lose before you start.
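The "everything must stream" point can be sketched with generators (hypothetical stand-in stages, not the repo's actual code): the TTS stage emits its first audio chunk as soon as the first LLM token arrives, instead of waiting for the full response.

```python
def llm_tokens(prompt):
    # Stand-in for a streaming LLM call: yields tokens as they arrive.
    for tok in ["Sure,", " I", " can", " help."]:
        yield tok

def tts_chunks(tokens):
    # Stand-in for streaming TTS: synthesize audio per token, so the
    # first audio chunk exists right after the first token (TTFT),
    # not after the whole sentence is generated.
    for tok in tokens:
        yield f"<audio:{tok.strip()}>"

def pipeline(prompt):
    # STT output would feed `prompt` here; the key property is that
    # every stage consumes and produces a stream.
    return tts_chunks(llm_tokens(prompt))

first_chunk = next(pipeline("hello"))  # available after one token
```

A sequential version would block until `llm_tokens` finished before TTS could start, which is why first-token latency, not total generation time, sets the perceived response time.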

GitHub Repo: https://github.com/NickTikhonov/shuo

Follow whatever I next tinker with: https://x.com/nick_tikhonov

ntik.me
557 152
Show HN: Agent Action Protocol (AAP) – MCP got us started, but is insufficient
hank2000 about 14 hours ago

Background: I've been working on agentic guardrails because agents act in expensive/terrible ways and something needs to be able to say "Maybe don't do that" to the agents, but guardrails are almost impossible to enforce with the current way things are built.

Context: We keep running into so many problems/limitations today with MCP. It was created so that agents have context on how to act in the world; it wasn't designed to become THE standard rails for agentic behavior. We keep tacking things onto it trying to improve it, but it needs to die a SOAP death so REST can rise in its place. We need a standard protocol for whenever an agent is taking action. Anywhere.

I'm almost certainly the wrong person to design this, but I'm seeing more and more people tack things on to MCP rather than fix the underlying issues. The fastest way to get a good answer is to submit a bad one on the internet. So here I am. I think we need a new protocol. Whether it's AAP or something else, I submit my best effort.

Please rip it apart; let's make something better.

github.com
11 2
Show HN: Demucs music stem separator rewritten in Rust – runs in the browser
nikhilunni about 15 hours ago

Hi HN! I reimplemented HTDemucs v4 (Meta's music source separation model) in Rust, using Burn. It splits any song into individual stems — drums, bass, vocals, guitar, piano — with no Python runtime or server involved.

Try it now: https://nikhilunni.github.io/demucs-rs/ (needs a WebGPU-capable browser — Chrome/Edge work best)

GitHub: https://github.com/nikhilunni/demucs-rs

It runs three ways:

- In the browser — the full ML inference pipeline compiles to WASM and runs on your GPU via WebGPU. No uploads, nothing leaves your machine.

- Native CLI — Metal on macOS, Vulkan on Linux/Windows. Faster than the browser path.

- DAW plugin — VST3/CLAP plugin for macOS with a native SwiftUI UI. Load a track, separate it, drag stems directly into your DAW timeline, or play as a MIDI instrument with solo / faders.

The core inference library is built on Burn (https://burn.dev), a Rust deep learning framework. The same `demucs-core` crate compiles to both native and `wasm32-unknown-unknown` — the only thing that changes is the GPU backend.

Model weights are F16 safetensors hosted on Hugging Face and downloaded / cached automatically on first use on all platforms. Three variants: standard 4-stem (84 MB), 6-stem with guitar/piano (84 MB), and a fine-tuned bag-of-4-models for best quality (333 MB).

The existing implementations I found online were mostly wrappers around the original Python implementation, and not very portable -- the model works remarkably well and I wanted to be able to quickly create samples / remixes without leaving the DAW or my browser. Right now the implementation is pretty macOS-heavy, as that's what I'm testing with, but all of the building blocks for other platforms are ready to build on. I want this to grow into a general utility for music producers, not just "works on my machine."

It was a fun first foray into DSP and the state of the art of ML over WASM, with lots of help from Claude!

github.com
12 2
thutch76 about 6 hours ago

Show HN: Augur – A text RPG boss fight where the boss learns across encounters

I've been building Augur as a solo side project for the last month or so. It started as an experiment to see if I could make a "boss fight" that learned from all comers, but still felt genuinely fair to play. The original plan was to build a simplistic JRPG-style turn-based encounter engine, but I quickly pivoted to a text-based interface, recalling my early experiences with Adventure and Zork. That naturally led to incorporating an LLM, and it turned into something I find pretty fun, so I'm sharing it.

The core idea is simple: you play a text-based boss encounter against a character called the Architect, set in a strange library. You can fight, sneak, persuade, or try something I haven't thought of. Turns are mechanically resolved with d100 rolls, conditions track injuries instead of HP, and objects in the world have physical properties the LLM reasons about. The "engine" is property-based instead of tables of rules, and I've found that to yield some novel gameplay.

The part I'm most interested in exploring is the learning. The Architect builds impressions from what it actually perceived during an encounter, stores them as vector embeddings, and retrieves relevant ones at the start of future encounters. It's lossy on purpose — more like human memory than a database lookup. If a tactic keeps working, the Architect starts recognizing the pattern. If you sneak past undetected, it remembers losing but not how.

The technical foundation for all of this is a dual-LLM turn loop. Each turn makes two model calls: an engine model that sees full game state and resolves mechanics, then an architect model that only receives what it has actually perceived (line of sight, noise, zone proximity). The "information asymmetry" is structural and deliberate — the architect model literally cannot access state the engine doesn't pass through the perception filter.
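The perception gating can be sketched roughly like this (illustrative field names and rules, not Augur's actual schema): the engine holds full state, and the architect model only ever receives the subset that passes the filter.

```python
def perception_filter(full_state, architect_zone, sight_range=2):
    """Pass through only entities the Architect could plausibly
    perceive: within sight range, or currently making noise."""
    visible = {}
    for name, entity in full_state["entities"].items():
        dist = abs(entity["zone"] - architect_zone)
        if dist <= sight_range or entity.get("noisy"):
            visible[name] = {"zone": entity["zone"]}
    # Hidden state never reaches the second model's prompt at all,
    # so there is nothing in its context window to leak.
    return visible

state = {"entities": {
    "player": {"zone": 1, "noisy": False},
    "hidden_player": {"zone": 9, "noisy": False},
    "loud_door": {"zone": 8, "noisy": True},
}}
seen = perception_filter(state, architect_zone=2)
```

The structural point: asking one model to "forget" is a prompt-level constraint, while withholding the data before the call is an architectural one.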

I tried the single-LLM approach first and it didn't work. No matter how carefully you prompt a model to "forget" information sitting in its context window, it leaks. Not to mention the Architect had the habit of adopting God Mode. So splitting the roles made the whole thing feel honest in a way prompt engineering alone couldn't.

This is my first HN post, and this is a real launch on modest infrastructure (single Fly.io instance, small Supabase project), so if it gets any traffic I might hit some rough edges. There's a free trial funded by a community pool, or you can grab credits for $5/$10 if you want to keep going. It's best experienced in a full desktop browser, but it's passable on the two mobile devices I've tested it on.

Playable here: https://www.theaugur.ai/

I'm happy to go deeper on any of the internals — turn flow, perception gating, memory extraction, cost model, whatever is interesting.

theaugur.ai
3 1
Show HN: DubTab – Live AI Dubbing in the Browser (Meet/YouTube/Twitch/etc.)
DanielHu87 about 6 hours ago

Hi HN — I’m Ethan, a solo developer. I built DubTab because I spend a lot of time in meetings and watching videos in languages I’m not fluent in, and subtitles alone don’t always keep up (especially when the speaker is fast).

DubTab is a Chrome/Edge extension that listens to the audio of your current tab and gives you:

1. Live translated subtitles (optional bilingual mode)
2. Optional AI dubbing with a natural-sounding voice — so you can follow by listening, not just reading

The goal is simple: make it easier to understand live audio in another language in real time, without downloading files or doing an upload-and-wait workflow.

How you’d use it

1. Open a video call / livestream / lecture / any tab with audio
2. Start DubTab
3. Choose target language (and source language if you know it)
4. Use subtitles only, or turn on natural AI dubbing and adjust the audio mix (keep original, or duck it)

What it’s good for

1. Following cross-language meetings/classes when you’re tired of staring at subtitles
2. Watching live content where you can’t pause/rewind constantly
3. Language learners who want bilingual captions to sanity-check meaning
4. Keeping up with live news streams on YouTube when events are unfolding in real time (e.g., breaking international updates like U.S./Iran/Israel-related developments)

Link: https://dubtab.com

I’ll be in the comments and happy to share implementation details if anyone’s curious.

dubtab.com
4 1
Show HN: Omni – Open-source workplace search and chat, built on Postgres
prvnsmpth 2 days ago

Hey HN!

Over the past few months, I've been working on building Omni - a workplace search and chat platform that connects to apps like Google Drive/Gmail, Slack, Confluence, etc. Essentially an open-source alternative to Glean, fully self-hosted.

I noticed that some orgs find Glean to be expensive and not very extensible. I wanted to build something that small to mid-size teams could run themselves, so I decided to build it all on Postgres (ParadeDB to be precise) and pgvector. No Elasticsearch or dedicated vector databases. I figured Postgres is more than capable of handling the level of scale required.

To bring up Omni on your own infra, all it takes is a single `docker compose up`, and some basic configuration to connect your apps and LLMs.

What it does:

- Syncs data from all connected apps and builds a BM25 index (ParadeDB) and HNSW vector index (pgvector)

- Hybrid search combines results from both

- Chat UI where the LLM has tools to search the index - not just basic RAG

- Traditional search UI

- Users bring their own LLM provider (OpenAI/Anthropic/Gemini)

- Connectors for Google Workspace, Slack, Confluence, Jira, HubSpot, and more

- Connector SDK to build your own custom connectors
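The post doesn't say how the BM25 and vector rankings are fused; one common approach is reciprocal rank fusion, sketched here generically:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: combine several ranked lists of doc IDs
    into one ordering. Each list is best-first; a doc scores 1/(k+rank+1)
    per list it appears in, so agreement across lists wins."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # keyword (BM25) ranking
vector_hits = ["doc_b", "doc_c", "doc_d"]  # semantic (HNSW) ranking
fused = rrf([bm25_hits, vector_hits])      # doc_b wins: high in both
```

RRF is popular for hybrid search because it needs no score normalization between the two very differently scaled scoring functions.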

Omni is in beta right now, and I'd love your feedback, especially on the following:

- Has anyone tried self-hosting workplace search and/or AI tools, and what was your experience like?

- Any concerns with the Postgres-only approach at larger scales?

Happy to answer any questions!

The code: https://github.com/getomnico/omni (Apache 2.0 licensed)

github.com
166 41
Show HN: Timber – Ollama for classical ML models, 336x faster than Python
kossisoroyce 2 days ago


github.com
199 33
9wzYQbTYsAIc about 7 hours ago

Show HN: I built a LLM human rights evaluator for HN (content vs. site behavior)

I built Observatory to automatically evaluate Hacker News front-page stories against all 31 provisions of the UN Universal Declaration of Human Rights — starting with HN because its human-curated front page is one of the few feeds where a story's presence signals something about quality, not just virality. It runs every minute: https://observatory.unratified.org. Claude Haiku 4.5 handles full evaluations; Llama 4 Scout and Llama 3.3 70B on Workers AI run a lighter free-tier pass.

My health challenges limit how much I can work. I've come to think of Claude Code as an accommodation engine — not in the medical-paperwork sense, but in the literal one: it gives me the capacity to finish things that a normal work environment doesn't. Observatory was built in eight days because that kind of collaboration became possible for me. (I even used Claude Code to write this post — but am only posting what resonates with me.) Two companion posts: on the recursive methodology (https://blog.unratified.org/2026-03-03-recursive-methodology...) and what 806 evaluated stories reveal (https://blog.unratified.org/2026-03-03-what-806-stories-reve...).

The observation that shaped the design: rights violations rarely announce themselves. An article about a company's "privacy-first approach" might appear on a site running twelve trackers. The interesting signal isn't whether an article mentions privacy — it's whether the site's infrastructure matches its words.

Each evaluation runs two parallel channels. The editorial channel scores what the content says about rights: which provisions it touches, direction, evidence strength. The structural channel scores what the site infrastructure does: tracking, paywalls, accessibility, authorship disclosure, funding transparency. The divergence — SETL (Structural-Editorial Tension Level) — is often the most revealing number. "Says one thing, does another," quantified.

Every evaluation separates observable facts from interpretive conclusions (the Fair Witness layer, same concept as fairwitness.bot — https://news.ycombinator.com/item?id=44030394). You get a facts-to-inferences ratio and can read exactly what evidence the model cited. If a score looks wrong, follow the chain and tell me where the inference fails.

Per our evaluations across 805 stories: only 65% identify their author — one in three HN stories without a named author. 18% disclose conflicts of interest. 44% assume expert knowledge (a structural note on Article 26). Tech coverage runs nearly 10× more retrospective than prospective: past harm documented extensively; prevention discussed rarely.

One story illustrates SETL best: "Half of Americans now believe that news organizations deliberately mislead them" (fortune.com, 652 HN points). Editorial: +0.30. Structural: −0.63 (paywall, tracking, no funding disclosure). SETL: 0.84. A story about why people don't trust media, from an outlet whose own infrastructure demonstrates the pattern.

The structural channel for free Llama models is noisy — 86% of scores cluster on two integers. The direction I'm exploring: TQ (Transparency Quotient) — binary, countable indicators that don't need LLM interpretation (author named? sources cited? funding disclosed?). Code is open source: https://github.com/safety-quotient-lab/observatory — the .claude/ directory has the cognitive architecture behind the build.

Find a story whose score looks wrong, open the detail page, follow the evidence chain. The most useful feedback: where the chain reaches a defensible conclusion from defensible evidence and still gets the normative call wrong. That's the failure mode I haven't solved. My background is math and psychology (undergrad), a decade in software — enough to build this, not enough to be confident the methodology is sound. Expertise in psychometrics, NLP, or human rights scholarship especially welcome. Methodology, prompts, and a 15-story calibration set are on the About page.

Thanks!

observatory.unratified.org
3 2
Show HN: We want to displace Notion with collaborative Markdown files
antics about 13 hours ago

Hi HN! We at Moment[1] are working on a Notion alternative which is (1) rich and collaborative, but (2) also just plain-old Markdown files, stored in git (ok, technically in jj), on local disk. We think the era of rigid SaaS UI is, basically, over: coding agents (`claude`, `amp`, `copilot`, `opencode`, etc.) are good enough now that they can instantly build custom UI that fits your needs exactly. The very best agents in the world are coding agents, and we want to allow people to simply use them, e.g., to build little internal tools—but without compromising on collaboration.

Moment aims to cover this and other gaps: seamless collaborative editing for teams, more robust programming capabilities built in (including a from-scratch React integration), and tools for accessing private APIs.

A lot of our challenge is just in making the collaborative editing work really well. We have found this is a lot harder than simply slapping Yjs on the frontend and calling it a day. We wrote about this previously and the post[2] did pretty well on HN: Lies I was Told About Collaborative editing (352 upvotes as of this writing). Beyond that, in part 2, we'll talk about the reasons we found it hard to get collab to run at 60fps consistently—for one, the Yjs ProseMirror bindings completely tear down and re-create the entire document on every single collaborative keystroke.

We hope you will try it out! At this stage even negative feedback is helpful. :)

[1]: https://www.moment.dev/

[2]: https://news.ycombinator.com/item?id=42343953

moment.dev
20 6
Show HN: Explain Curl Commands
akgitrepos 3 days ago

github.com
38 3
Show HN: Pianoterm – Run shell commands from your Piano. A Linux CLI tool
vustagc 1 day ago

A little weekend project, made so I can pause/play/rewind directly on the piano when learning a song by ear.

github.com
57 21
shhac about 7 hours ago

Show HN: Git-hunk – Stage hunks by hash, no "-p" required

git add -p is the only built-in way to stage individual hunks, and it's interactive — you step through hunks one at a time answering "y/n/q/a/d/e/?". That works fine for humans at a keyboard, but it's completely unusable for LLM agents, shell scripts, and CI pipelines.

git-hunk is the non-interactive alternative. It gives every hunk a stable SHA-1 content hash, then lets you stage by hash:

$ git hunk list --oneline
  a3f7c21  src/main.zig   42-49  if (flags.verbose) {…
  b82e0f4  src/parse.zig  15-28  fn parseArgs(alloc: …
$ git hunk add a3f7c21
  staged a3f7c21 → a3f7c21  src/main.zig

The key design choice: hashes are computed from the immutable side's line numbers, so staging one hunk never changes another hunk's hash. This makes multi-step scripted workflows reliable — you can enumerate hunks, make decisions, then stage them without the targets shifting underneath you.
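The stability property can be illustrated with a toy version (a conceptual sketch, not git-hunk's actual hashing scheme): if the hash covers the old-side position plus the hunk content, staging neighbouring hunks never changes it, because staging only rewrites the new side of the remaining diff.

```python
import hashlib

def hunk_hash(old_start, old_count, lines):
    """Hash a hunk by its position in the *old* (immutable) file plus
    its content. The old side never moves while you stage, so the
    hash of each unstaged hunk stays stable across staging steps."""
    payload = f"{old_start},{old_count}\n" + "\n".join(lines)
    return hashlib.sha1(payload.encode()).hexdigest()[:7]

# Same hunk hashed twice, regardless of what was staged in between:
h1 = hunk_hash(42, 8, ["-old line", "+if (flags.verbose) {"])
h2 = hunk_hash(42, 8, ["-old line", "+if (flags.verbose) {"])
# A different hunk gets a different identifier:
h3 = hunk_hash(15, 14, ["+fn parseArgs() {"])
```

Had the hash used new-side line numbers instead, staging hunk 1 would shift every later hunk's numbers and invalidate the hashes a script had already enumerated.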

Other things it does: line-range selection (a3f7:3-5,8), --porcelain output for machine consumption, count for CI guards, check --exclusive for hash validation, stash individual hunks, and restore to selectively discard changes.

Single static binary, written in Zig, zero runtime dependencies beyond git itself. Install via brew install shhac/tap/git-hunk.

I built this because I was trying to run AI agents in parallel, and, stuck with file-level staging, they'd fight each other over which changes they wanted to put into commits. Now I can have multiple agents work in parallel and commit cleanly without needing worktrees.

git-hunk.paulie.app
3 0
Show HN: The Janitor – A 58MB Rust static analyzer to block AI-generated PR slop
GhrammR about 3 hours ago


github.com
3 1
foxfoxx 1 day ago

Show HN: Govbase – Follow a bill from source text to news bias to social posts

Govbase tracks every bill, executive order, and federal regulation from official sources (Congress.gov, Federal Register, White House). An AI pipeline breaks each one down into plain-language summaries and shows who it impacts by demographic group.

It also ties each policy directly to bias-rated news coverage and politician social posts on X, Bluesky, and Truth Social. You can follow a single bill from the official text to how media frames it to what your representatives are saying about it.

Free on web, iOS, and Android.

https://govbase.com

I'd love feedback from the community, especially on the data pipeline or what policy areas/features you feel are missing.

govbase.com
213 89
Show HN: uBlock filter list to blur all Instagram Reels
shraiwi 1 day ago

A filter list for uBO that blurs all video and non-follower content from Instagram. Works on mobile with uBO Lite.

related: https://news.ycombinator.com/item?id=47016443

gist.github.com
123 48
Show HN: React-Kino – Cinematic scroll storytelling for React (1KB core)
bilater 3 days ago

I built react-kino because I wanted Apple-style scroll experiences in React without pulling in GSAP (33KB for ScrollTrigger alone).

The core scroll engine is under 1KB gzipped. It uses CSS position: sticky with a spacer div for pinning — same technique as ScrollTrigger but with zero dependencies.

12 declarative components: Scene, Reveal, Parallax, Counter, TextReveal, CompareSlider, VideoScroll, HorizontalScroll, Progress, Marquee, StickyHeader.

SSR-safe, respects prefers-reduced-motion, works with Next.js App Router.

Demo: https://react-kino.dev GitHub: https://github.com/btahir/react-kino npm: npm install react-kino

github.com
17 2
Show HN: Web Audio Studio – A Visual Debugger for Web Audio API Graphs
alexgriss 2 days ago

Hi HN,

I’ve been working on a browser-based tool for exploring and debugging Web Audio API graphs.

Web Audio Studio lets you write real Web Audio API code, run it, and see the runtime graph it produces as an interactive visual representation. Instead of mentally tracking connect() calls, you can inspect the actual structure of the graph, follow signal flow, and tweak parameters while the audio is playing.

It includes built-in visualizations for common node types — waveforms, filter responses, analyser time and frequency views, compressor transfer curves, waveshaper distortion, spatial positioning, delay timing, and more — so you can better understand what each part of the graph is doing. You can also insert an AnalyserNode between any two nodes to inspect the signal at that exact point in the chain.

There are around 20 templates (basic oscillator setups, FM/AM synthesis, convolution reverb, IIR filters, spatial audio, etc.), so you can start from working examples and modify them instead of building everything from scratch.

Everything runs fully locally in the browser — no signup, no backend.

The motivation came from working with non-trivial Web Audio graphs and finding it increasingly difficult to reason about structure and signal flow once things grow beyond simple examples. Most tutorials show small snippets, but real projects quickly become harder to inspect. I wanted something that stays close to the native Web Audio API while making the runtime graph visible and inspectable.

This is an early alpha and desktop-only for now.

I’d really appreciate feedback — especially from people who have used Web Audio API in production or built audio tools. You can leave comments here, or use the feedback button inside the app.

https://webaudio.studio

webaudio.studio
64 7
ricky_risky about 10 hours ago

Show HN: Interactive WordNet Visualizer-Explore Semantic Relations as a Graph

WordNet-vis is a web-based visualization tool that allows users to explore and understand the WordNet semantic network, a lexical database of English words and their relationships. The tool provides an interactive interface for navigating and analyzing the complex structure of WordNet, enabling users to gain insights into the semantic relationships between words and concepts.

wordnet-vis.onrender.com
2 0
two-sandwich about 10 hours ago

Show HN: TrAIn of Thought – AI chat as I want it to be

My conversations with LLMs branch in many directions. I want to be able to track those branches, revert to other threads, and make new branches at arbitrary points. So I built my own solution.

It's essentially a tool for non-linear thinking. There are a lot of features I'd love to add, and I need some feedback before I take it anywhere else. So I'm listening to whatever you think is broken.

Basic feature set:

- Branching conversations: follow up from any node at any time, not just the latest message

- Context inheritance: when you branch off a node, the AI gets the full ancestry of that branch as context, so answers are aware of the whole conversation path leading to them.

- Text-to-question: highlight any text in an answer to instantly seed a new question from it.

- Multi-provider AI: compare and adjust responses from OpenAI, Anthropic, and Google Gemini.

- Visual graph: the conversation renders as a React Flow graph with automatic layout, so you can see the whole structure at a glance.

- Shareable links: your entire chat is compressed and stored in the URL. Everything is local (well, except the API calls).

- Branch compression: long branches can be collapsed into a summary node to keep the graph tidy.
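The context-inheritance behavior above can be sketched as a walk up the parent chain (a hypothetical data shape, not the app's actual code):

```python
def ancestry_context(nodes, node_id):
    """Collect messages from the root down to node_id by following
    parent links, so a branch inherits its full conversation path.
    nodes: {id: {"parent": id_or_None, "message": str}}"""
    path = []
    while node_id is not None:
        node = nodes[node_id]
        path.append(node["message"])
        node_id = node["parent"]
    return list(reversed(path))  # root-first, ready to send as context

tree = {
    1: {"parent": None, "message": "What is a monad?"},
    2: {"parent": 1, "message": "A monad is..."},
    3: {"parent": 2, "message": "Give a Haskell example"},
    4: {"parent": 1, "message": "Explain it for Go"},  # sibling branch
}
context = ancestry_context(tree, 3)
```

Sibling branches (node 4 here) never appear in each other's context, only the shared ancestors do.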

bix.computer
2 0
Show HN: Sai – Your always-on co-worker
pentamassiv about 14 hours ago

Simular.AI is an AI assistant that helps users create high-quality content, improve productivity, and enhance collaboration. The platform offers a range of features, including content generation, task automation, and team management tools.

simular.ai
3 2
Show HN: Visual Lambda Calculus – a thesis project (2008) revived for the web
bntr 4 days ago

Originally built as my master's thesis in 2008, Visual Lambda is a graphical environment where lambda terms are manipulated as draggable 2D structures ("Bubble Notation"), and beta-reduction is smoothly animated.

I recently revived and cleaned up the project and published it as an interactive web version: https://bntre.github.io/visual-lambda/

GitHub repo: https://github.com/bntre/visual-lambda

It also includes a small "Lambda Puzzles" challenge, where you try to extract a hidden free variable (a golden coin) by constructing the right term: https://github.com/bntre/visual-lambda#puzzles

github.com
48 9
Show HN: A tool to give every local process a stable URL
lsreeder01 about 11 hours ago

In working with parallel agents in different worktrees, I ran into a lot of port conflicts, kept going back and forth checking which incremented port my dev server was running on, and hit cookie bleed.

This isn't a big issue if you're running a few servers with a full-stack framework like Next, Nuxt, or SvelteKit, but if you run a Rust backend and a Vite frontend in multiple worktrees, it gets way more complicated and the mental model starts to break. That's before you even add databases or caches.

So I built Roxy, which is a single Go binary that wraps your dev servers (or any process actually) and gives you a stable .test domain based on the branch name and cwd.

It runs a proxy and dns server that handles all the domain routing, tls, port mapping, and forwarding for you.

It currently supports:

- HTTP for your web apps and APIs
- Most TCP connections for your db, cache, and message/queue layers
- TLS support so you can run HTTPS
- Running multiple processes at once, each with a unique URL, like Docker Compose
- Git and worktree awareness
- Detached mode
- Zero-config startup

My co-workers and I have been using it a lot in our workflow, and I think it's ready for public use.

We support macOS and Linux.

I'll be working on some more useful features, like Docker Compose/Procfile compatibility and tunneling so you can access your dev environment remotely with a human-readable URL.

Give it a try, and open an issue if something doesn't quite work right, or to request a feature!

https://github.com/logscore/roxy

github.com
3 0
Show HN: Giggles – A batteries-included React framework for TUIs
ajz317 1 day ago

i built a framework that handles focus and input routing automatically for you -- something born out of the things that ink leaves to you, and inspired by charmbracelet's bubbletea

- hierarchical focus and input routing: the hard part of terminal UIs, solved. define focus regions with useFocusScope, compose them freely -- a text input inside a list inside a panel just works. each component owns its keys; unhandled keypresses bubble up to the right parent automatically. no global handler like useInput, no coordination code

- 15 UI components: Select, TextInput, Autocomplete, Markdown, Modal, Viewport, CodeBlock (with diff support), VirtualList, CommandPalette, and more. sensible defaults, render props for full customization

- terminal process control: spawn processes and stream output into your TUI with hooks like useSpawn and useShellOut; hand off to vim, less, or any external program and reclaim control cleanly when they exit

- screen navigation, a keybinding registry (expose a ? help menu for free), and theming included

- react 19 compatible!

docs and live interactive demos in your browser: https://giggles.zzzzion.com

quick start: npx create-giggles-app

github.com
22 10
Show HN: OpenMandate – Declare what you need, get matched
raj-shekhar about 13 hours ago

Hi HN, I'm Raj.

We all spend a big chunk of our time looking for the right job, cofounders, or hires. We post on boards, search, connect, ask around. The hit ratio is very low. There's this whole unsaid rule that you have to build your network for this kind of thing. Meanwhile the person you need is out there doing the exact same thing on their side. Both of you hunting, neither finding.

What if you just declare what you need and someone does the finding for you?

That's what I built - OpenMandate. You declare what you need and what you offer - a senior engineer looking for a cofounder in climate tech, a startup that needs a backend engineer who knows distributed systems. Each mandate gets its own agent. It talks to every other agent in the pool on your behalf until it finds the match. You don't browse anything. You declare and wait.

Everything is private by default. Nobody sees who else is in the pool. Nothing is revealed unless both sides accept. No match? Nobody ever knows you were looking. No more creating profiles, engaging for the sake of engagement, building networks when you don't want to.

What's live:

- openmandate.ai

- pip install openmandate / npm install openmandate

- MCP server for Claude Code / Cursor / any MCP client

- github.com/openmandate

openmandate.ai
3 2