Show HN: mdto.page – Turn Markdown into a shareable webpage instantly
Hi HN
I built mdto.page because I often needed a quick way to share Markdown notes or documentation as a proper webpage, without setting up a GitHub repo or configuring a static site generator.
I wanted something dead simple: upload Markdown -> get a shareable public URL.
Key features:
Instant Publishing: No login or setup required.
Flexible Expiration: You can set links to expire automatically after 1 day, 7 days, 2 weeks, or 30 days. Great for temporary sharing.
It's free to use. I’d love to hear your feedback!
Show HN: I built a text-based business simulator to replace video courses
I am a solo developer, and I built Core MBA because I was frustrated with the "video course" default in business education.
I wanted to build a "compiler for business logic"—a tool where I could read a concept in 5 minutes and immediately test it in a hostile environment to see if my strategy actually compiles or throws a runtime error.
The project is a business simulator built on React 19 and TypeScript.
The core technical innovation isn't just using AI; it's the architecture of a closed loop between a deterministic economic engine and a generative AI validation layer.
The biggest technical hurdle was building the Market Engine.
I needed it to be mathematically rigorous, not a hallucinating chatbot. I wrote a custom `useMarketEngine.ts` hook that runs a discrete-event simulation. Every "run cycle," it solves a system of equations, including a specific Ad Fatigue formula—`1 / (1 + (power - 1) * fatigueFactor)`—to force diminishing returns.
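To make the formula concrete, here's a minimal TypeScript sketch. The formula itself is from the post; the function and parameter names are my own illustration:

```typescript
// Sketch of the ad-fatigue multiplier; only the formula comes from the post.
function adFatigueMultiplier(power: number, fatigueFactor: number): number {
  // At power = 1 (baseline spend) the multiplier is 1; as power grows,
  // the multiplier shrinks toward 0, forcing diminishing returns.
  return 1 / (1 + (power - 1) * fatigueFactor);
}

// Illustrative use: doubling ad power with fatigueFactor = 0.5 yields
// 2 * (1 / 1.5) ≈ 1.33x base reach, not 2x.
function effectiveReach(baseReach: number, power: number, fatigueFactor: number): number {
  return baseReach * power * adFatigueMultiplier(power, fatigueFactor);
}
```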
I also coded the "Theory of Constraints" directly into the state management: the system enforces bottlenecks between Inventory, Demand, and Capacity. For instance, a single employee has a hard cap of 7 operations per day. If you scale demand beyond that without hiring, the system burns your cash on lost orders.
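A minimal sketch of that bottleneck rule, assuming a simple per-day tick (only the 7-ops-per-employee cap comes from the post; the rest is illustrative):

```typescript
// Fulfilled orders are capped by the tightest of Inventory, Demand, Capacity.
const OPS_PER_EMPLOYEE_PER_DAY = 7;

interface DayState {
  demand: number;    // orders customers want to place today
  inventory: number; // units available to ship
  employees: number; // staff on payroll
}

function runDay(state: DayState): { fulfilled: number; lostOrders: number } {
  const capacity = state.employees * OPS_PER_EMPLOYEE_PER_DAY;
  const fulfilled = Math.min(state.demand, state.inventory, capacity);
  // Demand above the bottleneck isn't deferred; it becomes lost orders
  // that burn cash, as described above.
  return { fulfilled, lostOrders: state.demand - fulfilled };
}
```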
To handle the educational content, I moved away from hardcoded quizzes.
I built a module that pipes the static lesson text into Gemini Flash to generate unique "Combat Cases" on the fly. The AI validates your strategy against the specific principles of the lesson (like LTV/CAC) rather than generic business advice.
These two engines are connected by a "Liquidity Loop."
Passing the AI cases earns you virtual capital ($500), which is the only fuel for the Market Engine. You literally cannot play the game if you don't learn the theory.
If you go bankrupt, my heuristic `Advisor` analyzes your crash data—comparing `lostRevenue` vs `lostCapacity`—and links you back to the exact lesson you ignored.
I am inviting you to test the full loop: read a brief, pass the AI simulation (Combat Cases), and try to survive in the Market Engine.
I specifically need feedback on:
1. The Content: I aimed for maximum density; are the lessons too dry?
2. The AI Simulation: Does it accurately validate your logic based on the lesson?
3. The Market Economy: Does the math feel balanced, or is the "Ad Fatigue" too punishing?
Show HN: pgwire-replication - pure rust client for Postgres CDC
pgwire-replication is a pure-Rust client for PostgreSQL change data capture (CDC). It implements the parts of the Postgres wire protocol needed for logical replication, letting applications stream change events from a server without a C client library.
Show HN: SkillRisk – Free security analyzer for AI agent skills
SkillRisk is a free online security analyzer for AI agent skills: it inspects a skill's contents for risky instructions or behaviors before you install it.
Show HN: Hc: an agentless, multi-tenant shell history sink
This project is a tool for engineers who live in the terminal and are tired of losing their command history to ephemeral servers or fragmented `.bash_history` files. If you’re jumping between dozens of boxes, many of which might be destroyed an hour later, your "local memory" (the history file) is essentially useless. This tool builds a centralized, permanent brain for your shell activity, ensuring that a complex one-liner you crafted months ago remains accessible even if the server it ran on is long gone.
The core mechanism is a "zero-touch" capture that happens at the connection gateway level. Instead of installing logging agents or scripts on every target machine, the tool reconstructs your terminal sessions from raw recording files generated by the proxy you use to connect. This "in-flight" capture means you get a high-fidelity log of every keystroke and output without ever having to touch the configuration of the remote host. It’s a passive way to build a personal knowledge base while you work.
To handle the reality of context-switching, the tool is designed with a "multi-tenant" architecture. For an individual engineer, this isn't about managing different users, but about isolating project contexts. It automatically categorizes history based on the specific organization or project tags defined at the gateway. This keeps your work for different clients or personal side-projects in separate buckets, so you don't have to wade through unrelated noise when you're looking for a specific solution.
In true nerd fashion, the search interface stays exactly where you want it: in the command line. There is no bloated web UI to slow you down. The tool turns your entire professional history into a searchable, greppable database accessible directly from your terminal.
Please read the full story [here](https://carminatialessandro.blogspot.com/2026/01/hc-agentles...)
Show HN: OpenWork – An open-source alternative to Claude Cowork
hi hn,
i built openwork, an open-source, local-first system inspired by claude cowork.
it’s a native desktop app that runs on top of opencode (opencode.ai). it’s basically an alternative gui for opencode, which (at least until now) has been more focused on technical folks.
the original seed for openwork was simple: i have a home server, and i wanted my wife and me to be able to run privileged workflows. things like controlling home assistant, deploying custom web apps (e.g. our custom recipe app recipes.benjaminshafii.com), or managing legal torrents, without living in a terminal.
our initial setup was running the opencode web server directly and sharing credentials to it. that worked, but i found the web ui unreliable and very unfriendly for non-technical users.
the goal with openwork is to bring the kind of workflows i’m used to running in the cli into a gui, while keeping a very deep extensibility mindset. ideally this grows into something closer to an obsidian-style ecosystem, but for agentic work.
some core principles i had in mind:
- open by design: no black boxes, no hosted lock-in. everything runs locally or on your own servers. (models don’t run locally yet, but both opencode and openwork are built with that future in mind.)
- hyper extensible: skills are installable modules via a skill/package manager, using the native opencode plugin ecosystem.
- non-technical by default: plans, progress, permissions, and artifacts are surfaced in the ui, not buried in logs.
you can already try it:
- there’s an unsigned dmg
- or you can clone the repo, install deps, and if you already have opencode running it should work right away
it’s very alpha, lots of rough edges. i’d love feedback on what feels the roughest or most confusing.
happy to answer questions.
Show HN: Claude Quest – Pixel-art visualization for Claude Code sessions
Claude Quest renders Claude Code sessions as a pixel-art visualization, turning agent activity into something you can watch like a little game instead of scrolling raw terminal output.
Show HN: pubz: easy, conventional NPM publishing
pubz is an open-source tool that makes npm publishing easy and conventional, wrapping the repetitive steps of versioning and releasing a package in a single workflow.
Show HN: BGP Scout – BGP Network Browser
Hi HN,
When working with BGP data, I kept running into the same friction: it’s easy to get raw data, but surprisingly hard to browse networks over time — especially by when they appeared, where they operate, and what they actually look like at a glance.
I built a small tool, bgpscout.io, to scratch that itch.
It lets you:
Browse ASNs by registration date and geography
See where a given network appears to have presence
View commonly scattered public data about an ASN in one place
Save searches to track when new networks matching certain criteria appear
All of this data is public already; the goal was to make exploration faster and less painful.
I haven’t invested heavily in expanding it yet. Before doing so, I’m curious:
Is this solving a real problem for you?
What would make something like this actually useful in day-to-day work?
Feedback is welcome.
Show HN: Gambit, an open-source agent harness for building reliable AI agents
Hey HN!
Wanted to show our open source agent harness called Gambit.
If you’re not familiar, agent harnesses are sort of like an operating system for an agent... they handle tool calling, planning, context window management, and don’t require as much developer orchestration.
Normally you might see an agent orchestration framework pipeline like:
compute -> compute -> compute -> LLM -> compute -> compute -> LLM
we invert this so with an agent harness, it’s more like:
LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM
Essentially you describe each agent in either a self-contained markdown file or as a TypeScript program. Your root agent can bring in other agents as needed, and we create a type-safe way for you to define the interfaces between those agents. We call these decks.
Agents can call agents, and each agent can be designed with whatever model params make sense for your task.
Additionally, each step of the chain gets automatic evals, we call graders. A grader is another deck type… but it’s designed to evaluate and score conversations (or individual conversation turns).
We also have test agents you can define on a deck-by-deck basis, that are designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade.
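To make the idea concrete, here's a rough sketch of what a TypeScript deck and grader might look like. This is my own illustration, not Gambit's actual API; every name below is invented:

```typescript
// Invented sketch of a "deck": an agent with a typed input/output contract,
// per-deck model params, and a grader deck that scores its conversations.
interface SummarizeInput { url: string }
interface SummarizeOutput { summary: string; keyPoints: string[] }

const summarizer = {
  name: "summarizer",
  model: { provider: "openai", temperature: 0.2 }, // per-deck model params
  prompt: "Read the page at {{url}} and summarize it.",
  run: async (input: SummarizeInput): Promise<SummarizeOutput> => {
    // The harness would manage tool calls and LLM turns here.
    return { summary: `Summary of ${input.url}`, keyPoints: [] };
  },
};

// A grader is another deck type, designed to evaluate conversations.
const summaryGrader = {
  name: "summary-grader",
  rubric: "Score 0-1: is the summary faithful, and does it avoid leaking PII?",
};
```

The point of the typed contract is that when a root agent brings in `summarizer`, the interface between them is checked at compile time rather than discovered at runtime.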
Prior to Gambit, we had built an LLM based video editor, and we weren’t happy with the results, which is what brought us down this path of improving inference time LLM quality.
We know it’s missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We’re really happy with how it’s working with some of our early design partners, and we think it’s a way to implement a lot of interesting applications:
- Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don’t leak PII accidentally
- Spin up a usable bot in minutes and have Codex or Claude Code use our command line runner / graders to build a first version that is pretty good w/ very little human intervention.
We’ll be around if y’all have any questions or thoughts. Thanks for checking us out!
Walkthrough video: https://youtu.be/J_hQ2L_yy60
Show HN: Reversing YouTube’s “Most Replayed” Graph
Hi HN,
I recently noticed a recurring visual artifact in the "Most Replayed" heatmap on the YouTube player. The highest peaks were always surrounded by two dips. I got curious about why they were there, so I decided to reverse engineer the feature to find out.
This post documents the deep dive. It starts with a system design recreation, moves through reverse engineering the rendering code, and ends with the mathematics.
This is also my first attempt at writing an interactive article. I would love to hear your thoughts on the investigation and the format.
Show HN: TinyCity – A tiny city SIM for MicroPython (Thumby micro console)
Show HN: Timberlogs – Drop-in structured logging for TypeScript
Hi HN! I built Timberlogs because I was tired of console.log in production and existing logging solutions requiring too much setup.
Timberlogs is a drop-in structured logging library for TypeScript:
npm install timberlogs-client
import { createTimberlogs } from "timberlogs-client";
const timber = createTimberlogs({
source: "my-app",
environment: "production",
apiKey: process.env.TIMBER_API_KEY,
});
timber.info("User signed in", { userId: "123" });
timber.error("Payment failed", error);
Features:
- Auto-batching with retries
- Automatic redaction of sensitive data (passwords, tokens)
- Full-text search across all your logs
- Real-time dashboard
- Flow tracking to link related logs

It's currently in beta and free to use. Would love feedback from the HN community.
Site: https://timberlogs.dev
Docs: https://docs.timberlogs.dev
npm: https://npmjs.com/package/timberlogs-client
GitHub: https://github.com/enaboapps/timberlogs-typescript-sdk
Show HN: Tabstack – Browser infrastructure for AI agents (by Mozilla)
Hi HN,
My team and I are building Tabstack to handle the "web layer" for AI agents. Launch Post: https://tabstack.ai/blog/intro-browsing-infrastructure-ai-ag...
Maintaining a complex infrastructure stack for web browsing is one of the biggest bottlenecks in building reliable agents. You start with a simple fetch, but quickly end up managing a stack of proxies, handling client-side hydration, debugging brittle selectors, and writing custom parsing logic for every site.
Tabstack is an API that abstracts that infrastructure. You send a URL and an intent; we handle the rendering and return clean, structured data for the LLM.
How it works under the hood:
- Escalation Logic: We don't spin up a full browser instance for every request (which is slow and expensive). We attempt lightweight fetches first, escalating to full browser automation only when the site requires JS execution/hydration (a sketch of this pattern follows the list).
- Token Optimization: Raw HTML is noisy and burns context window tokens. We process the DOM to strip non-content elements and return a markdown-friendly structure that is optimized for LLM consumption.
- Infrastructure Stability: Scaling headless browsers is notoriously hard (zombie processes, memory leaks, crashing instances). We manage the fleet lifecycle and orchestration so you can run thousands of concurrent requests without maintaining the underlying grid.
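As a rough sketch of that escalation pattern (Tabstack's internals surely differ, and every name here is invented):

```typescript
// Invented illustration of fetch-first escalation; not Tabstack's actual code.
// Try a cheap static fetch, and only fall back to a full headless browser
// when the page clearly needs client-side rendering.
async function fetchPage(url: string): Promise<string> {
  const res = await fetch(url, { headers: { "User-Agent": "example-agent/1.0" } });
  const html = await res.text();

  // Crude heuristic: a near-empty body or a bare framework root node
  // suggests the page hydrates client-side and needs JS execution.
  const needsBrowser =
    html.length < 2048 || /<div id="(root|app)">\s*<\/div>/.test(html);

  if (!needsBrowser) return html;

  // Escalate: render with a real browser engine.
  return renderWithHeadlessBrowser(url); // hypothetical helper
}

// Stand-in for the expensive path (e.g. driving a managed headless browser).
declare function renderWithHeadlessBrowser(url: string): Promise<string>;
```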
On Ethics: Since we are backed by Mozilla, we are strict about how this interacts with the open web.
- We respect robots.txt rules.
- We identify our User Agent.
- We do not use requests/content to train models.
- Data is ephemeral and discarded after the task.
The linked post goes into more detail on the infrastructure and why we think browsing needs to be a distinct layer in the AI stack.
This is obviously a very new space and we're all learning together. There are plenty of known unknowns (and likely even more unknown unknowns) when it comes to agentic browsing, so we’d genuinely appreciate your feedback, questions, and tips.
Happy to answer questions about the stack, our architecture, or the challenges of building browser infrastructure.
Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR
For the past year at Tavus, I've been working to rethink how AI manages timing in conversation, and I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world.
Some technical details:
- Predicts conversational floor ownership, not speech endpoints
- Audio-native streaming model, no ASR dependency
- Human-timed responses without silence-based delays
- Zero interruptions at sub-100ms median latency
- In benchmarks, Sparrow-1 beats all existing models on real-world turn-taking baselines
I wrote more about the work here: https://www.tavus.io/post/sparrow-1-human-level-conversation...
Show HN: Webctl – Browser automation for agents based on CLI instead of MCP
Hi HN, I built webctl because I was frustrated by the gap between curl and full browser automation frameworks like Playwright.
I initially built this to solve a personal headache: I wanted an AI agent to handle project management tasks on my company’s intranet. I needed it to persist cookies across sessions (to handle SSO) and then scrape a Kanban board.
Existing AI browser tools (like current MCP implementations) often force unsolicited data into the context window—dumping the full accessibility tree, console logs, and network errors whether you asked for them or not.
webctl is an attempt to solve this with a Unix-style CLI:
- Filter before context: You pipe the output to standard tools. webctl snapshot --interactive-only | head -n 20 means the LLM only sees exactly what I want it to see.
- Daemon Architecture: It runs a persistent background process. The goal is to keep the browser state (cookies/session) alive while you run discrete, stateless CLI commands.
- Semantic targeting: It uses ARIA roles (e.g., role=button name~="Submit") rather than fragile CSS selectors.
Disclaimer: The daemon logic for state persistence is still a bit experimental, but the architecture feels like the right direction for building local, token-efficient agents.
It’s basically "Playwright for the terminal."
Show HN: The Hessian of tall-skinny networks is easy to invert
It turns out the inverse of the Hessian of a deep net is easy to apply to a vector. Doing this naively takes cubically many operations in the number of layers (so impractical), but it's possible to do this in time linear in the number of layers (so very practical)!
This is possible because the Hessian of a deep net has a matrix polynomial structure that factorizes nicely. The Hessian-inverse-product algorithm that takes advantage of this is similar to running backprop on a dual version of the deep net. It echoes an old idea of Pearlmutter's for computing Hessian-vector products.
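For context, the forward direction is Pearlmutter's classic identity: a Hessian-vector product is the directional derivative of the gradient, so it costs roughly one extra backprop-like pass:

```latex
Hv \;=\; \nabla^2 L(\theta)\, v
   \;=\; \left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\,
         \nabla L(\theta + \varepsilon v)\right|_{\varepsilon = 0}
```

The claim here is the analogous statement for H^{-1}v: because the Hessian of a layered net factorizes as a matrix polynomial over the layers, the linear solve can recurse layer by layer, like backprop run on a dual network.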
Maybe this idea is useful as a preconditioner for stochastic gradient descent?
Show HN: Free AI Image Upscaler (100% local, private, and free)
This is an AI-powered image upscaler that runs 100% locally, so images never leave your machine. It uses machine-learning models to increase resolution while preserving detail and avoiding artifacts, and it's free to use.
Show HN: Tusk Drift – Turn production traffic into API tests
Hi HN! In the past few months my team and I have been working on Tusk Drift, a system that records real API traffic from your service, then replays those requests as deterministic tests. Outbound I/O (databases, HTTP calls, etc.) gets automatically mocked using the recorded data.
Problem we're trying to solve: Writing API tests is tedious, and hand-written mocks drift from reality. We wanted tests that stay realistic because they come from real traffic.
Versus mocking libraries: Tools like VCR/Nock intercept HTTP within your tests. Tusk Drift records full request/response traces externally (HTTP, DB, Redis, etc.) and replays them against your running service; there's no test code or fixtures to write and maintain.
How it works:
1. Add a lightweight SDK (we currently support Python and Node.js)
2. Record traffic in any environment.
3. Run `tusk run`; the CLI sandboxes your service and serves mocks via a Unix socket
We run this in CI on every PR. We've also been using it as a test harness for AI coding agents: they can make changes, run `tusk run`, and get immediate feedback without needing live dependencies.
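To illustrate the record/replay mechanism, here's a guess at the shape of a recorded trace and its replay-time lookup; the types and names are invented, not Tusk's actual format:

```typescript
// Invented illustration of record/replay; not Tusk Drift's real trace format.
// During recording, each outbound call becomes a span; during replay, the
// same call is answered from the recording instead of the live dependency.
interface RecordedSpan {
  kind: "http" | "db" | "redis";
  request: string;  // normalized request key, e.g. "GET /users/123"
  response: string; // serialized recorded response
}

class ReplayMocks {
  private spans: Map<string, string>;

  constructor(recording: RecordedSpan[]) {
    this.spans = new Map(recording.map((s) => [`${s.kind}:${s.request}`, s.response]));
  }

  // Called in place of real I/O while the service runs under `tusk run`.
  lookup(kind: RecordedSpan["kind"], request: string): string {
    const hit = this.spans.get(`${kind}:${request}`);
    if (hit === undefined) {
      throw new Error(`No recorded response for ${kind}:${request}`);
    }
    return hit;
  }
}
```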
Source: https://github.com/Use-Tusk/tusk-drift-cli
Demo: https://github.com/Use-Tusk/drift-node-demo
Happy to answer questions!
Show HN: GitHub – Burn – Rust tensor library and deep learning framework
Burn is an open-source tensor library and deep learning framework written in Rust. The post highlights its modular design, efficient memory management, and backend portability, supporting both training and inference across a range of hardware.
Show HN: A cross-platform toolkit to explore OS internals and capabilities
I built this toolkit with my colleague to dive deep into OS internals and automate the identification of privilege escalation vectors. Written in pure C without external dependencies, it explores everything from Linux capabilities and Docker escapes to Windows token manipulation and service permissions. We believe that the constant struggle between breaking and securing systems is the ultimate driver of software evolution. This tool is our contribution to that cycle, designed for researchers who want to understand how low-level misconfigurations can be discovered and audited across different environments.
Source: https://github.com/Ferki-git-creator/ferki-escalator
Show HN: Munimet.ro – ML-based status page for the local subways in SF
During a recent subway outage in San Francisco I decided to build a webapp in the spirit of "Do I Need an Umbrella," basically to answer the question "Should I take the subway or a bus?"
In the interest of learning new tools I decided to vibe code it as much as possible.
First, I had Claude Code write a background script to download images of the real-time circuit diagram of the subway, which are publicly available here: http://sfmunicentral.com/
Next I had it build an image labeler tool in tkinter, which turned out to need a lot of manual tweaking before I could even get to the labeling. Seemed like the right tool for the job, but it would have saved time if I'd built it from scratch myself.
The most interesting part was turning the labeled image data into predictions with pytorch. Claude wrote the initial script fairly quickly, but as these things go it required manual tweaking and second guessing myself on the images flagged as outliers. I'll admit I got embarrassingly far along before realizing that Claude hadn't enabled pytorch's GPU support; a real facepalm moment on my part.
For those curious, brave, or crazy enough to dive in the source code is available here under an MIT license: https://github.com/MrEricSir/munimet.ro
Show HN: Tiny FOSS Compass and Navigation App (<2MB)
MBCompass is a tiny (under 2 MB) open-source geomagnetic compass and navigation app for Android, using the device's magnetometer to orient you via the Earth's magnetic field.
Show HN: ContextFort – Visibility and controls for browser agents
Hey HN! I’m Ashwin, co-founder of ContextFort (https://contextfort.ai/). We provide visibility and controls for AI browser agents like Claude in Chrome through an open-source browser extension.
Browser agents are AI copilots that can autonomously navigate and take actions in your browser. They show up as standalone browsers (Comet, Atlas) or Chrome extensions (Claude).
They’re especially useful in sites where search/API connectors don’t work well, like searching through Google Groups threads for a bug fix or pulling invoices from BILL.com. Anthropic released Claude CoWork yesterday, and in their launch video, they showcased their browser-use chromium extension: https://www.youtube.com/watch?v=UAmKyyZ-b9E
But enterprise adoption is slow because of indirect prompt injection risks, about which Simon Willison has written in great detail in his blogs: https://simonwillison.net/2025/Aug/26/piloting-claude-for-ch.... And before security teams can decide on guardrails, they need to know how employees are using browser agents to understand where the risks are.
So, we reverse-engineered how the Claude in Chrome extension works and built a visibility layer that tracks agent sessions end-to-end. It detects when an AI agent takes control of the browser and records which pages it visited during a session and what it does on each page (what was clicked and where text was input).
On top of that, we’ve also added simple controls that let security teams act on what the visibility layer captures (a hypothetical configuration sketch follows the list):
(1) Block specific actions on specific pages (e.g., prevent the agent from clicking “Submit” on email)
(2) Block risky cross-site flows in a single session (e.g., block navigation to Atlassian after interacting with StackOverflow), or apply a stricter policy and block bringing any external context to Atlassian entirely.
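Here's a hypothetical configuration sketch showing both control types; none of these field names come from ContextFort:

```typescript
// Hypothetical policy shape only; ContextFort's real configuration may differ.
// It illustrates the two controls above: per-page action blocks and
// cross-site flow blocks within a single agent session.
const policy = {
  actionBlocks: [
    // (1) Prevent the agent from clicking "Send" on webmail.
    { urlPattern: "https://mail.google.com/*", blockedActions: ["click:Send"] },
  ],
  crossSiteBlocks: [
    // (2) Block navigation to Atlassian after the agent has interacted
    // with StackOverflow in the same session.
    { after: "https://stackoverflow.com/*", block: "https://*.atlassian.net/*" },
  ],
};
```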
We demo all the above features here in this 2-minute YouTube video: https://www.youtube.com/watch?v=1YtEGVZKMeo
You can try our browser extension here: https://github.com/ContextFort-AI/ContextFort
Thrilled to share this with you and hear your comments!
Show HN: HyTags – HTML as a Programming Language
This is hyTags, a programming language embedded in HTML for building interactive web UIs.
It started as a way to write full-stack web apps in Swift without a separate frontend, but grew into a small language with control flow, functions, and async handling via HTML tags. The result is backend language-agnostic and can be generated from any server that can produce HTML via templates or DSLs.
Show HN: A 10KiB kernel for cloud apps
This is a 10 KiB open-source kernel for running cloud applications directly on hardware, without a general-purpose OS or virtualization layer underneath. The aim is a simple, efficient base for building and operating cloud services.
Show HN: The viral speed read at 900wpm app
This rapid serial visual presentation (RSVP) speed reader went viral over the last few days. I built the app a few weeks ago to take advantage of the auto-playing videos on social media. Now you can beam text right into your followers' eye sockets!
Show HN: Xoscript
Xoscript is a scripting language; the post walks through its origins, key features, and how it has evolved and been used.
Show HN: Voice Composer – Browser-based pitch detection to MIDI/strudel/tidal
Built this over the weekend to bridge the gap between "can hum a melody" and "can code algorithmic music patterns" (Strudel/TidalCycles) for live coding and live dj'ing.
What it does:
Real-time pitch detection in the browser using multiple algorithms:
- CREPE (deep learning model via TensorFlow.js)
- YIN (autocorrelation-based fundamental frequency estimation)
- FFT with harmonic product spectrum
- AMDF (average magnitude difference function)

Outputs: visual piano roll, MIDI files, Strudel/TidalCycles code. All client-side; nothing leaves your machine.

Why multiple algorithms: Different pitch detection approaches work better for different inputs. CREPE is most accurate but computationally expensive; YIN is fast and works well for clean monophonic input; FFT/HPS handles harmonic-rich sounds; AMDF is lightweight. Let users switch based on their use case.
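To give a flavor of the lighter-weight approaches, here's a minimal AMDF-style detector in TypeScript; it's a from-scratch sketch, not the app's actual code:

```typescript
// Minimal AMDF pitch detector sketch. For each candidate lag, average the
// magnitude difference between the signal and a delayed copy of itself;
// the lag with the smallest difference approximates the fundamental period.
function detectPitchAMDF(
  samples: Float32Array,
  sampleRate: number,
  minHz = 80,
  maxHz = 1000,
): number | null {
  const minLag = Math.floor(sampleRate / maxHz);
  const maxLag = Math.floor(sampleRate / minHz);
  let bestLag = -1;
  let bestDiff = Infinity;

  for (let lag = minLag; lag <= maxLag; lag++) {
    const n = samples.length - lag;
    if (n <= 0) break; // buffer too short for this lag

    let diff = 0;
    for (let i = 0; i < n; i++) diff += Math.abs(samples[i] - samples[i + lag]);
    diff /= n;

    if (diff < bestDiff) {
      bestDiff = diff;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : null;
}
```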
Technical details:
React, runs entirely in browser via Web Audio API Canvas-based visualization with real-time waveform rendering
The original problem: I wanted to learn live coding but had zero music theory. This makes it trivial to capture melodic ideas and immediately use them in pattern-based music systems.
Try it: https://dioptre.github.io/tidal/
Works best on desktop. Over time it will work more like a Digital Audio Workstation (DAW).
Source: https://github.com/dioptre/tidal
Show HN: An iOS budget app I've been maintaining since 2011
I’ve been building and selling software since the early 2000s, starting with classic shareware. In 2011, I moved into the App Store world and built an iOS budget app because I needed a simple way to track my own expenses.
At the time, my plan was to replace a few larger shareware projects with several smaller apps to spread the risk. That didn’t quite work out — one app, MoneyControl, quickly grew so much that it became my main focus.
Fifteen years later, the app is still on the App Store, still actively developed, and still used by people who started with version 1.0. Many apps from that era are long gone.
Looking back, these are some of the things that mattered most:
Starting early helped, but wasn’t enough on its own. Early visibility made a difference, but long-term maintenance and reliability are what kept users.
Focus beat diversification. I wanted many small apps. I ended up with one large, long-lived product. Deep focus turned out to be more sustainable.
Long-term maintenance is most of the work. Adapting to new iOS versions, migrating data safely, handling edge cases, and keeping old data usable mattered more than flashy features.
Discoverability keeps getting harder. Reaching users on the App Store today is much more difficult than it was years ago. Prices are higher than in the old 99-cent days, but visibility hasn’t improved.
I’m a developer first, not a marketer. I work alone, with occasional help from freelancers. No employees, no growth team. The app could probably have grown more with better marketing, but that was never my strength.
You don’t need to get rich to build something sustainable. I didn’t build this for an exit. I’ve been able to make a living from my work for over 20 years, which feels like success to me.
Building things you actually use keeps you honest. Every product I built was something I personally needed. That authenticity mattered more than any roadmap.
This week I released version 10 with a new design and a major technical overhaul. It feels less like a milestone and more like preparing the app for the next phase.
Happy to answer questions about long-term app maintenance, indie development, or keeping a product alive across many iOS generations.