Show stories

Show HN: Knock-Knock.net – Visualizing the bots knocking on my server's door
djkurlander about 1 hour ago

knock-knock.net
4 points | 3 comments
Show HN: Ingglish – What if English spelling made sense?
ptarjan about 1 hour ago

My 5-year-old is learning to read and I keep having to say "yeah sorry, that letter is silent" and "no, those letters make a different sound in this word."

So I built Ingglish — English where every letter always makes the same sound. "ough" alone makes 6 different sounds (though, through, rough, cough, thought, bough). In Ingglish, every letter has one sound, no silent letters, no exceptions.

  - Paste text to see it translated instantly
  - Translate any webpage while preserving its layout
  - Chrome extension to browse the web in Ingglish
  - Fully reversible — Ingglish text can be converted back to standard English (minus homophones)
The core translator, DOM integration, and website are all open source: https://github.com/ptarjan/ingglish
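A toy version of the core mechanic, every grapheme mapping to exactly one output spelling applied longest-match-first, might look like this (the mappings below are hypothetical stand-ins, not Ingglish's actual rules):

```python
# Toy one-to-one respeller: each source grapheme maps to exactly one
# output spelling, applied longest-match-first. The RULES here are
# invented examples, not Ingglish's real mapping.
RULES = {"ough": "oh", "ph": "f", "kn": "n", "c": "k"}

def respell(word: str, rules: dict = RULES) -> str:
    graphemes = sorted(rules, key=len, reverse=True)  # longest match first
    out, i = [], 0
    while i < len(word):
        for g in graphemes:
            if word.startswith(g, i):
                out.append(rules[g])
                i += len(g)
                break
        else:
            out.append(word[i])  # no rule: copy the letter through
            i += 1
    return "".join(out)

print(respell("photo"))   # foto
print(respell("knight"))  # night
```

Because each rule is a fixed one-to-one substitution, a second table running in the other direction gives the reversibility the post describes (modulo homophones).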

I'd love your feedback! Thanks.

ingglish.com
2 points | 1 comment
Show HN: Copy-and-patch compiler for hard real-time Python
Saloc 4 days ago

I built Copapy as an experiment: Can Python be used for hard real-time systems?

Instead of an interpreter or JIT, Copapy builds a computation graph by tracing Python code and uses a custom copy-and-patch compiler. The result is very fast native code with no GC, no syscalls, and no memory allocations at runtime.
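The graph-by-tracing idea can be sketched in a few lines: arithmetic on proxy objects records operations instead of executing them, and the recorded graph is what a compiler would then lower to native code. This is an illustration of the concept only, not Copapy's API:

```python
from dataclasses import dataclass

# Tracing sketch: operators on Node build a graph rather than compute.
@dataclass
class Node:
    op: str
    args: tuple = ()

    def __add__(self, other): return Node("add", (self, _wrap(other)))
    def __mul__(self, other): return Node("mul", (self, _wrap(other)))

def _wrap(x):
    return x if isinstance(x, Node) else Node("const", (x,))

def evaluate(n: Node, env: dict):
    # A compiler would walk this same graph and emit machine code;
    # here we just interpret it to show the graph is complete.
    if n.op == "input": return env[n.args[0]]
    if n.op == "const": return n.args[0]
    vals = [evaluate(a, env) for a in n.args]
    return vals[0] + vals[1] if n.op == "add" else vals[0] * vals[1]

x = Node("input", ("x",))
graph = x * 2 + 1                 # traced, not executed
print(evaluate(graph, {"x": 3}))  # 7
```

Because the graph is fixed after tracing, all allocation happens up front, which is what makes the no-GC, no-allocation runtime claim possible.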

The copy-and-patch compiler currently supports x86_64 as well as 32- and 64-bit ARM. It comes as a small Python package with no other dependencies: no cross-compiler, nothing except Python.

The current focus is on robotics and control systems in general. This project is early but already usable and easy to try out.

Would love your feedback!

github.com
45 points | 2 comments
samcgraw about 2 hours ago

Show HN: Fieldnotes

Hi HN!

I wanted a simple UI for notes and observations around my neighborhood (e.g. this garden has beautiful poppies, this coffee shop has excellent espresso, etc.) and built this. It’s open and free to use, I hope you enjoy it as much as I do!

Feedback welcome.

fieldnote.ink
2 points | 0 comments
Show HN: Lineark – Linear CLI and Rust SDK for Humans and LLMs
fb03 about 3 hours ago

lineark is an unofficial CLI and Rust SDK for Linear (the issue tracker). I built it because I use Claude Code heavily, and the Linear MCP server eats ~13K tokens of context just to describe its tools — before my agent does any actual work.

lineark takes a different approach: it's a CLI your agent calls via Bash. The full command reference (lineark usage) is under 1,000 tokens.

It's also just a nice CLI for humans — human-readable names instead of UUIDs, auto-detected output format (outputs tables in terminal/interactive session, JSON when piped).
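The "tables in a terminal, JSON when piped" behavior is the classic isatty pattern; a Python sketch of the idea (lineark itself is Rust, so this is illustrative only):

```python
import json
import sys

# Auto-detect output format: human-readable table when stdout is a
# terminal, machine-readable JSON when piped to another program.
def emit(rows: list) -> str:
    if sys.stdout.isatty():  # interactive session: render a table
        cols = list(rows[0])
        widths = {c: max(len(c), *(len(str(r[c])) for r in rows)) for c in cols}
        lines = ["  ".join(c.ljust(widths[c]) for c in cols)]
        lines += ["  ".join(str(r[c]).ljust(widths[c]) for c in cols)
                  for r in rows]
        return "\n".join(lines)
    return json.dumps(rows)  # piped: emit JSON

print(emit([{"id": "ENG-1", "title": "Fix login"}]))
```

The same command then works for both a human at a prompt and an agent capturing output, with no flag needed.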

Under the hood: the SDK is fully generated from Linear's GraphQL schema via a custom codegen pipeline (apollo-parser → typed Rust). The CLI consumes the SDK with zero raw GraphQL — just typed method calls. You can also create your own lean return data types and validate them against Linear's schema at compile time.

MIT Licensed.

Happy to answer questions. Thanks!

github.com
3 points | 0 comments
Show HN: WCAG 2.2 AAA Toolkit – AI Skill for Accessible Web Apps
simonmak about 4 hours ago

github.com
2 points | 0 comments
Show HN: Arcmark – macOS bookmark manager that attaches to browser as sidebar
ahmed_sulajman 1 day ago

Hey HN! I was a long-time Arc browser user and loved how its sidebar organized tabs and bookmarks into workspaces. I wanted to switch to other browsers without losing that workflow, so I built Arcmark: a macOS bookmark manager (Swift/AppKit) that floats as a sidebar attached to any browser window. It uses the macOS accessibility API to follow the browser window around.

You get workspace-based link/bookmark organization with nested folders, drag-and-drop reordering, and custom workspace colors. For the most part I tried to replicate Arc's sidebar UX as closely as possible.

1. Local-first: all data lives in a single JSON file (~/Library/Application Support/Arcmark/data.json). No accounts, no cloud sync.

2. Works with any browser: Chrome, Safari, Brave, Arc, etc. Or use it standalone as a bookmark manager with a regular window.

3. Import pinned tabs and spaces from Arc: it parses Arc's StorableSidebar.json to recreate the exact workspace/folder structure.

4. Built with swift-bundler rather than Xcode.

There's a demo video in the README showing the sidebar attachment in action. The DMG is available on the releases page (macOS 13+), or you can build from source.

This is v0.1.0 so it's a very early version. Would appreciate any feedback or thoughts

GitHub: https://github.com/Geek-1001/arcmark

github.com
86 points | 19 comments
Show HN: 500x faster string matching for Linux Netfilter (O(1) vs. O(N))
landerrosette about 5 hours ago

I built a drop-in replacement for the kernel’s xt_string module.

xt_string's match cost scales linearly with the number of rules (O(N)), causing massive slowdowns on large rulesets. Strider uses Aho–Corasick, so per-packet matching cost is independent of the rule count (O(1) in the number of patterns).

Key Features:

O(1) Algorithmic Complexity: Uses a compact, double-array trie-based Aho–Corasick automaton, sustaining above 1 Gbps when matching 3,000 patterns, while xt_string (KMP) drops below 2 Mbps.

Lockless Datapath: RCU-protected lookups ensure zero locking overhead on the packet processing hot path.

Correctness: Never misses patterns spanning across IP fragments (unlike xt_string’s fast Boyer–Moore mode).
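The reason Aho–Corasick decouples match cost from rule count is that all patterns are compiled into one automaton, and each payload byte advances that automaton exactly once. A from-scratch Python sketch of the algorithm (illustrative only; Strider's kernel module uses a compact double-array trie, not Python dicts):

```python
from collections import deque

# Build one automaton for all patterns: trie + BFS failure links.
def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # insert each pattern into the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())             # BFS to compute failure links
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]          # inherit matches from suffix state
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    s, hits = 0, set()
    for ch in text:                         # one transition per input symbol
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits |= out[s]
    return hits

print(search("ushers", build(["he", "she", "his", "hers"])))
```

Adding more patterns grows the automaton, but the per-byte work during matching stays constant, which is the O(1)-in-rule-count property the post claims.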

github.com
3 points | 0 comments
Show HN: Sameshi – a ~1200 Elo chess engine that fits within 2KB
datavorous_ 1 day ago

I made a chess engine today and made it fit within 2KB. I used a variant of Minimax called Negamax, with alpha-beta pruning. For the board representation I used a 120-cell "mailbox". I managed to squeeze in checkmate/stalemate detection after trimming out some edge cases.
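Negamax with alpha-beta is the same search scheme minus the chess specifics; here it is on a toy game (take 1 to 3 stones, taking the last stone wins), which is my own illustration, not Sameshi's code:

```python
# Negamax with alpha-beta pruning. Scores are from the perspective of
# the player to move: +1 = win, -1 = loss.
def negamax(stones: int, alpha: float = -1.0, beta: float = 1.0) -> float:
    if stones == 0:
        return -1.0                  # previous player took the last stone
    best = -1.0
    for take in (1, 2, 3):
        if take <= stones:
            # Negate the child's score and flip/negate the window.
            best = max(best, -negamax(stones - take, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:        # cutoff: opponent won't allow this line
                break
    return best

# Positions that are multiples of 4 are losses for the side to move.
print([negamax(n) for n in range(1, 9)])
# [1.0, 1.0, 1.0, -1.0, 1.0, 1.0, 1.0, -1.0]
```

The negation trick is what keeps the code tiny: one function serves both players, which matters when the whole engine has to fit in 2KB.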

I have been a great fan of the demoscene (computer art subculture) since middle school, so this was a ritual I had to perform.

For estimating the Elo, I ran 240 automated games against Stockfish at Elo levels 1320 to 1600, at fixed depth 5 and under some constrained rules, with equal color distribution.

Then I converted the pooled win/draw/loss scores to Elo using the standard logistic formula, with a binomial 95% confidence interval.

github.com
223 points | 69 comments
marquisdegeek about 6 hours ago

Show HN: Eliza, a line-by-line remake of the original AI chatbot from 1966

Source at https://github.com/MarquisdeGeek/Eliza-Origins, along with links to a talk I gave explaining a bit about it.
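For readers who haven't seen the 1966 original: its heart is keyword rules plus pronoun reflection, which fits in a few lines. These rules are illustrative miniatures, not Weizenbaum's actual script:

```python
import re

# ELIZA in miniature: match a keyword pattern, reflect pronouns in the
# captured fragment, and slot it into a canned response template.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Tell me more."           # fallback when no keyword matches

print(respond("I need my space"))    # Why do you need your space?
```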

marquisdegeek.github.io
3 points | 0 comments
Show HN: MOL – A programming language where pipelines trace themselves
MouneshK 4 days ago

Hi HN,

I built MOL, a domain-specific language for AI pipelines. The main idea: the pipe operator |> automatically generates execution traces — showing timing, types, and data at each step. No logging, no print debugging.

Example:

    let index be doc |> chunk(512) |> embed("model-v1") |> store("kb")
This auto-prints a trace table with each step's execution time and output type. Elixir and F# have |> but neither auto-traces.
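What a self-tracing pipe operator does under the hood can be sketched in Python (a sketch of the idea, not MOL's implementation; Python's `|` stands in for MOL's `|>`):

```python
import time

# Each pipe step records (step name, output type, elapsed time) as a
# side effect of running, so the trace needs no explicit logging.
class Traced:
    def __init__(self, value):
        self.value, self.trace = value, []

    def __or__(self, fn):
        start = time.perf_counter()
        result = fn(self.value)
        ms = (time.perf_counter() - start) * 1000
        self.trace.append((fn.__name__, type(result).__name__, f"{ms:.2f}ms"))
        self.value = result
        return self

def chunk(doc): return [doc[i:i + 4] for i in range(0, len(doc), 4)]
def embed(chunks): return [[float(len(c))] for c in chunks]

p = Traced("some document") | chunk | embed
for step in p.trace:
    print(step)  # e.g. ('chunk', 'list', '0.01ms')
```

A language-level `|>` can do this for every pipeline by construction, which is the pitch over bolting tracing onto Elixir or F#.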

Other features:

  - 12 built-in domain types (Document, Chunk, Embedding, VectorStore, Thought, Memory, Node)
  - Guard assertions: `guard answer.confidence > 0.5 : "Too low"`
  - 90+ stdlib functions
  - Transpiles to Python and JavaScript
  - LALR parser using Lark

The interpreter is written in Python (~3,500 lines). 68 tests passing. On PyPI: `pip install mol-lang`.

Online playground (no install needed): http://135.235.138.217:8000

We're building this as part of IntraMind, a cognitive computing platform at CruxLabx.

github.com
38 points | 16 comments
Show HN: SQL-tap – Real-time SQL traffic viewer for PostgreSQL and MySQL
mickamy 1 day ago

sql-tap is a transparent proxy that captures SQL queries by parsing the PostgreSQL/MySQL wire protocol and displays them in a terminal UI. You can run EXPLAIN on any captured query. No application code changes needed — just change the port.
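The wire-protocol parsing at the core of such a proxy, reduced to one case: a PostgreSQL simple Query message is a type byte `'Q'`, a 4-byte big-endian length that includes itself, and a NUL-terminated query string. A minimal parser (illustrative; sql-tap handles the full protocol):

```python
import struct

# Parse a PostgreSQL simple Query ('Q') message from a captured buffer.
def parse_query_message(buf: bytes):
    if len(buf) < 5 or buf[0:1] != b"Q":
        return None                          # not a Query message
    (length,) = struct.unpack("!I", buf[1:5])  # length includes these 4 bytes
    body = buf[5:5 + length - 4]
    return body.rstrip(b"\x00").decode("utf-8")

sql = b"SELECT 1;\x00"
msg = b"Q" + struct.pack("!I", 4 + len(sql)) + sql
print(parse_query_message(msg))  # SELECT 1;
```

Sitting between client and server and decoding frames like this is what lets the proxy capture every query with no application changes.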

github.com
223 points | 42 comments
Show HN: Rover – Embeddable web agent
arjunchint 2 days ago

Rover is the world's first Embeddable Web Agent, a chat widget that lives on your website and takes real actions for your users. Clicks buttons. Fills forms. Runs checkout. Guides onboarding. All inside your UI.

One script tag. No APIs to expose. No code to maintain.

We built Rover because we think websites need their own conversational agentic interfaces: users don't want to figure out how your site works. If a site doesn't have one, it is going to be disintermediated by Chrome's or Comet's agents.

We are the only web agent with a DOM-only architecture, which is what lets us ship an embeddable script as a harness for taking actions on your site. Our DOM-native approach hits 81.39% on WebBench.

Beta with embed script is live at rtrvr.ai/rover.

Built by two ex-Google engineers. Happy to answer architecture questions.

rtrvr.ai
19 points | 10 comments
Show HN: Manga Viewer – Zero-dep manga/comic reader in vanilla JavaScript
tokagemushi about 7 hours ago

Manga Viewer is a web-based manga/comic reader built in vanilla JavaScript with zero dependencies. It works offline, supports a range of manga/comic formats, and aims for a simple, user-friendly reading interface.

github.com
2 points | 0 comments
rosslazer 2 days ago

Show HN: A reputation index from mitchellh's Vouch trust files

I was inspired by mitchellh's Vouch project, an explicit trust system where maintainers vouch for contributors before they can interact with a repo. Ghostty uses it to filter out AI slop PRs.

Because Vouch exposes the vouch list as a plain text file (VOUCHED.td), I realized I could aggregate them across GitHub and build a reputation index. A crawler finds every VOUCHED.td file, pulls the entries, and computes a weighted score per user. Vouches from high-star repos count more than vouches from zero-star repos.
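One plausible shape for such a weighting (my guess at the idea, not vouchbook's actual formula): sum a log-damped star count per vouch, so a vouch from a popular repo counts more without mega-repos dominating entirely.

```python
import math

# Aggregate vouches into a weighted per-user reputation score.
# Each vouch is (vouched_user, stars_of_the_vouching_repo).
def reputation(vouches: list) -> dict:
    scores = {}
    for user, repo_stars in vouches:
        # log1p damping: a 12k-star repo outweighs a 3-star repo,
        # but not by a factor of 4000.
        scores[user] = scores.get(user, 0.0) + math.log1p(repo_stars)
    return scores

vouches = [("alice", 12000), ("alice", 0), ("bob", 3)]
print(reputation(vouches))
```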

Next step is to wire up an API so that the vouch GH action can start to use this data to auto approve contributors.

vouchbook.dev
18 points | 3 comments
Show HN: GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs
somesortofthing 2 days ago

I was frustrated with how poor a signal of progress through a big PR "Files viewed" is, so I made a "Lines viewed" indicator to complement it.

Designed to look like a stock GitHub UI element - it even respects the light/dark theme. Runs fully locally, no API calls.

Splits insertions and deletions by default, but you can also merge them into a single "lines" figure in the settings.

chromewebstore.google.com
15 points | 11 comments
Show HN: Off Grid – Run AI text, image gen, vision offline on your phone
ali_chherawalla about 19 hours ago

Your phone has a GPU more powerful than most 2018 laptops. Right now it sits idle while you pay monthly subscriptions to run AI on someone else's server, sending your conversations, your photos, your voice to companies whose privacy policy you've never read. Off Grid is an open-source app that puts that hardware to work. Text generation, image generation, vision AI, voice transcription — all running on your phone, all offline, nothing ever uploaded.

That means you can use AI on a flight with no wifi. In a country with internet censorship. In a hospital where cloud services are a compliance nightmare. Or just because you'd rather not have your journal entries sitting in someone's training data.

The tech: llama.cpp for text (15-30 tok/s, any GGUF model), Stable Diffusion for images (5-10s on Snapdragon NPU), Whisper for voice, SmolVLM/Qwen3-VL for vision. Hardware-accelerated on both Android (QNN, OpenCL) and iOS (Core ML, ANE, Metal).

MIT licensed. Android APK on GitHub Releases. Build from source for iOS.

github.com
113 points | 61 comments
ekadet about 8 hours ago

Show HN: Retry script for Oracle Cloud free tier ARM instances

Oracle's free tier (4 ARM cores, 24GB RAM, forever) is great but nearly impossible to provision due to capacity issues. I built a Terraform retry script that automatically retries until capacity becomes available.
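The retry loop at the heart of such a script, generalized (a sketch of the pattern only; the real script shells out to `terraform apply` and watches for Oracle's capacity error):

```python
import time

# Retry an attempt function with capped exponential backoff until it
# succeeds, raising immediately on non-retryable failures.
def retry(attempt_fn, is_retryable, base_delay=1.0, max_delay=300.0):
    delay = base_delay
    while True:
        ok, output = attempt_fn()
        if ok:
            return output
        if not is_retryable(output):
            raise RuntimeError(output)   # a real error, not a capacity miss
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, but cap the wait

# Demo with a fake provisioner that succeeds on the third attempt.
attempts = {"n": 0}
def fake_apply():
    attempts["n"] += 1
    if attempts["n"] < 3:
        return False, "Out of host capacity"
    return True, "created"

print(retry(fake_apply, lambda out: "capacity" in out, base_delay=0.01))
```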

Also includes the fix for the "did not find a proper configuration for key id" error that everyone hits in Cloud Shell.

GitHub: https://github.com/ekadetov/oci-terraform-retry-script

2 points | 0 comments
Show HN: Data Engineering Book – An open source, community-driven guide
xx123122 2 days ago

Hi HN! I'm currently a Master's student at USTC (University of Science and Technology of China). I've been diving deep into Data Engineering, especially in the context of Large Language Models (LLMs).

The Problem: I found that learning resources for modern data engineering are often fragmented and scattered across hundreds of medium articles or disjointed tutorials. It's hard to piece everything together into a coherent system.

The Solution: I decided to open-source my learning notes and build them into a structured book. My goal is to help developers fast-track their learning curve.

Key Features:

LLM-Centric: Focuses on data pipelines specifically designed for LLM training and RAG systems.

Scenario-Based: Instead of just listing tools, I compare different methods/architectures based on specific business scenarios (e.g., "When to use Vector DB vs. Keyword Search").

Hands-on Projects: Includes full code for real-world implementations, not just "Hello World" examples.

This is a work in progress, and I'm treating it as "Book-as-Code". I would love to hear your feedback on the roadmap or any "anti-patterns" I might have included!

Check it out:

Online: https://datascale-ai.github.io/data_engineering_book/

GitHub: https://github.com/datascale-ai/data_engineering_book

github.com
242 points | 31 comments
anateus 2 days ago

Show HN: Open Notes – Community Notes-style context for Discord

Howdy, Open Notes co-founder here!

At Open Notes, we're building a system for community-driven constructive moderation and annotation that can be added to anything. Under the hood, we're using the open-source Twitter/X Community Notes algorithm (though that doesn't really kick in until you've got some scale). We're interested in providing everyone with tools for managing discourse that go beyond traditional moderation. Discord is the demo/reference integration, but we want it to go anywhere and everywhere. Part of our thesis is that we want to get to where people are already talking rather than drag them to a clean and empty new room where we ask them to continue the conversation.

It's interesting that Pol.is was just recently on HN (https://news.ycombinator.com/item?id=46992815) because we're obviously inspired by them as well as the whole canon of social choice theory--we're just going at it from a different angle. It's long been true that if you wanted to trap me/yourself in a conversation, you could just bring up the Condorcet criterion (amongst others), so I'm finally turning an obsession into an actual product.

We want to enable people to make decisions about conversations as close to the conversation as possible while minimizing impact on live threads. Later, this nicely extends into all sorts of group decisionmaking. As our conversations are increasingly awash in AI of all sorts (as moderators, participants, analysts, etc.), things that help manage the discourse to fit the needs of individual communities need to be scalable but without drowning human choice in an ocean of automation.

Also, we're open-source: https://github.com/opennotes-ai/opennotes

Would love to hear people's thoughts and reactions. This has so much surface area ("all online discourse"), it's hard to formulate specific questions so instead we built a thing and now we'd love to see if it works for folks.

opennotes.ai
14 points | 0 comments
Show HN: Tufte Editor – Local Markdown Editor with Tufte CSS Live Preview
avngr86 about 10 hours ago

A split-pane Markdown editor that renders live preview with Tufte CSS. Sidenotes, margin notes, epigraphs, full-width figures, and BibTeX citations with autocomplete — all in standard Markdown extensions.

Documents are .md files on disk. Images are regular files. Exports to standalone HTML with Tufte CSS baked in — my use case is writing essays and uploading them directly to my personal site.

Zero dependencies, no npm install, no accounts, no build step. Just `node server.js`. ~7 files total.

Full disclosure in the README: I'm a researcher, not a JS developer, and the code was AI-generated. Contributions and code review welcome.

github.com
2 points | 1 comment
Show HN: Bubble sort on a Turing machine
purplejacket 2 days ago

Bubble sort is pretty simple in most programming languages ... what about on a Turing Machine? I used all three of Claude 4.6, GLM 5, and GPT 5.2 to get a result, so this exercise was not quite trivial, at least at this time. The resulting machine, bubble_sort_unary.yaml, will take this input:

111011011111110101111101111

and give this output:

101101110111101111101111111

I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol because turingmachine.io uses blank, ' ', as well).

When I started writing the code the LLM (Claude) balked at using unary numbers and so we implemented bubble_sort.yaml which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort. So it's easier to watch it work, though it's not as generalized as the other TM.
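For anyone who wants to poke at machines like these programmatically, the simulator itself is tiny. A minimal sketch (my own, not turingmachine.io's format) running a trivial machine that flips a run of 1s to 0s:

```python
# Minimal Turing machine simulator: transitions maps
# (state, read_symbol) -> (write_symbol, move L/R, next_state).
def run(transitions, tape, state="start", blank=" "):
    tape, head, steps = dict(enumerate(tape)), 0, 0
    while state != "done":
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(), steps

# Demo machine: walk right flipping 1 -> 0, halt at the first blank.
flip = {
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "L", "done"),
}
print(run(flip, "111"))  # ('000', 4)
```

The bubble-sort machines are the same loop with 25 or 31 states instead of two.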

Some comments about how the 31 states of bubble_sort_unary.yaml operate:

  | Group | Count | Purpose |
  |---|---|---|
  | `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
  | `cmpR_*`, `cmpL_*`, `cmpL_ret_*`, `cmpL_fwd_*` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
  | `chk_excess_*`, `scan_excess_*`, `mark_all_X_*` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
  | `swap_*` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
  | `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
  | `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |
(The above is in the README.md if it doesn't render on HN.)

I'm curious if anyone can suggest refinements or further ideas. And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan).

A final question: What open source license is good for these types of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause;" what's the landscape surrounding these kind of license questions nowadays?

github.com
8 points | 0 comments
pattle 3 days ago

Show HN: Geo Racers – Race from London to Tokyo on a single bus pass

Geo Racers is a mobile game that combines geography and racing, allowing players to explore real-world locations and compete in fast-paced races. The game aims to make learning about different countries and landmarks engaging and fun.

geo-racers.com
144 points | 86 comments
Show HN: DocSync – Git hooks that block commits with stale documentation
suhteevah about 11 hours ago

Hi HN,

I built DocSync because every team I've worked on has the same problem: documentation that was accurate when it was written and never updated after.

DocSync uses tree-sitter to parse your code and extract symbols (functions, classes, types). On every commit, a pre-commit hook compares those symbols against existing docs. If you added a function without documenting it, the commit is blocked.
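The drift check in miniature, using Python's stdlib `ast` module in place of tree-sitter (a sketch of the idea, not DocSync's implementation):

```python
import ast

# Extract public symbol names (functions, classes) from Python source.
def public_symbols(source: str) -> set:
    tree = ast.parse(source)
    return {node.name for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef))
            and not node.name.startswith("_")}

# Symbols present in code but absent from the docs: this is the "drift"
# a pre-commit hook would block on.
def doc_drift(source: str, documented: set) -> set:
    return public_symbols(source) - documented

code = "def load(path): ...\ndef save(path, data): ...\n"
print(doc_drift(code, documented={"load"}))  # {'save'}
```

tree-sitter generalizes the same extraction across all the listed languages; the comparison step stays this simple.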

How it works:

1. `clawhub install docsync` (free)
2. `docsync generate .` — generates docs from your code
3. `docsync hooks install` — installs a lefthook pre-commit hook
4. From now on, every commit checks for doc drift

Key design decisions:

  - 100% local — no code leaves your machine. Uses tree-sitter for AST parsing, not an LLM.
  - Falls back to regex if tree-sitter isn't installed
  - Uses lefthook (not husky) for git hooks — it's faster and language-agnostic
  - License validation is offline (signed JWT, no phone-home)
  - Free tier does one-shot doc generation. Pro ($29/user/mo) adds hooks and drift detection.

Supports TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, Ruby, PHP, C#, Swift, Kotlin.

Landing page: https://docsync-1q4.pages.dev

Would love feedback on the approach. Is doc drift detection something your team would actually use?

github.com
4 points | 0 comments
austinwang115 2 days ago

Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs

I've been working on CloudRouter, a skill + CLI that gives coding agents like Claude Code and Codex the ability to start cloud VMs and GPUs.

When an agent writes code, it usually needs to start a dev server, run tests, open a browser to verify its work. Today that all happens on your local machine. This works fine for a single task, but the agent is sharing your computer: your ports, RAM, screen. If you run multiple agents in parallel, it gets a bit chaotic. Docker helps with isolation, but it still uses your machine's resources, and doesn't give the agent a browser, a desktop, or a GPU to close the loop properly. The agent could handle all of this on its own if it had a primitive for starting VMs.

CloudRouter is that primitive — a skill that gives the agent its own machines. The agent can start a VM from your local project directory, upload the project files, run commands on the VM, and tear it down when it's done. If it needs a GPU, it can request one.

  cloudrouter start ./my-project
  cloudrouter start --gpu B200 ./my-project
  cloudrouter ssh cr_abc123 "npm install && npm run dev"
Every VM comes with a VNC desktop, VS Code, and Jupyter Lab, all behind auth-protected URLs. When the agent is doing browser automation on the VM, you can open the VNC URL and watch it in real time. CloudRouter wraps agent-browser [1] for browser automation.

  cloudrouter browser open cr_abc123 "http://localhost:3000"
  cloudrouter browser snapshot -i cr_abc123
  # → @e1 [link] Home  @e2 [link] Settings  @e3 [button] Sign Out
  cloudrouter browser click cr_abc123 @e2
  cloudrouter browser screenshot cr_abc123 result.png
Here's a short demo: https://youtu.be/SCkkzxKBcPE

What surprised me is how this inverted my workflow. Most cloud dev tooling starts from cloud (background agents, remote SSH, etc) to local for testing. But CloudRouter keeps your agents local and pushes the agent's work to the cloud. The agent does the same things it would do locally — running dev servers, operating browsers — but now on a VM. As I stopped watching agents work and worrying about local constraints, I started to run more tasks in parallel.

The GPU side is the part I'm most curious to see develop. Today if you want a coding agent to help with anything involving training or inference, there's a manual step where you go provision a machine. With CloudRouter the agent can just spin up a GPU sandbox, run the workload, and clean it up when it's done. Some of my friends have been using it to have agents run small experiments in parallel, but my ears are open to other use cases.

Would love your feedback and ideas. CloudRouter lives under packages/cloudrouter of our monorepo https://github.com/manaflow-ai/manaflow.

[1] https://github.com/vercel-labs/agent-browser

cloudrouter.dev
134 points | 33 comments
Show HN: PlanOpticon – Extract structured knowledge from video recordings
ragelink about 12 hours ago

We built PlanOpticon to solve a problem we kept hitting: hours of recorded meetings, training sessions, and presentations that nobody rewatches. It extracts structured knowledge from video — transcripts, diagrams, action items, key points, and a knowledge graph — into browsable outputs (Markdown, HTML, PDF).

How it works:

  - Extracts frames using change detection (not just every Nth frame), with periodic capture for slow-evolving content like screen shares
  - Filters out webcam/people-only frames automatically via face detection
  - Transcribes audio (OpenAI Whisper API or local Whisper — no API needed)
  - Sends frames to vision models to identify and recreate diagrams as Mermaid code
  - Builds a knowledge graph (entities + relationships) from the transcript
  - Extracts key points, action items, and cross-references between visual and spoken content
  - Generates a structured report with everything linked together
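The change-detection step above can be sketched in a few lines: keep a frame only when it differs enough from the last kept one, plus a periodic capture for slow-evolving content. This is a pure-Python illustration of the idea (frames as flat pixel lists), not PlanOpticon's actual pipeline:

```python
# Mean absolute pixel difference between two equal-length frames.
def mean_abs_diff(a: list, b: list) -> float:
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Keep frame i if it changed enough since the last kept frame, or if
# every_n frames have passed (periodic capture for slow slides).
def select_keyframes(frames: list, threshold: float = 10.0,
                     every_n: int = 100) -> list:
    kept, last = [], None
    for i, frame in enumerate(frames):
        periodic = kept and i - kept[-1] >= every_n
        if last is None or mean_abs_diff(frame, last) > threshold or periodic:
            kept.append(i)
            last = frame
    return kept

# Ten tiny "frames" with a slide change at frame 5:
slides = [[0] * 16] * 5 + [[50] * 16] * 5
print(select_keyframes(slides))  # [0, 5]
```

That is how a 90-minute recording collapses to a hundred-odd frames before any vision model is invoked.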
Supports OpenAI, Anthropic, and Gemini as providers — auto-discovers available models and routes each task to the best one. Checkpoint/resume so long analyses survive failures.

  pip install planopticon
  planopticon analyze -i meeting.mp4 -o ./output
Also supports batch processing of entire folders and pulling videos from Google Drive or Dropbox.

Example: We ran it on a 90-minute training session: 122 frames extracted (from thousands of candidates), 6 diagrams recreated, full transcript with speaker diarization, 540-node knowledge graph, and a comprehensive report — all in about 25 minutes.

Python 3.10+, MIT licensed. Docs at https://planopticon.dev.

github.com
2 points | 0 comments
fabienpenso 3 days ago

Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime.

Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus).

I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content (https://pen.so/2020/11/07/own-your-content/) and owning your email (https://pen.so/2020/12/10/own-your-email/). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction.

It's alpha. I use it daily and I'm shipping because it's useful, not because it's done.

Longer architecture deep-dive: https://pen.so/2026/02/12/moltis-a-personal-ai-assistant-bui...

Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback.

moltis.org
120 points | 47 comments
jimmyechan 1 day ago

Show HN: A playable toy model of frontier AI lab capex decisions

I made a lightweight web game about compute CAPEX tradeoffs: https://darios-dilemma.up.railway.app/

No signup, runs on mobile/desktop.

Loop per round:

1. choose compute capacity
2. forecast demand
3. allocate capacity between training and inference
4. random demand shock resolves outcome

You can end profitable, cash constrained, or bankrupt depending on allocation + forecast error.
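The round structure can be written down as a toy model (my own illustrative numbers and formulas, not the game's):

```python
import random

# One round: pay for capacity, allocate a fraction to training, then a
# random demand shock determines how much inference revenue you earn.
def play_round(capacity, train_frac, forecast, rng):
    cost = capacity * 1.0                       # capex per unit of compute
    inference = capacity * (1 - train_frac)     # capacity serving demand
    demand = forecast * rng.uniform(0.5, 1.5)   # demand shock around forecast
    revenue = min(inference, demand) * 2.0      # served demand earns 2x/unit
    return revenue - cost

rng = random.Random(0)
pnl = [play_round(100, train_frac=0.3, forecast=80, rng=rng)
       for _ in range(3)]
print([round(x, 1) for x in pnl])
```

Over-forecast and you pay for idle capacity; under-forecast and the shock caps your revenue, which is the profitable/cash-constrained/bankrupt spread the game explores.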

Goal was to make the decision surface intuitive in 2–3 minutes per run.

It’s a toy model and deliberately omits many real world factors.

Note: this is based on what I learned after listening to Dario on Dwarkesh's podcast - thought it was fascinating.

darios-dilemma.up.railway.app
8 points | 0 comments
Show HN: OpenWhisper – free, local, and private voice-to-text macOS app
rwu1997 2 days ago

I wanted a voice-to-text app but didn't trust any of the proprietary ones with my privacy.

So I decided to see if I could vibe code it with 0 macOS app & Swift experience.

It uses a local binary of whisper.cpp (a fast implementation of OpenAI's Whisper voice-to-text model in C++).

Github: https://github.com/richardwu/openwhisper

I also decided to take this as an opportunity to compare 3 agentic coding harnesses:

Cursor w/ Opus 4.6:

  - Best one-shot UI by far
  - Didn't get permissioning correct
  - Had issues with the "Cancel recording" hotkey staying active all the time

Claude Code w/ Opus 4.6:

  - Fewest turns to get the main functionality right (recording, hotkeys, permissions)
  - Got a decent UI with a few more turns

Codex App w/ Codex 5.3 Extra-High:

  - Worst one-shot UI
  - None of the functionality worked without multiple follow-up prompts

github.com
36 points | 14 comments
Show HN: ClipPath – Paste screenshots as file paths in your terminal
viniciusborgeis 1 day ago

ClipPath is an open-source tool that turns an image on your clipboard into a file on disk and pastes its path into your terminal, so a screenshot can be handed straight to command-line programs.

github.com
16 points | 1 comment