Show HN: I built a synth for my daughter
The article discusses the author's decision to build a custom synthesizer for their young daughter, focusing on the educational and bonding benefits of introducing her to music and electronics at an early age.
Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files
I built a Rust-based CLI/terminal UI for inspecting Parquet files—data, metadata, and row-group-level structure—right from the terminal. If someone sent me a Parquet file, I used to open DuckDB or Polars just to see what was inside. Now I can do it with one command.
Repo: https://github.com/kaushiksrini/parqeye
Show HN: Continuous Claude – run Claude Code in a loop
Continuous Claude is a CLI wrapper I made that runs Claude Code in an iterative loop with persistent context, automatically driving a PR-based workflow. Each iteration creates a branch, applies a focused code change, generates a commit, opens a PR via GitHub's CLI, waits for required checks and reviews, merges if green, and records state into a shared notes file.
This avoids the typical stateless one-shot pattern of current coding agents and enables multi-step changes without losing intermediate reasoning, test failures, or partial progress.
The tool is useful for tasks that require many small, serial modifications: increasing test coverage, large refactors, dependency upgrades guided by release notes, or framework migrations.
Blog post about this: https://anandchowdhary.com/blog/2025/running-claude-code-in-...
Show HN: ESPectre – Motion detection based on Wi-Fi spectrum analysis
Hi everyone, I'm the author of ESPectre.
This is an open-source (GPLv3) project that uses Wi-Fi signal analysis to detect motion using CSI data, and it has already garnered almost 2,000 stars in two weeks.
Key technical details:
- The system does NOT use machine learning; it relies purely on math.
- Runs in real time on a super affordable chip like the ESP32.
- Integrates seamlessly with Home Assistant via MQTT.
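To illustrate the no-ML idea, here is a minimal, hypothetical sketch (not ESPectre's actual algorithm): motion perturbs the wireless channel, so a moving variance of CSI amplitude samples crossing a calibrated threshold is enough to flag movement.

```python
from collections import deque
from statistics import pvariance

def motion_detector(threshold, window=32):
    """Return a feed(amplitude) closure that flags motion when the
    moving variance of CSI amplitude samples exceeds a threshold."""
    buf = deque(maxlen=window)
    def feed(amplitude):
        buf.append(amplitude)
        if len(buf) < window:
            return False          # still filling the window
        return pvariance(buf) > threshold
    return feed

# Idle channel: tiny fluctuations; motion: large amplitude swings.
detect = motion_detector(threshold=1.0)
idle = [10 + 0.1 * (i % 3) for i in range(40)]
moving = [10 + 5 * ((-1) ** i) for i in range(40)]
idle_flags = [detect(a) for a in idle]
motion_flags = [detect(a) for a in moving]
```

A real deployment would calibrate the threshold against the quiet channel; the window length trades latency against false positives.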
Show HN: Reversing a Cinema Camera's Peripherals Port
The article explores the process of reversing the communication protocol used by the FS7 camera system, providing insights into the technical implementation and potential applications for developers and enthusiasts in the field of camera control and automation.
Show HN: PrinceJS – 19,200 req/s Bun framework in 2.8 kB (built by a 13yo)
Hey HN,
I'm 13, from Nigeria, and I just released PrinceJS — the fastest web framework for Bun right now.
• 19,200 req/s (beats Hono/Elysia/Express)
• 2.8 kB gzipped
• Tree-shakable (cache, AI, email, cron, SSE, queue, test, static...)
• Zero deps. Zero config.
Built in < 1 week. No team. Just me and Bun.
Try it: `bun add princejs`
GitHub: https://github.com/MatthewTheCoder1218/princejs
Docs: https://princejs.vercel.app
Brutal feedback welcome. What's missing?
– @Lil_Prince_1218
Show HN: Kalendis – Scheduling API (keep your UI, we handle timezones/DST)
Kalendis is an API-first scheduling backend. You keep your UI; we handle the gnarly parts (recurrence, time zones, DST, conflict-safe bookings).
What it does:
• MCP tool: generates typed clients and API route handlers (Next.js/Express/Fastify/Nest) so you can scaffold calls straight from your IDE/agent tooling.
• Availability engine: recurring rules + one-off exceptions/blackouts, returned in a clean, queryable shape.
• Bookings: conflict-safe endpoints for creating/updating/canceling slots.
Why we built it: We kept rebuilding the same "hard parts" of scheduling: time zones/DST edge cases, recurring availability, conflict-aware booking, etc. We wanted a boring, reliable backend so we could ship product features without adopting a hosted scheduling UI.
How it's helped: We stopped re-implementing DST/recurrence math and shipped booking flows faster. One small team (just 2 developers) built a robust booking platform for their business using Kalendis—they kept full control of their UX without spending lots of cycles on scheduling infrastructure. The MCP generator cut the glue code: drop in a typed client or route, call the API, move on.
Some tech details:
• REST API with ISO-8601 timestamps and IANA time zones
• Recurring availability + one-off exceptions (designed to compose cleanly)
• Focused scope: users, availability, exceptions, bookings (not a monolithic suite)
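For a sense of why the DST handling is gnarly, here is a small Python sketch (not Kalendis's implementation) that expands a weekly rule given in local wall-clock time across the November 2025 US fall-back; the local hour stays fixed while the UTC instant shifts:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def weekly_slots(first_day, weeks, hour, tz_name):
    """Expand 'every week at <hour> local time' into UTC instants.
    Wall-clock arithmetic plus zoneinfo keeps the local hour stable
    across DST transitions; the UTC offset changes instead."""
    tz = ZoneInfo(tz_name)
    base = datetime(first_day.year, first_day.month, first_day.day,
                    hour, tzinfo=tz)
    return [(base + timedelta(weeks=w)).astimezone(timezone.utc)
            for w in range(weeks)]

# Mondays at 09:00 New York time, spanning the Nov 2, 2025 fall-back:
# 09:00 EDT is 13:00 UTC, but the following Monday 09:00 EST is 14:00 UTC.
slots = weekly_slots(datetime(2025, 10, 27), 2, 9, "America/New_York")
```

Storing rules as naive UTC offsets instead would silently drift every occurrence by an hour after the transition, which is exactly the class of bug the post is about.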
The MCP server exposes tools like generate-frontend-client, generate-backend-client, generate-api-routes, and list-endpoints. Add to your MCP settings:
{
  "mcpServers": {
    "kalendis": {
      "command": "npx",
      "args": ["-y", "@kalendis/mcp"]
    }
  }
}
How to try it: create a free account and get an API key (https://kalendis.dev), then hit an endpoint:
curl -H "x-api-key: $KALENDIS_API_KEY" \
"https://api.kalendis.dev/v1/availability/getAvailability?userId=<user-id>&start=2025-10-07T00:00:00Z&end=2025-10-14T00:00:00Z&includeExceptions=true"
Happy to answer questions and post example snippets in the thread. Thanks for taking a look!
Show HN: Building WebSocket in Apache Iggy with Io_uring and Completion Based IO
The article discusses the use of io_uring, a modern Linux kernel interface for asynchronous, completion-based I/O, to improve the performance of WebSocket communication. It explores how io_uring can be leveraged for efficient and scalable WebSocket handling, addressing the limitations of traditional readiness-based socket approaches.
Show HN: How are Markov chains so different from tiny LLMs?
I polished a Markov chain generator and trained it on an article by Uri Alon et al. [0]
It generates text that seems to me at least on par with tiny LLMs, such as those demonstrated by nanoGPT. Here is an example:
jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM10b_train UriAlon.txt 3
Training model with order 3...
Skip-gram detection: DISABLED (order < 5)
Pruning is disabled
Calculating model size for JSON export...
Will export 29832 model entries
Exporting vocabulary (1727 entries)...
Vocabulary export complete.
Exporting model entries...
Processed 12000 contexts, written 28765 entries (96.4%)...
JSON export complete: 29832 entries written to model.json
Model trained and saved to model.json
Vocabulary size: 1727
jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM9_gen model.json
Aging cell model requires comprehensive incidence data. To obtain such a large medical database of the joints are risk factors. Therefore, the theory might be extended to describe the evolution of atherosclerosis and metabolic syndrome. For example, late‐stage type 2 diabetes is associated with collapse of beta‐cell function. This collapse has two parameters: the fraction of the senescent cells are predicted to affect disease threshold . For each individual, one simulates senescent‐cell abundance using the SR model has an approximately exponential incidence curve with a decline at old ages In this section, we simulated a wide range of age‐related incidence curves. The next sections provide examples of classes of diseases, which show improvement upon senolytic treatment tends to qualitatively support such a prediction. model different disease thresholds as values of the disease occurs when a physiological parameter ϕ increases due to the disease. Increasing susceptibility parameter s, which varies about 3‐fold between BMI below 25 (male) and 54 (female) are at least mildly age‐related and 25 (male) and 28 (female) are strongly age‐related, as defined above. Of these, we find that 66 are well described by the model as a wide range of feedback mechanisms that can provide homeostasis to a half‐life of days in young mice, but their removal rate slows down in old mice to a given type of cancer have strong risk factors should increase the removal rates of the joint that bears the most common biological process of aging that governs the onset of pathology in the records of at least 104 people, totaling 877 disease category codes (See SI section 9), increasing the range of 6–8% per year. The two‐parameter model describes well the strongly age‐related ICD9 codes: 90% of the codes show R 2 > 0.9) (Figure 4c). 
This agreement is similar to that of the previously proposed IMII model for cancer, major fibrotic diseases, and hundreds of other age‐related disease states obtained from 10−4 to lower cancer incidence. A better fit is achieved when allowing to exceed its threshold mechanism for classes of disease, providing putative etiologies for diseases with unknown origin, such as bone marrow and skin. Thus, the sudden collapse of the alveoli at the outer parts of the immune removal capacity of cancer. For example, NK cells remove senescent cells also to other forms of age‐related damage and decline contribute (De Bourcy et al., 2017). There may be described as a first‐passage‐time problem, asking when mutated, impair particle removal by the bronchi and increase damage to alveolar cells (Yang et al., 2019; Xu et al., 2018), and immune therapy that causes T cells to target senescent cells (Amor et al., 2020). Since these treatments are predicted to have an exponential incidence curve that slows at very old ages. Interestingly, the main effects are opposite to the case of cancer growth rate to removal rate We next consider the case of frontline tissues discussed above.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7963340/
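For readers curious how little code such a generator needs, here is a generic word-level Markov chain sketch in Python (not the author's SLM10b implementation; the order is configurable, like the `3` passed on the command line above):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each `order`-word context to the list of words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Walk the chain: repeatedly sample a follower of the last context."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:
            break                 # dead end: context never seen in training
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
model = train(corpus, order=2)
sample = generate(model, length=10, seed=42)
```

Unlike an LLM, the model can only ever emit word sequences it has literally seen, which is why it reads fluently on in-domain text but cannot generalize beyond its training corpus.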
Show HN: Bsub.io – zero-setup batch execution for command-line tools
I built bsub because I was tired of wiring up Docker images, Python environments, GPUs, sandboxing, and resource limits every time I needed to run heavy command-line tools from web apps. I wanted: send files -> run job in the cloud -> get output -> done.
https://www.bsub.io
bsub lets you execute tools like Whisper, Typst, Pandoc, Docling, and FFmpeg as remote batch jobs with no environment setup. You can try them locally via the CLI or integrate via a simple REST API.
Example (PDF extraction):
bsubio submit -w pdf/extract *.pdf
Works like running the tool locally, but the compute and isolation happen in the cloud.

Technical details:
- Each job runs in an isolated container with defined CPU/GPU/RAM limits.
- Files are stored ephemerally for the duration of the job and deleted after completion.
- REST API returns job status, logs, and results.
- Cold start for light processors (Typst, Pandoc) is low; Whisper/FFmpeg take longer due to model load/encoding time.
- Backend scales horizontally; more workers can be added during load spikes.
Current processors:
STT/Whisper -- speech-to-text
Typography -- Typst, Pandoc
PDF extraction -- Docling
Video transcoding -- FFmpeg
More coming; suggestions welcome for tools that are painful to set up locally.

Looking for testers! CLI is open source: https://github.com/bsubio/cli. Installers available for Linux/macOS; Windows testing is in progress. Free during early testing; pricing TBD.
If you’re on Windows, feedback is especially helpful: contact@bsub.io
If you try it, I’d appreciate feedback on API design, latency, missing processors, or anything rough around the edges.
Show HN: My hobby OS that runs Minecraft
Astral OS is a hobby operating system whose author has gotten Minecraft running on it, a notable milestone for a homebrew platform.
Show HN: Discussion of ICT Model – Linking Information, Consciousness and Time
Hi HN,
I’ve been working on a conceptual framework that tries to formalize the relationship between:
– informational states,
– their minimal temporal stability (I_fixed),
– the rate of informational change (dI/dT),
– and the emergence of time, processes, and consciousness-like dynamics.
This is not a final theory, and it’s not metaphysics. It’s an attempt to define a minimal, falsifiable vocabulary for describing how stable patterns persist and evolve in time.
Core ideas:
– I_fixed = any pattern that remains sufficiently stable across time to allow interaction/measurement.
– dI/dT = the rate at which such patterns change.
Time is defined as a relational metric of informational change (dI/dT), but the arrow of time does not arise from within the system — it emerges from an external temporal level, a basic temporal background.
The model stays strictly physicalist: it doesn’t require spatial localization of information and doesn’t assume any “Platonic realm.” It simply reformulates what it means for a process to persist long enough to be part of reality.
Why I’m posting here
I’m looking for rigorous critique from physicists, computer scientists, mathematicians, and anyone interested in foundational models. If you see flaws, ambiguities, or missing connections — I’d really appreciate honest feedback.
A full preprint (with equations, phenomenology, and testable criteria) and discussion is here:
https://www.academia.edu/s/8924eff666
DOI: 10.5281/zenodo.17584782
Thanks in advance to anyone willing to take a look.
Show HN: Octopii, a framework for building distributed applications in Rust
It wouldn't let me put the URL in for some reason, so here it is: https://github.com/octopii-rs/octopii
Show HN: Model-agnostic cognitive architecture for LLMs
Hi HN,
A couple weeks ago I shared an early version of a side project I’ve been tinkering with called Persistent Mind Model. I built it at home on an i7-10700K / 32GB RAM / RTX 3080 because I was curious whether an AI could keep a stable “mind” over time, one that could "think" about its own identity as an LLM instead of resetting every session.
After a lot more tinkering, I think the architecture is finally in a solid place.
Basically, it saves everything the AI does (thoughts, decisions, updates) as a chain of events in a local SQLite database. Because the “identity” is stored in that ledger (and not inside the model), you can swap between OpenAI, Ollama, or other backends and it just keeps going from where it left off, reasoning about its own history and identity development.
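The ledger idea can be sketched in a few lines of Python; the schema and event kinds below are hypothetical illustrations, not the project's actual ones:

```python
import json
import sqlite3
import time

def open_ledger(path=":memory:"):
    """Append-only event table: the 'mind' lives here, not in the model."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        kind TEXT NOT NULL,
        payload TEXT NOT NULL,
        ts REAL NOT NULL)""")
    return db

def append(db, kind, payload):
    db.execute("INSERT INTO events (kind, payload, ts) VALUES (?, ?, ?)",
               (kind, json.dumps(payload), time.time()))
    db.commit()

def replay(db):
    """Rebuild state by folding over the event chain; any backend
    (OpenAI, Ollama, ...) can resume from the state this returns."""
    state = {"thoughts": [], "identity": {}}
    for kind, payload in db.execute(
            "SELECT kind, payload FROM events ORDER BY id"):
        data = json.loads(payload)
        if kind == "thought":
            state["thoughts"].append(data["text"])
        elif kind == "identity_update":
            state["identity"].update(data)
    return state

db = open_ledger()
append(db, "thought", {"text": "I persist across sessions."})
append(db, "identity_update", {"name": "PMM", "backend": "ollama"})
state = replay(db)
```

Because state is derived purely by replaying the ledger, swapping the LLM backend mid-history changes nothing about the reconstructed identity.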
I cleaned up the runtime and added things like: a better control loop, a simple concept system for organizing ideas, graph-based telemetry so you can inspect how it evolves, and a draft whitepaper (for now), and several full sessions you can replay to see how the behavior develops.
It's basically my experiment to develop persistent memories, and self-evolving "identities" for LLMs.
The whole system is only a few MB of code, plus about 1.3 MB for the example ledger I’m sharing.
If you’re interested in AI systems that can grow over time, or you want to experiment with persistent reasoning, memory, or verifiable mechanical cognition, I’d love feedback.
Repo: https://github.com/scottonanski/persistent-mind-model-v1.0
It’s open-source, free to use, and still early, but it’s already producing some surprisingly interesting results.
Would love to see what others do with it.
Show HN: Agfs – Aggregated File System, a modern tribute to the spirit of Plan9
The article describes Agfs (Aggregated File System), which aggregates multiple storage backends behind a single file-system interface, a modern tribute to the spirit of Plan 9 and its everything-is-a-file philosophy.
Show HN: I have created an alternative for Miro
Hey HN
This project took almost two years, and it is probably one of the best alternatives to tools like Miro and MindMeister.
Let me know what you think
Show HN: Unflip – a puzzle game about XOR patterns of squares
Unflip is a puzzle game built around XOR patterns of squares: moves flip (XOR) patterns of squares onto a grid, and solving a level means finding the combination of flips that returns the board to its un-flipped state.
Show HN: I built a strace clone for macOS
Ever since I started testing software on macOS, I have deeply missed my beloved strace, which I reach for when programs are misbehaving. macOS has dtruss, but it is getting locked down and less usable with every release. My approach uses the signed lldb binary on the system and re-implements the output you know from the wonderful strace tool. I created the tool just yesterday evening, so it may have a few bugs, but I already have quite a few integration tests and I am happy with it so far.
Show HN: Hegelion-Dialectic Harness for LLMs (Thesis –> Antithesis –> Synthesis)
Show HN: UltraLocked – iOS file vault using Secure Enclave and PFS
Show HN: MCP Traffic Analysis Tool
The article discusses MCP-Shark, an open-source tool for capturing and analyzing Model Context Protocol (MCP) traffic, letting developers inspect and debug the messages exchanged between MCP clients and servers (think Wireshark, but for MCP).
Show HN: UpBeat – an AI-Enhanced RSS/Atom Reader that only shows you good news
Hey everyone, I'm Sean, and I've built UpBeat.
Why did I build this?
Well, the world is more complex than ever, and every stream, device and social feed screams for our attention whilst telling us that everything is awful.
While it's important to know what's going on in the world - do we really need to be bombarded with negativity 24/7?
Absolutely not! It's bad for our mental health, it's bad for our attention spans, and it's bad for society as a whole.
So that's why I built UpBeat - My friends, loved ones, and I needed a break from the doom cycle. So, here it is :)
Some technical details: it's a macOS app built with Go using the Wails.io framework, and it (currently) uses the DistilBERT model, which runs on the Apple Neural Engine, so inference takes ~40ms.
Show HN: ToolHop – Fast, simple utilities for every workflow
ToolHop is your all-in-one browser toolbox with 200+ fast-loading calculators, converters, generators, color labs, and dev helpers. Use global search or curated categories to jump straight into the right utility, run it client-side for instant feedback, and deep-link results to your team. Whether you’re formatting copy, validating data, checking DNS, or exploring palettes, ToolHop keeps your core workflows a single tab away, and it’s entirely free, no account required.
---
I built ToolHop because I was sick of the usual “free tool” bait-and-switch. Every time I needed to convert an image, compress a file, check some text, or run a quick calculation, I’d end up hitting some arbitrary limit like “10 uses per week” or a forced signup wall. It’s ridiculous how something as basic as converting a JPG to a PNG can turn into a subscription pitch.
So ToolHop started as a personal frustration project: I wanted a single place with a ton of genuinely useful tools that didn’t nag, lock you out, or throttle you. Over time that grew into 200+ handcrafted tools, all fast, simple, and actually free. No trickery, no timers, no limits.
As I built it, the process became about consistency and quality. I wanted the tools to feel seamless, not slapped together. That meant focusing on speed, clean UI, accurate results, and making sure each tool works instantly without friction.
The goal was always the same: a site that respects people’s time. Something you can rely on whenever you just need a tool to work. If ToolHop saves someone even a few minutes of hassle, then the project did its job.
Show HN: I ditched Grafana for my home server and built this instead
Frustrated by the complexity and resource drain of multi service monitoring stacks, I built Simon. I wanted a single, lightweight dashboard to replace the heavy stack and the constant need for an SSH client for routine tasks. The result is a resource efficient dashboard in a single Rust binary, just a couple of megabytes in size. Its support for various architectures on Linux also makes it ideal for embedded systems and lightweight SBCs.
It integrates:
- Comprehensive monitoring: realtime and historical metrics for the host system and Docker containers (CPU, memory, disk usage, and network activity).
- Integrated file & log management: a web UI for file operations and for viewing container logs, right where you need them.
- Flexible alerting: a system to set rules on any metric, with templates for sending notifications to Telegram, ntfy, and webhooks.

My goal was to create a cohesive, lightweight tool for self-hosters and resource-constrained environments. I'd love to get your feedback.
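The rules-on-any-metric idea can be sketched as follows (a hypothetical illustration, not Simon's actual code or config format):

```python
def evaluate_rules(metrics, rules):
    """Return the notification messages fired by threshold rules
    over a single metrics snapshot."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    fired = []
    for rule in rules:
        value = metrics.get(rule["metric"])
        if value is not None and ops[rule["op"]](value, rule["threshold"]):
            # Each rule carries a message template; in a real system the
            # rendered message would go to Telegram, ntfy, or a webhook.
            fired.append(rule["template"].format(metric=rule["metric"],
                                                 value=value))
    return fired

metrics = {"cpu_percent": 93.5, "disk_free_gb": 1.2}
rules = [
    {"metric": "cpu_percent", "op": ">", "threshold": 90,
     "template": "ALERT {metric}={value} exceeds 90%"},
    {"metric": "disk_free_gb", "op": "<", "threshold": 5,
     "template": "ALERT {metric}={value} below 5 GB"},
    {"metric": "cpu_percent", "op": "<", "threshold": 10,
     "template": "never fires"},
]
alerts = evaluate_rules(metrics, rules)
```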
https://github.com/alibahmanyar/simon
Show HN: Learn Docker in your terminal
Inspired by rustlings for the concept and the name. A little project to learn or refresh the basics of docker / compose commands. I am planning to add more advanced topics later on. Hoping to have discussions on how it could become more useful for others.
Show HN: Encore – Type-safe back end framework that generates infra from code
Show HN: DBOS Java – Postgres-Backed Durable Workflows
Hi HN - I’m Peter, here with Harry (devhawk), and we’re building DBOS Java, an open-source Java library for durable workflows, backed by Postgres.
https://github.com/dbos-inc/dbos-transact-java
Essentially, DBOS helps you write long-lived, reliable code that can survive failures, restarts, and crashes without losing state or duplicating work. As your workflows run, it checkpoints each step they take in a Postgres database. When a process stops (fails, restarts, or crashes), your program can recover from those checkpoints to restore its exact state and continue from where it left off, as if nothing happened.
In practice, this makes it easier to build reliable systems for use cases like AI agents, payments, data synchronization, or anything that takes hours, days, or weeks to complete. Rather than bolting on ad-hoc retry logic and database checkpoints, durable workflows give you one consistent model for ensuring your programs can recover from any failure from exactly where they left off.
This library contains all you need to add durable workflows to your program: there's no separate service or orchestrator or any external dependencies except Postgres. Because it's just a library, you can incrementally add it to your projects, and it works out of the box with frameworks like Spring. And because it's built on Postgres, it natively supports all the tooling you're familiar with (backups, GUIs, CLI tools) and works with any Postgres provider.
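The checkpoint-and-resume mechanism can be sketched in Python, using in-memory SQLite in place of Postgres and a made-up API rather than DBOS's actual one:

```python
import json
import sqlite3

class Durable:
    """Checkpoint each completed step; on re-run, finished steps are
    skipped and the workflow resumes exactly where it left off."""
    def __init__(self, db, workflow_id):
        self.db, self.wf = db, workflow_id
        db.execute("""CREATE TABLE IF NOT EXISTS steps (
            workflow_id TEXT, step INTEGER, output TEXT,
            PRIMARY KEY (workflow_id, step))""")

    def step(self, n, fn):
        row = self.db.execute(
            "SELECT output FROM steps WHERE workflow_id=? AND step=?",
            (self.wf, n)).fetchone()
        if row:                          # already checkpointed: skip
            return json.loads(row[0])
        out = fn()
        self.db.execute("INSERT INTO steps VALUES (?, ?, ?)",
                        (self.wf, n, json.dumps(out)))
        self.db.commit()
        return out

db = sqlite3.connect(":memory:")
calls = []
def run():
    wf = Durable(db, "order-42")
    a = wf.step(1, lambda: calls.append("charge") or "charged")
    b = wf.step(2, lambda: calls.append("ship") or "shipped")
    return a, b

first = run()
second = run()   # simulated restart: no side effect runs twice
```

Because the checkpoint and the step's output live in the same transactional database, a crash between steps can never double-charge: recovery replays from the last committed checkpoint.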
If you want to try it out, check out the quickstart:
https://docs.dbos.dev/quickstart?language=java
We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions.
Show HN: Whirligig.live
Hi guys, I stitched a few APIs together into a fun gig finder app and thought some of you might enjoy it. Warning - autoplay!
Show HN: Minivac 601 Simulator - a 1961 Relay Computer
Hey HN!
I'm very proud to share this project with you all, after ~2 years of starts and stops, and about 5 different attempts at making it.
Context/history: In 1961, the Minivac 601 [0], an educational electronics kit - somewhat similar to those "300 circuits in one" you may have had growing up as I did - was created by none other than Claude Shannon.
The Minivac is disarmingly simple: it consists roughly speaking of 6 relays, 12 lights, 6 buttons, and a motorized wheel. You'd think that it couldn't really do much.
Well, amazingly, it can do a lot. You can wire up the components in a way that will make the Minivac play tic-tac-toe, or OCR-detect 10 digits... The sample "demo" circuit I chose for the homepage shows a binary counter that counts up to 7.
Another amazing thing about the Minivac is definitely its manuals [1]. Their spirit is what I hope to capture in the coming (years?) as I keep improving this project. The manuals are generous and well-written and are not only an amazing gradual introduction to relay-based logic - they touch on computing at large. With amazing 1960s graphics/cartoons, of course.
That's probably what got me to work on the Minivac. I learned about the device a bit before going to the Recurse Center, fell in love with the manuals, and was frustrated that I couldn't try out the circuits or play around with the device! I thought that creating a JavaScript-based emulator would be an "easy" way to get there. Turns out that correctly simulating electricity isn't "easy". :-) But I'm very proud that it now seems to be doing the right thing for most circuits that I've tested from the book. Yes, this Minivac Simulator has a TypeScript testing suite!
Looking forward to hearing from you all. Cheers
[0] https://en.wikipedia.org/wiki/Minivac_601
[1] https://minivac.greg.technology/manuals/1961-minivac601-book...
Repo: https://github.com/gregsadetsky/minivac/
Show HN: Internet Object – a lean, schema-first JSON alternative
TL;DR: Internet Object (IO) is a lean, schema-first, JSON-compatible format that cuts structural noise, improves clarity for modern systems, and reduces tokens by ~40–50%.
Playground: https://play.internetobject.org
---
I started exploring this idea in 2017 after repeatedly running into the same pain points with JSON while building distributed systems and structured data pipelines. Rather than extending JSON or adding more layers on top of it, I wanted a format that was clean, schema-first, human-friendly, and still compatible, where that compatibility matters.
The concept evolved slowly over the years, with several redesigns, dead ends, and restarts - until it eventually converged into what I now call Internet Object (IO). The story behind this evolution is here:
https://internetobject.org/the-story/
Although IO was not created with LLMs in mind, its structure ends up being significantly more token-efficient (around 40-50% fewer tokens compared to JSON), which has become a practical advantage in today’s workloads.
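Much of the savings comes from declaring keys once in a schema instead of repeating them on every record. The following is an illustration of that general schema-first idea, not real Internet Object syntax, measured on a small payload:

```python
import json

records = [
    {"name": "Ada", "age": 36, "city": "London"},
    {"name": "Grace", "age": 45, "city": "Arlington"},
    {"name": "Alan", "age": 41, "city": "Wilmslow"},
]

# Plain JSON repeats every key on every record.
as_json = json.dumps(records)

# Schema-first encoding: declare the keys once, then emit bare rows.
# (Illustrative only; actual Internet Object syntax differs.)
schema = ",".join(records[0])
rows = "\n".join(",".join(str(v) for v in r.values()) for r in records)
schema_first = f"{schema}\n{rows}"

saving = 1 - len(schema_first) / len(as_json)
```

The gap widens with more records and longer key names, which is consistent with the 40-50% token reduction the author reports for IO.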
I've written a practical guide showing how JSON developers can transition to IO, with syntax explanations: https://blog.maniartech.com/from-json-to-internet-object-a-l...
For side-by-side comparisons with JSON, see the following link:
https://www.internetobject.org/io-vs-json/
There is also an interactive playground if you'd like to try the format directly:
https://play.internetobject.org
https://play.internetobject.org/simple-collection
This is a soft launch to gather early feedback - I would appreciate any thoughts from the community.