Show HN: I turned algae into a bio-altimeter and put it on a weather balloon
Hi HN - My name is Andrew, and I'm a high school student.
This is a write-up on StratoSpore, a payload I designed and launched to the stratosphere. The goal was to test if we could estimate physical altitude based on algae fluorescence (using a lightweight ML model trained on the sensor data).
The blog post covers the full engineering mess/process, including:
- The Hardware: Designing PCBs for the AS7263 spectral sensor and Pi Zero 2 W.
- The biological altimeter: How I tried to correlate biological stress (fluorescence) with altitude.
- The Communications: A custom lossy compression algorithm I wrote to smash 1080p images down to 18x10 pixels so I could transmit them over LoRa (915 MHz) in semi-real-time (rough sketch below).
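For a flavor of what "lossy" means at this scale, here's a minimal sketch of the general idea, assuming a plain RGB565 quantization (the repo has the real algorithm, which may differ):

    # Downscale a frame to 18x10 and quantize to RGB565: 180 pixels * 2 bytes
    # = 360 bytes, small enough to squeeze over LoRa in a packet or two.
    from PIL import Image

    def compress_frame(path: str) -> bytes:
        img = Image.open(path).convert("RGB").resize((18, 10))
        payload = bytearray()
        for r, g, b in img.getdata():
            rgb565 = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
            payload += rgb565.to_bytes(2, "big")
        return bytes(payload)

    print(len(compress_frame("frame.jpg")))  # 360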
The payload is currently lost in a forest, but the telemetry data survived. The code and hardware designs are open source on GitHub: https://github.com/radeeyate/stratospore
I'm happy to answer technical questions about the payload, software, or anything else you are curious about! Critique also appreciated!
Show HN: Safe-NPM – only install packages that are +90 days old
This past quarter has been awash with sophisticated npm supply chain attacks like [Shai-Hulud](https://www.cisa.gov/news-events/alerts/2025/09/23/widesprea...) and the [Chalk/debug Compromise](https://www.wiz.io/blog/widespread-npm-supply-chain-attack-b...). This CLI helps protect users from recently compromised packages by only downloading packages that have been public for a while (default is 90 days or older).
Install: npm install -g @dendronhq/safe-npm
Usage: safe-npm install react@^18 lodash
How it works (sketched below):
- Queries npm registry for all versions matching your semver range
- Filters out anything published in the last 90 days
- Installs the newest "aged" version
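The core filter is easy to sketch (Python here for brevity; the actual CLI is JavaScript, and real semver-range matching is elided). The per-version `time` field is standard npm registry metadata:

    from datetime import datetime, timedelta, timezone
    import requests

    def aged_versions(package: str, min_age_days: int = 90) -> list[str]:
        doc = requests.get(f"https://registry.npmjs.org/{package}").json()
        cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
        aged = []
        for version, published in doc["time"].items():
            if version in ("created", "modified"):
                continue  # registry bookkeeping, not real versions
            if datetime.fromisoformat(published.replace("Z", "+00:00")) <= cutoff:
                aged.append(version)
        return aged  # pick the newest of these that satisfies the semver range

    print(aged_versions("lodash")[-5:])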
Limitations:
- Won't protect against packages malicious from day one
- Doesn't control transitive dependencies (yet - looking into overrides)
- Delays access to legitimate new features
This is meant as an 80/20 measure against recently compromised NPM packages, not a silver bullet. Please give it a try and let me know if you have feedback.
Show HN: KiDoom – Running DOOM on PCB Traces
I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.
Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.
How I did it:
Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.
Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.
The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display.
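The pool trick looks roughly like this (a simplified sketch against the KiCad 7+ pcbnew API; the actual plugin in the repo is more involved):

    import pcbnew

    board = pcbnew.GetBoard()

    # Allocate the trace pool once at startup; per-frame work is then just
    # moving endpoints around, never creating or destroying board objects.
    pool = []
    for _ in range(512):
        track = pcbnew.PCB_TRACK(board)
        track.SetWidth(pcbnew.FromMM(0.2))
        board.Add(track)
        pool.append(track)

    def draw_frame(segments):
        """segments: list of (x1, y1, x2, y2) wall endpoints in mm."""
        for track, (x1, y1, x2, y2) in zip(pool, segments):
            track.SetStart(pcbnew.VECTOR2I(pcbnew.FromMM(x1), pcbnew.FromMM(y1)))
            track.SetEnd(pcbnew.VECTOR2I(pcbnew.FromMM(x2), pcbnew.FromMM(y2)))
        pcbnew.Refresh()  # one canvas refresh per frame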
Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.
Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.
Follow-up: ScopeDoom
After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.
The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples.
Each wall becomes a wireframe box, the scope traces along each line. With ~7,000 points per frame at 44.1kHz, refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).
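The playback side is only a few lines with the sounddevice library (a sketch; the real script also scales and orders the points):

    import numpy as np
    import sounddevice as sd

    # Left channel drives scope X, right channel drives scope Y.
    # ~7,000 points per frame at 44.1 kHz works out to ~6 frames/sec.
    def play_frame(points, repeats=100):
        """points: (N, 2) array of XY pairs already normalized to [-1, 1]."""
        frame = np.asarray(points, dtype=np.float32)
        sd.play(np.tile(frame, (repeats, 1)), samplerate=44100, blocking=True)

    play_frame([(0.0, 0.0), (0.5, 0.5), (0.5, -0.5), (0.0, 0.0)])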
Links:
KiDoom GitHub: https://github.com/MichaelAyles/KiDoom, writeup: https://www.mikeayles.com/#kidoom
ScopeDoom GitHub: https://github.com/MichaelAyles/ScopeDoom, writeup: https://www.mikeayles.com/#scopedoom
Show HN: We built an open source, zero webhooks payment processor
Hi HN! For a while now we've been building Flowglad (https://flowglad.com), and we feel it's finally gotten good enough to share with you all:
Repo: https://github.com/flowglad/flowglad
Demo video: https://www.youtube.com/watch?v=G6H0c1Cd2kU
Flowglad is a payment processor that you integrate without writing any glue code. Along with processing your payments, it tells you in real time the features and usage credit balances your customers have available, based on their billing state. The DX feels like React, because we wanted to bring the reactive programming paradigm to payments.
We make it easy to spin up full-fledged pricing models (including usage meters, feature gates and usage credit grants) in a few clicks. We schematize these pricing models into a pricing.yaml file that’s kinda like Terraform but for your pricing.
The result is a payments layer that AI coding agents have a substantially easier time one-shotting (for now the happiest path is a fullstack Typescript + React app).
Why we built this:
- After a decade of building on Stripe, we found it powerful but underopinionated. It left us doing a lot of rote work to set up fairly standard use cases
- That meant more code to maintain, much of which is brittle because it crosses so many server-client boundaries
- Not to mention choreographing the lifecycle of our business domain with the Stripe checkout flow and webhook event types, of which there are 250+
- Payments online has gotten complex - not just new pricing models for AI products, but also cross-border sales tax, etc. You either need to handle significant chunks of it yourself, or sign up for and compose multiple services
This all feels unduly clunky, esp when compared to how easy other layers like hosting and databases have gotten in recent years.
These patterns haven't changed much in a decade. And while coding agents can nail every other rote part of an app (auth, db, analytics), payments is the scariest to tab-tab-tab your way through, because the existing integration patterns are difficult to reason about, difficult to verify for correctness, and absolutely mission critical.
Our beta version lets you:
- Spin up common pricing models in just a few clicks, and customize them as needed
- Clone pricing models between testmode and live mode, and import / export via pricing.yaml
- Check customer usage credits and feature access in real time on your backend and React frontend
- Integrate without any DB schema changes - you reference your customers via your ids, and reference prices, products, features and usage meters via slugs that you define
We’re still early in our journey so would love your feedback and opinions. Billing has a lot of use cases, so if you see anything that you wish we supported, please let us know!
Show HN: I built an interactive HN Simulator
Hey HN! Just for fun, I built an interactive Hacker News Simulator.
You can submit text posts and links, just like the real HN. But on HN Simulator, all of the comments are generated by LLMs and appear instantly.
The best way to use it (IMHO) is to submit a text post or a curl-able URL here: https://news.ysimulator.run/submit. You don't need an account to post.
When you do that, various prompts will be built from a library of commenter archetypes, moods, and shapes. The AI commenters will actually respond to your text post and/or submitted link.
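In spirit, the prompt assembly is this simple (hypothetical, trimmed-down names; the real library on the site is much larger):

    import random

    ARCHETYPES = ["greybeard sysadmin", "startup founder", "security researcher"]
    MOODS = ["skeptical", "enthusiastic", "nitpicky"]
    SHAPES = ["one-line quip", "personal anecdote", "detailed critique"]

    def build_prompt(submission_text: str) -> str:
        # One archetype + one mood + one shape = one commenter "voice".
        return (
            f"You are a {random.choice(ARCHETYPES)} on a tech forum. "
            f"In a {random.choice(MOODS)} tone, write a {random.choice(SHAPES)} "
            f"responding to this submission:\n\n{submission_text}"
        )

    print(build_prompt("Show HN: I built a thing"))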
I really wanted it to feel real, and I think the project mostly delivers on that. When I was developing it, I kept getting confused between which tab was the "real" HN and which was the simulator, and accidentally submitted some junk to HN. (Sorry dang and team – I did clean up after myself).
The app itself is built with Node + Express + Postgres, and all of the inference runs on Replicate.
Speaking of Replicate, they generously loaded me up with some free credits for the inference – so shoutout to the team there.
The most technically interesting part of the app is how the comments work. You can read more about it here, as well as explore all of the available archetypes, moods, and shapes that get combined into prompts: https://news.ysimulator.run/comments.html
I hope you all have as much fun playing with it as I did making it!
Show HN: Aigit – AI-powered Git CLI for commit messages, branch names, and PRs
Built a CLI tool that every developer needs:
aigit - Git workflow automation with AI
- AI-generated commit messages
- Smart branch naming
- Automated PR creation
- Code review assistance
No more "fix stuff" commits
Show HN: Wozz – Agentless Kubernetes cost auditor (open source)
Wozz is an open-source, agentless auditor that surfaces where your Kubernetes spend is going through a user-friendly web interface, with data integration, analysis, and real-time updates to support cost-driven decisions.
Show HN: Fixing Google Nano Banana Pixel Art with Rust
Pixel Snapper is a Spritefusion tool, written in Rust, for fixing pixel art generated by models like Google's Nano Banana and exporting clean sprites from a variety of image sources. It offers image cropping, pixel scaling, and color palette customization, aimed at simplifying the pixel art creation process.
Show HN: Anthony Bourdain's Lost Li.sts
Over the years I'd read about Bourdain's content on the defunct li.st service, but was never able to find an archive of it. A more thorough perusal of archive.org and a pointer from an Internet stranger led me to create this site. Cheers
Show HN: I built an open source, code-first Intercom alternative
I spent the last 3 months working on a code-first Intercom alternative.
I think with AI, smaller teams will be handling bigger loads of customers.
Every product is different, so its support should be too, or at least adapted to it.
Support should be extendable: living in your codebase, easily testable, and easy to change.
Code first means your LLMs can update, upgrade and help you with your support too.
It comes as an NPM package and lives in your React code, and soon even your AI agent tools will be centralised with the rest of your code.
Of course it comes with a beautiful dashboard from where you can talk with your visitors and monitor everything in real-time.
Curious to get your feedback!
Show HN: LLM-models – a CLI tool to list available LLM models across providers
I built a simple CLI tool to solve a problem I kept running into: which exact model names are actually available through OpenAI, Anthropic, Google, and xAI APIs at any given time?
The APIs themselves provide this info, but I got tired of checking docs or writing one-off scripts. Now I can just run:
$ llm-models -p Anthropic
and get the current list with human-readable names.
Installation:
macOS: brew tap ljbuturovic/tap && brew install llm-models
Linux: pipx install llm-models
Windows: pip install llm-models
Built with help from Claude Code. The tool queries each provider's API directly, so you get real-time availability rather than stale documentation. Open to feedback and happy to add more providers if there's interest!
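For the curious, this is the kind of call the tool wraps. OpenAI, for example, exposes a GET /v1/models endpoint that returns every model id your key can access (sketched in Python rather than the tool's own code):

    import os
    import requests

    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        timeout=10,
    )
    resp.raise_for_status()
    for model in sorted(m["id"] for m in resp.json()["data"]):
        print(model)  # e.g. gpt-4o, gpt-4o-mini, ...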
Show HN: ChatIndex – A Lossless Memory System for AI Agents
Current AI chat assistants face a fundamental challenge: context management in long conversations. While today's LLM apps use multiple separate conversations to bypass context limits, a truly human-like AI assistant should maintain a single, coherent conversation thread, which makes efficient context management critical. And although modern LLMs have longer context windows, they still suffer from the long-context problem (e.g., context rot): reasoning ability degrades as context grows.
Memory-based systems have been invented to alleviate context rot; however, memory-based representations are inherently lossy and inevitably drop information from the original conversation. In principle, no lossy representation is universally perfect for all downstream tasks. This leads to two key requirements for a flexible in-context management system:
1. Preserve raw data: An index system that can retrieve the original conversation when necessary.
2. Multi-resolution access: Ability to retrieve information at different levels of detail on-demand.
ChatIndex is a context management system that enables LLMs to efficiently navigate and utilize long conversation histories through hierarchical tree-based indexing and intelligent reasoning-based retrieval.
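A hypothetical sketch of what those two requirements imply (not the repo's actual schema): internal nodes hold progressively coarser summaries, leaves keep the raw turns, and retrieval descends only where more detail is needed:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        summary: str                        # coarse view of this subtree
        children: list["Node"] = field(default_factory=list)
        raw_messages: list[str] = field(default_factory=list)  # leaves only

    def retrieve(node: Node, needs_detail) -> list[str]:
        if not node.children:               # leaf: lossless raw data
            return node.raw_messages
        if not needs_detail(node.summary):  # this resolution suffices
            return [node.summary]
        out = []
        for child in node.children:
            out.extend(retrieve(child, needs_detail))
        return out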
Open-sourced repo: https://github.com/VectifyAI/ChatIndex
Show HN: Wolfrominoes
This is a little puzzle game I made based on Wolfram's rule 30[0]. I was doing some sketches based on old Genuary[1] prompts and for some reason this idea stuck with me, so I polished it up into a hopefully playable state.
I wanted to keep things minimal, but as a semi-easter egg you can play custom variants using url parameters like https://demos.samgentle.com/wolfrominoes/?rows=20&rule=110
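If you want to play along at home, rule 30 itself fits in a few lines: each new cell is left XOR (center OR right):

    def step(row: list[int]) -> list[int]:
        padded = [0] + row + [0]  # treat cells beyond the edge as 0
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    row = [0, 0, 0, 1, 0, 0, 0]
    for _ in range(4):
        print("".join(".#"[c] for c in row))
        row = step(row)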
[0] https://mathworld.wolfram.com/Rule30.html
[1] https://genuary2021.github.io/prompts
Show HN: OCR Arena – A playground for OCR models
I built OCR Arena as a free playground for the community to compare leading foundation VLMs and open-source OCR models side-by-side.
Upload any doc, measure accuracy, and (optionally) vote for the models on a public leaderboard.
It currently has Gemini 3, dots.ocr, DeepSeek, GPT-5, olmOCR 2, Qwen, and a few others. If there are any others you'd like included, let me know!
Show HN: Stun LLMs with thousands of invisible Unicode characters
I made a free tool that stuns LLMs with invisible Unicode characters.
*Use cases:* Anti-plagiarism, text obfuscation against LLM scrapers, or just for fun!
Even just one word's worth of “gibberified” text is enough to block most LLMs from responding coherently.
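The gist of the trick, using real zero-width code points (the tool's exact character mix and density may differ):

    import random

    ZERO_WIDTH = ["\u200b", "\u200c", "\u200d", "\u2060"]  # ZWSP, ZWNJ, ZWJ, WJ

    def gibberify(text: str, per_char: int = 10) -> str:
        out = []
        for ch in text:
            out.append(ch)
            out.extend(random.choices(ZERO_WIDTH, k=per_char))
        return "".join(out)

    s = gibberify("hello")
    print(s, len(s))  # renders as "hello", but is 55 characters long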
Show HN: A WordPress plugin that rewrites image URLs for near-zero-cost delivery
Hi HN,
I built a WordPress plugin called Bandwidth Saver. It takes the images your site already has and serves them through Cloudflare R2 and Workers, which means zero egress fees and extremely low storage cost. The goal is to make image delivery fast and cheap without adding any of the complexity of traditional optimization plugins.
The idea is simple. WordPress keeps generating images normally. The plugin rewrites the URLs on the frontend so images are served from a Cloudflare Worker. On the first request, the Worker fetches the original image and stores it in R2. After that, Cloudflare’s edge serves the image from its global cache with no egress charges. There’s no need to preload or sync anything, and if something fails, the original image loads. That’s the entire system.
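Here is the Worker's request flow, sketched in Python for readability (the shipped Worker is JavaScript; `bucket` is a stand-in for the R2 binding):

    import requests

    def handle_request(path: str, bucket, origin: str) -> bytes:
        cached = bucket.get(path)
        if cached is not None:
            return cached                  # edge/R2 hit: zero egress from origin
        resp = requests.get(origin + path, timeout=10)
        if resp.status_code != 200:
            raise LookupError(path)        # plugin falls back to the original URL
        bucket.put(path, resp.content)     # first request populates the cache
        return resp.content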
I built this because most image CDN plugins try to do everything: compression, resizing, AI transforms, asset management, custom dashboards, and monthly fees. That’s useful for some users, but it’s unnecessary for most sites that just want their existing media to load faster without breaking the bank. Bandwidth Saver focuses only on delivery, not transformations. It’s intentionally minimal.
There are two ways to use it. The plugin is completely free if you want to run your own Cloudflare Worker. I included the Worker code and the steps needed to deploy it. If you don’t want to deal with any Cloudflare setup, there’s a managed option for $2.99 per month that uses my Worker and my R2 bucket. I’m trying to keep it accessible while also covering operational costs.
The plugin works with any theme or builder and doesn’t modify the database. It only rewrites URLs on output. WordPress remains the system of record for all media. R2 simply becomes a cheap, durable cache layer backed by Cloudflare’s edge.
I’m especially interested in feedback about the approach. Does the fetch-on-first-request model make sense? Is the pricing fair for a plugin of this scope? Should I prioritize allowing users to connect their own R2 buckets or the managed service? And for those with experience in edge compute or CDNs, I would love thoughts on how to improve the Worker or the rewrite strategy.
Thanks for reading, happy to answer any questions.
Show HN: I wrote a minimal memory allocator in C
A fun toy memory allocator (not thread safe; that's a future TODO). I wanted to explain how I approached it, so I wrote a tutorial blog post (~20 minute read) covering the code, linked in the README.
Show HN: Search London StreetView panoramas by text
Inspired by All Text in NYC (https://alltext.nyc) by Yufeng Zhang I thought I would replicate something similar for London.
A searchable tool that lets you explore text captured across Google Street View imagery in London: shop signs, posters, graffiti, van numbers, etc.
Show HN: Cynthia – Reliably play MIDI music files – MIT / Portable / Windows
Easy-to-use, portable app to play MIDI music files on all flavours of Microsoft Windows.
Brief background: I used MIDI playback way back in the days of Windows 95 for some fun and entertaining apps, but as Windows progressed, its MIDI support (for Win32, anyway) regressed in both startup speed and reliability. Playback used to be near-instant on Windows 95, but on later versions it was delayed by about 5-7 seconds, and reliability became somewhat patchy. This made working with MIDI a real headache.
Cynthia was built to test and enjoy MIDI music once again. It's taken over a year of solid coding, recoding, testing, re-testing, a lot more testing, and some hair pulling along the way, but Cynthia now works pretty solidly on Windows.
Some of Cynthia's key features:
- 25 built-in sample MIDIs on a virtual disk - play right out of the box
- Play modes: Once, Repeat One, Repeat All, All Once, Random
- Play ".mid", ".midi" and ".rmi" MIDI files in formats 0 and 1
- Realtime track data indicators, channel output volume indicators with peak hold, 128 note usage indicators
- Volume bars to display realtime average volume and bass volume levels
- Use an Xbox controller to control Cynthia's main functions
- Large list capacity for handling thousands of MIDI files
- Switch between up to 10 MIDI playback devices in realtime
- Playback through a single MIDI device, or multiple simultaneous MIDI devices with lag and channel output support
- Custom-built MIDI playback engine for high playback stability
- Custom-built codebase from low-level work up to GUI level
- Also runs on Linux/Mac (including Apple silicon) via Wine
- Smart source code - compiles in Borland Delphi 3 and Lazarus 2
- MIT license
YouTube Video of Cynthia playing a midi: https://youtu.be/IDEOQUboTvQ
GitHub Repo: https://github.com/blaiz2023/Cynthia
Show HN: Parm – Install GitHub releases just like your favorite package manager
Hi all, I built a CLI tool that allows you to seamlessly install software from GitHub release assets, similar to how your system's package manager installs software.
It works by exploiting common patterns among GitHub releases (naming conventions, file layouts) to pick the right release asset for your system, then downloads it via the GitHub API. Parm then extracts the files, finds the proper binaries, and adds them to your PATH. It can also check for updates and uninstall software, managing the entire lifecycle of everything it installs.
Parm is not meant to replace your system's package manager. It is instead meant as an alternative method to install prebuilt software off of GitHub in a more centralized and simpler way.
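A simplified version of the asset-matching idea against GitHub's releases API (Parm's real pattern rules handle many more naming variants):

    import platform
    import requests

    def pick_asset(owner: str, repo: str) -> str:
        release = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/releases/latest",
            timeout=10,
        ).json()
        os_name = platform.system().lower()   # "linux", "darwin", ...
        arch = platform.machine().lower()     # "x86_64", "arm64", ...
        for asset in release["assets"]:
            name = asset["name"].lower()
            if os_name in name and arch in name:
                return asset["browser_download_url"]
        raise LookupError("no asset matches this platform")

    print(pick_asset("BurntSushi", "ripgrep"))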
It's currently in a pre-release stage, and there are a lot of features I want to add. I'm working (very slowly) on some new ones, so if this sounds interesting to you, check it out! It's completely free and open source, and currently released for Linux/macOS. I would appreciate any feedback.
Link: https://github.com/yhoundz/parm
Show HN: Datamorph – A clean JSON ⇄ CSV converter with auto-detect
Hi everyone,
I built a small web tool called Datamorph because I kept running into JSON/CSV converters that either broke with nested data, required login, or added weird formatting.
Datamorph is a minimal, fast, no-login tool that can:
• Convert JSON → CSV and CSV → JSON
• Auto-detect structure (arrays, nested objects, mixed data)
• Handle uploads or manual text input
• Beautify / fix invalid JSON
• Give clean, flat CSV output for real-world messy data
It’s built with React + Supabase + serverless functions. Everything runs client-side except file parsing, so nothing is stored.
I know there are many similar tools, but I tried focusing on:
• better handling of nested JSON (sketched below),
• simpler UI,
• zero ads / zero login,
• instant conversion without waiting.
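The nested-JSON handling boils down to recursive flattening. Roughly (a simplified sketch; the real converter handles more edge cases):

    def flatten(value, prefix="", out=None):
        # Dotted paths for objects, indexed paths for arrays.
        if out is None:
            out = {}
        if isinstance(value, dict):
            for key, v in value.items():
                flatten(v, f"{prefix}{key}.", out)
        elif isinstance(value, list):
            for i, v in enumerate(value):
                flatten(v, f"{prefix}{i}.", out)
        else:
            out[prefix.rstrip(".")] = value
        return out

    print(flatten({"user": {"name": "Ada", "tags": ["dev", "ml"]}}))
    # {'user.name': 'Ada', 'user.tags.0': 'dev', 'user.tags.1': 'ml'}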
Would love feedback on edge cases it fails on, or features you think would make this actually useful for devs and analysts.
Live tool: https://datamorphio.vercel.app/
Thanks for checking it out!
Show HN: MightyGrep
A program I used stopped getting updates, so I made my own version of it. It's a fast plaintext search utility in a GUI. It's great for finding text in code projects, log files, configs, anything plaintext! Available for Windows, macOS, and Linux. Free version limitations are a splash screen and limited history entries.
Show HN: I built a standalone .mdb to Parquet exporter to avoid ODBC driver hell
I’ve been stuck maintaining a legacy project that relies on massive .accdb files. I wasted two days trying to get the 64-bit ACE.OLEDB drivers to play nice with my Python environment without breaking other dependencies.
Access also kept segfaulting when I tried to export tables over 1GB to CSV, so I wrote a dedicated tool to handle the extraction.
The Tool:
Standalone: Doesn't require a local install of Office/Access.
Streaming: Uses a stream reader so it doesn't OOM on large tables (sketched below).
Parquet support: Preserves data types better than CSV (and much smaller file size).
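The streaming idea, sketched in Python with pyarrow whether or not the tool uses it internally (`read_batches` stands in for whatever pulls rows out of the .mdb/.accdb file):

    import pyarrow as pa
    import pyarrow.parquet as pq

    def export(read_batches, schema: pa.Schema, out_path: str):
        # Write record batches as they arrive instead of materializing the
        # whole table: constant memory, so 1GB+ tables can't OOM the process.
        with pq.ParquetWriter(out_path, schema) as writer:
            for batch in read_batches():   # e.g. 50k rows at a time
                writer.write_batch(batch)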
I threw in a basic SQL query window just to check data before dumping, but the main goal is just getting data out of Access and into a modern warehouse/dataframe as fast as possible.
It’s Windows-only for now (Access file locking is tricky on *nix), but let me know if it breaks on your specific schema.
Show HN: I built directory of fashion brands because I didn't know how to dress
Hi HN guys! I hope you have a happy day.
A year ago, I was terrible at grooming and styling myself. To be honest, it severely affected my self-esteem and made me hesitant to even go outside.
I didn't want to be a supermodel or hot—I just wanted to look like a normal person. But when I tried to find clothes, I hit a wall. I didn't know which brands were good, what style I liked, or where to start. I wasted hours just looking for a simple shirt.
So, I decided to solve this problem the way I know how: by building a tool.
BrandList is a directory of fashion brands categorized by specific styles (like "Classic", "Minimal", "Streetwear"). The goal is to help people find brands that fit their vibe without knowing the specific brand names beforehand.
Key Features:
Curated by Style: You don't search by name; you browse by "look."
No Login Required: You can add/suggest brands without signing up (I hate friction).
It is currently an MVP. I would love your feedback on the UX and the categorization. If you know a cool brand, please add it (no account needed).
Show HN: We cut RAG latency ~2× by switching embedding model
A write-up of how we migrated off Voyage embeddings onto a more modern embedding model, roughly halving RAG latency: the challenges, the considerations, and the process we followed to keep the transition smooth for the product and its users.
Show HN: Constitutional AI Agent OS (governance enforced at kernel level)
I built the first multi-agent OS where constitutional governance is architecturally enforced.
Agents literally cannot boot without cryptographically verified oath.
Try it: python scripts/research_yagya.py
Code: kernel_impl.py lines 544-621
Challenge: Prove it wrong.
Show HN: Build the habit of writing meaningful commit messages
Too often I find myself being lazy with commit messages. But I don't want AI to write them for me... only I truly know why I wrote the code I did.
So why don't I get AI to help me get that out of my head and into words?
That's what I built: smartcommit asks you questions about your changes, then helps you articulate what you already know into a proper commit message. It captures the what, how, and why.
I built this after repeatedly being confused, six months into a project, about why I'd made the changes I had.
Would love feedback!
Show HN: Wealthfolio 2.0 - Open source investment tracker. Now Mobile and Docker
Hi HN, creator of Wealthfolio here.
A year ago, I posted the first version. Since then, the app has matured significantly with two major updates:
1. Multi-platform Support: Now available on Mobile (iOS), Desktop (macOS, Windows, Linux), and as a Self-hosted Docker image. (Android coming soon).
2. Addons System: We added explicit support for extensions so you can hack around, vibe code your own integrations, and customize the app to fit your needs.
The core philosophy remains the same: Always private, transparent, and open source.
Show HN: Forty.News – Daily news, but on a 40-year delay
This started as a reaction to a conversational trope. Despite being a tranquil place, even conversations at my yoga studio often start with, "Can you believe what's going on right now?" with that angry/scared undertone.
I'm a news avoider, so I usually feel some smug self-satisfaction in those instances, but I wondered if there was a way to satisfy the urge to doomscroll without the anxiety.
My hypothesis: Apply a 40-year latency buffer. You get the intellectual stimulation of "Big Events" without the fog of war, because you know the world didn't end.
Forty years creates a mirror between the Reagan era and today. The parallels include celebrity populism, Cold War tensions (the Soviets then, Russia now), and inflation economics.
The system ingests raw newspaper scans and uses a multi-step LLM pipeline to generate the daily edition:
OCR & Ingestion: Converts raw pixels to text.
Scoring: Grades events on metrics like Dramatic Irony and Name Recognition to surface stories that are interesting with hindsight. For example, a dry business blurb about Steve Jobs leaving Apple scores highly because the future context creates a narrative arc.
Objective Fact Extraction: Extracts a list of discrete, verifiable facts from the raw text.
Generation: Uses those extracted facts as the ground truth to write new headlines and story summaries.
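Shape-wise, the scoring step looks something like this (hypothetical prompt and weights, purely for illustration; `ask_llm` stands in for the Gemini call):

    import json

    SCORING_PROMPT = """Given this 1985 news item, return JSON with integer
    scores 1-10 for: dramatic_irony, name_recognition.

    {story}"""

    def score_story(story: str, ask_llm) -> float:
        scores = json.loads(ask_llm(SCORING_PROMPT.format(story=story)))
        return 0.6 * scores["dramatic_irony"] + 0.4 * scores["name_recognition"]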
I expected a zen experience. Instead, I got an entertaining docudrama. Historical events are surprisingly compelling when serialized over weeks.
For example, on Oct 7, 1985, Palestinian hijackers took over the cruise ship Achille Lauro. Reading this on a delay in 2025, the story unfolded over weeks: first they threw an American in a wheelchair overboard, then US fighter jets forced the escape plane to land, leading to a military standoff between US Navy SEALs and the Italian Air Force. Unbelievably, the US backed down, but the later diplomatic fallout led the Italian Prime Minister to resign.
It hits the dopamine receptors of the news cycle, but with the comfort of a known outcome.
Stack: React, Node.js (Caskada for the LLM pipeline orchestration), Gemini for OCR/Scoring.
Link: https://forty.news (No signup required, it's only if you want the stories emailed to you daily/weekly)
Show HN: Deft-Intruder – Real-time malware detection daemon for Linux
I built an open-source malware detection daemon that monitors all running processes in real-time using ML + heuristics. No kernel modules or eBPF required.
Key points:
- Polls /proc for new processes (works on any Linux kernel 2.6+)
- Random Forest model trained on EMBER 2018 dataset (2.3M samples)
- Heuristic rules for crypto miners, ransomware, rootkits
- ~20MB RAM, <1% CPU, sub-millisecond scan latency
- Pure C, zero runtime dependencies
- Model embedded directly in binary (50KB)
Why I built this: Existing solutions either require modern kernels (eBPF) or are heavy/proprietary. I wanted something lightweight that works everywhere - servers, containers, old distros.
Detection approach: Extract features from executables (entropy, imports, sections), run ML prediction, apply heuristic rules, combine scores. If above threshold, kill the process.
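To make one of those features concrete: Shannon entropy of the binary's bytes, where packed or encrypted payloads (common in malware) push toward 8 bits/byte. The daemon does this in C; Python here just for brevity:

    import math
    from collections import Counter

    def shannon_entropy(path: str) -> float:
        data = open(path, "rb").read()
        counts = Counter(data)
        return -sum((n / len(data)) * math.log2(n / len(data))
                    for n in counts.values())

    print(f"{shannon_entropy('/bin/ls'):.2f} bits/byte")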
Happy to discuss implementation details or Linux security in general.