Show HN: StormWatch – Weather emergency dashboard with prep checklists
I was getting annoyed jumping between five different sites during this winter storm season, so I built StormWatch: a no-fuss, mobile-friendly dashboard that shows all the stuff I was looking for in one simple UI.
Features:
- Real-time NWS alerts with safety tips
- Snow/ice/precip accumulation forecasts (+wind)
- Dynamic preparation checklists based on your alerts
- Supply calculator for your household size
- Regional weather news
It's free, no login required, works on any device. Just enter your ZIP.
https://jeisey.github.io/stormwatch/
It uses the NWS and GDELT APIs and is open source. Feel free to fork and modify it however you'd like.
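For the curious, the NWS alerts endpoint is public. A minimal sketch of pulling active alerts for a point (illustrative only, not StormWatch's actual code; the User-Agent value is a placeholder):

import requests

def active_alerts(lat: float, lon: float):
    # NWS asks clients to identify themselves via the User-Agent header.
    resp = requests.get(
        f"https://api.weather.gov/alerts/active?point={lat},{lon}",
        headers={"User-Agent": "stormwatch-demo (you@example.com)"},
        timeout=10,
    )
    resp.raise_for_status()
    return [f["properties"]["headline"] for f in resp.json()["features"]]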
For builders:
- Used an API-testing agent to verify all endpoints, response patterns, types, and rate limits
- Used a scope & validation agent to keep the slices simple, focused, and tested
- VS Code Copilot (Sonnet 4 for dev agents + Opus 4.5 for scope and validation)
Show HN: Debugging consent and conversion tracking with a headless scan
Hi HN,
I’ve spent the last few months debugging a pattern I kept seeing on client sites:
Ads are running, tags are installed, dashboards look fine — but Google Ads conversions are missing, inconsistent, or silently degraded.
In most cases, the problem wasn’t the conversion tag itself. It was consent timing:
tracking scripts firing before consent
cookies set before consent
Consent Mode v2 present but misconfigured
conversion events firing before consent updates
These setups often “look” correct in GTM or DevTools, but behave differently for a real first-time visitor.
So I built a small scanner that loads a site like a new user would and checks:
what scripts fire before consent
whether cookies are set pre-consent
whether Consent Mode v2 is actually configured
whether conversions would fire after consent
The output is a technical report with detected issues and suggested fixes (mostly small configuration changes).
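For a rough idea of the approach, the "fresh visitor" check boils down to loading the page in a clean browser context and recording what happens before any consent interaction. A minimal sketch assuming Playwright (illustrative only, not the scanner's actual code; the tracker domain list is a placeholder):

from playwright.sync_api import sync_playwright

def scan_pre_consent(url: str):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()           # clean profile = true first-time visitor
        page = context.new_page()

        requests_before_consent = []
        page.on("request", lambda req: requests_before_consent.append(req.url))

        page.goto(url, wait_until="networkidle")  # load, but never click the consent banner
        cookies_before_consent = context.cookies()
        browser.close()

    trackers = [u for u in requests_before_consent
                if any(d in u for d in ("googletagmanager", "google-analytics", "doubleclick"))]
    return trackers, [c["name"] for c in cookies_before_consent]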
This is not a compliance/legal tool — it’s meant as a debugging aid for people working with Google Ads, GA4, and consent setups.
I’m sharing it here mainly for feedback from people who’ve dealt with consent/tracking edge cases, false positives, or odd GTM behavior.
Happy to answer technical questions or explain how detection works.
Show HN: JSciPy – SciPy-inspired signal processing library for Java and Android
jSciPy is an open-source Java signal processing and scientific computing library inspired by SciPy.
It focuses on FFT, filters, PSD, STFT, and DCT, with Android compatibility, aiming to fill the gap for DSP-heavy workloads on the JVM and Android.
Show HN: PicoFlow – a tiny DSL-style Python library for LLM agent workflows
Hi HN, I’m experimenting with a small Python library called PicoFlow for building LLM agent workflows using a lightweight DSL.
I’ve been using tools like LangChain and CrewAI, and wanted to explore a simpler, more function-oriented way to compose agent logic, closer to normal Python control flow and async functions.
PicoFlow focuses on:
- composing async functions with operators
- minimal core and few concepts to learn
- explicit data flow through a shared context
- easy embedding into existing services
A typical flow looks like:
flow = plan >> retrieve >> answer
await flow(ctx)
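To give a feel for the style, here is a minimal sketch of how ">>" composition over a shared context could be implemented. This is illustrative only, not PicoFlow's internals; the step functions are placeholders.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):                  # step_a >> step_b
        async def chained(ctx):
            await self.fn(ctx)
            await other.fn(ctx)
        return Step(chained)

    async def __call__(self, ctx):
        await self.fn(ctx)

async def plan(ctx):     ctx["plan"] = "outline"
async def retrieve(ctx): ctx["docs"] = ["..."]
async def answer(ctx):   ctx["answer"] = f"based on {len(ctx['docs'])} docs"

flow = Step(plan) >> Step(retrieve) >> Step(answer)
# await flow({"question": "..."})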
Patterns like looping and fork/merge are also expressed as operators rather than separate graph or config layers.

This is still early and very much a learning project. I’d really appreciate any feedback on the DSL design, missing primitives, or whether this style feels useful for real agent workloads.
Repo: https://github.com/the-picoflow/picoflow
Show HN: Pingaroo – a tiny native macOS menu bar app for network stats
Hi HN,
I built a tiny native menu bar app I'm calling Pingaroo. It separates router latency from internet latency and shows real-time signal quality/noise graphs to help diagnose intermittent lag. Written in 100% Swift/SwiftUI and fully open source.
Pingaroo is a recreation of the WhyFi app by James Potter based on screenshots, with a few personal riffs and improvements.
Feedback welcome!
Show HN: Kontra, a data quality validator that avoids unnecessary full scans
Hi HN,
I’ve been working on a small project called Kontra and just released it.
Kontra is a data quality measurement engine. You define rules in YAML or Python, run them against Parquet, CSV, or database tables, and get back violation counts and sampled failing rows.
The main goal was to avoid doing more work than necessary. Instead of treating all rules the same, Kontra separates execution paths. Some checks can be answered from Parquet metadata alone, others are pushed down to SQL, and full in-memory scans only happen for rules that actually need them. The guarantees differ, and Kontra is explicit about that rather than hiding it.
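To make the metadata-only path concrete, here is a rough illustration (not Kontra's API) of answering a null-count check from Parquet row-group statistics alone, using pyarrow:

import pyarrow.parquet as pq

def null_count_from_metadata(path: str, column: str) -> int:
    meta = pq.ParquetFile(path).metadata
    col_idx = meta.schema.names.index(column)
    total = 0
    for rg in range(meta.num_row_groups):
        stats = meta.row_group(rg).column(col_idx).statistics
        if stats is None:                         # statistics are optional in Parquet
            raise ValueError("no statistics written; fall back to a scan")
        total += stats.null_count
    return total

No rows are read; the answer comes straight from footer metadata, which is exactly why the guarantees differ from a full scan.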
Under the hood it uses DuckDB for SQL pushdown on files and Polars for in-memory execution. It also supports profiling datasets, drafting starter rules from observed data, and diffing validation runs over time. Rules can carry user-defined context, and runs can be annotated after execution without affecting validation behavior.
It works as both a CLI and a Python library.
Happy to answer questions or get feedback.
Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents
I built Polymcp, a framework that allows you to transform any Python function into an MCP (Model Context Protocol) tool ready to be used by AI agents. No rewriting, no complex integrations.
Examples
Simple function:
from polymcp.polymcp_toolkit import expose_tools_http
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b
app = expose_tools_http([add], title="Math Tools")
Run with:
uvicorn server_mcp:app --reload
Now add is exposed via MCP and can be called directly by AI agents.
API function:
import requests
from polymcp.polymcp_toolkit import expose_tools_http

def get_weather(city: str):
    """Return current weather data for a city"""
    response = requests.get(f"https://api.weatherapi.com/v1/current.json?q={city}")
    return response.json()
app = expose_tools_http([get_weather], title="Weather Tools")
AI agents can call get_weather("London") to get real-time weather data instantly.
Business workflow function:
import pandas as pd
from polymcp.polymcp_toolkit import expose_tools_http

def calculate_commissions(sales_data: list[dict]):
    """Calculate sales commissions from sales data"""
    df = pd.DataFrame(sales_data)
    df["commission"] = df["sales_amount"] * 0.05
    return df.to_dict(orient="records")
app = expose_tools_http([calculate_commissions], title="Business Tools")
AI agents can now generate commission reports automatically.
Why it matters for companies:
• Reuse existing code immediately: legacy scripts, internal libraries, APIs.
• Automate complex workflows: AI can orchestrate multiple tools reliably.
• Plug-and-play: multiple Python functions exposed on the same MCP server.
• Reduce development time: no custom wrappers or middleware needed.
• Built-in reliability: input/output validation and error handling included.
Polymcp makes Python functions immediately usable by AI agents, standardizing integration across enterprise software.
Repo: https://github.com/poly-mcp/Polymcp
Show HN: Coi – A language that compiles to WASM, beats React/Vue
I usually build web games in C++, but using Emscripten always felt like overkill for what I was doing. I don't need full POSIX emulation or a massive standard library just to render some stuff to a canvas and handle basic UI.
The main thing I wanted to solve was the JS/WASM interop bottleneck. Instead of using the standard glue code for every call, I moved everything to a Shared Memory architecture using Command and Event buffers.
The way it works is that I batch all the instructions in WASM and then just send a single "flush" signal to JS. The JS side then reads everything directly out of Shared Memory in one go. It’s way more efficient: I ran a benchmark rendering 10k rectangles on a canvas, and the difference was huge. Emscripten hit around 40 FPS, while my setup hit 100 FPS.
But writing DOM logic in C++ is painful, so I built Coi. It’s a component-based language that statically analyzes changes at compile-time to enable O(1) reactivity. Unlike traditional frameworks, there is no Virtual DOM overhead; the compiler maps state changes directly to specific handles in the command buffer.
I recently benchmarked this against React and Vue on a 1,000-row table: Coi came out on top for row creation, row updating and element swapping because it avoids the "diffing" step entirely and minimizes bridge crossings. Its bundle size was also the smallest of the three.
One of the coolest things about the architecture is how the standard library works. If I want to support a new browser API (like Web Audio or a new Canvas feature), I just add the definition to my WebCC schema file. When I recompile the Coi compiler, the language automatically gains a new standard library function to access that API. There is zero manual wrapping involved.
I'm really proud of how it's coming along. It combines the performance of a custom WASM stack with a syntax that actually feels good to write (for me at least :P). Plus, since the intermediate step is C++, I’m looking into making it work on the server side too, which would allow for sharing components across the whole stack.
Example (Coi Code):
component Counter(string label, mut int& value) {
def add(int i) : void {
value += i;
}
style {
.counter {
display: flex;
gap: 12px;
align-items: center;
}
button {
padding: 8px 16px;
cursor: pointer;
}
}
view {
<div class="counter">
<span>{label}: {value}</span>
<button onclick={add(1)}>+</button>
<button onclick={add(-1)}>-</button>
</div>
}
}

component App {
    mut int score = 0;
style {
.app {
padding: 24px;
font-family: system-ui;
}
h1 {
color: #1a73e8;
}
.win {
color: #34a853;
font-weight: bold;
}
}
view {
<div class="app">
<h1>Score: {score}</h1>
<Counter label="Player" &value={score} />
<if score >= 10>
<p class="win">You win!</p>
</if>
</div>
}
}

app {
    root = App;
    title = "My Counter App";
    description = "A simple counter built with Coi";
    lang = "en";
}
Live Demo: https://io-eric.github.io/coi
Coi (The Language): https://github.com/io-eric/coi
WebCC: https://github.com/io-eric/webcc
I'd love to hear what you think. It's still far from finished, but as a side project I'm really excited about :)
Show HN: Open-source Figma design to code
Hi HN, founders of VibeFlow (YC S25) here.
We mostly work on backend and workflow tooling, but we needed a way to turn Figma designs into frontend code as a kickstart for prototyping. It takes a Figma frame and converts it into React + Tailwind components (plus assets).
If you want to try it, you can run it locally or poke at it without any setup via the VibeFlow UI (https://app.vibeflow.ai/).
Show HN: Whosthere: A LAN discovery tool with a modern TUI, written in Go
Whosthere is an open-source LAN discovery tool with a modern TUI, written in Go, for seeing which devices are currently on your local network.
Show HN: Remote workers find your crew
Working from home? Are you a remote employee who "misses" going to the office?
Well, let's be clear on what you actually miss. No one misses having to go in and be there for 8 hours. But many people miss friends. They miss being part of a crew: going to lunch, hearing about other people's lives in person, not over Zoom.
Join a co-working space, you say? Yes, we have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling.
https://dialtoneapp.com/
This app helps you find a crew and meet up for work and get that crew feeling.
This is my first time using Cloudflare Workers for a web app. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql.
Show HN: Zsweep – Play Minesweeper using only Vim motions
Show HN: isometric.nyc – giant isometric pixel art map of NYC
Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.
I didn't write a single line of code.
Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!
I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:
http://cannoneyed.com/projects/isometric-nyc
Show HN: I built a space travel calculator using Vanilla JavaScript
I built this because measuring my age in years felt boring—I wanted to see the kilometers.
The first version only used Earth's orbital speed (~30km/s), but the number moved too slowly. To get the "existential dread" feeling, I switched to using the Milky Way's velocity relative to the CMB (~600km/s). The math takes some liberties (using scalar sum instead of vector) to make the speed feel "fast," but it gets the point across.
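The core arithmetic is just elapsed seconds times speed. A rough back-of-envelope version (shown in Python here for illustration; the site itself is a single vanilla-JS file):

SECONDS_PER_YEAR = 365.25 * 24 * 3600
MILKY_WAY_VS_CMB_KM_S = 600          # approximate speed of the Milky Way relative to the CMB

def km_travelled(age_years: float, speed_km_s: float = MILKY_WAY_VS_CMB_KM_S) -> float:
    return age_years * SECONDS_PER_YEAR * speed_km_s

print(f"{km_travelled(30):.3e} km")   # roughly 5.7e11 km for a 30-year-old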
Under the hood, it's a single HTML file with zero dependencies. No React, no build step. The main challenge was the canvas starfield—I had to pre-allocate the star objects to stop the garbage collector from causing stutters on mobile.
Let me know if the physics makes you angry or if the stars run smooth on your device.
Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)
Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2
We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.
These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.
We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.
Why train a model from scratch?
We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.
For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
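For readers unfamiliar with flow matching, the training objective looks roughly like the sketch below. This is the generic (rectified) flow-matching step in PyTorch-style code, not Linum's actual implementation; model, latents, and text_emb are placeholders for the DiT backbone, VAE latents, and T5 embeddings.

import torch

def flow_matching_loss(model, latents, text_emb):
    noise = torch.randn_like(latents)
    t = torch.rand(latents.shape[0], device=latents.device)   # one timestep per sample
    t_ = t.view(-1, *([1] * (latents.dim() - 1)))              # broadcast over latent dims
    x_t = (1 - t_) * noise + t_ * latents                      # linear path from noise to data
    target_velocity = latents - noise                          # d x_t / d t along that path
    pred_velocity = model(x_t, t, text_emb)
    return torch.mean((pred_velocity - target_velocity) ** 2)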
The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).
What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.
Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.
What’s next?
- Post-training for physics/deformations
- Distillation for speed
- Audio capabilities
- Model scaling
We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!
Show HN: Giving Claude Code "hands" to deliver local files (P2P, No Cloud)
Show HN: BrowserOS – "Claude Cowork" in the browser
Hey HN! We're Nithin and Nikhil, twin brothers building BrowserOS (YC S24). We're an open-source, privacy-first alternative to the AI browsers from big labs.
The big differentiator: on BrowserOS you can use local LLMs or BYOK and run the agent entirely on the client side, so your company/sensitive data stays on your machine!
Today we're launching filesystem access... just like Claude Cowork, our browser agent can read files, write files, run shell commands! But honestly, we didn't plan for this. It turns out the privacy decision we made 9 months ago accidentally positioned us for this moment.
The architectural bet we made 9 months ago: Unlike other AI browsers (ChatGPT Atlas, Perplexity Comet) where the agent loop runs server-side, we decided early on to run our agent entirely on your machine (client side).
But building everything on the client side wasn't smooth. We initially built our agent loop inside a Chrome extension, but we kept hitting walls -- the service worker being single-threaded JS, no access to Node.js libraries. So we made the hard decision 2 months ago to throw everything away and start from scratch.
In the new architecture, our agent loop sits in a standalone binary that we ship alongside our Chromium. And we use gemini-cli for the agent loop with some tweaks! We wrote a neat adapter to translate between Gemini format and Vercel AI SDK format. You can look at our entire codebase here: https://git.new/browseros-agent
How we give browser access to filesystem: When Claude Cowork launched, we realized something: because Atlas and Comet run their agent loop server-side, there's no good way for their agent to access your files without uploading them to the server first. But our agent was already local. Adding filesystem access meant just... opening the door (with your permissions ofc). Our agent can now read and write files just like Claude Code.
What you can actually do today:
a) Organize files in my desktop folder https://youtu.be/NOZ7xjto6Uc
b) Open top 5 HN links, extract the details and write summary into a HTML file https://youtu.be/uXvqs_TCmMQ
---

Where we are now: if you haven't tried us since the last Show HN (https://news.ycombinator.com/item?id=44523409), give us another shot. The new architecture unlocked a ton of new features, and we've grown to 8.5K GitHub stars and 100K+ downloads:
c) You can now build more reliable workflows using n8n-like graph https://youtu.be/H_bFfWIevSY
d) You can also use BrowserOS as an MCP server in Cursor or Claude Code https://youtu.be/5nevh00lckM
We are very bullish on the browser being the right platform for a Claude Cowork-like agent. The browser is the most commonly used app by knowledge workers (emails, docs, spreadsheets, research, etc.). Even Anthropic recognizes this -- for Claude Cowork, they have a janky integration with the browser via a Chrome extension. Owning the entire stack lets us build differentiated features that wouldn't be possible otherwise, e.g. browser ACLs.
Agents can do dumb or destructive things, so we're adding browser-level guardrails (think IAM for agents): "role(agent): can never click buy" or "role(agent): read-only access on my bank's homepage."
Curious to hear your take on this and the overall thesis.
We’ll be in the comments. Thanks for reading!
GitHub: https://github.com/browseros-ai/BrowserOS
Download: https://browseros.com (available for Mac, Windows, Linux!)
Show HN: S2-lite, an open source Stream Store
S2 was on HN for our intro blog post a year ago (https://news.ycombinator.com/item?id=42480105). S2 started out as a serverless API — think S3, but for streams.
The idea of streams as a cloud storage primitive resonated with a lot of folks, but not having an open source option was a sticking point for adoption – especially from projects that were themselves open source! So we decided to build it: https://github.com/s2-streamstore/s2
s2-lite is MIT-licensed, written in Rust, and uses SlateDB (https://slatedb.io) as its storage engine. SlateDB is an embedded LSM-style key-value database on top of object storage, which made it a great match for delivering the same durability guarantees as s2.dev.
You can specify a bucket and path to run against an object store like AWS S3, or skip them to run entirely in memory. (This also makes it a great emulator for dev/test environments.)
Why not just open up the backend of our cloud service? s2.dev has a decoupled architecture with multiple components running in Kubernetes, including our own K8S operator – we made tradeoffs that optimize for operation of a thoroughly multi-tenant cloud infra SaaS. With s2-lite, our goal was to ship something dead simple to operate. There is a lot of shared code between the two that now lives in the OSS repo.
A few features remain (notably deletion of resources and records), but s2-lite is substantially ready. Try the Quickstart in the README to stream Star Wars using the s2 CLI!
The key difference between S2 and Kafka or Redis Streams: supporting tons of durable streams. I have blogged about the landscape in the context of agent sessions (https://s2.dev/blog/agent-sessions#landscape). Kafka and NATS JetStream treat streams as provisioned resources, and the protocols/implementations are oriented around that assumption. Redis Streams and NATS allow for larger numbers of streams, but without proper durability.
The cloud service is completely elastic, but you can also get pretty far with lite despite it being a single-node binary that needs to be scaled vertically. Streams in lite are "just keys" in SlateDB, and cloud object storage is bottomless – although of course there is metadata overhead.
One thing I am excited to improve in s2-lite is pipelining of writes for performance (already supported behind a knob, but needs upstream interface changes for safety). It's a technique we use extensively in s2.dev. Essentially when you are dealing with high latencies like S3, you want to keep data flowing throughout the pipe between client and storage, rather than go lock-step where you first wait for an acknowledgment and then issue another write. This is why S2 has a session protocol over HTTP/2, in addition to stateless REST.
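As a concept illustration only (asyncio pseudocode, not S2's session protocol), the difference between lock-step and pipelined appends looks roughly like this:

import asyncio

async def append(record):                  # stand-in for a remote write
    await asyncio.sleep(0.05)              # pretend round-trip latency to object storage

async def lock_step(records):
    for r in records:                      # wait for each ack before issuing the next write
        await append(r)

async def pipelined(records, window=8):
    in_flight = set()
    for r in records:
        in_flight.add(asyncio.ensure_future(append(r)))
        if len(in_flight) >= window:       # keep up to `window` writes in flight at once
            _, in_flight = await asyncio.wait(in_flight, return_when=asyncio.FIRST_COMPLETED)
    await asyncio.gather(*in_flight)

# asyncio.run(pipelined(range(100)))       # finishes ~8x sooner than lock_step here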
You can test throughput/latency for lite yourself using the `s2 bench` CLI command. The main factors are: your network quality to the storage bucket region, the latency characteristics of the remote store, SlateDB's flush interval (`SL8_FLUSH_INTERVAL=..ms`), and whether pipelining is enabled (`S2LITE_PIPELINE=true` to taste the future).
I'll be here to get thoughts and feedback, and answer any questions!
Show HN: New 3D Mapping website - Create heli orbits and "playable" map tours.
Show HN: I've been using AI to analyze every supplement on the market
Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies.
My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps:
1.) I index every supplement on the market (extract each ingredient, normalize by quantity)
2.) I index every research paper on supplementation (rank every claim by effect type and effect size)
3.) I link data between supplements and research papers
Earlier last year, I put the project on pause because I'd run into a few issues:
Legal: Shady companies have been sending C&D letters demanding their products be taken down from the website. It's not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredient-to-price ratio.
Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of or find where LLMs are bad at interpreting the data.
Business: I still haven't figured out how to monetize it or even who the target customer is.
Despite these challenges, I decided to restart my journey.
My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes, supplementation is not necessary or there are natural ways to supplement (that's my focus this quarter – better education about natural supplementation).
Some things that are helping my cause: Bryan Johnson's journey (Blueprint) has drawn a lot more attention to healthy supplementation. Thanks to Bryan's efforts, so many people have reached out in recent months to ask about the state of the project – interest I hadn't had before.
I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated.
Some key areas of the website:
* Example of navigating supplements by ingredient https://pillser.com/search?q=%22Vitamin+D%22&s=jho4espsuc
* Example of research paper analyzed using AI https://pillser.com/research-papers/effect-of-lactobacillus-...
* Example of looking for very specific strains or ingredients https://pillser.com/probiotics/bifidobacterium-bifidum
* Example of navigating research by health-outcomes https://pillser.com/health-outcomes/improved-intestinal-barr...
* Example of product listing https://pillser.com/supplements/pb-8-probiotic-663
Show HN: Interactive physics simulations I built while teaching my daughter
I started teaching my daughter physics by showing her how things actually work - plucking guitar strings to explain vibration, mixing paints to understand light, dropping objects to see gravity in action.
She learned so much faster through hands-on exploration than through books or videos. That's when I realized: what if I could recreate these physical experiments as interactive simulations?
Lumen is the result - an interactive physics playground covering sound, light, motion, life, and mechanics. Each module lets you manipulate variables in real-time and see/hear the results immediately.
Try it: https://www.projectlumen.app/
Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete
Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here (https://huggingface.co/sweepai/sweep-next-edit-1.5b) or try it in our JetBrains plugin (https://plugins.jetbrains.com/plugin/26860-sweep-ai-autocomp...).
Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy.
We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small.
Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand.
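Purely as an illustrative guess at the shape (not Sweep's actual prompt format), an original/updated pair might be rendered something like this:

# Hypothetical rendering of one edit pair; tag names and layout are made up.
def render_edit(original: str, updated: str) -> str:
    return (
        "<original>\n" + original + "\n</original>\n"
        "<updated>\n" + updated + "\n</updated>\n"
    )

print(render_edit("def add(a, b):\n    return a - b",
                  "def add(a, b):\n    return a + b"))

The point is simply that the model sees the full before/after text of each hunk instead of a +/- unified diff.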
Training was SFT on ~100k examples from permissively-licensed repos (4hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can’t, like generating code that doesn’t parse or producing overly verbose outputs.
We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it!
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs
Hi HN, we're Sam, Shane, and Abhi.
Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.
Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what’s changed.
If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.
Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.
Agent development is changing quickly, so we’ve added a lot since February:
- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.
- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.
- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.
- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.
(That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.
We'll be around and happy to answer any questions!
Show HN: Heterogeneous Agent Protocol (Derived from Nursing and Construction)
The Heterogeneous Agent Protocol is a communication framework designed to enable diverse software agents to collaborate and exchange information, even if they have different internal architectures or programming languages. The protocol aims to facilitate interoperability and data sharing between heterogeneous systems, enabling more flexible and adaptive distributed applications.
Show HN: Rails UI
RailsUI is a comprehensive open-source library of UI components and design tools for building modern, responsive web applications with Ruby on Rails. It provides a range of pre-built, visually appealing components that can be easily integrated into Rails projects to accelerate development and enhance the user experience.
Show HN: Jar.tools – online Jar file opener
JAR.tools is a website that provides a collection of online tools for Java developers, including a Java decompiler, a bytecode viewer, and a JAR file explorer, among other utilities to analyze and manipulate Java applications.
Show HN: Txt2plotter – True centerline vectors from Flux.2 for pen plotters
I’ve been working on a project to bridge the gap between AI generation and my AxiDraw, and I think I finally have a workflow that avoids the usual headaches.
If you’ve tried plotting AI-generated images, you probably know the struggle: generic tracing tools (like Potrace) trace the outline of a line, resulting in double-strokes that ruin the look and take twice as long to plot.
What I tried previously:
- Potrace / Inkscape Trace: Great for filled shapes, but results in "hollow" lines for line art.
- Canny Edge Detection: Often too messy; it picks up noise and creates jittery paths.
- Standard SDXL: Struggled with geometric coherence, often breaking lines or hallucinating perspective.
- A bunch of projects that claimed to be txt2svg but which produced extremely poor results, at least for pen plotting. (Chat2SVG, StarVector, OmniSVG, DeepSVG, SVG-VAE, VectorFusion, DiffSketcher, SVGDreamer, SVGDreamer++, NeuralSVG, SVGFusion, VectorWeaver, SwiftSketch, CLIPasso, CLIPDraw, InternSVG)
My Approach:
I ended up writing a Python tool that combines a few specific technologies to get a true "centerline" vector:
1. Prompt Engineering: An LLM rewrites the prompt to enforce a "Technical Drawing" style optimized for the generator.
2. Generation: I'm using Flux.2-dev (4-bit). It seems significantly better than SDXL at maintaining straight lines and coherent geometry.
3. Skeletonization: This is the key part. Instead of tracing contours, I use Lee’s Method (via scikit-image) to erode the image down to a 1-pixel wide skeleton. This recovers the actual stroke path (see the sketch after this list).
4. Graph Conversion: The pixel skeleton is converted into a graph to identify nodes and edges, pruning out small artifacts/noise.
5. Optimization: Finally, I feed it into vpype to merge segments and sort the paths (TSP) so the plotter isn't jumping around constantly.
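To make step 3 concrete, here is a minimal sketch of the skeletonization step, assuming a dark-on-light generated image (file names are placeholders; the real pipeline follows this with graph conversion and vpype optimization):

import numpy as np
from skimage import io, morphology

img = io.imread("generated.png", as_gray=True)
ink = img < 0.5                                        # boolean mask of drawn pixels
skeleton = morphology.skeletonize(ink, method="lee")   # 1-px centerline via Lee's method
io.imsave("skeleton.png", (skeleton * 255).astype(np.uint8))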
You can see the results in the examples inside the Github repo.
The project is currently quite barebones, but it produces better results than other options I've tested so I'm publishing it. I'm interested in implementing better pre/post processing, API-based generation, and identifying shapes for cross-hatching.
Show HN: Synesthesia, make noise music with a colorpicker
This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked-on color's hex code to a "chord" in the 24-tone equal temperament (24-TET) scale. That chord is then played back using a throttled audio generation method implemented with Tone.js.
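For a rough idea of the mapping, here is one plausible version (in Python rather than JavaScript, and not necessarily the site's exact scheme): each RGB byte picks a quarter-tone step above a base pitch.

def hex_to_chord(hex_code: str, base_freq: float = 440.0):
    value = int(hex_code.lstrip("#"), 16)          # 24-bit color as an integer
    r, g, b = (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF
    steps = [byte % 24 for byte in (r, g, b)]      # three scale degrees in 24-TET
    # In 24-tone equal temperament, each step is a factor of 2**(1/24).
    return [base_freq * 2 ** (s / 24) for s in steps]

print(hex_to_chord("#3a7f2c"))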
NOTE! Turn the volume way down before using the site. It is noise music. :)
Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)
Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.
The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:
- LTTB downsampling runs as a compute shader (a CPU reference sketch follows below)
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)
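For reference, this is what LTTB (Largest-Triangle-Three-Buckets) does, written as a plain Python CPU sketch; ChartGPU itself runs the equivalent bucketing as a WebGPU compute shader, so this is only an illustration of the algorithm:

def lttb(xs, ys, n_out):
    n = len(xs)
    if n_out >= n or n_out < 3:
        return list(zip(xs, ys))
    sampled = [(xs[0], ys[0])]                      # always keep the first point
    bucket_size = (n - 2) / (n_out - 2)
    a = 0                                           # index of the last selected point
    for i in range(n_out - 2):
        start = int(i * bucket_size) + 1
        end = int((i + 1) * bucket_size) + 1
        next_start, next_end = end, min(int((i + 2) * bucket_size) + 1, n)
        avg_x = sum(xs[next_start:next_end]) / (next_end - next_start)
        avg_y = sum(ys[next_start:next_end]) / (next_end - next_start)
        # Keep the point in this bucket forming the largest triangle with the
        # previously kept point and the next bucket's average.
        best, best_area = start, -1.0
        for j in range(start, end):
            area = abs((xs[a] - avg_x) * (ys[j] - ys[a])
                       - (xs[a] - xs[j]) * (avg_y - ys[a]))
            if area > best_area:
                best, best_area = j, area
        sampled.append((xs[best], ys[best]))
        a = best
    sampled.append((xs[-1], ys[-1]))                # always keep the last point
    return sampled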
The result: 1M points at 60fps with smooth zoom/pan.
Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/
Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`
Happy to answer questions about WebGPU internals or architecture decisions.
Show HN: Teemux – Zero-config log multiplexer with built-in MCP server
I started using AI agents for coding and quickly ran into a frustrating limitation – there is no easy way to share my development environment logs with AI agents. That's what Teemux is: a simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents via MCP.
There is one implementation detail that I geek out about:
It is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts the web server; when you start a second and third instance, they join the first one and start merging logs. If you kill the first instance, a new leader is nominated. This design lets you seamlessly add and remove nodes that share logs (something that historically would have required a central log aggregator).
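One common zero-config pattern for this kind of leader nomination (not necessarily how Teemux implements it) is "first to bind the shared port wins"; re-election after the leader dies is then just retrying the bind. A minimal sketch with a hypothetical port:

import socket

PORT = 8377                                # hypothetical shared port

def start_node():
    try:
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("127.0.0.1", PORT))   # only one process can win this bind
        server.listen()
        return "leader", server            # serve the UI/MCP, accept follower log streams
    except OSError:
        follower = socket.create_connection(("127.0.0.1", PORT))
        return "follower", follower        # ship this node's logs to the current leader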
A super quick demo:
npx teemux -- curl -N https://teemux.com/random-logs