Show stories

rvermeulen98 about 10 hours ago

Show HN: Whosthere: A LAN discovery tool with a modern TUI, written in Go

Whosthere is an open-source LAN discovery tool written in Go, with a modern terminal UI for seeing which devices are on your local network.

github.com
175 61
oug-t 5 days ago

Show HN: Zsweep – Play Minesweeper using only Vim motions

zsweep.com
50 16
dobodob about 4 hours ago

Show HN: New 3D Mapping website - Create heli orbits and "playable" map tours.

easy3dmaps.com
21 11
schopra909 1 day ago

Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.

For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
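For readers who haven't seen flow matching before, the objective is simple to state: sample a point on a straight path between noise and data, and train the network to predict the velocity along that path. A minimal sketch of that training step (generic rectified-flow form; `model`, `latents`, and `text_emb` are illustrative stand-ins, not Linum's actual code):

```python
# Minimal flow-matching training step (rectified-flow form).
# `model` stands in for the DiT backbone, `latents` for VAE-encoded video,
# `text_emb` for T5 text embeddings -- all illustrative names.
import torch
import torch.nn.functional as F

def flow_matching_loss(model, latents, text_emb):
    noise = torch.randn_like(latents)                        # x_0 ~ N(0, I)
    t = torch.rand(latents.shape[0], device=latents.device)  # t ~ U[0, 1]
    t_ = t.view(-1, *([1] * (latents.dim() - 1)))            # broadcast over dims
    x_t = (1 - t_) * noise + t_ * latents                    # point on the straight path
    target = latents - noise                                 # constant velocity of that path
    pred = model(x_t, t, text_emb)                           # predicted velocity field
    return F.mse_loss(pred, target)
```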

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).

What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What's next?

- Post-training for physics/deformations

- Distillation for speed

- Audio capabilities

- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!

huggingface.co
118 23
gajus about 6 hours ago

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server

I started using AI agents for coding and quickly ran into a frustrating limitation – there was no easy way to share my development environment logs with AI agents. That's what Teemux is for: a simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents using MCP.

There is one implementation detail that I geek out about:

It is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts a web server; when you start a second and third instance, they join the first and start merging logs. If you were to kill the first instance, a new leader is nominated. This design lets you seamlessly add and remove nodes that share logs (something that historically would have required a central log aggregator).
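A minimal sketch of one way single-machine leader nomination like this can work (an assumed mechanism for illustration only – the port number and the bind-to-claim-leadership trick are mine, not necessarily teemux's):

```python
# Whoever binds the well-known local port is the leader; everyone else
# follows, and retries the bind when the leader dies.
import socket, time

PORT = 7777  # hypothetical well-known port

def serve_as_leader(sock):
    print("leader: running web/MCP server, merging follower logs")
    # ... accept follower connections and merge their log streams ...

def follow():
    print("follower: streaming logs to the current leader")
    # ... send logs until the connection to the leader drops ...

while True:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", PORT))   # success => this instance is the leader
        s.listen()
        serve_as_leader(s)
    except OSError:
        follow()                      # port taken => someone else is leader
        time.sleep(0.5)               # leader may be gone; loop and try to take over
```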

A super quick demo:

npx teemux -- curl -N https://teemux.com/random-logs

teemux.com
8 6
juanpabloaj about 3 hours ago

Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review

The gist describes workflows that connect Obsidian with Gemini for processing an inbox of notes and reviewing tasks.

gist.github.com
5 1
cannoneyed 1 day ago

Show HN: isometric.nyc – giant isometric pixel art map of NYC

Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.

I didn't write a single line of code.

Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc

cannoneyed.com
1,231 225
shikhar 2 days ago

Show HN: S2-lite, an open source Stream Store

S2 was on HN for our intro blog post a year ago (https://news.ycombinator.com/item?id=42480105). S2 started out as a serverless API — think S3, but for streams.

The idea of streams as a cloud storage primitive resonated with a lot of folks, but not having an open source option was a sticking point for adoption – especially from projects that were themselves open source! So we decided to build it: https://github.com/s2-streamstore/s2

s2-lite is MIT-licensed, written in Rust, and uses SlateDB (https://slatedb.io) as its storage engine. SlateDB is an embedded LSM-style key-value database on top of object storage, which made it a great match for delivering the same durability guarantees as s2.dev.

You can specify a bucket and path to run against an object store like AWS S3 — or skip that to run entirely in-memory. (This also makes it a great emulator for dev/test environments.)

Why not just open up the backend of our cloud service? s2.dev has a decoupled architecture with multiple components running in Kubernetes, including our own K8S operator – we made tradeoffs that optimize for operation of a thoroughly multi-tenant cloud infra SaaS. With s2-lite, our goal was to ship something dead simple to operate. There is a lot of shared code between the two that now lives in the OSS repo.

A few features remain to be implemented (notably deletion of resources and records), but s2-lite is substantially ready. Try the Quickstart in the README to stream Star Wars using the s2 CLI!

The key difference between S2 and the likes of Kafka or Redis Streams: supporting tons of durable streams. I have blogged about the landscape in the context of agent sessions (https://s2.dev/blog/agent-sessions#landscape). Kafka and NATS JetStream treat streams as provisioned resources, and the protocols/implementations are oriented around such assumptions. Redis Streams and NATS allow for larger numbers of streams, but without proper durability.

The cloud service is completely elastic, but you can also get pretty far with lite despite it being a single-node binary that needs to be scaled vertically. Streams in lite are "just keys" in SlateDB, and cloud object storage is bottomless – although of course there is metadata overhead.

One thing I am excited to improve in s2-lite is pipelining of writes for performance (already supported behind a knob, but needs upstream interface changes for safety). It's a technique we use extensively in s2.dev. Essentially when you are dealing with high latencies like S3, you want to keep data flowing throughout the pipe between client and storage, rather than go lock-step where you first wait for an acknowledgment and then issue another write. This is why S2 has a session protocol over HTTP/2, in addition to stateless REST.
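A toy asyncio illustration of the difference (illustrative only: `append` fakes a high-latency store, and real pipelining also has to keep writes ordered and handle failed acks, which is the safety issue mentioned above):

```python
import asyncio

async def append(record):
    await asyncio.sleep(0.05)         # pretend: ~50 ms round trip to object storage

async def lock_step(records):
    for r in records:                 # wait for each ack before the next write
        await append(r)

async def pipelined(records, window=32):
    in_flight = set()
    for r in records:
        if len(in_flight) >= window:  # bounded window for backpressure
            _, in_flight = await asyncio.wait(
                in_flight, return_when=asyncio.FIRST_COMPLETED)
        in_flight.add(asyncio.create_task(append(r)))
    await asyncio.gather(*in_flight)  # drain the remaining acks

# 1000 records: lock_step takes ~50 s; pipelined ~1.6 s with window=32.
asyncio.run(pipelined([b"rec"] * 1000))
```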

You can test throughput/latency for lite yourself using the `s2 bench` CLI command. The main factors are: your network quality to the storage bucket region, the latency characteristics of the remote store, SlateDB's flush interval (`SL8_FLUSH_INTERVAL=..ms`), and whether pipelining is enabled (`S2LITE_PIPELINE=true` to taste the future).

I'll be here to get thoughts and feedback, and answer any questions!

github.com
64 18
Startups_in about 3 hours ago

Show HN: Startups.in: An in-development "global" startup intelligence database

Hi HN, I'm building Startups.in, a work-in-progress startup intelligence platform that aggregates global startup profiles with funding, sector, and location data in one place. It's still rough around the edges and in active development, but I wanted to share the public version for feedback.

This started as a personal project because I wanted a clean, searchable dataset of startups across regions without jumping between multiple sources or dealing with noise I didn't want :).

The product is still very much a work in progress, but it's in a usable state and open to feedback.

What it currently does:

+ Browse startup profiles with funding and basic company metadata

+ Search and filter by industry, geography, etc.

+ View simple ecosystem trends

No signup is required to try it, though you're welcome to sign up to use watchlists, etc.

How I built it: It's backed by a custom crawler (for the data I need) and an enrichment pipeline using n8n workflows, with a lightweight web UI focused on fast querying and filtering.

What I'm trying to learn from HN:

+ What data points would make this genuinely useful to you?

+ Would an API be valuable?

+ Does the UX get in the way of exploration?

I'm actively iterating on it and happy to discuss further. Thanks.

startups.in
4 3
felarof 1 day ago

Show HN: BrowserOS – "Claude Cowork" in the browser

Hey HN! We're Nithin and Nikhil, twin brothers building BrowserOS (YC S24). We're an open-source, privacy-first alternative to the AI browsers from big labs.

The big differentiator: on BrowserOS you can use local LLMs or BYOK and run the agent entirely on the client side, so your company/sensitive data stays on your machine!

Today we're launching filesystem access... just like Claude Cowork, our browser agent can read files, write files, run shell commands! But honestly, we didn't plan for this. It turns out the privacy decision we made 9 months ago accidentally positioned us for this moment.

The architectural bet we made 9 months ago: Unlike other AI browsers (ChatGPT Atlas, Perplexity Comet) where the agent loop runs server-side, we decided early on to run our agent entirely on your machine (client side).

But building everything on the client side wasn't smooth. We initially built our agent loop inside a Chrome extension, but we kept hitting walls -- the service worker being single-threaded JS, no access to NodeJS libraries. So we made the hard decision 2 months ago to throw everything away and start from scratch.

In the new architecture, our agent loop sits in a standalone binary that we ship alongside our Chromium. And we use gemini-cli for the agent loop with some tweaks! We wrote a neat adapter to translate between Gemini format and Vercel AI SDK format. You can look at our entire codebase here: https://git.new/browseros-agent
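The adapter's job is mostly mechanical message-shape translation. Roughly (a text-only simplification in Python; the real formats also carry tool calls, images, and system prompts):

```python
# Gemini speaks {"role": "user"|"model", "parts": [{"text": ...}]};
# the Vercel AI SDK speaks {"role": "user"|"assistant", "content": ...}.
def gemini_to_aisdk(contents):
    roles = {"user": "user", "model": "assistant"}
    return [{"role": roles[c["role"]],
             "content": "".join(p.get("text", "") for p in c["parts"])}
            for c in contents]

def aisdk_to_gemini(messages):
    roles = {"user": "user", "assistant": "model"}
    return [{"role": roles[m["role"]], "parts": [{"text": m["content"]}]}
            for m in messages if m["role"] in roles]
```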

How we give browser access to filesystem: When Claude Cowork launched, we realized something: because Atlas and Comet run their agent loop server-side, there's no good way for their agent to access your files without uploading them to the server first. But our agent was already local. Adding filesystem access meant just... opening the door (with your permissions ofc). Our agent can now read and write files just like Claude Code.

What you can actually do today:

a) Organize files in my desktop folder https://youtu.be/NOZ7xjto6Uc

b) Open top 5 HN links, extract the details and write summary into a HTML file https://youtu.be/uXvqs_TCmMQ

---

Where we are now: If you haven't tried us since the last Show HN (https://news.ycombinator.com/item?id=44523409), give us another shot. The new architecture unlocked a ton of new features, and we've grown to 8.5K GitHub stars and 100K+ downloads:

c) You can now build more reliable workflows using n8n-like graph https://youtu.be/H_bFfWIevSY

d) You can also use BrowserOS as an MCP server in Cursor or Claude Code https://youtu.be/5nevh00lckM

We are very bullish on the browser being the right platform for a Claude Cowork-like agent. The browser is the most commonly used app among knowledge workers (emails, docs, spreadsheets, research, etc.). And even Anthropic recognizes this -- for Claude Cowork, they have a janky integration with the browser via a Chrome extension. But owning the entire stack allows us to build differentiated features that wouldn't be possible otherwise. Ex: browser ACLs.

Agents can do dumb or destructive things, so we're adding browser-level guardrails (think IAM for agents): "role(agent): can never click buy" or "role(agent): read-only access on my bank's homepage."
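A toy version of what such a guardrail check could look like (the rule shapes here are invented for illustration; BrowserOS's actual ACL design may differ):

```python
# Declarative deny rules evaluated before every agent action.
RULES = [
    {"action": "click", "selector_contains": "buy", "allow": False},
    {"url_prefix": "https://mybank.example", "action": "write", "allow": False},
]

def is_allowed(action, url, selector=""):
    for rule in RULES:                # first matching rule wins
        if rule.get("action", action) != action:
            continue
        if "url_prefix" in rule and not url.startswith(rule["url_prefix"]):
            continue
        if "selector_contains" in rule and rule["selector_contains"] not in selector:
            continue
        return rule["allow"]
    return True                       # default allow

assert not is_allowed("click", "https://shop.example", "button#buy-now")
assert is_allowed("read", "https://mybank.example/home")
```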

Curious to hear your take on this and the overall thesis.

We’ll be in the comments. Thanks for reading!

GitHub: https://github.com/browseros-ai/BrowserOS

Download: https://browseros.com (available for Mac, Windows, Linux!)

github.com
80 32
mraspuzzi about 4 hours ago

Show HN: Claude Tutor – an open source engineering tutor

We used the Claude Agent SDK to make Claude Tutor. Its main goal is to increase human knowledge, understanding, and agency.

It's an email and CLI agent to help people level up their software engineering skills.

We think there's too much focus on AI agency right now and not enough on human agency.

Open sourced, and curious for feedback! This is v0.1 so it's hella early.

P.S. The next step is to get this working on the Open Agent SDK and explore other interfaces.

twitter.com
3 0
briskibe about 4 hours ago

Show HN: Cholesterol Tracker – Built after high cholesterol diagnosis at 33

After my annual checkup showed LDL 4.4 mmol/L (170 mg/dL) and triglycerides 2.0 mmol/L at 33, I tried tracking with ChatGPT (lost data when context got too big), then spreadsheets (too tedious).
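For readers who think in the other unit system, the conversions are fixed molar-mass factors (cholesterol: ×38.67, triglycerides: ×88.57 for mmol/L → mg/dL):

```python
# Standard lipid unit conversions from mmol/L to mg/dL.
def chol_to_mgdl(mmol_l):
    return mmol_l * 38.67   # cholesterol (total, LDL, HDL)

def trig_to_mgdl(mmol_l):
    return mmol_l * 88.57   # triglycerides use a different molar mass

print(round(chol_to_mgdl(4.4)))  # 170 -- matches the LDL figure above
print(round(trig_to_mgdl(2.0)))  # 177
```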

Built a simple tracker focused on cholesterol. Log meals, see lipid breakdown, track trends. I believe snacks and sugar were my main issue.

Stack: Angular 17 + NestJS + Supabase

Started January 1st, already lost 3kg. Same breakfast daily (psyllium, oats, chia, skyr, whey, berries), cut sugar from daily to once per week.

Free during beta. Looking for feedback on whether strict diet cutting or 80/20 approach is more sustainable long-term.

cholesterol-tracker.poniansoft.com
2 0
lilouartz 1 day ago

Show HN: I've been using AI to analyze every supplement on the market

Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies.

My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps:

1.) I index every supplement on the market (extract each ingredient, normalize by quantity)

2.) I index every research paper on supplementation (rank every claim by effect type and effect size)

3.) I link data between supplements and research papers
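The normalization in step 1 is mostly unit bookkeeping. A minimal sketch (the field handling is illustrative, not Pillser's actual schema; IU is left untouched because its mg equivalent is substance-specific):

```python
# Convert extracted ingredient quantities to a common unit (mg).
TO_MG = {"mg": 1.0, "g": 1000.0, "mcg": 0.001, "ug": 0.001}

def normalize(name, amount, unit):
    unit = unit.lower()
    if unit not in TO_MG:
        return name, amount, unit      # e.g. IU: depends on the substance
    return name, amount * TO_MG[unit], "mg"

print(normalize("Vitamin C", 500, "mcg"))   # ('Vitamin C', 0.5, 'mg')
```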

Earlier last year, I put the project on pause because I ran into a few issues:

Legal: Shady companies were sending C&D letters demanding their products be taken down from the website. That's not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredient-to-price ratio.

Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of or find where LLMs are bad at interpreting data.

Business: I still haven't figured out how to monetize it or even who the target customer is.

Despite these challenges, I decided to restart my journey.

My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes, supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter – better education about natural supplementation).

Some things are helping my cause – Bryan Johnson's journey (Blueprint) has drawn a lot more attention to healthy supplementation. Thanks to Bryan's efforts, so many people have reached out in recent months to ask about the state of the project – interest I'd not had before.

I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated.

Some key areas of the website:

* Example of navigating supplements by ingredient https://pillser.com/search?q=%22Vitamin+D%22&s=jho4espsuc

* Example of research paper analyzed using AI https://pillser.com/research-papers/effect-of-lactobacillus-...

* Example of looking for very specific strains or ingredients https://pillser.com/probiotics/bifidobacterium-bifidum

* Example of navigating research by health-outcomes https://pillser.com/health-outcomes/improved-intestinal-barr...

* Example of product listing https://pillser.com/supplements/pb-8-probiotic-663

pillser.com
82 43
tsanummy 4 days ago

Show HN: Txt2plotter – True centerline vectors from Flux.2 for pen plotters

I’ve been working on a project to bridge the gap between AI generation and my AxiDraw, and I think I finally have a workflow that avoids the usual headaches.

If you’ve tried plotting AI-generated images, you probably know the struggle: generic tracing tools (like Potrace) trace the outline of a line, resulting in double-strokes that ruin the look and take twice as long to plot.

What I tried previously:

- Potrace / Inkscape Trace: Great for filled shapes, but results in "hollow" lines for line art.

- Canny Edge Detection: Often too messy; it picks up noise and creates jittery paths.

- Standard SDXL: Struggled with geometric coherence, often breaking lines or hallucinating perspective.

- A bunch of projects that claimed to be txt2svg but which produced extremely poor results, at least for pen plotting. (Chat2SVG, StarVector, OmniSVG, DeepSVG, SVG-VAE, VectorFusion, DiffSketcher, SVGDreamer, SVGDreamer++, NeuralSVG, SVGFusion, VectorWeaver, SwiftSketch, CLIPasso, CLIPDraw, InternSVG)

My Approach:

I ended up writing a Python tool that combines a few specific technologies to get a true "centerline" vector:

1. Prompt Engineering: An LLM rewrites the prompt to enforce a "Technical Drawing" style optimized for the generator.

2. Generation: I'm using Flux.2-dev (4-bit). It seems significantly better than SDXL at maintaining straight lines and coherent geometry.

3. Skeletonization: This is the key part. Instead of tracing contours, I use Lee’s Method (via scikit-image) to erode the image down to a 1-pixel wide skeleton. This recovers the actual stroke path.

4. Graph Conversion: The pixel skeleton is converted into a graph to identify nodes and edges, pruning out small artifacts/noise.

5. Optimization: Finally, I feed it into vpype to merge segments and sort the paths (TSP) so the plotter isn't jumping around constantly.
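Steps 3 and 4 in a minimal sketch (the `skeletonize(..., method="lee")` call is the scikit-image API the post refers to; the graph construction here is a simplified 8-connectivity version of the idea):

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def skeleton_graph(binary_img):
    """binary_img: 2D bool array, True where the line art is inked."""
    skel = skeletonize(binary_img, method="lee")   # erode to a 1-px centerline
    g = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:                            # link 8-connected neighbors
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy or dx) and (y + dy, x + dx) in pixels:
                    g.add_edge((y, x), (y + dy, x + dx))
    # Nodes with degree != 2 are endpoints/junctions; the paths between them
    # are the strokes to prune, merge, and hand off to vpype.
    return g
```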

You can see the results in the examples inside the Github repo.

The project is currently quite barebones, but it produces better results than other options I've tested so I'm publishing it. I'm interested in implementing better pre/post processing, API-based generation, and identifying shapes for cross-hatching.

github.com
33 7
anticlickwise 5 days ago

Show HN: Interactive physics simulations I built while teaching my daughter

I started teaching my daughter physics by showing her how things actually work - plucking guitar strings to explain vibration, mixing paints to understand light, dropping objects to see gravity in action.

She learned so much faster through hands-on exploration than through books or videos. That's when I realized: what if I could recreate these physical experiments as interactive simulations?

Lumen is the result - an interactive physics playground covering sound, light, motion, life, and mechanics. Each module lets you manipulate variables in real-time and see/hear the results immediately.

Try it: https://www.projectlumen.app/

projectlumen.app
83 21
tariqshams about 6 hours ago

Show HN: MermaidTUI - Deterministic Unicode/ASCII diagrams in the terminal

Hi HN, I built mermaidtui, a lightweight TypeScript engine that renders Mermaid flowcharts directly in your terminal as clean Unicode or ASCII boxes.

Visualizing Mermaid diagrams usually requires a heavy setup: a headless browser (Puppeteer/Playwright), SVG-to-image conversion, or a web preview. That's fine for documentation sites, but it's overkill for TUI apps, CI logs, or quick terminal previews.

The solution is a small engine (<= 1000 LOC) that uses a deterministic grid-based layout to render diagrams using box-drawing characters. Key features:

- Intelligent Routing: It uses corner characters (┌, ┐, └, ┘) for orthogonal paths.

- Topological Layering: Attempts a readable, structured layout.

- Support for Chained Edges: A --> B --> C works out of the box.

- Zero Heavy Dependencies: No Mermaid internals, no Chromium, just pure TypeScript/JavaScript. (commander is a dependency of the CLI only, not of the MermaidTUI library.)
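The core move behind a renderer like this is pleasantly simple: paint boxes and corner-routed edges onto a 2D character grid, then join the rows. A tiny illustration of the idea in Python (not the library's actual layout algorithm):

```python
# Paint a labeled box onto a character grid using box-drawing characters.
def draw_box(grid, x, y, label):
    w = len(label) + 2
    grid[y][x:x + w + 2] = "┌" + "─" * w + "┐"
    grid[y + 1][x:x + w + 2] = "│ " + label + " │"
    grid[y + 2][x:x + w + 2] = "└" + "─" * w + "┘"

grid = [list(" " * 20) for _ in range(3)]
draw_box(grid, 0, 0, "A")
draw_box(grid, 10, 0, "B")
grid[1][5:10] = "────▶"                 # straight edge; corners use ┌ ┐ └ ┘
print("\n".join("".join(row).rstrip() for row in grid))
# ┌───┐     ┌───┐
# │ A │────▶│ B │
# └───┘     └───┘
```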

I wanted a way to see high-quality diagrams in my CLI tools quickly; it's great for SSH sessions where you can't easily open an SVG. I was initially embedding this within a CLI tool I'm working on and figured I'd extract a library for others to use. I also initially used regex to parse, but I've since made the parser more robust. I'd love to hear your thoughts on the layout engine or any specific Mermaid syntax you'd like to see supported next!

GitHub: https://github.com/tariqshams/mermaidtui

npm i mermaidtui

github.com
3 0
patrick4urcloud about 6 hours ago

Show HN: RTK – Simple CLI to reduce token usage in your LLM prompts

I built this small tool for my own use to reduce the number of tokens I send to LLMs (Claude Code, etc.). It’s just a simple utility to filter command outputs before they hit the context.

Here is what I’m getting with it so far:

I’m putting it out there in case it's useful to anyone else. It's written in Rust.

P.S. This is just a tool I built for my own needs and decided to share. If you have constructive feedback on the Rust code or the logic, I'd love to hear it. If it's not for you, that's totally fine too—no need for "angry" comments, just trying to be helpful!

github.com
2 1
possiblelion 5 days ago

Show HN: AskUCP – UCP protocol explorer showing all products on Shopify

On January 11th, Google and Shopify announced the Universal Commerce Protocol (ucp.dev). It's an open standard that lets any application query products across e-commerce platforms without needing APIs, integrations, or middlemen.

AskUCP is one of the first applications built on it.

Right now, if you want to buy something online, you have to know which store sells it. You go to Amazon, or you go to a Shopify store, or you go to Etsy. Each one has its own search, its own interface, its own checkout. The experience is fragmented because the infrastructure is siloed.

UCP changes this at the protocol level. If products are described in a standard format, any application can discover them. You don't need permission from each platform. You don't need to build integrations. Anybody, or any AI agent, just queries the protocol.

AskUCP is designed to be a single pane of glass into online commerce. You search once, and you see products from across the ecosystem. Currently, that means the entire Shopify catalog. As more platforms adopt UCP, their products become explorable too. Eventually, it should be everything.

This is a proof of concept. It's early, and there are rough edges. Let me know what you think, refinements, ideas etc etc.

askucp.com
10 5
capela about 9 hours ago

Show HN: A social network populated only by AI models

AIFeed.social is a social network populated entirely by AI models rather than human users.

aifeed.social
8 8
tevans3 1 day ago

Show HN: Synesthesia, make noise music with a colorpicker

This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked-on color's hex code to a "chord" in the 24-tone equal temperament scale. That chord is then played back using a throttled audio-generation method implemented via Tone.js.
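One plausible reading of that mapping, for the curious (the site's exact scheme may differ; what's certain is that in 24-TET, step n above a base pitch is base × 2^(n/24)):

```python
# Map a hex color to a four-note "chord" of 24-TET frequencies.
def hex_to_chord(hex_color, base_hz=220.0):
    bits = int(hex_color.lstrip("#"), 16)     # the 24-bit RGB value
    steps = [(bits >> s) & 0b11111 for s in (0, 6, 12, 18)]  # four 5-bit fields
    return [base_hz * 2 ** (n / 24) for n in steps]

print([round(f, 1) for f in hex_to_chord("#3fa7c2")])
```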

NOTE! Turn the volume way down before using the site. It is noise music. :)

visualnoise.ca
35 13
williamzeng0 2 days ago

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete

Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here (https://huggingface.co/sweepai/sweep-next-edit-1.5b) or try it in our JetBrains plugin (https://plugins.jetbrains.com/plugin/26860-sweep-ai-autocomp...).

Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy.

We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small.

Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand.
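For context, here's the difference between the two representations (illustrative; the post doesn't show Sweep's exact prompt template):

```python
# Simple original/updated blocks: the model reproduces whole snippets.
edit = """<original>
def add(a, b):
    return a - b
</original>
<updated>
def add(a, b):
    return a + b
</updated>"""

# The unified-diff equivalent makes the model juggle @@ offsets and
# per-line +/- prefixes, which small models get wrong more often.
diff = """@@ -1,2 +1,2 @@
 def add(a, b):
-    return a - b
+    return a + b"""
```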

Training was SFT on ~100k examples from permissively-licensed repos (4hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can't, like generating code that doesn't parse or overly verbose outputs.

We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it!

huggingface.co
522 147
SerafimKorablev 1 day ago

Show HN: First Claude Code client for Ollama local models

Just to clarify the background a bit. This project wasn’t planned as a big standalone release at first. On January 16, Ollama added support for an Anthropic-compatible API, and I was curious how far this could be pushed in practice. I decided to try plugging local Ollama models directly into a Claude Code-style workflow and see if it would actually work end to end.

Here is the release note from Ollama that made this possible: https://ollama.com/blog/claude

Technically, what I do is pretty straightforward:

- Detect which local models are available in Ollama.

- When internet access is unavailable, the client automatically switches to Ollama-backed local models instead of remote ones.

- From the user’s perspective, it is the same Claude Code flow, just backed by local inference.
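The detection step maps onto Ollama's local HTTP API. A rough sketch of the detect-and-fall-back logic (the connectivity check is a simplified stand-in for the client's actual logic):

```python
import json, socket, urllib.request

def local_models():
    # Ollama lists installed models at GET /api/tags on its default port.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as r:
        return [m["name"] for m in json.load(r)["models"]]

def online(host="api.anthropic.com", port=443):
    try:
        socket.create_connection((host, port), timeout=2).close()
        return True
    except OSError:
        return False

backend = "remote" if online() else "ollama/" + local_models()[0]
print("using:", backend)
```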

In practice, the best-performing model so far has been qwen3-coder:30b. I also tested glm-4.7-flash, which was released very recently, but it struggles with reliably following tool-calling instructions, so it is not usable for this workflow yet.

twitter.com
43 22
schappim 1 day ago

Show HN: CLI for working with Apple Core ML models

The CoreML-CLI is a command-line tool that simplifies the integration of Core ML models into iOS and macOS applications. It provides an easy-to-use interface for converting various model formats, including TensorFlow, PyTorch, and ONNX, into the Core ML format.

github.com
45 5
epsteingpt 1 day ago

Show HN: Bible translated using LLMs from source Greek and Hebrew

Built an auditable AI (Bible) translation pipeline: Hebrew/Greek source packets -> verse JSON with notes rolling up to chapters, books, and testaments. Final texts compiled with metrics (TTR, n-grams).
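TTR here is the type-token ratio: distinct words divided by total words, a rough gauge of lexical variety. For example:

```python
def ttr(text):
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

print(ttr("in the beginning God created the heavens and the earth"))  # 0.8
```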

This is the first full-text example as far as I know (Gen Z bible doesn't count).

There are hallucinations and issues, but the overall quality surprised me.

LLMs show a lot of promise for translating ancient texts and rendering them more 'accessible'.

The technology has a lot of benefit for the faithful that, I think, is only beginning to be explored.

biblexica.com
49 64
justalever 2 days ago

Show HN: Rails UI

RailsUI is a comprehensive open-source library of UI components and design tools for building modern, responsive web applications with Ruby on Rails. It provides a range of pre-built, visually appealing components that can be easily integrated into Rails projects to accelerate development and enhance the user experience.

railsui.com
204 109
huntergemmer 2 days ago

Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)

Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.

The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:

- LTTB downsampling runs as a compute shader

- Hit-testing for tooltips/hover is GPU-accelerated

- Rendering uses instanced draws (one draw call per series)
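For reference, LTTB (largest-triangle-three-buckets) is a standard downsampler: keep the first and last points, bucket the rest, and from each bucket keep the point that forms the largest triangle with the previously kept point and the next bucket's average. A plain-Python reference of the algorithm the shader parallelizes:

```python
def lttb(xs, ys, n_out):
    n = len(xs)
    if n_out >= n or n_out < 3:
        return list(zip(xs, ys))
    out = [(xs[0], ys[0])]
    every = (n - 2) / (n_out - 2)
    a = 0                                     # index of the last kept point
    for i in range(n_out - 2):
        # Average of the *next* bucket: the third vertex of the triangle.
        n0, n1 = int((i + 1) * every) + 1, min(int((i + 2) * every) + 1, n)
        ax = sum(xs[n0:n1]) / (n1 - n0)
        ay = sum(ys[n0:n1]) / (n1 - n0)
        # Keep the current bucket's point with the largest triangle area.
        lo, hi = int(i * every) + 1, int((i + 1) * every) + 1
        area = lambda j: abs((xs[a] - ax) * (ys[j] - ys[a])
                             - (xs[a] - xs[j]) * (ay - ys[a]))
        a = max(range(lo, hi), key=area)
        out.append((xs[a], ys[a]))
    out.append((xs[-1], ys[-1]))
    return out
```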

The result: 1M points at 60fps with smooth zoom/pan.

Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/

Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`

Happy to answer questions about WebGPU internals or architecture decisions.

github.com
662 209
calcsam 3 days ago

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.

Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what’s changed.

If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we’ve added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.

- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
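To make the model-routing idea concrete, here's roughly what string-based routing with fallbacks boils down to (illustrative Python; Mastra itself is TypeScript, and `Provider` here is a stub):

```python
class Provider:
    def __init__(self, name):
        self.name = name
    def generate(self, model, prompt):
        # Stand-in for a real provider API call.
        return f"[{self.name}/{model}] reply to: {prompt}"

PROVIDERS = {"openai": Provider("openai"), "anthropic": Provider("anthropic")}

def generate(prompt, model_strings):
    for ms in model_strings:                  # first model that succeeds wins
        provider, model = ms.split("/", 1)
        try:
            return PROVIDERS[provider].generate(model, prompt)
        except Exception:
            continue                          # fall back to the next entry
    raise RuntimeError("all models failed")

print(generate("hello", ["openai/gpt-5.2-codex", "anthropic/claude"]))
```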

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!

github.com
213 69
borenstein 2 days ago

Show HN: yolo-cage – AI coding agents that can't exfiltrate secrets

I made this for myself, and it seemed like it might be useful to others. I'd love some feedback, both on the threat model and the tool itself. I hope you find it useful!

Backstory: I've been using many agents in parallel as I work on a somewhat ambitious financial analysis tool. I was juggling agents working on epics for the linear solver, the persistence layer, the front-end, and planning for the second-generation solver. I was losing my mind playing whack-a-mole with the permission prompts. YOLO mode felt so tempting. And yet.

Then it occurred to me: what if YOLO mode isn't so bad? Decision fatigue is a thing. If I could cap the blast radius of a confused agent, maybe I could just review once. Wouldn't that be safer?

So that day, while my kids were taking a nap, I decided to see if I could put YOLO-mode Claude inside a sandbox that blocks exfiltration and regulates git access. The result is yolo-cage.

Also: the AI wrote its own containment system from inside the system's own prototype. Which is either very aligned or very meta, depending on how you look at it.

github.com
59 74
death_eternal 1 day ago

Show HN: I'm writing an alternative to Lutris

It's free and open source. The aim is to offer more transparent access to wine prefixes and the surrounding tooling (winetricks, Proton configuration, etc.) per game than Lutris does. Features like per-game statistics (time played, times launched, times crashed, and so on) are also available in the app.

github.com
14 4
crazyguitar about 17 hours ago

Show HN: C/C++ Cheatsheet – a modern, practical reference for C and C++

Hi HN,

I’m the creator of C/C++ Cheatsheet — a modern, practical reference for both C and C++ developers. It includes concise snippet-style explanations of core language features, advanced topics like coroutines and constexpr, system programming sections, debugging tools, and useful project setups. You can explore it online at https://cppcheatsheet.com/.

I built this to help both beginners and experienced engineers quickly find clear examples and explanations without digging through fragmented blogs or outdated docs. It’s open source, regularly maintained, and contributions are welcome on GitHub.

If you've ever wanted a lightweight, example-focused guide to:

- Modern C++ (templates, lambdas, concepts)

- C fundamentals and memory handling

- System programming

- Debugging & profiling

…this site aims to be that resource.

Any feedback is welcome. Thank you.

github.com
7 5