Show stories

aed 2 days ago

Show HN: AI agents play SimCity through a REST API

This is a weekend project that spiraled out of control. I was originally trying to get Claude to play a ROM of the SNES SimCity. I struggled with that, which led me to Micropolis (the open-sourced SimCity engine), and I got it working by bolting an API onto it.

The weekend hack turned into a headless city simulation platform where anyone can get an API key (no signup) and have their AI agent play mayor. The simulation runs the real Micropolis engine inside Cloudflare Durable Objects, one per city. Every city is public and browsable on the site.

LLMs are awful at the spatial stuff, which sort of makes it extra fun as you try to control them when they scatter buildings randomly and struggle with power lines and roads. A little like dealing with a toddler.

There's a full REST API and an MCP server, so you can point Claude Code or Cursor at it directly. You can usually get agents building in seconds.
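
To give a flavour of what driving a city over the API looks like, here's a minimal Python sketch. The endpoint names and payload fields below are illustrative placeholders I made up, not the real routes; check the API docs linked below for the actual interface.

  import requests

  BASE = "https://hallucinatingsplines.com/api"  # hypothetical base path

  # Hypothetical endpoints for illustration only.
  key = requests.post(f"{BASE}/keys").json()["api_key"]  # no-signup API key
  city = requests.post(f"{BASE}/cities",
                       headers={"Authorization": f"Bearer {key}"},
                       json={"name": "Agentville"}).json()

  # An agent loop: read the map and budget, then place zones, roads, and power.
  state = requests.get(f"{BASE}/cities/{city['id']}",
                       headers={"Authorization": f"Bearer {key}"}).json()
  requests.post(f"{BASE}/cities/{city['id']}/actions",
                headers={"Authorization": f"Bearer {key}"},
                json={"tool": "residential_zone", "x": 10, "y": 12})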

Website: https://hallucinatingsplines.com

API docs: https://hallucinatingsplines.com/docs

GitHub: https://github.com/andrewedunn/hallucinating-splines

Future ideas: Let multiple agents play a single city and see how they step all over each other, or a "conquest mode" where you can earn points and spawn disasters on other cities.

hallucinatingsplines.com
37 5
nixus76 about 16 hours ago

Show HN: Itsyhome – Control HomeKit from your Mac menu bar (open source)

Hey HN!

Nick here – developer of Itsyhome, a menu bar app for macOS that gives you control over your whole HomeKit fleet (and very soon Home Assistant). I run 130+ HomeKit devices at home and the Home app was too heavy for quick adjustments.

Full HomeKit support, favourites, hidden items, device groups, pinning of rooms/accessories/groups as separate menu bar items, iCloud sync – all in a native experience and tiny package.

Open source (https://github.com/nickustinov/itsyhome-macos) and free to use (there is an optional one-time purchase for a Pro version which includes cameras and automation features).

Itsyhome is a Mac Catalyst app because HomeKit requires the iOS SDK. It runs a headless Catalyst process for HomeKit (and now Home Assistant) access, and uses a native AppKit plugin over a bridge protocol for the actual menu bar UI, since AppKit gives you the real macOS menu bar experience that Catalyst alone can't.

It comes with deeplink support, a webhook server, a CLI tool (golang, all open source), a Stream Deck plugin (open source, all accessories supported), and the recent update also includes an SSE event stream (HomeKit and HA) - you can curl -N localhost:8423/events and get a real-time JSON stream of every device state change in your home.
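
If you'd rather consume that event stream from a script than from curl, a rough Python sketch looks like this. The port and path come from the post above; the exact JSON fields per event depend on the accessory, so the code just prints whatever arrives.

  import json
  import requests

  # Stream server-sent events from the local SSE endpoint and print each one.
  with requests.get("http://localhost:8423/events", stream=True) as resp:
      for raw in resp.iter_lines(decode_unicode=True):
          if raw and raw.startswith("data:"):
              event = json.loads(raw[len("data:"):].strip())
              print(event)  # one JSON object per device state change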

The Home Assistant version is still in beta – would anyone be willing to test it via TestFlight?

Appreciate any feedback and happy to answer any questions.

itsyhome.app
36 38
seansh 3 days ago

Show HN: CodeMic

With CodeMic you can record and share coding sessions directly inside your editor.

Think Asciinema, but for full coding sessions with audio, video, and images.

While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video.

Local first, and open source.

p.s. I’ve been working on this for a little over two years and would love to hear your thoughts.

codemic.io
36 17
bizzz about 2 hours ago

Show HN: I tried to build a soundproof sleep capsule

Hi HN,

I've struggled with apartment noise for years, so I attempted to engineer a mechanical solution: a decoupled, mass-loaded sleep capsule.

I went down a deep rabbit hole involving:

- Mass Law vs. decoupling

- Building a prototype cube

- Accidentally creating a resonance chamber (my prototype amplified bass by ~10dB)

- Pivoting to acoustic metamaterials (Helmholtz resonators) and parametric CAD

The project was ultimately a failure in terms of silence, but a success in understanding acoustics and regaining a sense of agency. I wrote up the physics, the build process, and the mistakes here.
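
For readers who want the two formulas behind those bullet points, here's a back-of-the-envelope sketch using the standard textbook approximations (not numbers from the actual build):

  import math

  def mass_law_tl(f_hz, surface_density_kg_m2):
      """Field-incidence mass law: TL ~= 20*log10(m*f) - 47 dB."""
      return 20 * math.log10(surface_density_kg_m2 * f_hz) - 47

  def helmholtz_hz(neck_area_m2, cavity_vol_m3, neck_len_m, c=343.0):
      """Helmholtz resonance f = c/(2*pi) * sqrt(A / (V*L)); a real design
      would add an end correction to the neck length."""
      return c / (2 * math.pi) * math.sqrt(neck_area_m2 / (cavity_vol_m3 * neck_len_m))

  # e.g. a 12 kg/m^2 panel at 63 Hz, and a 1 L cavity with a 2 cm diameter, 3 cm neck
  print(round(mass_law_tl(63, 12), 1), "dB")   # mass alone does little at low frequency
  print(round(helmholtz_hz(math.pi * 0.01**2, 0.001, 0.03)), "Hz")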

Happy to answer questions about the build.

lepekhin.com
3 0
vkaufmann about 9 hours ago

Show HN: I taught GPT-OSS-120B to see using Google Lens and OpenCV

I built an MCP server that gives any local LLM real Google search and now vision capabilities - no API keys needed.

The latest feature: google_lens_detect uses OpenCV to find objects in an image, crops each one, and sends them to Google Lens for identification. GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
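
The detect-and-crop half of that is roughly this kind of OpenCV routine (a simplified sketch of the idea, not the actual implementation in the repo):

  import cv2

  def crop_objects(image_path, min_area=5000):
      """Find large contours in the photo and return a cropped region for each."""
      img = cv2.imread(image_path)
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      crops = []
      for c in contours:
          x, y, w, h = cv2.boundingRect(c)
          if w * h >= min_area:
              crops.append(img[y:y + h, x:x + w])
      return crops  # each crop then goes to Google Lens for identification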

Also includes Google Search, News, Shopping, Scholar, Maps, Finance, Weather, Flights, Hotels, Translate, Images, Trends, and more. 17 tools total.

Two commands:

  pip install noapi-google-search-mcp && playwright install chromium

GitHub: https://github.com/VincentKaufmann/noapi-google-search-mcp

PyPI: https://pypi.org/project/noapi-google-search-mcp/

Booyah!

40 26
Gravityloss about 2 hours ago

Show HN: Musical Interval Trainer

This web app is a musical interval trainer: practice identifying intervals in an interactive way to improve your music theory skills.

valtterimaja.github.io
5 2
segmenta about 22 hours ago

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
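
As a rough illustration of the "linked Markdown" part (not Rowboat's actual code, just the generic Obsidian-style [[wikilink]] convention), building a backlink index over a folder of notes is roughly this much work:

  import re
  from collections import defaultdict
  from pathlib import Path

  WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note Title]] and [[Note Title|alias]]

  def backlinks(vault_dir):
      """Map each note title to the set of notes that link to it."""
      index = defaultdict(set)
      for note in Path(vault_dir).rglob("*.md"):
          for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
              index[target.strip()].add(note.stem)
      return index

  # backlinks("vault/")["Q3 Roadmap"] -> every note that references that project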

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

github.com
176 47
thisisjedr 2 days ago

Show HN: JavaScript-first, open-source WYSIWYG DOCX editor

We needed a JS-first WYSIWYG DOCX editor and couldn't find a solid OSS option; most were either commercial or abandoned.

As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.

Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.

github.com
114 39
moshmage about 3 hours ago

Show HN: Baby Vault – A 100% offline, privacy-first PWA for new parents

babyvault.moshmage.com
2 1
yixn_io about 3 hours ago

Show HN: I built managed OpenClaw hosting with 60s provisioning in 6 days

Hey HN,

I'm Daniel, solo dev from Germany. I built ClawHosters (https://clawhosters.com), a managed hosting platform for OpenClaw, the open-source AI agent framework.

Quick timeline: domain registered February 5th. First paying customer six days later. I probably should have spent more time on it, but it works.

If you haven't seen OpenClaw, it lets you run a personal AI assistant that connects to Telegram, Discord, Slack, and WhatsApp. Self-hosting it is absolutely possible, but it's a pain. You're dealing with Docker setup, SSL certs, port forwarding, security hardening, keeping the image updated. Most people don't want to deal with any of that. They just want the thing running.

That's what ClawHosters does. You pick a tier (EUR 19-59/mo), click create, and you've got a running instance with a subdomain. About 60 seconds if we have prewarmed capacity, maybe 90 seconds from a cold snapshot.

Some technical details that might interest this crowd:

*Subdomain routing chain.* Every instance gets a subdomain like `mybot.clawhosters.com`. The request path is Cloudflare -> my production server -> Traefik (looks up VPS IP from Redis) -> customer's Hetzner VPS -> nginx on the VPS (validates Host header) -> Docker container (port 18789) -> OpenClaw gateway. All subdomains require HTTP Basic Auth, configured per-instance through Traefik Redis middleware keys. The VPS itself only accepts connections from my production server's IP via Hetzner Cloud Firewall. No way to hit it directly.

*Prewarmed VPS pool.* Even from a snapshot, Hetzner VPS creation takes ~30-60 seconds. That felt too slow. So I maintain a pool of idle, pre-provisioned VPS instances sitting there ready to go. When someone creates an instance, we claim one from the pool, upload the config via SCP, run docker-compose up, done. The pool refills in the background.

*Security is 4 layers deep.* Hetzner Cloud Firewall restricts all VPS inbound traffic to only my production server IP. Host iptables (baked into the snapshot) add OS-level rules with SMTP/IRC blocking. SSH is key-only on both host port 22 and container port 2222, so brute-forcing isn't happening. fail2ban on top of that, and the Docker daemon runs with no-new-privileges. Probably overkill. I'm fine with that.

*SSH into the Docker container.* Users can enable SSH access to their actual container (port 2222). I built a custom image extending OpenClaw with an SSH server, key-only auth, no passwords. Fair warning though: enabling SSH permanently marks the instance as no_support. Once you're installing your own stuff in there, I can't guarantee stability anymore.

*Container commit for state preservation.* This one was tricky to get right. Users can install packages (apt, pip, npm) inside their container. Before any restart or redeploy, `CommitContainerService` runs `docker commit` to save the full filesystem as a new image. Next startup uses the committed image instead of the base one. Basically snapshotting your container's state so nothing gets lost.
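
The commit-before-restart flow is simple to sketch. Illustrative Python around the Docker CLI; the real CommitContainerService lives in the Rails app and will differ in detail:

  import subprocess

  def commit_then_restart(container, image_tag):
      """Snapshot the container filesystem, then bring it back up from that image."""
      # Preserve user-installed packages (apt/pip/npm) as a committed image.
      subprocess.run(["docker", "commit", container, image_tag], check=True)
      subprocess.run(["docker", "stop", container], check=True)
      subprocess.run(["docker", "rm", container], check=True)
      # The next start references image_tag instead of the base image.
      subprocess.run(["docker", "run", "-d", "--name", container, image_tag], check=True)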

I wrote a more detailed technical post about the architecture here: [link to blog post]

The whole thing runs inside a single Rails app that also serves my portfolio site (https://yixn.io). One person, one codebase, real paying customers. I'm happy to answer questions about the architecture, the Hetzner API, or the tradeoffs I made along the way.

Source isn't open yet, but I'm thinking about open-sourcing the provisioning layer. Haven't decided.

https://clawhosters.com

clawhosters.com
2 0
n1sni 1 day ago

Show HN: I built a macOS tool for network engineers – it's called NetViews

Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:

- ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).

- Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.

Wireless Visibility:

- Detailed Wi-Fi connection performance and signal data.

- Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:

- Connection and ping timelines over 1, 2, 4, or 8 hours.

- Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):

- Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.

- mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.
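
For anyone curious what "ARP tables first" buys you: the OS already knows about recently seen neighbours, so you can enumerate them without sending a single probe. A rough Python equivalent of that first pass (NetViews does this natively in Swift):

  import re
  import subprocess

  def arp_table():
      """Parse `arp -an` on macOS/BSD into (ip, mac) pairs the kernel already knows."""
      out = subprocess.run(["arp", "-an"], capture_output=True, text=True).stdout
      entries = []
      for line in out.splitlines():
          m = re.search(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]+)", line, re.I)
          if m:  # incomplete entries have no MAC and simply don't match
              entries.append((m.group(1), m.group(2)))
      return entries  # anything not in here gets a follow-up ICMP probe

  print(arp_table())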

I’d love feedback from anyone who builds or uses network diagnostic tools:

- Does this fill a gap you’ve personally hit on macOS?

- Are there better approaches to scan speed or event visualization that you’ve used?

- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app

There’s a free trial and paid licenses; I’m funding development through direct sales rather than ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

netviews.app
224 55
louis_w_gk 1 day ago

Show HN: Distr 2.0 – A year of learning how to ship to customer environments

A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.

It turned out things get messy when your software is running in places you can't simply SSH into.

Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.

We waited a year before making our first breaking change, which led to a major SemVer update—but it was eventually necessary. We needed to completely rewrite how we manage customer organizations. In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application that wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations. Vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.

Other notable features we’ve implemented since our first launch:

- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.

- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.

- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.

- Secret Management, so database passwords don’t show up in configuration steps or logs.

Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.

For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.

Distr is fully open source and self-hostable: https://github.com/distr-sh/distr

Docs: https://distr.sh/docs

We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.

github.com
93 29
jacobsyc about 3 hours ago

Show HN: I built a tool for lazy founders – it's called BunnyDesk

BunnyDesk.ai is a platform offering virtual office spaces and remote work solutions for businesses, including shared office spaces, private offices, and meeting rooms, with a focus on flexible, collaborative environments for remote teams.

bunnydesk.ai
2 0
EngineerBetter about 3 hours ago

Show HN: Claudit – Claude Code Conversations as Git Notes, Automatically

Uses the agent and Git hooks to automatically create Git Notes on commit, containing the agent conversation that led to that commit. Works whether you or the agent makes the commit.

It's basically the same thing entire.io just announced they got $60m in investment for. Except I got Claude Code to write it last week, in my spare time, without really paying attention. I certainly didn't read or write any of the code, except for one rubbish joke in the README.

I've got a Claude Code instance working on Gemini CLI support and OpenCode support currently.

github.com
4 0
prasoonds about 21 hours ago

Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB

Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks.

Here's a demo video: https://youtu.be/cyEgW7wElcs

Why is this useful? (1) You don't have to figure out which webhooks you need or write listeners for each one. The library handles all of that. This follows the approach of libraries like dj-stripe in the Django world (https://dj-stripe.dev/). (2) Stripe's API has a 100 rpm rate limit. If you're checking subscription status frequently or building internal tools, you'll hit it. Querying your own Postgres doesn't have this problem. (3) You can give an AI agent read access to the stripe.* schema to debug payment issues—failed charges, refunds, whatever—without handing over Stripe dashboard access. (4) You can join Stripe data with your own tables for custom analytics, LTV calculations, etc.

It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.

It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:

  billing.subscriptions.get({ userId });
  billing.credits.consume({ userId, key: "api_calls", amount: 1 });
  billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover most subscription payment use-cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.

You define your plan in TypeScript:

  {
    name: "Pro",
    description: "Cursor Pro plan",
    price: [{ amount: 2000, currency: "usd", interval: "month" }],
    features: {
      api_completion: {
        pricePerCredit: 1,              // 1 cent per unit
        trackUsage: true,               // Enable usage billing
        credits: { allocation: 500 },
        displayName: "API Completions",
      },
      tab_completion: {
        credits: { allocation: 2000 },
        displayName: "Tab Completions",
      },
    },
  }
Then on the CLI, you run the `init` command which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consume code would look like this:

  await billing.credits.consume({
    userId: user.id,
    key: "api_completion",
    amount: 1,
  });
And if you want to allow manual top-ups by the user:

  await billing.credits.topUp({
    userId: user.id,
    key: "api_completion",
    amount: 500,     // buy 500 credits, charges $5.00
  });
Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement by yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.

This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.

I vibe-coded a little toy app for testing: https://snw-test.vercel.app. There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.

Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx

Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.

github.com
61 26
yethiel about 22 hours ago

Show HN: I made paperboat.website, a platform for friends and creativity

paperboat.website
67 27
proletarian about 5 hours ago

Show HN: Εἶδος – A non-Turing-complete language built on Plato's Theory of Forms

I've been reading Plato and picking up some ancient Greek, and I had a useless thought experiment: what would a programming language look like under 4th-century Athens constraints?

Εἶδος (Eidos — "Form") is one result. It's a declarative language called Λόγος where you don't execute code — you declare what exists. Forms belong to Kinds. Forms bear testimony. A law of correspondence maps petitions to answers. There are no loops, no conditionals, no mutation. It's intentionally not Turing-complete, aligned with Plato's rejection of the apeiron (the infinite).

It governs a real HTTP server (Ἱστός) where routes aren't matched by branching — they're recognized as Forms and answered according to law. An unrecognized path returns οὐκ ἔστιν ("it is not") — not an error, an ontological statement.

The project includes a parser that recognizes rather than executes, static verification expressed as philosophical propositions (Totality, Consistency, Well-formedness), Graphviz ontology diagrams, and a Socratic dialectic generator that examines the specification through the four phases of the elenchus.

The Jupyter notebook walks through everything interactively — from parsing the spec in polytonic Greek to petitioning the live server to watching Socrates interrogate the ontology.

https://github.com/realadeel/eidos

github.com
2 1
grazulex 3 days ago

Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure

Hey HN,

I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles.

Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser.

What's in there:

- 12 courses across 11 kingdoms (PHP basics to deployment)

- 100+ interactive exercises with real-time code validation using AST analysis

- AI companion (Pip the Owlox) that uses the Socratic method – never gives direct answers

- Full gamification: XP, levels, streaks, achievements, guilds, leaderboard

- Multilingual (EN/FR/NL)

The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content.

Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators.

artisanforge.online
37 3
mert_gerdan about 20 hours ago

Show HN: Multimodal perception system for real-time conversation

I work on real-time voice/video AI at Tavus and for the past few years, I’ve mostly focused on how machines respond in a conversation.

One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts and throw away a ton of signals that could be used downstream. Some existing emotion-understanding models try to analyze and classify signals into small sets of arbitrary boxes, but they aren’t fast or rich enough to do this with conviction in real time.

So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals. The agent can "see" and "hear" you, and you can interface with it via an OpenAI-compatible tool schema in a live conversation.

It outputs short natural-language descriptions of what’s going on in the interaction: things like uncertainty building, sarcasm, disengagement, or even a shift in attention within a single turn of the conversation.
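
To make "OpenAI-compatible tool schema" concrete, a tool definition for this kind of perception layer might look roughly like the following. The function name and parameters here are invented for illustration; see the linked post for the real interface.

  # Hypothetical OpenAI-style tool definition for querying the perception stream.
  perception_tool = {
      "type": "function",
      "function": {
          "name": "get_perception_state",
          "description": "Short natural-language read of the user's current visual/vocal "
                         "state, e.g. uncertainty building, sarcasm, disengagement, "
                         "or a shift in attention.",
          "parameters": {
              "type": "object",
              "properties": {
                  "window_seconds": {
                      "type": "number",
                      "description": "How far back in the conversation to summarize.",
                  },
              },
              "required": [],
          },
      },
  }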

Some quick specs:

- Runs in real-time per conversation

- Processing at ~15fps video + overlapping audio alongside the conversation

- Handles nuanced emotions, whispers vs shouts

- Trained on synthetic + internal convo data

Happy to answer questions or go deeper on architecture/tradeoffs

More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...

raven.tavuslabs.org
48 14
intervolz about 18 hours ago

Show HN: Sol LeWitt-style instruction-based drawings in the browser

Sol LeWitt was a conceptual artist who never touched his own walls.

He wrote instructions and other people executed them, the original prompt engineer!

I bookmarked a project called "Solving Sol" seven years ago and made a repo in 2018. Committed a README. Never pushed anything else.

Fast forward to 2026, I finally built it.

https://intervolz.com/sollewitt/

intervolz.com
41 6
adwait12345 about 7 hours ago

Show HN: Building My Own Google Analytics for $0

How I reverse-engineered Google Analytics and built my own analytics service for personal projects.

adwait.me
10 5
baqiwaqi about 5 hours ago

Show HN: Windy – Place wind turbines on a map, see residential impact

I built a free tool that lets you drop wind turbines on an interactive map. It draws distance circles (500m–2km), detects nearby residential buildings, enforces minimum separation rules, and exports to PDF.
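
The core residential check (is any building inside a given ring?) is just great-circle distance. A minimal sketch of that piece, independent of the actual implementation:

  import math

  def haversine_m(lat1, lon1, lat2, lon2):
      """Great-circle distance in metres between two WGS84 points."""
      r = 6371000.0
      p1, p2 = math.radians(lat1), math.radians(lat2)
      dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
      a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
      return 2 * r * math.asin(math.sqrt(a))

  def too_close(turbine, buildings, min_distance_m=500):
      """Return the buildings closer to the turbine than the minimum separation."""
      return [b for b in buildings
              if haversine_m(turbine[0], turbine[1], b[0], b[1]) < min_distance_m]

  # too_close((52.52, 13.40), [(52.523, 13.401), (52.60, 13.50)]) -> only the first point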

windy-pi.vercel.app
2 0
vrathee about 5 hours ago

Show HN: Web Scraping Sandbox Website

Scrapingsandbox.com provides a safe, ethical sandbox environment for web scraping, so users can test and develop their scraping tools without the risk of being blocked or banned by target websites.

scrapingsandbox.com
2 1
ady1981 about 5 hours ago

Show HN: AI-Templates for Obsidian Templater

I developed AI-templates for Obsidian Templater for developing new knowledge. The valuable features:

- ready-to-use templates (with default settings)

- structured LLM prompting

- more efficient LLM prompting via aspect management

- flexible management of LLM output

github.com
2 1
czheo 3 days ago

Show HN: Model Training Memory Simulator

This article presents a memory simulator for training machine-learning models, which helps researchers understand memory usage during training and the trade-offs involved in training complex models.

czheo.github.io
9 0
saltyaom 3 days ago

Show HN: Elysia JIT "Compiler", why it's one of the fastest JavaScript framework

Wrote a thing about what makes Elysia stand out in a performance benchmark game

Basically, there's a JIT "compiler" embedded into a framework

This approach has been used by ajv and TypeBox before for input validation, making it faster than other competitors

Elysia basically does the same, but scales that into a full backend framework

This gave Elysia an unfair advantage in the performance game, making it the fastest framework on the Bun runtime and faster than most on Node, Deno, and Cloudflare Workers when using the same underlying HTTP adapter

There is an escape hatch if necessary, but for the past 3 years, there have been no critical reports about the JIT "compiler"

What do you think?

elysiajs.com
50 10
diNgUrAndI about 12 hours ago

Show HN: I vibecoded 177 tools for my own use (CalcBin)

Hey HN! I've been building random tools whenever I needed them over the past few months, and now I have 177 of them. Started because I was tired of sketchy converter sites with 10 ads, so I just... made my own.

Some highlights for the dev crowd:

Developer tools:

- UUID Generator (v1/v4/v7, bulk generation): https://calcbin.com/tools/uuid-generator

- JWT Generator & Decoder: https://calcbin.com/tools/jwt-generator

- JSON Formatter/Validator: https://calcbin.com/tools/json-formatter

- Cron Expression Generator (with natural language): https://calcbin.com/tools/cron-expression-generator

- Base64 Encoder/Decoder: https://calcbin.com/tools/base64

- Regex Tester: https://calcbin.com/tools/regex-tester

- SVG Optimizer (SVGO-powered, client-side): https://calcbin.com/tools/svg-optimizer

Fun ones:

- Random Name Picker (spin wheel animation): https://calcbin.com/tools/random-name-picker

- QR Code Generator: https://calcbin.com/tools/qr-code-generator

Everything runs client-side (Next.js + React), no ads, no tracking, works offline. Built it for myself but figured others might find it useful.

Browse all tools: https://calcbin.com/tools

Tech: Next.js 14 App Router, TypeScript, Tailwind, Turborepo monorepo.

All open to feedback!

calcbin.com
9 0
georgeck about 22 hours ago

Show HN: HN Companion – web app that enhances the experience of reading HN

HN is all about the rich discussions. We wanted to take the HN experience one step further - to bring the familiar keyboard-first navigation, find interesting viewpoints in the threads, and get the gist of long threads so that we can decide which rabbit holes to explore. So we built HN Companion a year ago, and have been refining it ever since.

Try it: https://app.hncompanion.com or available as an extension for Firefox / Chrome: [0].

Most AI summarization strips the voices from conversations by flattening threads into a wall of text. This kills the joy of reading HN discussions. Instead, HN Companion works differently - it understands the thread hierarchy, the voting patterns and contrasting viewpoints - everything that makes HN interesting. Think of it like clustering related discussions across multiple hierarchies into a group and surfacing the comments that represent each cluster. It keeps the verbatim text with backlinks so that you never lose context and can continue the conversation from that point. Here is how the summarization works under the hood [1].

We first built this as an open source browser extension. But soon we learned that people hesitate to install it. So we built the same experience as a web app with all the features. This helped people see how it works, and use it on mobile too (in the browser or as PWA). This is now a playground to try new features before taking them to the browser extension.

We did a Show HN a year ago [2] and we have added these features based on user feedback:

* cached summaries - summaries are generated and cached on our servers. This improved the speed significantly. You still have the option to use your own API key or use local models through Ollama.

* our system prompt is available in the Settings page of the extension. You can customize it as you wish.

* sort the posts in the feed pages (/home, /show etc.) based on points, comments, time or the default sorting order.

* We tried fine tuning an open weights model to summarize, but learned that with a good system prompt and user prompt, the frontier models deliver results of similar quality. So we didn’t use the fine-tuned model, but you can run them locally.

The browser extension does not track any usage or analytics. The code is open source[3].

We want to continue to improve HN Companion, specifically add features like following an author, notes about an author, draft posts etc.

See it in action for a post here https://app.hncompanion.com/item?id=46937696

We would love to get your feedback on what would make this more useful for your HN reading.

[0] https://hncompanion.com/#download

[1] https://hncompanion.com/how-it-works

[2] https://news.ycombinator.com/item?id=42532374

[3] https://github.com/hncompanion/browser-extension

hncompanion.com
29 14
dirteater_ about 21 hours ago

Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks

I've done this same println debugging thing so many times, along with some sed/awk stuff to figure out which call was causing the issue. Now it's a small Go package.

With some `runtime.Callers` I can usually find the spot by just swapping the existing Mutex or RWMutex for this one.

Sometimes I swap the

  mu.Lock()
  defer mu.Unlock()

pattern for LockFunc/RLockFunc to get more detail:

  defer mu.LockFunc()()
I almost always initialize it with `deadlog.New(deadlog.WithTrace(1))` and that's plenty.

Not the most polished library, but it's not supposed to land in any commit, just a temporary debugging aid. I find it useful.

github.com
18 1
samxkoh about 8 hours ago

Show HN: Talk things through to find your next step

Hey folks. I'm working on Hey Echo, an app to talk things through, find perspective, and get clarity on actual next steps. Thanks for checking this out!

heyecho.app
2 0