Show stories

Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3
petewarden about 8 hours ago

I wanted to share our new speech-to-text models, and the library to use them effectively. We're a small startup (six people, sub-$100k monthly GPU budget), so I'm proud of the work the team has done to create streaming STT models with lower word-error rates than OpenAI's largest Whisper model. Admittedly, Large v3 is a couple of years old, but we're near the top of the HF OpenASR leaderboard, even up against Nvidia's Parakeet family. Anyway, I'd love to get feedback on the models and software, and hear about what people might build with it.

github.com
193 36
Show HN: Emdash – Open-source agentic development environment
onecommit about 12 hours ago

Hey HN! We’re Arne and Raban, the founders of Emdash (https://github.com/generalaction/emdash).

Emdash is an open-source and provider-agnostic desktop app that lets you run multiple coding agents in parallel, each isolated in its own git worktree, either locally or over SSH on a remote machine. We call it an Agentic Development Environment (ADE).

You can see a 1 minute demo here: https://youtu.be/X31nK-zlzKo

We are building Emdash for ourselves. While working on a cap-table management application (think Stripe Atlas + Pulley), we found our development workflow to be messy: lots of terminals, lots of branches, and too much time spent waiting on Codex.

Emdash puts the terminal at the center and makes it easy to run multiple agents at once. Each agent runs as a task in its own git worktree. You can start one or a few agents on the same problem, test, and review.

Emdash works over SSH so you can run agents where your code lives and keep the parallel workflow. You can assign tickets to agents, edit files manually, and review changes.

We also spent time making task startup fast. Each task can be created in a worktree, and creating worktrees on demand was taking 5s+ in some cases. We now keep a small reserve of worktrees in the background and let a new task claim one instantly. That brought task start time down to ~500–1000ms depending on the provider. We also spawn the shell directly and avoid loading the shell environments on startup.
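The reserve-and-claim trick can be sketched in a few lines of Python (a toy model, not Emdash's actual code; the `provision` callable stands in for a real `git worktree add`, which is the slow part being amortized):

```python
from collections import deque

class WorktreePool:
    """Toy model of a pre-provisioned worktree reserve."""

    def __init__(self, provision, reserve=3):
        self.provision = provision                 # creates one worktree, returns its path
        self.reserve = reserve
        self.ready = deque(provision() for _ in range(reserve))

    def claim(self):
        # Instant when a spare exists; otherwise fall back to a slow create.
        path = self.ready.popleft() if self.ready else self.provision()
        self.top_up()                              # a real app would run this in the background
        return path

    def top_up(self):
        while len(self.ready) < self.reserve:
            self.ready.append(self.provision())
```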

We believe using the providers’ native CLIs is the right approach: it always gives you the full capabilities of each agent. If a provider adds something like plan mode, it works in Emdash without us having to build support first.

We support 21 coding agent CLIs today, including Claude Code, Codex, Gemini, Droid, Amp, Codebuff, and more. We auto-detect what you have installed and we’re provider-agnostic by design. If there’s a provider you want that we don’t support yet, we can add it. We believe that in the future, some agents will be better suited for task X and others for task Y. Codex, Claude Code, and Gemini all have fans. We want to be agnostic and enable individuals and teams to freely switch between them.

Beyond orchestration, we try to pull most of the development loop into Emdash. You can review diffs, commit, open PRs, see CI/CD checks, and merge directly from Emdash once checks pass. When starting a task, you can pass issues from Linear, GitHub, and Jira to an agent. We also support convenience variables and lifecycle scripts so it’s easy to allocate ports and test changes.

Emdash is fully open-source and MIT-licensed.

Download for macOS, Linux, or Windows (as of yesterday!), or install via Homebrew: brew install --cask emdash.

We’d love your feedback: what does your coding-agent development setup look like, especially when working with multiple agents? We’d love to learn more about it. Check out our repository here: https://github.com/generalaction/emdash

We’ll be around in the comments — thanks!

github.com
124 54
seveibar about 8 hours ago

Show HN: Recursively apply patterns for pathfinding

I've been begrudgingly working on autorouters for 2 years, looking for new techniques or modern methods that might allow AI to create circuit boards.

In my view, one of the biggest problems with training an AI to do autorouting is the traditional grid-based representation of routing problems, which challenges spatial understanding. But we know that vision models are very good at classifying, so I wondered if we could train a model to output a path as a classification. But then how do you represent the path? This led me down the track of trying to build an autorouter that represented paths as a bunch of patterns.

More details: https://blog.autorouting.com/p/the-recursive-pattern-pathfin...

pattern-pathfinder.vercel.app
20 4
devarifhossain about 1 hour ago

Show HN: A free tool to turn your boring screenshots brutalist in seconds

neo.retroui.dev
2 0
prithvi2206 about 12 hours ago

Show HN: Tag Promptless on any GitHub PR/Issue to get updated user-facing docs

Hi HN! I'm Prithvi—my co-founder Frances and I launched Promptless almost a year ago here (https://news.ycombinator.com/item?id=43092522). It's an AI teammate that watches your workflows—code changes, support tickets, Slack threads, etc.—and automatically drafts doc updates when it spots something that should be documented.

Frances and I really appreciated the feedback from our first launch. Today we’re launching Promptless 1.0, which addresses our biggest learnings from the last 12 months.

I also made it way easier to try out. You can tag @promptless on any open-source GitHub PR or Issue with a doc update request, and Promptless will create a fork and open a PR against your docs. Feel free to use our own docs as a playground: https://github.com/Promptless/docs/issues

Or, you can sign up at https://promptless.ai to get free access for your own docs for the next 30 days. Here's a demo video: https://youtu.be/IWwimHCEY7Y

For me, the coolest part of the last year has been seeing how users got creative with Promptless. One user has Promptless listening in to all their Slack Connect channels, so whenever they answer a customer question, Promptless figures out if their docs should be updated and drafts an update if so. Another user has Promptless processing every customer meeting transcript and updating their internal docs after each meeting: customer dashboards, feature request pages, etc.

Some of the biggest things that are new with version 1.0:

- Automatically updating screenshots: this was by far our most requested feature. The need here was always clear. People would exclude screenshots from docs because they’d get stale quickly, even though they knew screenshots would be helpful to users. A year ago, we just couldn't ship a good enough solution, but given how much LLMs' visual grounding has improved in the last year, now we've got something we're proud of.

- Slop-free writing: The most common critique on early Promptless suggestions was that even though they were accurate, they could sound generic or verbose, or might just reek of AI slop. Promptless 1.0 is 3.5x better at this (measured by voice-alignment compared to what users actually published), through a combination of fine-tuned models, sub-agents, and alignment on user-defined preferences.

- Open-source program: We're especially proud of this—Promptless is now free for CNCF/Linux Foundation projects (reach out if you’re a maintainer!). You can take a look at how Promptless is supporting Vitess (a CNCF-graduated project) with their docs here: https://github.com/vitessio/website/commits

Check it out and let us know if you have any questions, feedback, or criticism!

31 6
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP)
MdSadiqMd 2 days ago

It takes an input video and converts it into H.264/Opus RTP streams that you can blast at your video call systems (WebRTC, SFUs, etc.). It also injects network chaos like packet loss, jitter, and bitrate throttling to see how things break.

It scales from 1 to n participants, depending on the compute and memory of the host system. Best part? It’s packaged with Nix, so it builds the same everywhere (Linux, macOS, ARM, x86). No dependency hell.

It supports both UDP (with a relay chain for Kubernetes) and WebRTC (with containerized TURN servers). Chaos spikes can be distributed evenly, randomly, or front/back-loaded for different test scenarios. To change this, just edit the values in a single config file.

github.com
37 2
Show HN: enveil – hide your .env secrets from prAIng eyes
parkaboy 1 day ago

github.com
194 122
Show HN: StreamHouse – S3-native Kafka alternative written in Rust
gbram about 3 hours ago

Hey HN,

I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, fraction of the cost.

How it works: Producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune — S3 gives you 11 nines of durability out of the box.

What's there today:

- Producer API with batching, LZ4 compression, and offset tracking (62K records/sec)
- Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec)
- Kafka-compatible protocol (works with existing Kafka clients)
- REST API, gRPC API, CLI, and a web UI
- Docker Compose setup for trying it locally in 5 minutes

The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes.
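A quick back-of-envelope check on those numbers (only the $0.023/GB/month S3 price comes from the post; the EBS price and replication factor of 3 are assumptions):

```python
S3_PRICE = 0.023          # $/GB/month, from the post

def s3_monthly_cost(gb):
    # One logical copy: S3 replicates internally at no extra charge.
    return gb * S3_PRICE

def broker_disk_cost(gb, replication=3, ebs_price=0.08):
    # Assumed gp3-style EBS price; Kafka stores `replication` full copies.
    return gb * replication * ebs_price

print(f"S3:  ${s3_monthly_cost(1000):.2f}/month")   # ~ $23/month for 1 TB
print(f"EBS: ${broker_disk_cost(1000):.2f}/month")  # hundreds, as claimed
```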

Written in Rust, ~50K lines across 15 crates. Apache 2.0 licensed.

GitHub: https://github.com/gbram1/streamhouse

Happy to answer questions about the architecture, tradeoffs, or what I learned building this.

github.com
4 2
Show HN: PgDog – Scale Postgres without changing the app
levkk 1 day ago

Hey HN! Lev and Justin here, authors of PgDog (https://pgdog.dev/), a connection pooler, load balancer and database sharder for PostgreSQL. If you build apps with a lot of traffic, you know the first thing to break is the database. We are solving this with a network proxy that works without requiring application code changes or database migrations.

Our post from last year: https://news.ycombinator.com/item?id=44099187

The most important update: we are in production. Sharding is used a lot, with direct-to-shard queries (one shard per query) working pretty much all the time. Cross-shard (or multi-database) queries are still a work in progress, but we are making headway.

Aggregate functions like count(), min(), max(), avg(), stddev() and variance() are working, without refactoring the app. PgDog calculates the aggregate in-transit, while transparently rewriting queries to fetch any missing info. For example, multi-database average calculation requires a total count of rows to calculate the original sum. PgDog will add count() to the query, if it’s not there already, and remove it from the rows sent to the app.
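The avg() case boils down to merging per-shard (sum, count) partials instead of averaging the per-shard averages; a minimal sketch:

```python
def merge_avg(partials):
    """Combine per-shard (sum, count) pairs into one global average."""
    total = sum(s for s, _ in partials)
    rows = sum(c for _, c in partials)
    return total / rows

# Two shards of different sizes: naive mean-of-means would give 25.625,
# but the true average of all 10 rows is 11.0.
shards = [(100.0, 2), (10.0, 8)]
print(merge_avg(shards))   # 11.0
```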

Sorting and grouping work, including DISTINCT, if the column(s) are referenced in the result. Over 10 data types are supported, like timestamp(tz), all integers, varchar, etc.

Cross-shard writes, including schema changes (CREATE/DROP/ALTER), are now atomic and synchronized between all shards with two-phase commit. PgDog keeps track of the transaction state internally and will rollback the transaction if the first phase fails. You don’t need to monkeypatch your ORM to use this: PgDog will intercept the COMMIT statement and execute PREPARE TRANSACTION and COMMIT PREPARED instead.
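That interception can be pictured roughly like this (the transaction-naming scheme is invented; PgDog's real one isn't described here):

```python
def rewrite_commit(stmt, txid):
    # Pass non-COMMIT statements through untouched.
    if stmt.strip().rstrip(";").upper() != "COMMIT":
        return [stmt]
    # Phase 1 runs on every shard; phase 2 only once all prepares succeed.
    return [f"PREPARE TRANSACTION '{txid}'",
            f"COMMIT PREPARED '{txid}'"]

print(rewrite_commit("COMMIT;", "pgdog_tx_1"))
```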

Omnisharded tables, a.k.a replicated or mirrored (identical on all shards), support atomic reads and writes. That’s important because most databases can’t be completely sharded and will have some common data on all databases that has to be kept in-sync.

Multi-tuple inserts, e.g., INSERT INTO table_x VALUES ($1, $2), ($3, $4), are split by our query rewriter and distributed to their respective shards automatically. They are used by ORMs like Prisma, Sequelize, and others, so those now work without code changes too.
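The split itself is easy to picture with hash-mod routing (an assumption on my part; PgDog's actual sharding function may differ):

```python
def split_insert(rows, key_index, num_shards):
    """Group the VALUES tuples of one INSERT by destination shard."""
    by_shard = {}
    for row in rows:
        shard = hash(row[key_index]) % num_shards
        by_shard.setdefault(shard, []).append(row)
    return by_shard

# INSERT INTO table_x VALUES (1,'a'), (2,'b'), (5,'c') across 2 shards:
print(split_insert([(1, "a"), (2, "b"), (5, "c")], key_index=0, num_shards=2))
```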

Sharding keys can be mutated. PgDog will intercept and rewrite the update statement into 3 queries, SELECT, INSERT, and DELETE, moving the row between shards. If you’re using Citus (for everyone else, Citus is a Postgres extension for sharding databases), this might be worth a look.

If you’re like us and prefer integers to UUIDs for your primary keys, we built a cross-shard unique sequence, directly inside PgDog. It uses the system clock (and a couple other inputs), can be called like a Postgres function, and will automatically inject values into queries, so ORMs like ActiveRecord will continue to work out of the box. It’s monotonically increasing, just like a real Postgres sequence, and can generate up to 4 million numbers per second with a range of 69.73 years, so no need to migrate to UUIDv7 just yet.

    INSERT INTO my_table (id, created_at) VALUES (pgdog.unique_id(), now());
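
Those limits line up with a Snowflake-style 64-bit layout: 41 bits of millisecond timestamp and 12 bits of per-millisecond sequence (the exact layout and epoch below are guesses, not PgDog's documented format):

```python
EPOCH_MS = 1_700_000_000_000   # made-up custom epoch

def unique_id(now_ms, node, seq):
    # 41-bit ms timestamp | 10-bit node id | 12-bit sequence
    return ((now_ms - EPOCH_MS) << 22) | ((node & 0x3FF) << 12) | (seq & 0xFFF)

# 2**41 ms of timestamp range is ~69.73 (365-day) years,
# and 2**12 = 4096 ids per node per millisecond is ~4.1M ids/second.
print(round(2**41 / 1000 / 86400 / 365, 2))   # 69.73
```
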
Resharding is now built-in. We can move gigabytes of tables per second, by parallelizing logical replication streams across replicas. This is really cool! Last time we tried this at Instacart, it took over two weeks to move 10 TB between two machines. Now, we can do this in just a few hours, in big part thanks to the work of the core team that added support for logical replication slots to streaming replicas in Postgres 16.

Sharding hardly works without a good load balancer. PgDog can monitor replicas and move write traffic to a promoted primary during a failover. This works with managed Postgres, like RDS (incl. Aurora), Azure Pg, GCP Cloud SQL, etc., because it just polls each instance with “SELECT pg_is_in_recovery()”. Primary election is not supported yet, so if you’re self-hosting with Patroni, you should keep it around for now, but you don’t need to run HAProxy in front of the DBs anymore.

The load balancer is getting pretty smart and can handle edge cases like SELECT FOR UPDATE and CTEs with INSERT/UPDATE statements, but if you still prefer to handle your read/write separation in code, you can do that too with manual routing. This works by giving PgDog a hint at runtime: a connection parameter (-c pgdog.role=primary), SET statement, or a query comment. If you have multiple connection pools in your app, you can replace them with just one connection to PgDog instead. For multi-threaded Python/Ruby/Go apps, this helps by reducing memory usage, I/O and context switching overhead.

Speaking of connection pooling, PgDog can automatically rollback unfinished transactions and drain and re-sync partially sent queries, all in an effort to preserve connections to the database. If you’ve seen Postgres go to 100% CPU because of a connection storm caused by an application crash, this might be for you. Draining connections works by receiving and discarding rows from abandoned queries and sending the Sync message via the Postgres wire protocol, which clears the query context and returns the connection to a normal state.

PgDog is open source and welcomes contributions and feedback in any form. As always, all features are configurable and can be turned off/on, so should you choose to give it a try, you can do so at your own pace. Our docs (https://docs.pgdog.dev) should help too.

Thanks for reading and happy hacking!

pgdog.dev
315 57
Show HN: Declarative open-source framework for MCPs with search and execute
samrith about 9 hours ago

Hi HN,

I’m Samrith, creator of Hyperterse.

Today I’m launching Hyperterse 2.0, a schema-first framework for building MCP servers directly on top of your existing production databases.

If you're building AI agents in production, you’ve probably run into this: agents need access to structured, reliable data, but wiring your business logic to MCP tools is tedious. Most teams end up writing fragile glue code, or worse, giving agents unsafe, overbroad access.

There isn’t a clean, principled way to expose just the right data surface to agents.

Hyperterse lets you define a schema over your data and automatically exposes secure, typed MCP tools for AI agents.

Think of it as: Your business data → controlled, agent-ready interface.

Some key properties:

- Schema-first access layer
- Typed MCP tool generation
- Works with existing Postgres, MySQL, MongoDB, and Redis databases
- Fine-grained exposure of queries
- Built for production agent workloads

v2.0 focuses heavily on MCP, with first-class MCP server support, cleaner schema ergonomics, better type safety, and faster tool surfaces.

All of this with only two tools, search and execute, drastically reducing token usage.

Hyperterse is useful if you are building AI agents/copilots, adding LLM features to existing SaaS, trying to safely expose internal data to agents or are just tired of bespoke MCP glue layers.

I’d love feedback, especially from folks running agents in production.

GitHub: https://github.com/hyperterse/hyperterse

hyperterse.com
9 2
Show HN: Babyshark – Wireshark made easy (terminal UI for PCAPs)
eigen-vector 1 day ago

Hey all, I built babyshark, a terminal UI for PCAPs aimed at people who find Wireshark powerful but overwhelming.

The goal is “PCAPs for humans”:

- Overview dashboard answers what’s happening + what to click next
- Domains view (hostnames first) → select a domain → jump straight to relevant flows (works even when DNS is encrypted/cached by using observed IPs from flows)
- Weird stuff view surfaces common failure/latency signals (retransmits/out-of-order hints, resets, handshake issues, DNS failures when visible)

From there you can drill down: Flows → Packets → Explain (plain-English hints) / follow stream

Commands:

Offline: babyshark --pcap capture.pcap

Live (requires tshark): babyshark --list-ifaces then babyshark --live en0

Repo + v0.1.0 release: https://github.com/vignesh07/babyshark

Would love feedback on UX + what “weird detectors” you’d want next.

github.com
140 45
Show HN: X86CSS – An x86 CPU emulator written in CSS
rebane2001 1 day ago

lyra.horse
257 90
Show HN: Sowbot – Open-hardware agricultural robot (ROS2, RTK GPS)
Sabrees 1 day ago

Sowbot is an open-hardware agricultural robot designed to close the "prototype gap" that kills most agri-robotics startups and research projects — the 18+ months spent on drivers, networking, safety watchdogs, and UI before you can even start on the thing you actually care about.

The hardware is built around a stackable 10×10cm compute module with two ARM Cortex-A55 SBCs — one for ROS 2 navigation/EKF localisation, one dedicated to vision/YOLO inference — connected via a single ethernet cable.

Centimetre-level positioning via dual RTK GNSS, CAN bus for field comms, and real-time motor control via ESP32 running Lizard firmware.

Everything — schematics, PCB layouts, firmware — is under open licences. The software stack runs on RoSys/Field Friend (for teams who want fast iteration) or DevKit ROS (for teams already in the ROS ecosystem). The idea is that a lab in one country can reproduce another lab's experiment by sharing a Docker image.

Current status: the Open Core brain is largely fabricated, the full-size Sowbot body has a detailed BOM but isn't yet assembled, and we have two smaller dev platforms (Mini and Pico) in various stages of testing.

We're a small volunteer team and we're looking for contributors — hardware, ROS, firmware, docs, whatever you can offer.

The best place to start is our Discord: https://discord.gg/SvztEBr4KZ — we have a weekly call if you'd prefer to just show up and chat.

GitHub: https://github.com/Agroecology-Lab/feldfreund_devkit_ros/tre...

sowbot.co.uk
178 45
Show HN: ProdRescue AI – Turn Slack war-rooms and raw logs into incident reports
devrimozcay about 10 hours ago

Hi HN,

Most of us have been there: It’s 3 AM, there’s an outage, and the #incident channel is exploding with 200+ messages. Once the fix is deployed, the real pain begins—spending 4 hours reconstructing the timeline for the post-mortem.

I built ProdRescue AI to automate this. It’s an incident intelligence engine that correlates technical logs with human context from Slack.

How it works:

Native Slack Integration: Connect via OAuth 2.0. We only access channels you explicitly invite the bot to.

Contextual Correlation: It maps Slack timestamps to log events, identifying not just what failed, but who made which decision and why.

4-Layer Intelligence: We use a pipeline to Sanitize (mask PII), Correlate (logs + chat), Infer (RCA), and Verify (link every claim to a source log line).

Security: We use ephemeral processing. No log retention, no training on your data.
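The log-to-chat correlation step can be pictured as a toy timestamp join (the ±30s window and data shapes here are invented for illustration, not the product's actual heuristics):

```python
def correlate(log_events, messages, window_s=30):
    """Pair each (timestamp, event) log entry with chat messages within ±window_s."""
    pairs = []
    for t_log, event in log_events:
        near = [m for t_msg, m in messages if abs(t_msg - t_log) <= window_s]
        pairs.append((event, near))
    return pairs

logs = [(100, "OOMKilled: payments-7f")]
chat = [(110, "restarting the payments pod"), (500, "lunch?")]
print(correlate(logs, chat))
```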

I’m really interested in your thoughts on the "Evidence-Backed" approach. Instead of just generating a narrative, we link every finding to a specific evidence tag ([1], [2], etc.) to eliminate AI hallucinations.

Check it out here: https://prodrescueai.com

Would love to hear your feedback on the Slack-to-Timeline flow!

prodrescueai.com
4 0
Show HN: Steerling-8B, a language model that can explain any token it generates
adebayoj 1 day ago

guidelabs.ai
315 86
ai_bot 2 days ago

Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)

Interactive timeline of every major Large Language Model. Filterable by open/closed source, searchable, 54 organizations tracked.

llm-timeline.com
169 57
Show HN: Cellarium: A Playground for Cellular Automata
andrewosh 4 days ago

Hey HN, just wanted to share a fun, vibe-coded Friday night experiment: a little playground for writing cellular automata in a subset of Rust, which is then compiled into WGSL.

Since it lets you dynamically change parameters while the simulation is running via a TUI, it's easy to discover weird behaviors without remembering how you got there. If you press "s", it will save the complete history to a JSON file (a timeline of the parameters that were changed at given ticks), so you can replay it and regenerate the discovery.

You can pan/zoom, and while the main simulation window is in focus, the arrow keys can be used to update parameters (which are shown in the TUI).

Claude deserves all the credit and criticism for any technical elements of this project (beyond rough guidelines). I've just always wanted something like this, and it's a lot of fun to play with. Who needs video games these days.

github.com
39 1
Show HN: Ghist – Task management that lives in your repo
nxnze about 13 hours ago

github.com
16 1
Show HN: Mnemosyne – Cognitive memory OS for AI agents (zero LLM calls)
mnemosy about 8 hours ago

github.com
4 1
Show HN: CIA World Factbook Archive (1990–2025), searchable and exportable
MilkMp 2 days ago

A structured archive of CIA World Factbook data spanning 1990–2025. It currently includes:

- 36 editions
- 281 entities
- ~1.06M parsed fields
- full-text + boolean search
- country/year comparisons
- map/trend/ranking analysis views
- CSV/XLSX/PDF export

The goal is to preserve long-horizon public-domain government data and make cross-year analysis practical.

Live: https://cia-factbook-archive.fly.dev

About/method details: https://cia-factbook-archive.fly.dev/about

Data source is the CIA World Factbook (public domain). Not affiliated with the CIA or U.S. Government.

cia-factbook-archive.fly.dev
485 99
Show HN: Brainstorm-MCP – Let GPT, DeepSeek, and Groq Brainstorm Together
spranab about 9 hours ago

github.com
2 1
lababidi about 9 hours ago

Show HN: Disk Inventory X updated for Apple Silicon

The article discusses the Disk Inventory X, an open-source disk space analyzer for macOS that provides a visual representation of disk usage, allowing users to easily identify and manage large files and folders on their system.

diskinv.github.io
4 2
Show HN: Bookie – Conquer the bookkeeping and accounting chaos of freelancing
nxnze about 9 hours ago

We’ve given bookkeeping a complete overhaul, and we've put your experience at the ♥ of everything

bookie.tax
2 0
Show HN: MiniVim a Minimal Neovim Configuration
kppjeuring about 9 hours ago

I built MiniVim, a small and minimal Neovim configuration focused on keeping things simple and readable.

The goal was to have a setup that:

- starts fast
- uses only essential plugins
- avoids heavy frameworks
- remains easy to understand and extend

The structure is intentionally small.

It’s not meant to compete with full Neovim distributions, but rather to serve as a clean base configuration that can be extended gradually.

I use it across multiple machines (laptop, WSL, and servers), so reproducibility and simplicity were priorities.

Feedback is welcome.

github.com
5 0
Show HN: CharityVerify – Trust scores for 138K Canadian charities
buchler about 9 hours ago

I built CharityVerify to make Canadian charity data actually usable.

The Canada Revenue Agency publishes T3010 forms for every registered charity, but they're scattered across clunky databases with no standardization or comparability. I collected 15 years of filings for all 138,203 charities and built a trust scoring system on top.

Stack:

- Python + Playwright for CRA data collection (4s rate-limited)
- PostgreSQL (Supabase) — 12 T3010 tables, 138K charities, 457K directors, 362K directorship links
- Express.js REST API on Fly.io
- Daily GitHub Actions sync for new filings
- On-demand narrative generation via Claude Haiku

Scoring algorithm: three 0-100 scores per charity:

1. Legitimacy (filing consistency, directorship stability, CRA compliance)
2. Effectiveness (program spending ratio, overhead, donation efficiency)
3. Compliance (sanctions screening, FATF risk, political activity limits)

Each charity gets a letter grade (A+ to F, or NR for insufficient data).
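For illustration only, a grade mapping might look like this (the cutoffs are guesses; CharityVerify's actual bands aren't published in the post):

```python
def letter_grade(score):
    if score is None:
        return "NR"   # insufficient data
    # Hypothetical bands, highest first.
    for cutoff, grade in [(97, "A+"), (90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(letter_grade(98.2), letter_grade(51.6), letter_grade(None))
```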

Findings:

- Only 186 out of 85,507 registered charities scored A+
- Average effectiveness score: 51.6/100
- 487,692 flags generated (directorship overlap, compensation issues, filing gaps, etc.)

The core search/view is free. I'm building a tiered REST API for professional use cases (due diligence firms, grant-making orgs, etc.).

Code is closed-source for now, but the underlying CRA data is public domain. Happy to discuss the data pipeline, scoring methodology, or data collection approach.

charityverify.com
2 0
Show HN: Cost per Outcome for AI Workflows
deborahjacob about 10 hours ago

github.com
4 1
moonwizard about 10 hours ago

Show HN: MantleDB – Anonymous JSON storage for your side projects

For years, I’ve been building small apps and prototypes that needed persistent cloud data, but I couldn't be bothered to set up a full database, manage an ORM, or deal with auth. Most of the projects were just too small to justify the overhead.

So I built MantleDB. It’s a simple JSON storage server designed for speed and zero-friction. There is no UI—even registration is handled via the API.

Get started instantly:

curl -s https://mantledb.sh/api/auth/register

You’ll get an AID (Admin ID) for reads/writes and an RID (Read ID) for public-facing reads.

Write to a bucket. Note: Buckets are created on write.

curl -X POST https://mantledb.sh/api/b/YOUR_AID/<bucketname> -d '{"score": 42}'

Read the data back:

curl https://mantledb.sh/api/b/YOUR_RID/<bucketname>

How it works:

Ephemeral by default: To keep things lean, a "scavenger" cron runs daily. On the free tier, buckets with no activity for 72 hours are deleted. Accounts with no buckets are cleared after one week.
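The scavenger is essentially a TTL delete; a minimal SQLite sketch (the table and column names are invented, since MantleDB's real schema isn't public):

```python
import sqlite3

def scavenge(conn, now_s, ttl_hours=72):
    """Delete buckets whose last activity is older than ttl_hours; return count."""
    cutoff = now_s - ttl_hours * 3600
    cur = conn.execute("DELETE FROM buckets WHERE last_activity < ?", (cutoff,))
    conn.commit()
    return cur.rowcount
```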

Pro Plan: Removes the scavenger, increases bucket limits, and adds atomic operations (Increment, Append, etc.).

Tech Stack: Node.js + SQLite (running on AWS Lightsail).

If the free tier feels too tight or the Pro version feels too pricey, let me know! I’m happy to hand out discount codes or adjust things based on feedback.

I’m mostly looking for people to try and break it or tell me what features would make this their go-to for the next weekend hackathon.

mantledb.sh
3 0
pklym about 10 hours ago

Show HN: I built an iOS app that turns EPUBs into audiobooks

I had a bunch of ebooks with no audiobook version available. So I built an iOS app that converts EPUB files into audiobooks using text-to-speech.

Two voice options:

- Free on-device voices (processed locally, no server needed)
- Natural cloud voices (one-time purchase per book, no subscription)

Cloud conversion runs chunk by chunk. You can start listening while other chapters generate in the background. Once done, the audiobook lives on your device.

No account required. No subscription. You import your own EPUBs and either use device TTS for free or pay per book for the cloud voices.

Nothing is stored on the backend, neither books nor audio files.

apps.apple.com
6 3
Show HN: Claude Code Canvas
raulriera about 10 hours ago

github.com
4 1
Show HN: Interactive 3D Moon with real NASA data and WebGPU
oddurs about 10 hours ago

A photorealistic Moon viewer running entirely in the browser. WebGPU primary renderer with WebGL 2 fallback.

- NASA CGI Moon Kit textures served via a quadtree LOD tile system
- Oren-Nayar BRDF (lunar regolith is non-Lambertian with strong backscatter)
- Sun position calculated from astronomy-engine (±1 arcminute)
- Scrub through the full lunation cycle or watch in real time
- Earth and Tycho-2 starfield in the background

Tech: Three.js with TSL shaders (compile to both WGSL and GLSL), React Three Fiber, Vite. The shading model was the most interesting part — standard PBR looks completely wrong for the Moon because regolith doesn't have a specular lobe; it actually gets brighter at opposition (the "opposition surge"). Oren-Nayar gets close enough for a web visualization.

Tile system is a geodetic quadtree similar to CesiumJS's approach. Zoom level picks based on screen-space error. Currently 7 levels deep which gets you to ~4 km/pixel at max zoom.
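Screen-space-error level selection in the CesiumJS style can be sketched like this (the error constants and threshold are placeholders, not the app's real values):

```python
import math

def screen_space_error(geo_error_m, distance_m, screen_h=1080, fov=math.radians(60)):
    # Classic SSE: how many pixels a tile's geometric error covers on screen.
    return geo_error_m * screen_h / (2 * distance_m * math.tan(fov / 2))

def pick_level(distance_m, root_error_m=100_000, max_level=7, threshold_px=2.0):
    # Descend the quadtree until the projected error drops under the threshold
    # (each level halves the geometric error) or we hit the deepest level.
    level, error = 0, root_error_m
    while level < max_level and screen_space_error(error, distance_m) > threshold_px:
        level += 1
        error /= 2
    return level

print(pick_level(1_000_000_000), pick_level(100_000))   # far -> 0, close -> 7
```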

Would love feedback, especially from anyone who's worked with lunar data or WebGPU in production.

moon.oddurs.com
3 0