Show HN: I built an interactive HN Simulator
Hey HN! Just for fun, I built an interactive Hacker News Simulator.
You can submit text posts and links, just like the real HN. But on HN Simulator, all of the comments are generated by LLMs and appear instantly.
The best way to use it (IMHO) is to submit a text post or a curl-able URL here: https://news.ysimulator.run/submit. You don't need an account to post.
When you do that, various prompts will be built from a library of commenter archetypes, moods, and shapes. The AI commenters will actually respond to your text post and/or submitted link.
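The archetype/mood/shape combination can be pictured as simple prompt assembly. A minimal sketch, assuming tables of traits like the ones described (all names here are invented for illustration, not taken from the actual project):

```python
import random

# Illustrative trait tables -- the real library of archetypes, moods,
# and shapes lives at news.ysimulator.run/comments.html.
ARCHETYPES = ["pedantic-corrector", "grizzled-graybeard", "startup-optimist"]
MOODS = ["skeptical", "enthusiastic", "weary"]
SHAPES = ["one-liner", "personal-anecdote", "numbered-takedown"]

def build_comment_prompt(submission_text: str) -> str:
    """Combine one random archetype, mood, and shape into a commenter prompt."""
    archetype = random.choice(ARCHETYPES)
    mood = random.choice(MOODS)
    shape = random.choice(SHAPES)
    return (
        f"You are a Hacker News commenter: {archetype}. "
        f"Your mood is {mood}. Reply in the shape of a {shape}. "
        f"Respond to this submission:\n\n{submission_text}"
    )

prompt = build_comment_prompt("Show HN: I built a thing")
```

Each generated comment then comes from a different sampled combination, which is what keeps the thread from sounding like one voice.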
I really wanted it to feel real, and I think the project mostly delivers on that. When I was developing it, I kept getting confused between which tab was the "real" HN and which was the simulator, and accidentally submitted some junk to HN. (Sorry dang and team – I did clean up after myself).
The app itself is built with Node + Express + Postgres, and all of the inference runs on Replicate.
Speaking of Replicate, they generously loaded me up with some free credits for the inference – so shoutout to the team there.
The most technically interesting part of the app is how the comments work. You can read more about it here, as well as explore all of the available archetypes, moods, and shapes that get combined into prompts: https://news.ysimulator.run/comments.html
I hope you all have as much fun playing with it as I did making it!
Show HN: OCR Arena – A playground for OCR models
I built OCR Arena as a free playground for the community to compare leading foundation VLMs and open-source OCR models side-by-side.
Upload any doc, measure accuracy, and (optionally) vote for the models on a public leaderboard.
It currently has Gemini 3, dots.ocr, DeepSeek, GPT5, olmOCR 2, Qwen, and a few others. If there are any others you'd like included, let me know!
Show HN: Datamorph – A clean JSON ⇄ CSV converter with auto-detect
Hi everyone,
I built a small web tool called Datamorph because I kept running into JSON/CSV converters that either broke with nested data, required login, or added weird formatting.
Datamorph is a minimal, fast, no-login tool that can:
• Convert JSON → CSV and CSV → JSON
• Auto-detect structure (arrays, nested objects, mixed data)
• Handle uploads or manual text input
• Beautify / fix invalid JSON
• Give clean, flat CSV output for real-world messy data
It’s built with React + Supabase + serverless functions. Everything runs client-side except file parsing, so nothing is stored.
I know there are many similar tools, but I tried focusing on:
• better handling of nested JSON,
• simpler UI,
• zero ads / zero login,
• instant conversion without waiting.
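For a sense of what "better handling of nested JSON" means in practice: nested objects and arrays have to be flattened into dotted/bracketed column names before they fit a CSV. A minimal sketch of that idea (not Datamorph's actual code):

```python
import csv
import io
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts/lists into a single dict of dotted/bracketed keys."""
    flat = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            flat.update(flatten(v, f"{prefix}.{k}" if prefix else k))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            flat.update(flatten(v, f"{prefix}[{i}]"))
    else:
        flat[prefix] = obj
    return flat

def json_to_csv(records):
    """Convert a list of (possibly nested) JSON records to CSV text."""
    rows = [flatten(r) for r in records]
    fields = sorted({k for row in rows for k in row})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

data = json.loads('[{"user": {"name": "Ada", "tags": ["a", "b"]}}]')
csv_text = json_to_csv(data)  # columns: user.name, user.tags[0], user.tags[1]
```

The hard edge cases are records with inconsistent shapes, which is why the union of keys across all rows is taken before writing the header.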
Would love feedback on edge cases it fails on, or features you think would make this actually useful for devs and analysts.
Live tool: https://datamorphio.vercel.app/
Thanks for checking it out!
Show HN: The Wiki Game - reach target Wikipedia page by clicking hyperlinks only
The Wiki Game is a popular Wikipedia-based game that challenges players to navigate from one Wikipedia page to another in as few clicks as possible, promoting exploration and discovery of the vast online encyclopedia.
Show HN: Hypercamera – a browser-based 4D camera simulator
This article explores the concept of a 4D camera, which would capture images in four dimensions (three spatial dimensions and one time dimension), allowing viewers to experience scenes from different perspectives and time frames. The article discusses the potential applications and technical challenges of developing such a camera.
Show HN: Search London StreetView panoramas by text
Inspired by All Text in NYC (https://alltext.nyc) by Yufeng Zhang, I thought I would build something similar for London.
A searchable tool that lets you explore text captured across Google Street View imagery in London: shop signs, posters, graffiti, van numbers, etc.
Show HN: I wrote my lecture notes in Typst
Show HN: Cynthia – Reliably play MIDI music files – MIT / Portable / Windows
Easy to use, portable app to play midi music files on all flavours of Microsoft Windows.
Brief background: I used MIDI playback way back in the days of Windows 95 for some fun and entertaining apps, but as Windows progressed, its MIDI support (for Win32, anyway) regressed in both startup speed and reliability. MIDI playback used to be near instant on Windows 95, but on later versions of Windows it was delayed by about 5-7 seconds, and reliability became somewhat patchy. This made working with MIDI a real headache.
Cynthia was built to test and enjoy midi music once again. It's taken over a year of solid coding, recoding, testing, re-testing, and a lot more testing, and some hair pulling along the way, but finally Cynthia works pretty solidly on Windows now.
Some of Cynthia's Key Features:
* 25 built-in sample midis on a virtual disk - play right out-of-the-box
* Play Modes: Once, Repeat One, Repeat All, All Once, Random
* Play ".mid", ".midi" and ".rmi" midi files in 0 and 1 formats
* Realtime track data indicators, channel output volume indicators with peak hold, 128 note usage indicators
* Volume Bars to display realtime average volume and bass volume levels
* Use an Xbox Controller to control Cynthia's main functions
* Large list capacity for handling thousands of midi files
* Switch between up to 10 midi playback devices in realtime
* Playback through a single midi device, or multiple simultaneous midi devices with lag and channel output support
* Custom-built midi playback engine for high playback stability
* Custom-built codebase from low-level work up to GUI level
* Also runs on Linux/Mac (including Apple silicon) via Wine
* Smart Source Code - compiles in Borland Delphi 3 and Lazarus 2
* MIT License
YouTube Video of Cynthia playing a midi: https://youtu.be/IDEOQUboTvQ
GitHub Repo: https://github.com/blaiz2023/Cynthia
Show HN: Supabase-Test – Fast Isolated Postgres DBs for Testing Supabase RLS
Hi HN — we've built a testing framework for Supabase that spins up fast, isolated Postgres databases for each test case. It’s designed to make RLS policies easy to validate with real database state, without global test fixtures or mock auth.
Features:
- Instant isolated Postgres DBs per test
- Automatic rollback after each test
- RLS-native testing with `.setContext()` for auth simulation
- Flexible seeding (SQL, CSV, JSON, JS)
- Works with Jest, Mocha, and any async test runner
- CI-friendly (runs cleanly in GitHub Actions)
We also published example projects and a free set of tutorials: https://launchql.com/learn/supabase
Package: https://www.npmjs.com/package/supabase-test
Source + full test suite: https://github.com/launchql/supabase-test-suite
Happy to answer questions and get feedback, cheers :)
Show HN: Radius.today – Local-first personal CRM
Radius.Today is a news platform that covers a variety of topics, including technology, business, and lifestyle. The website provides readers with in-depth articles and analysis on current events and emerging trends.
Show HN: Stun LLMs with thousands of invisible Unicode characters
I made a free tool that stuns LLMs with invisible Unicode characters.
*Use cases:* Anti-plagiarism, text obfuscation against LLM scrapers, or just for fun!
Even just one word's worth of “gibberified” text is enough to block most LLMs from responding coherently.
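The core trick is well known: interleave visible characters with zero-width code points, so the text looks unchanged to humans but tokenizes into noise. A rough sketch of the idea (this is my reconstruction, not the tool's actual code):

```python
import random

# Zero-width code points commonly used for this kind of obfuscation.
ZERO_WIDTH = ["\u200b", "\u200c", "\u200d", "\u2060"]  # ZWSP, ZWNJ, ZWJ, word joiner

def gibberify(text: str, per_char: int = 5) -> str:
    """Interleave each visible character with invisible code points."""
    out = []
    for ch in text:
        out.append(ch)
        out.extend(random.choices(ZERO_WIDTH, k=per_char))
    return "".join(out)

def strip_invisible(text: str) -> str:
    """Undo it: drop the zero-width characters."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

g = gibberify("hello")  # renders as "hello" but is 6x longer
```

Note the obvious countermeasure is equally simple: anyone who normalizes away zero-width characters before feeding text to a model defeats it, so this is deterrence rather than protection.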
Show HN: I wrote a minimal memory allocator in C
A fun toy memory allocator (not thread safe; that's a future TODO). I also wanted to explain how I approached it, so I wrote a tutorial blog post (~20-minute read) covering the code; you can find the link in the README.
Show HN: Kibun (気分) – a decentralized status.cafe alternative I made
I’ve been using status.cafe for a while and I love it, but one thing has been bugging me: there’s no way to export all the status updates I’ve posted over the years. If the site goes down someday, that whole history is just gone.
With that thought bugging me, I built kibun.social, a minimal status.cafe-like service built on top of atmosphere, the same open social protocol used by Bluesky.
Because it’s decentralized, every status you post is stored directly inside your PDS. You can export them, move to another app, or build your own frontend in the future. The platform is basically just a viewer/writer on top of your data.
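Concretely, "stored in your PDS" means each status is just a record under your own repository. A hypothetical sketch of what such a record might look like (the collection NSID and field names here are invented for illustration; Kibun's actual lexicon may differ):

```python
from datetime import datetime, timezone

def make_status_record(emoji: str, text: str) -> dict:
    """Build an atproto-style record for a status update.
    The $type NSID below is illustrative, not Kibun's real lexicon."""
    return {
        "$type": "social.kibun.status",  # hypothetical collection NSID
        "emoji": emoji,
        "text": text,
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }

record = make_status_record("☕", "slow morning")
```

Because the record lives in your PDS rather than in the app's database, any other client that understands the lexicon can read or migrate it.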
It's very straightforward and takes seconds to update your status: you log in with your atproto handle, select an emoji, and post. You also get your own RSS feed for your statuses if you want to follow them elsewhere.
It’s still early but I’d love to hear what people think - especially people who enjoy small social spaces and decentralized web stuff.
Show HN: Virtual SLURM HPC cluster in a Docker Compose
I'm the main developer behind vHPC, a SLURM HPC cluster in a docker compose.
As part of my job, I'm working on a software solution that needs to interact with one of the largest Italian HPC clusters (Cineca Leonardo, 270 PFLOPS). Of course developing on the production system was out of the question, as it would have led to unbearably long feedback loops. I thus started looking around for existing containerised solutions, but they were always lacking some key ingredient needed to suitably mock our target system (accounting, MPI, out-of-date software, ...).
I thus decided that it was worth it to make my own virtual cluster from scratch, learning a thing or two about SLURM in the process. Even though it satisfies the particular needs of the project I'm working on, I tried to keep vHPC as simple and versatile as possible.
I proposed that the company open source it, and as of this morning (CET) vHPC is FLOSS for others to use and tweak. I'm around to answer any questions.
Show HN: I built an interactive map of jobs at top AI companies
I built a live interactive map that shows where top AI companies hire around the world. I collected this data for a hackathon project.

Many ATS providers have a public API that you can hit with a company's slug to get its open jobs. The hardest part was finding the companies. I tried Firecrawl, but it returned around 200 companies per provider, which wasn't enough for me. Then I tried SERPAPI, but it was expensive. I ended up using SearXNG to discover companies by ATS type and fetch their job postings. This produced a large dataset of 200k+ jobs (I only use a subset, as processing all of it would have taken too much time).

A few days ago, I decided to build a visualization of the data, as I didn't know what else to do with it and wanted people to benefit.
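The "public API keyed by a slug" pattern looks roughly like this. A sketch using two widely known ATS endpoint shapes (Greenhouse's job board API and Lever's postings API); whether these are the exact providers the author used is my assumption:

```python
# Public job-board endpoint patterns for two common ATS providers.
# Each takes a company's board slug and returns its open jobs as JSON.
ATS_ENDPOINTS = {
    "greenhouse": "https://boards-api.greenhouse.io/v1/boards/{slug}/jobs",
    "lever": "https://api.lever.co/v0/postings/{slug}?mode=json",
}

def jobs_url(provider: str, slug: str) -> str:
    """Build the public jobs URL for a given ATS provider and company slug."""
    return ATS_ENDPOINTS[provider].format(slug=slug)

url = jobs_url("greenhouse", "examplecorp")
```

The discovery problem the post describes is exactly the inverse: given the endpoint patterns, you still need a search engine (here, SearXNG) to find which slugs exist.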
I kept catching myself wanting to ask simple questions like “show only research roles in Europe” or “filter for remote SWE positions” (and had plenty of free ai credits) so I added a small LLM interface that translates natural language into filters on the map.
The map is built with Vite + React + Mapbox.
Live demo: https://map.stapply.ai
GitHub (data): https://github.com/stapply-ai/jobs
Would love feedback, ideas for improvement, or contributions.
Show HN: Image to STL – Free AI-powered image to 3D printable model converter
Hi HN,
We built Image to STL, an AI tool that converts 2D images into 3D-printable STL files in seconds. Upload a PNG or JPG → get a ready-to-print STL model — no 3D modeling skills required.
Key Features:
* AI-powered conversion: automatically generates 3D geometry from a single image; handles photos, logos, artwork, and product images
* Instant processing: get your STL file in seconds, not hours; no software installation needed
* 3D print ready: optimized mesh output for FDM/SLA printers; clean, watertight STL files
Why we built this: 3D printing is everywhere, but creating 3D models is still a barrier for most people. We wanted to make it dead simple: upload an image → download an STL → start printing.
Perfect for:
* Makers who want to print custom designs
* Product designers prototyping ideas
* Artists turning 2D work into 3D sculptures
* Hobbyists experimenting with 3D printing
Try it free: https://imagetostl.org
Feedback we'd love:
* What types of images would you want to convert?
* Any mesh settings that matter most (resolution, thickness, etc.)?
* Would API access be useful for your workflow?
* Other output formats you need beyond STL?
Thanks for checking it out! Happy to answer any questions.
Show HN: TX-2 ECS – A web framework that treats your app as a world
I’ve been building a different kind of web framework and would love feedback.
TX-2 ECS is a TypeScript-first framework where your app is modeled as an ECS world (entities, components, systems) instead of a tree of UI components + ad-hoc state.
A few things that might interest HN:
- Single world model shared across server and client; systems run in both places.
- Rendering is “just another system” that produces DOM; SSR + hydration are built in.
- Built-in RPC + state sync that ships only deltas, on a tunable rate limit (aimed at reducing egress/CPU for real-time apps).
- Designed for long-lived products where you care about dev velocity 5+ years in (features are usually new systems, not surgery on existing code).
It’s aimed at apps that feel more like living systems than CRUD: multiplayer tools, dashboards, agents, simulations, collaborative editors, etc.
Repo: https://github.com/IreGaddr/tx2-ecs
I’m especially interested in:
- “This will/won’t work in production because…” from people who run real-time systems.
- Critiques of the ECS-centered architecture for web.
- Benchmarks or experiments you’d want to see before considering something like this.
Show HN: My first published app – track contraception ring cycle
My wife said she wished there was a big widget on her phone that told her when to take her Nuvaring out. So I vibe coded one. What other problems can it solve?
Show HN: Axe - A Systems Programming Language with Builtin Parallelism and No GC
I'm writing a compiler for a systems language focused on concurrency and parallelism. It’s a re-engineering of a prior work, with an explicit emphasis on memory management and type safety, plus first-class parallel primitives at the language level.
The language is now capable of compiling a substantial portion of its own source code to tokens using a single-pass C back-end. The self-hosted compiler includes a handwritten lexer and a parser, with an arena-based allocator to support fast compilation and eliminate GC complexity.
The primary goals for the project are: first-class parallel and concurrent constructs built directly into the language, strong static memory and type guarantees, and a toolchain suitable for building high-performance software.
Example:
def main() {
    parallel local(mut arena: Arena) {
        arena = Arena.create(1024);
        val tid = Parallel.thread_id();
        val result = worker(ref_of(arena), tid);
        println $"Thread {tid} computed {result}";
        Arena.destroy(ref_of(arena));
    }
}
You can find the repository here: https://github.com/axelang/axe
Show HN: Build the habit of writing meaningful commit messages
Too often I find myself being lazy with commit messages. But I don't want AI to write them for me... only I truly know why I wrote the code I did.
So why don't I get AI to help me get that out of my head and into words?
That's what I built: smartcommit asks you questions about your changes, then helps you articulate what you already know into a proper commit message. It captures the what, how, and why.
Built this after repeatedly being confused, six months into a project, about why I'd made the changes I had...
Would love feedback!
Show HN: Forty.News – Daily news, but on a 40-year delay
This started as a reaction to a conversational trope. Despite being a tranquil place, even conversations at my yoga studio often start with, "Can you believe what's going on right now?" with that angry/scared undertone.
I'm a news avoider, so I usually feel some smug self-satisfaction in those instances, but I wondered if there was a way to satisfy the urge to doomscroll without the anxiety.
My hypothesis: Apply a 40-year latency buffer. You get the intellectual stimulation of "Big Events" without the fog of war, because you know the world didn't end.
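The latency buffer itself is trivial date arithmetic; a minimal sketch (the site's actual implementation is unknown to me), with the one edge case being Feb 29:

```python
from datetime import date

def edition_date(today: date, years: int = 40) -> date:
    """Map today's edition to the issue printed `years` earlier.
    Feb 29 falls back to Feb 28 when the target year isn't a leap year."""
    try:
        return today.replace(year=today.year - years)
    except ValueError:
        return today.replace(year=today.year - years, day=28)

d = edition_date(date(2025, 10, 7))  # -> 1985-10-07
```

Everything interesting happens downstream of this mapping, in the scoring and fact-extraction pipeline described below.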
40 years creates a mirror between the Reagan era and today. The parallels include celebrity populism, Cold War tensions (the Soviets then, Russia now), and inflation economics.
The system ingests raw newspaper scans and uses a multi-step LLM pipeline to generate the daily edition:
OCR & Ingestion: Converts raw pixels to text.
Scoring: Grades events on metrics like Dramatic Irony and Name Recognition to surface stories that are interesting with hindsight. For example, a dry business blurb about Steve Jobs leaving Apple scores highly because the future context creates a narrative arc.
Objective Fact Extraction: Extracts a list of discrete, verifiable facts from the raw text.
Generation: Uses those extracted facts as the ground truth to write new headlines and story summaries.
I expected a zen experience. Instead, I got an entertaining docudrama. Historical events are surprisingly compelling when serialized over weeks.
For example, on Oct 7, 1985, Palestinian hijackers took over the cruise ship Achille Lauro. Reading this on a delay in 2025, the story unfolded over weeks: first they threw an American in a wheelchair overboard, then US fighter jets forced the escape plane to land, leading to a military standoff between US Navy SEALs and the Italian Air Force. Unbelievably, the US backed down, but the later diplomatic fallout led the Italian Prime Minister to resign.
It hits the dopamine receptors of the news cycle, but with the comfort of a known outcome.
Stack: React, Node.js (Caskada for the LLM pipeline orchestration), Gemini for OCR/Scoring.
Link: https://forty.news (No signup required, it's only if you want the stories emailed to you daily/weekly)
Show HN: Wealthfolio 2.0 – Open source investment tracker. Now Mobile and Docker
Hi HN, creator of Wealthfolio here.
A year ago, I posted the first version. Since then, the app has matured significantly with two major updates:
1. Multi-platform Support: Now available on Mobile (iOS), Desktop (macOS, Windows, Linux), and as a Self-hosted Docker image. (Android coming soon).
2. Addons System: We added explicit support for extensions so you can hack around, vibe code your own integrations, and customize the app to fit your needs.
The core philosophy remains the same: Always private, transparent, and open source.
Show HN: Runbooks – Shareable Claude Code Sessions
When we asked developers from large engineering teams, almost everyone was using Claude Code, Cursor, or Copilot. But adoption is still inconsistent: some of us have 15 agents running in Claude Code at the same time; some still refuse to use any of them and write code manually.
There's a fragmented AI development problem:
- Five developers building similar features, all starting from scratch
- No way to share "here's how we prompt AI to follow our architecture"
- No trail of how the AI changes were generated
- People leave the company, and so does their knowledge
Aviator is a developer productivity platform with tools like MergeQueue, Stacked PRs, Releases (https://docs.aviator.co/). Runbooks came from watching our own team and our customers struggle with fragmented AI adoption.
With Runbooks:
1. Create executable specs - a plan (with AI) that captures intent, constraints, and steps before AI touches code
2. Version control everything - specs, AI conversations, and generated changes are all versioned. Fork, improve, roll back
3. Make it multiplayer - multiple engineers collaborate in the same AI coding session
4. Build a template library - migrate one test from Enzyme to React Testing Library → use that Runbook to batch-migrate the entire test suite
We're not replacing Claude Code or Cursor. Runbooks is powered by Claude Code. We’re just making it work at team scale.
--
Explore our prebuilt template from our open-source library: https://github.com/aviator-co/runbooks-library
Templates cover migrations, refactoring, and modernization. They're code-agnostic starting points that generate Runbooks using your code context.
Docs and quickstart: https://docs.aviator.co/runbooks
--
About the name: Yes, we know "runbooks" makes you think incident management. But technically a runbook is just a documented, step-by-step procedure—which is exactly what these are for AI agents. We're keeping it!
Happy to get feedback, answer questions about architecture, context management, sandboxes.
Show HN: Sphere-Base-One – A Python Kernel for Integer-Based Physics Optimization
The article discusses the Hahn Optimization Core, an open-source library that provides a flexible and efficient framework for optimization problems. The library supports a wide range of optimization algorithms and can be used in various applications, including machine learning, engineering, and finance.
Show HN: Built a tool to solve the nightmare of chunking tables in PDF vs. Markdown
Hey HN, solo dev here. After years of frustration with how LLMs handle complex documents, especially PDFs with tables, I decided to build a solution myself. My approach uses a Markdown conversion step to preserve the table structure, which seems to work surprisingly well for chunking. This little parser is the first public piece of a much larger, privacy-focused AI platform I'm building. I'm pretty much running on fumes financially, so any feedback, critique, or support is massively appreciated. Happy to answer any questions about the approach!
Show HN: Pg-aiguide – Write better PostgreSQL code with AI
Hi HN,
I built a suite of tools to help AI generate better PostgreSQL code. The most interesting part is an opinionated set of skills to help it design better Postgres schemas. It also includes search over the manual.
Deployable as both an MCP server and as a Claude Code plugin.
I want to also include ecosystem docs and skills. Timescale (where I work) is already included. Looking for help with PostGIS and pgvector.
Fully open source.
Show HN: I built a wizard to turn ideas into AI coding agent-ready specs
I created vibescaffold.dev. It is a wizard-style AI tool that will guide you from idea → vision → tech spec → implementation plan. It will generate all the documents necessary for AI coding agents to understand & iteratively execute on your vision.
How it works:
- Step 1: Define your product vision and MVP
- Step 2: AI helps create technical architecture and data models
- Step 3: Generate a staged development plan
- Step 4: Create an AGENTS.md for automated workflows
I've used AI coding tools for a while. Before this workflow (and now, this tool), I kept getting "close but not quite" results from them. I learned that the more context and guidance I gave these tools up front, the better the results.
The other thing I have found with most tools that attempt to improve on "vibe coding" is that they add abstraction. To me, this just adds to the problem. AI coding agents are valuable, but they are error-prone - you need to be an active participant in their work. This workflow is designed to provide scaffolding for these AI agents while minimizing additional abstraction.
Would love feedback on the workflow - especially curious if others find the upfront planning helpful or constraining.
Show HN: Numr – A Vim-style TUI calculator for natural language math expressions
Features:
- Natural language math: percentages, units, currencies
- Live exchange rates (152 currencies + BTC)
- Vim keybindings (Normal/Insert modes, hjkl, dd, etc.)
- Variables and running totals
- Syntax highlighting
Stack: Ratatui + Pest (PEG parser) + Tokio

Install:
# macOS
brew tap nasedkinpv/tap && brew install numr
# Arch
yay -S numr

GitHub: https://github.com/nasedkinpv/numr

Would love feedback on the code structure—it's a workspace with separate crates for core, editor, TUI, and CLI.
Show HN: I made a tool to export TikTok comments
Hey HN!
I'm Jack, the maker of ExportTok.
I built this tool because I spent hours manually copy-pasting TikTok comments for market research and wanted an easier way. With ExportTok, you can quickly export all comments from any TikTok video to CSV or Excel for analysis.
You do need to sign up to use it, but you don't need a TikTok account or credentials. The data includes usernames, timestamps, and engagement metrics. So far, users have exported over 500,000 comments.
Would love feedback:
1. What other data fields would be helpful?
2. Any privacy concerns?
3. Suggestions for use cases or improvements?
Thanks for checking it out!
Link: https://exporttok.com/