Show HN: Algorithmically Finding the Longest Line of Sight on Earth
We're Tom and Ryan and we teamed up to build an algorithm with Rust and SIMD to exhaustively search for the longest line of sight on the planet. We can confirm that a previously speculated view between Pik Dankova in Kyrgyzstan and the Hindu Kush in China is indeed the longest, at 530km.
We go into all the details at https://alltheviews.world
And there's an interactive map with over 1 billion longest lines, covering the whole world, at https://map.alltheviews.world. Just click on any point and it'll load its longest line of sight.
Some of you may remember Tom's post[1] from a few months ago about how to efficiently pack visibility tiles for computing the entire planet. Well now it's done. The compute run itself took 100s of AMD Turin cores, 100s of GBs of RAM, a few TBs of disk and 2 days of constant runtime on multiple machines.
If you are interested in the technical details, Ryan and I have written extensively about the algorithm and pipeline that got us here:
* Tom's blog post: https://tombh.co.uk/longest-line-of-sight
* Ryan's technical breakdown: https://ryan.berge.rs/posts/total-viewshed-algorithm
This was a labor of love, and we hope it inspires you both technically and to get out and see some of these vast views for yourselves!
1. https://news.ycombinator.com/item?id=45485227
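The core visibility test behind a viewshed can be sketched in a few lines. This is a toy 1-D version over a sampled elevation profile, not the authors' Rust/SIMD code; the real search also has to handle Earth curvature, refraction, and tiled terrain data.

```python
# Toy line-of-sight test: a target is visible if the slope (elevation angle)
# from the observer's eye to the target exceeds the slope to every
# intermediate terrain sample along the profile.

def is_visible(profile, observer_height=2.0):
    """profile: list of (distance_m, elevation_m) from observer to target.
    Returns True if the last point is visible from the first."""
    d0, e0 = profile[0]
    eye = e0 + observer_height
    max_slope = float("-inf")
    for d, e in profile[1:-1]:  # intermediate terrain samples
        max_slope = max(max_slope, (e - eye) / (d - d0))
    d_t, e_t = profile[-1]
    target_slope = (e_t - eye) / (d_t - d0)
    return target_slope >= max_slope
```

An exhaustive "total viewshed" run repeats a test like this along every ray from every cell, which is why the full-planet computation needed hundreds of cores.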
Show HN: Browse Internet Infrastructure
I'm launching Wirewiki.com today!
Wirewiki makes the internet’s hidden infrastructure browsable.
I quit my job 5 years ago to scale Nslookup.io. But after reaching 600k monthly users, I hit a ceiling. I couldn't naturally expand beyond DNS because of the domain name.
So I went back to the drawing board: how would I make it today? Not as a collection of tools, but as a browsable graph.
I've spent hundreds of hours and commits building that. It's not even at 10% of what I want it to be, but more than enough to be useful, and (in my biased opinion) much better than what's out there.
Wirewiki launches with DNS lookup, propagation, zone transfer and SPF checking. It also scans the entire IPv4 space for DNS servers and indexes them. I'm working on adding more data and tools.
I feel like I've developed tunnel vision, so if you see anything that feels off, let me know!
I'll keep Wirewiki open and free. Once it has a substantial amount of users, I'll open it up to sponsorship / brand integration from hosting providers, registrars and CDNs, as users will likely be in the market for those. But my goal is to keep Wirewiki free from display ads. I'm confident that's viable.
Show HN: Minimal NIST/OWASP-compliant auth implementation for Cloudflare Workers
This is an educational reference implementation showing how to build reasonably secure, standards-compliant authentication from first principles on Cloudflare Workers.
Stack: Hono, Turso (libSQL), PBKDF2-SHA384 + normalization + common-password checks, JWT access + refresh tokens with revocation support, HTTP-only SameSite cookies, device tracking.
It's deliberately minimal — no OAuth, no passkeys, no magic links, no rate limiting — because the goal is clarity and auditability.
I wrote it mainly to deeply understand edge-runtime auth constraints and to have a clean Apache-2.0 example that follows NIST SP 800-63B / SP 800-132 and OWASP guidance.
For production I'd almost always reach for Better Auth instead (https://www.better-auth.com) — this repo is not trying to compete with it.
Live demo: https://private-landing.vhsdev.workers.dev/
Repo: https://github.com/vhscom/private-landing
Happy to answer questions about the crypto choices, the refresh token revocation pattern, Turso schema, constant-time comparison, unicode pitfalls, etc.
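The password-handling pattern described (PBKDF2-SHA384, unicode normalization, constant-time comparison) can be sketched in Python. The iteration count and salt size here are illustrative placeholders, not the repo's actual configuration.

```python
import hashlib, hmac, os, unicodedata

# Illustrative work factor; NIST SP 800-132 leaves the iteration count to be
# tuned per deployment. This is NOT the repo's exact value.
ITERATIONS = 210_000

def hash_password(password, salt=None):
    # NFKC normalization (per NIST SP 800-63B) so visually identical
    # unicode inputs hash to the same digest
    pw = unicodedata.normalize("NFKC", password).encode("utf-8")
    salt = salt or os.urandom(16)  # >= 128-bit random salt
    digest = hashlib.pbkdf2_hmac("sha384", pw, salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    # constant-time comparison avoids leaking prefix-match timing
    return hmac.compare_digest(digest, expected)
```

The normalization step is one of the unicode pitfalls mentioned: without it, a ligature like "ﬁ" and the letters "fi" would produce different hashes for what the user perceives as the same password.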
Show HN: A custom font that displays Cistercian numerals using ligatures
The article discusses the creation of a Cistercian font: a typeface based on the numeral notation used by Cistercian monks in the Middle Ages, designed to faithfully recreate the distinctive characters and layout of the original manuscripts.
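Cistercian numerals encode any number from 1 to 9999 as a single glyph: a vertical stave with the units, tens, hundreds, and thousands digits drawn in its four quadrants. A ligature-based font presumably substitutes a typed digit sequence with one composed glyph; the quadrant decomposition it would need can be sketched as:

```python
def cistercian_parts(n):
    """Split 1..9999 into the four quadrant digits of a Cistercian glyph:
    (units, tens, hundreds, thousands)."""
    if not 1 <= n <= 9999:
        raise ValueError("Cistercian numerals cover 1..9999")
    return (n % 10, n // 10 % 10, n // 100 % 10, n // 1000)
```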
Show HN: I created a Mars colony RPG based on Kim Stanley Robinson’s Mars books
I built a desktop Mars colony survival game called Underhill, in homage to Kim Stanley Robinson's Mars trilogy. Land on Mars, build solar panels and greenhouses, and try not to pass out during dust storms. Eventually your colonists split into factions: Greens who want to terraform and Reds who want to preserve Mars.
There’s Chill Mode for players who just want to build & hang, and Conflict Mode that introduces the Red vs. Green factions. Reds sabotage, and the terrain slowly turns green as the world gets more terraformed.
Feedback welcome, especially on performance and gameplay!
Show HN: Slack CLI for Agents
Our team lives in Slack, but we don’t have access to the Slack MCP and couldn’t find anything out there that worked for us, so we coded our own agent-slack CLI.
* Can paste in Slack URLs
* Token efficient
* Zero-config (auto auth if you use Slack Desktop)
* Auto-downloads files/snippets
* Can also read Slack canvases as markdown!
MIT License.
Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods
I think the very first video game I ever played was Bugdom by Pangea Software, which came with the original iMac. There was also a shooter called Nanosaur, but my 7-year-old heart belonged to the more peaceable Bugdom, which featured a roly-poly named Rollie McFly needing to rescue ladybugs from evil fire ants and bees.
Upon seeing the port to modern systems (https://github.com/jorio/Bugdom), I figured it should be able to run entirely in-browser nowadays, and also that AI coding tools "should" be able to do this entire project for me. I ended up spending perhaps 20 hours on it with Claude Code, but we got there.
Once ported, I added a half-dozen mods that would have pleased my childhood self (like low-gravity mode and flying slugs & caterpillars mode), and a few that please my current self (like Dance Party mode).
EDIT: Here are some mod/level combinations I recommend
* https://reallyeli.com/bugdom/Bugdom.html?gravity=0.3&video=s...
* https://reallyeli.com/bugdom/Bugdom.html?level=3&visual=2&gr...
* https://reallyeli.com/bugdom/Bugdom.html?level=8&flying_slug...
Show HN: GitWriter – mobile Markdown editor for writers
Written in Rust! (not really)
I've been using Git for writing recently and was lacking a good UI for editing markdown files when away from the laptop. I created GitWriter to fill that gap.
It's built with Expo and I added caching for offline editing with local sqlite, writing goals and a bunch of other features.
Markdown is having a huge moment thanks to AI, and more and more people are starting to see that Git is a really solid choice for creative and technical writers.
I've recently launched for iOS and would appreciate feedback!
Show HN: ArkWatch – Uptime monitoring with zero dependencies
I'm a solo dev, and I got tired of signing up for monitoring services that require installing agents, browser extensions, or wiring up Slack/PagerDuty just to know if my side project is down.
So I built ArkWatch: a free uptime monitoring API with zero dependencies. No SDK, no npm package, no webhook setup. Just curl + your email.
One command to start monitoring:
curl -X POST https://watch.arkforge.fr/monitors \
-H "Content-Type: application/json" \
-d '{"url":"https://yoursite.com","email":"you@example.com"}'
That's it. Your URL gets checked every 5 minutes. If it goes down, you get an email. No dashboard to check, no account to manage, no vendor lock-in.

It also has an AI layer (Mistral) that summarizes what actually changed on a page – useful for tracking competitor pricing or changelog updates. But the core use case is dead-simple uptime alerts.
Stack: Python/FastAPI, hosted on Hetzner EU. Free tier: 3 URLs, 5-min checks. Paid starts at €9/month for more URLs and faster intervals.
I'd love feedback from HN – especially on what you'd want from a zero-dependency monitoring tool. Try it, break it, tell me what's missing.
Show HN: Agentseed – Generate Agents.md from a Codebase
npx agentseed init
AGENTS.md (https://agents.md) is a standard file used by AI coding agents to understand a repo (stack, commands, conventions).
Agentseed generates it directly from the codebase using static analysis. Optional LLM augmentation is supported by bringing your own API key.
Extracts languages, frameworks, dependencies, build/test commands, directory structure, and monorepo boundaries.
Show HN: Emergent – Artificial life simulation in a single HTML file
I built an artificial life simulator that fits in one HTML file (~1400 lines, ~50KB) with Claude Opus 4.6. Interestingly, I didn't ask it to build this game specifically: I asked it to create software that didn't exist before, without using any dependencies or third-party libraries, and without asking questions.
At first Opus 4.6 made a Photoshop clone that barely worked, though it had a nice, neat UI with all the common image-editor features. I called that out and asked it to really build something that wasn't there before, and it came up with Emergent. Please check it out and tell me what you think. Of course it's a Game of Life in a nutshell, but look at the rest of it: the UI, game stats, and other features like genome mutation and species evolution.
Features:
- Continuous diet gene (0–1) drives herbivore/carnivore/omnivore specialization
- Spatial hash grid for performant collision detection
- Pinch-to-zoom and tap-to-feed on mobile
- Real-time population graphs, creature inspector, and event log
- Drop food, trigger plagues, cause mutation bursts
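The spatial hash grid mentioned in the features is a standard broad-phase collision trick: only creatures in the same or adjacent cells are candidate colliders, so checks stay near O(n) instead of O(n²). A minimal sketch (the post's version is JavaScript inside the HTML file; this is an illustrative Python equivalent):

```python
from collections import defaultdict

class SpatialHash:
    """Bucket items into fixed-size grid cells keyed by (col, row)."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.grid = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, item, x, y):
        self.grid[self._key(x, y)].append(item)

    def neighbors(self, x, y):
        # Yield every item in the 3x3 block of cells around (x, y);
        # anything outside this block is too far away to collide.
        cx, cy = self._key(x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                yield from self.grid.get((cx + dx, cy + dy), ())
```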
It amazes me to a degree; I'm going to keep digging into the abyss of the vibes :)
Show HN: Horizons – OSS agent execution engine
I'm Josh, founder of Synth. We've been working on coding-agent optimization with methods like GEPA and MIPRO (the latter of which I helped to originally develop), agent evaluation via methods like RLMs, and large-scale deployment for training and inference. We've also worked on patterns for memory, processing live context, and managing agent actions, combining it all in a single stack called Horizons. With the release of OpenAI's Frontier and the consumer excitement around OpenClaw, we think the timing is right to release a v0.
It integrates with our sdk for evaluation and optimization but also comes batteries-included with self-hosted implementations. We think Horizons will make building agent-based products a lot easier and help builders focus on their proprietary data, context, and algorithms.
Some notes:
- you can configure claude code, codex, opencode to run in the engine. on-demand or on a cron
- we're striving to make it simple to integrate with existing backends via a 2-way event driven interface, but I'm 99.9% sure it'll change as there are a ton of unknown unknowns
- support for mcp, and we are building with authentication (rbac) in mind, although it's a long journey
- all self-hostable via docker
A very simplistic way to think about it - an OSS take on Frontier, or maybe OpenClaw for prod
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory
I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system).
It compiles to a single ~27MB binary — no Node.js, Docker, or Python required.
Key features:
- Persistent memory via markdown files (MEMORY, HEARTBEAT, SOUL) — compatible with OpenClaw's format
- Full-text search (SQLite FTS5) + semantic search (local embeddings, no API key needed)
- Autonomous heartbeat runner that checks tasks on a configurable interval
- CLI + web interface + desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, etc.
- Apache 2.0
Install: `cargo install localgpt`
I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.
GitHub: https://github.com/localgpt-app/localgpt
Website: https://localgpt.app
Would love feedback on the architecture or feature ideas.
Show HN: Physical swipe typing for your computer
Made a faster way to type with one finger (STT aside). It uses a DTW algorithm to compute and compare swipe paths. The engine is written in Rust and compiled to WASM, with FFI bindings for Mac.
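Dynamic time warping (DTW) scores how similar two traced paths are even when they differ in speed and sampling. A minimal O(n·m) sketch of the idea (the post's engine is Rust compiled to WASM; this Python version is just illustrative):

```python
# DTW between two 2-D swipe paths: find the cheapest monotonic alignment
# of points, so the same gesture traced slowly or quickly still matches.

def dtw(path_a, path_b):
    """path_a, path_b: lists of (x, y) points. Returns warped distance."""
    n, m = len(path_a), len(path_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        ax, ay = path_a[i - 1]
        for j in range(1, m + 1):
            bx, by = path_b[j - 1]
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5  # euclidean step cost
            cost[i][j] = d + min(cost[i - 1][j],       # skip a point in a
                                 cost[i][j - 1],       # skip a point in b
                                 cost[i - 1][j - 1])   # align both points
    return cost[n][m]
```

A swipe keyboard would compare the user's trace against an ideal key-to-key path for each candidate word and pick the lowest-distance match.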
Show HN: It took 4 years to sell my startup. I wrote a book about it
Show HN: Frop – AirDrop alternative for any device (no app required)
Show HN: WhatsApp Chat Viewer – exported chats as HTML
I built this to make it easier to review exported WhatsApp conversations. It generates an HTML page with embedded images, videos, and audio players. It can also transcribe audio messages using OpenAI's API and correct them using an LLM with conversation context for better accuracy.
Show HN: Smooth CLI – Token-efficient browser for AI agents
Hi HN! Smooth CLI (https://www.smooth.sh) is a browser that agents like Claude Code can use to navigate the web reliably, quickly, and affordably. It lets agents specify tasks using natural language, hiding UI complexity, and allowing them to focus on higher-level intents to carry out complex web tasks. It can also use your IP address while running browsers in the cloud, which helps a lot with roadblocks like captchas (https://docs.smooth.sh/features/use-my-ip).
Here’s a demo: https://www.youtube.com/watch?v=62jthcU705k Docs start at https://docs.smooth.sh.
Agents like Claude Code are amazing but mostly confined to the CLI, while a ton of valuable work needs a browser. This is a fundamental limitation on what these agents can do.
So far, attempts to add browsers to these agents (Claude’s built-in --chrome, Playwright MCP, agent-browser, etc.) all have interfaces that are unnatural for browsing. They expose hundreds of tools - e.g. click, type, select, etc - and the action space is too complex. (For an example, see the low-level details listed at https://github.com/vercel-labs/agent-browser). Also, they don’t handle the billion edge cases of the internet like iframes nested in iframes nested in shadow-doms and so on. The internet is super messy! Tools that rely on the accessibility tree, in particular, unfortunately do not work for a lot of websites.
We believe that these tools are at the wrong level of abstraction: they make the agent focus on UI details instead of the task to be accomplished.
Using a giant general-purpose model like Opus to click on buttons and fill out forms ends up being slow and expensive. The context window gets bogged down with details like clicks and keystrokes, and the model has to figure out how to do browser navigation each time. A smaller model in a system specifically designed for browsing can actually do this much better and at a fraction of the cost and latency.
Security matters too - probably more than people realize. When you run an agent on the web, you should treat it like an untrusted actor. It should access the web using a sandboxed machine and have minimal permissions by default. Virtual browsers are the perfect environment for that. There’s a good write up by Paul Kinlan that explains this very well (see https://aifoc.us/the-browser-is-the-sandbox and https://news.ycombinator.com/item?id=46762150). Browsers were built to interact with untrusted software safely. They’re an isolation boundary that already works.
Smooth CLI is a browser designed for agents based on what they’re good at. We expose a higher-level interface to let the agent think in terms of goals and tasks, not low-level details.
For example, instead of this:
click(x=342, y=128)
type("search query")
click(x=401, y=130)
scroll(down=500)
click(x=220, y=340)
...50 more steps
Your agent just says: Search for flights from NYC to LA and find the cheapest option
Agents like Claude Code can use the Smooth CLI to extract hard-to-reach data, fill in forms, download files, interact with dynamic content, handle authentication, vibe-test apps, and a lot more.

Smooth enables agents to launch as many browsers and tasks as they want, autonomously and on-demand. If the agent is carrying out work on someone’s behalf, the agent’s browser presents itself to the web as a device on the user’s network. The need for this feature may diminish over time, but for now it’s a necessary primitive. To support this, Smooth offers a “self” proxy that creates a secure tunnel and routes all browser traffic through your machine’s IP address (https://docs.smooth.sh/features/use-my-ip). This is one of our favorite features because it makes the agent look like it’s running on your machine, while keeping all the benefits of running in the cloud.
We also take away as much security responsibility from the agent as possible. The agent should not be aware of authentication details or be responsible for handling malicious behavior such as prompt injections. While some security responsibility will always remain with the agent, the browser should minimize this burden as much as possible.
We’re biased of course, but in our tests, running Claude with Smooth CLI has been 20x faster and 5x cheaper than Claude Code with the --chrome flag (https://www.smooth.sh/images/comparison.gif). Happy to explain further how we’ve tested this and to answer any questions about it!
Instructions to install: https://docs.smooth.sh/cli. Plans and pricing: https://docs.smooth.sh/pricing.
It’s free to try, and we'd love to get feedback/ideas if you give it a go :)
We’d love to hear what you think, especially if you’ve tried using browsers with AI agents. Happy to answer questions, dig into tradeoffs, or explain any part of the design and implementation!
Show HN: Poisson – Chrome extension that buries your browsing in decoy traffic
I built a Chrome extension that generates noise traffic to dilute your browsing profile. Instead of trying to hide what you do online (increasingly difficult), it buries your real activity in a flood of decoy searches, page visits, and ad clicks across dozens of site categories.
The core idea is signal dilution — the same principle behind chaff in radar countermeasures and differential privacy in data science. If you visit 50 pages today and Poisson visits 500 more on your behalf, anyone analyzing your traffic (ISP, data broker, ad-tech) sees noise, not signal.
How it works:
- Uses a Poisson process for scheduling, so timing looks like natural human browsing rather than mechanical intervals
- Opens background tabs (never steals focus), injects a content script that scrolls, hovers, and clicks links to look realistic
- Batches tasks within Chrome's 1-minute alarm minimum, dispatching at calculated Poisson offsets
- Four intensity levels: ~18/hr to ~300/hr
- Configurable search engines, task mix (search/browse/ad-click ratio), and site categories
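The Poisson-process scheduling in the first bullet boils down to drawing inter-event gaps from an exponential distribution, which produces the bursty, irregular timing of natural browsing rather than metronomic intervals. A minimal sketch (the extension itself is JavaScript on Chrome alarms; this is an illustrative Python version):

```python
import random

def poisson_offsets(rate_per_hour, horizon_hours, rng=random):
    """Yield event times (in hours) of a Poisson process within the horizon.
    Gaps between events are exponential with mean 1/rate_per_hour."""
    t = 0.0
    while True:
        t += rng.expovariate(rate_per_hour)  # next inter-event gap
        if t >= horizon_hours:
            return
        yield t
```

Batching these offsets into one-minute windows (Chrome's alarm minimum) and dispatching each task at its computed offset preserves the Poisson timing despite the coarse alarm granularity.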
What it explicitly does NOT do:
- No data collection, telemetry, or analytics
- No external server communication
- No access to your cookies, history, or real tabs
- No accounts or personal information required
Every URL it will ever visit is hardcoded in the source. Every action is logged in a live feed you can inspect. The whole thing is ~2,500 lines of commented JS.
I know this approach has real limitations — it doesn't defeat browser fingerprinting, your ISP can still see the noise domains, and a sufficiently motivated adversary could potentially distinguish real traffic from generated traffic through timing analysis or behavioral patterns. This is one layer in a defense-in-depth approach, not a complete solution.
Similar prior art: TrackMeNot (randomized search queries since 2006) and AdNauseam (clicks all ads to pollute profiles). Both from NYU researchers. Google banned AdNauseam from the Chrome Web Store, which says something.
Code: https://github.com/Daring-Designs/poisson-extension
Not on the Chrome Web Store — you load it unpacked. MIT licensed.
Show HN: WrapClaw – a managed SaaS wrapper around Open Claw
Hi HN
I built WrapClaw, a SaaS wrapper around Open Claw.
Open Claw is a developer-first tool that gives you a dedicated terminal to run tasks and AI workflows (including WhatsApp integrations). It’s powerful, but running it as a hosted, multi-user product requires a lot of infra work.
WrapClaw focuses on that missing layer.
What WrapClaw adds:
A dedicated terminal workspace per user
Isolated Docker containers for each workspace
Ability to scale CPU and RAM per user (e.g. 2GB → 4GB)
A no-code UI on top of Open Claw
Managed infra so users don’t deal with Docker or servers
The goal is to make Open Claw usable as a proper SaaS while keeping the developer flexibility.
This is early, and I’d love feedback on:
What infra controls are actually useful
Whether no-code on top of terminal tools makes sense
Pricing expectations for managed compute
Link: https://wrapclaw.com
Happy to answer questions.
Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox
Example repo: https://github.com/valdanylchuk/breezydemo
The underlying ESP-IDF component: https://github.com/valdanylchuk/breezybox
It is something like a Raspberry Pi, but without the overhead of a full server-grade OS.
It captures a lot of the old-school DOS-era coding experience. I created a custom fast text-mode driver and plan to add VGA-like graphics next. ANSI text demos run smoothly, as you can see in the demo video featured in the README.
App installs also work smoothly. The first time it installed 6 apps from my git repo with one command, it felt like, "OMG, I got homebrew to run on a toaster!" And best of all, it can install from any repo, no approvals or waiting: you just publish a compatible ELF file in your release.
Coverage:
Hackaday: https://hackaday.com/2026/02/06/breezybox-a-busybox-like-she...
Hackster.io: https://www.hackster.io/news/valentyn-danylchuk-s-breezybox-...
Reddit: https://www.reddit.com/r/esp32/comments/1qq503c/i_made_an_in...
Show HN: If you lose your memory, how to regain access to your computer?
Due to bike-induced concussions, I've been worried for a while about losing my memory and not being able to log back in.
I combined shamir secret sharing (hashicorp vault's implementation) with age-encryption, and packaged it using WASM for a neat in-browser offline UX.
The idea is that if something happens to me, my friends and family would help me get back access to the data that matters most to me. 5 out of 7 friends need to agree for the vault to unlock.
Try out the demo in the website, it runs entirely in your browser!
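The k-of-n idea behind Shamir secret sharing can be illustrated with a toy implementation over a prime field. This is purely educational; the project itself uses HashiCorp Vault's implementation combined with age encryption, so do not use a sketch like this for real secrets.

```python
import random

# Toy Shamir secret sharing: hide the secret as the constant term of a
# random degree-(k-1) polynomial over a prime field. Any k shares
# reconstruct it via Lagrange interpolation at x = 0; fewer reveal nothing.

PRIME = 2**127 - 1  # field modulus, larger than any secret we share

def split(secret, k, n, rng=random):
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation evaluated at x = 0 recovers f(0) = secret
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With k=5 and n=7 as in the post, any 5 of the 7 friends' shares unlock the vault, while 4 or fewer reveal nothing about the secret.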
Show HN: Fine-tuned Qwen2.5-7B on 100 films for probabilistic story graphs
Hi HN, I'm a computer systems engineering student in Mexico who switched from film school. I built CineGraphs because my filmmaker friends and I kept hitting the same wall—we'd have a vague idea for a film but no structured way to explore where it could go. Every AI writing tool we tried output generic, formulaic slop. I didn't want to build another ChatGPT wrapper, so I went a different route.
The idea is simple: you input a rough concept, and the tool generates branching narrative paths visualized as a graph. You can sculpt those branches into a structured screenplay format and export to Fountain for use in professional screenwriting software.
Most AI writing tools are trained on generic internet text, which is why they output generic results. I wanted something that understood actual cinematic storytelling—not plot summaries or Wikipedia synopses, but the actual structural DNA of films. So I spent a month curating 100 films I consider high-quality cinema. Not just popular films, but works with distinctive narrative structures: Godard's jump cuts and essay-film digressions, Kurosawa's parallel character arcs, Brakhage's non-linear visual poetry, Tarkovsky's slow-burn temporal structures. The selection was deliberately eclectic because I wanted the model to learn that "story" can mean many things.
Getting useful training data from films is harder than it sounds. I built a 1000+ line Python pipeline using Qwen3-VL to analyze each film with subtitles enabled. The pipeline extracts scene-level narrative beats, character relationships and how they evolve, thematic threads, and dialogue patterns. The tricky part was getting Qwen3-VL to understand cinematic structure rather than just summarizing plot. I had to iterate on the prompts extensively to get it to identify things like "this scene functions as a mirror to the opening" or "this character's arc inverts the protagonist's." That took weeks and I'm still not fully satisfied with it, but it's good enough to produce useful training data.
From those extractions I generated a 10K example dataset of prompt-to-branching-narrative pairs, then fine-tuned Qwen2.5-7B-Instruct with a LoRA optimized for probabilistic story branching. The LoRA handles the graph generation—exploring possible narrative directions—while the full 7B model generates the actual technical screenplay format when you export. I chose the 7B model because I wanted something that could run affordably at scale while still being capable enough for nuanced generation. The whole thing is served on a single 4090 GPU using vLLM. The frontend uses React Flow for the graph visualization. The key insight was that screenwriting is fundamentally about making choices—what if the character goes left instead of right?—but most writing tools force you into a linear document too early. The graph structure lets you explore multiple paths before committing, which matches how writers actually think in early development.
The biggest surprise was how much the film selection mattered. Early versions trained on more mainstream films produced much more formulaic outputs. Adding experimental and international cinema dramatically improved the variety and interestingness of the generations. The model seemed to learn that narrative structure is a design space, not a formula.
We've been using it ourselves to break through second-act problems—when you know where you want to end up but can't figure out how to get there. The branching format forces you to think in possibilities rather than committing too early.
You can try it at https://cinegraphs.ai/ — no signup required to test it out. You get a full project with up to 50 branches without registering, though you'll need to create an account to save it. Registered users get 3 free projects. I'd love feedback on whether the generation quality feels meaningfully different from generic AI tools, and whether the graph interface adds value or just friction.
Show HN: R3forth, a ColorForth-inspired language with a tiny VM
r3 is a high-performance, open-source programming language and environment that focuses on simplicity, efficiency, and creativity. It provides a powerful set of tools for developing a wide range of applications, from games and graphics to data visualization and automation.
Show HN: A luma dependent chroma compression algorithm (image compression)
This article presents a novel image compression algorithm that adapts the block size and luma-dependent chroma compression based on the spatial domain, leading to improved compression efficiency while maintaining image quality.
Show HN: Elysia JIT "Compiler", why it's one of the fastest JavaScript framework
Wrote a thing about what makes Elysia stand out in a performance benchmark game
Basically, there's a JIT "compiler" embedded into a framework
This approach has been used by ajv and TypeBox before for input validation, making it faster than other competitors
Elysia basically does the same, but scales that into a full backend framework
This gave Elysia an unfair advantage in the performance game, making Elysia the fastest framework on Bun runtime, but also faster than most on Node, Deno, and Cloudflare Worker as well, when using the same underlying HTTP adapter
There is an escape hatch if necessary, but for the past 3 years, there have been no critical reports about the JIT "compiler"
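The codegen trick can be illustrated with a toy validator compiler. ajv and TypeBox generate JavaScript; this sketch does the same thing in Python with a made-up two-type schema language, just to show the shape of the technique: interpret the schema once at route-definition time, emit specialized source, and reuse the compiled function on every request.

```python
def compile_validator(schema):
    """schema: {field: 'string' | 'number'} -> specialized validator.
    Instead of walking the schema per request, we generate source code
    with the checks inlined and compile it once."""
    checks = ["isinstance(data, dict)"]
    for field, kind in schema.items():
        py_type = {"string": "str", "number": "(int, float)"}[kind]
        checks.append(f"isinstance(data.get({field!r}), {py_type})")
    src = "def validate(data):\n    return " + " and ".join(checks)
    namespace = {}
    exec(src, namespace)  # codegen happens once, not per request
    return namespace["validate"]
```

The generated function is a flat chain of `isinstance` checks with no schema traversal, loops, or dict lookups of type names, which is where the speedup over interpretation comes from.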
What do you think?
Show HN: I spent 4 years building a UI design tool with only the features I use
Hello everyone!
I'm a solo developer who's been doing UI/UX work since 2007. Over the years, I watched design tools evolve from lightweight products into bloated, feature-heavy platforms. I kept finding myself using only a small fraction of the features while the rest mostly got in the way.
So a few years ago I set out to build the design tool I wanted. Vecti has what I actually need: pixel-perfect grid snapping, a performant canvas renderer, shared asset libraries, and export/presentation features. No collaborative whiteboarding. No plugin ecosystem. No enterprise features. Just the design loop.
Four years later, I can proudly show it off. Built and hosted in the EU in compliance with European privacy regulations. Free tier available (no credit card, one editor forever).
On privacy: I use some basic analytics (page views, referrers) but zero tracking inside the app itself. No session recordings, no behavior analytics, no third-party scripts beyond the essentials.
If you're a solo designer or small team who wants a tool that stays out of your way, I'd genuinely appreciate your feedback: https://vecti.com
Happy to answer questions about the tech stack, architecture decisions, why certain features didn't make the cut, or what's next.
Show HN: Freeciv 3D with hex map tiles and WebGPU renderer
FreeCivWorld is an open-source project that brings Freeciv, the free take on the classic Civilization game, to the browser, with hex map tiles and a WebGPU renderer. The project seeks to provide a free and accessible way for players to enjoy the strategic gameplay of Civilization in a web-based format.
Show HN: IsHumanCadence – Bot detection via keystroke dynamics (no CAPTCHAs)
The article discusses a project called 'isHumanCadence', which explores using typing cadence to determine whether input is human or machine-generated, covering the technical details and potential applications of the approach.
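One signal keystroke-dynamics systems commonly use is timing jitter: humans type with irregular inter-key intervals, while naive bots fire events at near-constant speed. A hedged sketch of that single feature (the thresholds and the function are made up for illustration; a real system like the one described would combine many such features):

```python
import statistics

def looks_scripted(key_times_ms, min_cv=0.15):
    """key_times_ms: timestamps of keydown events, in ms.
    Returns True when the relative jitter (coefficient of variation)
    of the inter-key gaps is suspiciously low. Threshold is illustrative."""
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    if len(gaps) < 2:
        return False  # not enough evidence to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return True   # zero-delay typing is not human
    cv = statistics.pstdev(gaps) / mean  # std dev relative to mean gap
    return cv < min_cv
```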
Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust
I'm a software engineer who keeps getting pulled into DevOps no matter how hard I try to escape it. I recently moved into a Lead DevOps Engineer role writing tooling to automate a lot of the pain away. On my own time outside of work, I built Artifact Keeper — a self-hosted artifact registry that supports 45+ package formats. Security scanning, SSO, replication, WASM plugins — it's all in the MIT-licensed release. No enterprise tier. No feature gates. No surprise invoices.
Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
Why I built it:
Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s were some Compaq Pentium IIs my dad brought home after his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is software that's open for everyone to see, use, and that actually runs well on whatever you've got.
Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.
The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
The AI story (I'm going to be honest about this):
I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.
AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.
Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.
Try it:
git clone https://github.com/artifact-keeper/artifact-keeper.git
cd artifact-keeper
docker compose up -d
Then visit http://localhost:30080

Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/
I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.
https://github.com/artifact-keeper