Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)
Hi HN,
AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.
For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.
Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I
Rowboat has two parts:
(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.
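As a sketch of what the "plain Markdown with backlinks" format from (1) can look like — the file layout and field names here are my guesses for illustration, not necessarily Rowboat's actual schema — a note per person, with YAML frontmatter and wiki-links back to the conversations each item came from:

```python
def render_note(person, commitments):
    """Render an Obsidian-style person note: YAML frontmatter plus
    [[wiki-link]] backlinks to the conversation each item came from."""
    lines = ["---", "type: person", f"name: {person}", "---", "", "## Commitments"]
    for text, source in commitments:
        lines.append(f"- {text} (from [[{source}]])")
    return "\n".join(lines) + "\n"

note = render_note("John", [("Ship the Q3 deck by Friday", "2024-06-03 standup")])
print(note)
```

Because the links are plain text, a later conversation that changes the deadline only needs to rewrite the relevant bullet and add a new backlink.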
Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.
Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.
Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.
We’d love to hear your thoughts and welcome contributions!
Show HN: JavaScript-first, open-source WYSIWYG DOCX editor
We needed a JS-first WYSIWYG DOCX editor and couldn't find a solid OSS option; most alternatives were either commercial or abandoned.
As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.
Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.
Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB
Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks.
Here's a demo video: https://youtu.be/cyEgW7wElcs
Why is this useful? (1) You don't have to figure out which webhooks you need or write listeners for each one. The library handles all of that. This follows the approach of libraries like dj-stripe in the Django world (https://dj-stripe.dev/). (2) Stripe's API has a 100 rpm rate limit. If you're checking subscription status frequently or building internal tools, you'll hit it. Querying your own Postgres doesn't have this problem. (3) You can give an AI agent read access to the stripe.* schema to debug payment issues—failed charges, refunds, whatever—without handing over Stripe dashboard access. (4) You can join Stripe data with your own tables for custom analytics, LTV calculations, etc.
It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.
It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:
billing.subscriptions.get({ userId });
billing.credits.consume({ userId, key: "api_calls", amount: 1 });
billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover most subscription payment use cases.
Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.
You define your plan in TypeScript:
{
  name: "Pro",
  description: "Cursor Pro plan",
  price: [{ amount: 2000, currency: "usd", interval: "month" }],
  features: {
    api_completion: {
      pricePerCredit: 1, // 1 cent per unit
      trackUsage: true, // Enable usage billing
      credits: { allocation: 500 },
      displayName: "API Completions",
    },
    tab_completion: {
      credits: { allocation: 2000 },
      displayName: "Tab Completions",
    },
  },
}
Then on the CLI, you run the `init` command, which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely. Consume code would look like this:
await billing.credits.consume({
  userId: user.id,
  key: "api_completion",
  amount: 1,
});
And if they want to allow manual top-ups by the user:

await billing.credits.topUp({
  userId: user.id,
  key: "api_completion",
  amount: 500, // buy 500 credits, charges $5.00
});
Similarly, we have APIs for wallets and usage. This would be a lot of work to implement yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.
This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.
I vibe-coded a little toy app for testing: https://snw-test.vercel.app. There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.
Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx
Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.
Show HN: Distr 2.0 – A year of learning how to ship to customer environments
A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.
It turned out things get messy when your software is running in places you can't simply SSH into.
Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.
We waited a year before making our first breaking change, which led to a major SemVer bump, but it was ultimately necessary. We needed to completely rewrite how we manage customer organizations. In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application who wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations. Vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.
Other notable features we’ve implemented since our first launch:
- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.
- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.
- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.
- Secret Management, so database passwords don’t show up in configuration steps or logs.
Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.
For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.
Distr is fully open source and self-hostable: https://github.com/distr-sh/distr
Docs: https://distr.sh/docs
We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.
Show HN: Sol LeWitt-style instruction-based drawings in the browser
Sol LeWitt was a conceptual artist who never touched his own walls.
He wrote instructions and other people executed them: the original prompt engineer!
I bookmarked a project called "Solving Sol" seven years ago and made a repo in 2018. Committed a README. Never pushed anything else.
Fast forward to 2026, I finally built it.
https://intervolz.com/sollewitt/
Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure
Hey HN,
I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles.
Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser.
What's in there:
- 12 courses across 11 kingdoms (PHP basics to deployment)
- 100+ interactive exercises with real-time code validation using AST analysis
- AI companion (Pip the Owlox) that uses Socratic method – never gives direct answers
- Full gamification: XP, levels, streaks, achievements, guilds, leaderboard
- Multilingual (EN/FR/NL)
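The AST-based validation idea is easy to sketch. ArtisanForge validates PHP, but the same structural check looks like this in Python (a toy rule for illustration, not the platform's actual validator): confirming a submission really uses a loop, rather than string-matching for the keyword.

```python
import ast

def uses_for_loop(source, func_name):
    """Check structurally (via the AST, not string matching) that the
    named function in a submission contains a for loop."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            return any(isinstance(n, ast.For) for n in ast.walk(node))
    return False

good = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
bad = "def total(xs):\n    return sum(xs)\n"
print(uses_for_loop(good, "total"), uses_for_loop(bad, "total"))  # True False
```

Unlike regexes, this can't be fooled by a `for` inside a comment or string.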
The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content.
Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators.
Show HN: Multimodal perception system for real-time conversation
I work on real-time voice/video AI at Tavus and for the past few years, I’ve mostly focused on how machines respond in a conversation.
One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts, throwing away a ton of signals that could be used downstream. Some existing emotion-understanding models try to analyze and classify everything into small sets of arbitrary boxes, but they aren’t fast or rich enough to do this convincingly in real time.
So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals, so that the agent can "see" and "hear" you, and you can interface with it via an OpenAI-compatible tool schema in a live conversation.
It outputs short natural language descriptions of what’s going on in the interaction - things like uncertainty building, sarcasm, disengagement, or even a shift in attention within a single turn of a conversation.
Some quick specs:
- Runs in real-time per conversation
- Processing at ~15fps video + overlapping audio alongside the conversation
- Handles nuanced emotions, whispers vs shouts
- Trained on synthetic + internal convo data
Happy to answer questions or go deeper on architecture/tradeoffs
More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...
Show HN: I built a macOS tool for network engineers – it's called NetViews
Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.
I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.
About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.
NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.
As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.
Discovery & Scanning:
* ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
* Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.
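The ARP-table-first idea is easy to sketch. NetViews is written in Swift; this is a hedged Python illustration of the general approach, not its implementation: hosts already in the OS ARP cache can be listed instantly, and only the remaining addresses need ICMP probing.

```python
import re

# matches lines like: ? (192.168.1.1) at a4:2b:8c:00:11:22 on en0 ifscope [ethernet]
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]+)", re.IGNORECASE)

def parse_arp_table(arp_output):
    """Pull (ip, mac) pairs out of `arp -a` output: these hosts are known
    without sending a single probe packet."""
    hosts = []
    for line in arp_output.splitlines():
        m = ARP_LINE.search(line)
        if m:  # "(incomplete)" entries simply don't match the MAC pattern
            hosts.append((m.group("ip"), m.group("mac").lower()))
    return hosts

sample = """? (192.168.1.1) at a4:2b:8c:00:11:22 on en0 ifscope [ethernet]
? (192.168.1.42) at 3C:22:FB:AA:BB:CC on en0 ifscope [ethernet]
? (192.168.1.99) at (incomplete) on en0 ifscope [ethernet]"""
known = parse_arp_table(sample)
print(known)
```

Everything not found this way would then fall back to an ICMP sweep.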
Wireless Visibility:
* Detailed Wi-Fi connection performance and signal data.
* Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:
* Connection and ping timelines over 1, 2, 4, or 8 hours.
* Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):
* Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
* mDNS decoded into human-readable output (this took months of deep dives).
Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.
I’d love feedback from anyone who builds or uses network diagnostic tools: - Does this fill a gap you’ve personally hit on macOS? - Are there better approaches to scan speed or event visualization that you’ve used? - What diagnostics do you still find yourself dropping to the CLI for?
Details and screenshots: https://netviews.app There’s a free trial and paid licenses; I’m funding development directly rather than ads or subscriptions. Licenses include free upgrades.
Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.
Show HN: I made paperboat.website, a platform for friends and creativity
Show HN: Clawe – open-source Trello for agent teams
We recently started to use agents to update some documentation across our codebase on a weekly basis, and everything quickly turned into cron jobs, logs, and terminal output.
It worked, but it was hard to tell what the agents were doing, why something failed, or whether a workflow was actually progressing.
We thought it would be more interesting to treat agents as long-lived workers with state and responsibilities and explicit handoffs. Something you can actually see and reason about, instead of just tailing logs.
So we built Clawe, a small coordination layer on top of OpenClaw that lets agent workflows run, pause, retry, and hand control back to a human at specific points.
This started as an experiment in how agent systems might feel to operate, but we're starting to see real potential for it, especially for content review and maintenance workflows in marketing. Curious what abstractions make sense, what feels unnecessary, and what breaks first.
Repo: https://github.com/getclawe/clawe
Show HN: Goxe – 19k logs/s on an i5
Show HN: LLMs are getting pretty good at Active Directory exploitation
One thing I will say is that at Vulnetic we will basically jumble different misconfigurations into a network and the agent always seems to find a way to exploit them. We have tried making them esoteric, and we are even now using EDR and tools like Wazuh to evaluate how our agent evades detection. These models are improving at hacking fast.
Show HN: Ktop – a themed terminal system monitor with charts and OOM tracking
Built this because I wanted nvtop + btop in one view while tuning local hybrid LLM inference. Supports themes, charts, and OOM kill tracking. Written in Python.
Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks
I've done this same println debugging thing so many times, along with some sed/awk stuff to figure out which call was causing the issue. Now it's a small Go package.
With some `runtime.Callers` I can usually find the spot by just swapping the existing Mutex or RWMutex for this one.
Sometimes I switch the

mu.Lock()
defer mu.Unlock()

pattern for LockFunc/RLockFunc to get more detail:

defer mu.LockFunc()()

I almost always initialize it with `deadlog.New(deadlog.WithTrace(1))` and that's plenty. Not the most polished library, but it's not supposed to land in any commit; it's just a temporary debugging aid. I find it useful.
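The trick is language-agnostic, so here is the same idea sketched in Python (deadlog itself is Go and uses runtime.Callers): a lock wrapper that records the current holder's call site, so a stuck acquire can report who is blocking it instead of just hanging.

```python
import threading
import traceback

class DebugLock:
    """Lock wrapper that remembers the call site of the current holder,
    so a stuck acquire can report who is blocking it."""
    def __init__(self, timeout=5.0):
        self._lock = threading.Lock()
        self._timeout = timeout
        self.holder = None  # formatted stack of whoever holds the lock

    def acquire(self):
        if not self._lock.acquire(timeout=self._timeout):
            # a long wait is the deadlock smell; report the holder's call site
            raise RuntimeError(f"possible deadlock, lock held by:\n{self.holder}")
        self.holder = "".join(traceback.format_stack(limit=2))

    def release(self):
        self.holder = None
        self._lock.release()

mu = DebugLock()
mu.acquire()
print("held at:", mu.holder.strip().splitlines()[0])
mu.release()
```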
Show HN: NOOR – A Sovereign AI Core Built from the Heart of Suffering in Yemen
I am a developer coding under siege. I successfully encrypted the core logic of NOOR locally. My goal is to secure $400 for a laptop to build the full 7th Node. Check my proof in the link
Show HN: Showboat and Rodney, so agents can demo what they've built
Show HN: HN Companion – web app that enhances the experience of reading HN
HN is all about the rich discussions. We wanted to take the HN experience one step further - to bring the familiar keyboard-first navigation, find interesting viewpoints in the threads and get a gist of long threads so that we can decide which rabbit holes to explore. So we built HN Companion a year ago, and have been refining it ever since.
Try it at https://app.hncompanion.com, or install the extension for Firefox / Chrome: [0].
Most AI summarization strips the voices from conversations by flattening threads into a wall of text. This kills the joy of reading HN discussions. Instead, HN Companion works differently - it understands the thread hierarchy, the voting patterns and contrasting viewpoints - everything that makes HN interesting. Think of it like clustering related discussions across multiple hierarchies into a group and surfacing the comments that represent each cluster. It keeps the verbatim text with backlinks so that you never lose context and can continue the conversation from that point. Here is how the summarization works under the hood [1].
We first built this as an open source browser extension. But soon we learned that people hesitate to install it. So we built the same experience as a web app with all the features. This helped people see how it works, and use it on mobile too (in the browser or as PWA). This is now a playground to try new features before taking them to the browser extension.
We did a Show HN a year ago [2] and we have added these features based on user feedback:
* cached summaries - summaries are generated and cached on our servers. This improved the speed significantly. You still have the option to use your own API key or use local models through Ollama.
* our system prompt is available in the Settings page of the extension. You can customize it as you wish.
* sort the posts in the feed pages (/home, /show etc.) based on points, comments, time or the default sorting order.
* We tried fine tuning an open weights model to summarize, but learned that with a good system prompt and user prompt, the frontier models deliver results of similar quality. So we didn’t use the fine-tuned model, but you can run them locally.
The browser extension does not track any usage or analytics. The code is open source[3].
We want to continue to improve HN Companion, specifically add features like following an author, notes about an author, draft posts etc.
See it in action for a post here https://app.hncompanion.com/item?id=46937696
We would love to get your feedback on what would make this more useful for your HN reading.
[0] https://hncompanion.com/#download
[1] https://hncompanion.com/how-it-works
[2] https://news.ycombinator.com/item?id=42532374
[3] https://github.com/hncompanion/browser-extension
Show HN: Elysia JIT "Compiler", and why it's one of the fastest JavaScript frameworks
Wrote a thing about what makes Elysia stand out in a performance benchmark game
Basically, there's a JIT "compiler" embedded into a framework
This approach has been used by ajv and TypeBox before for input validation, making them faster than their competitors
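For anyone unfamiliar with the ajv/TypeBox trick, here's a minimal Python sketch of the idea (an illustration, not Elysia's actual implementation): generate specialized validator source code from the schema once, compile it, and run plain compiled code on the hot path instead of interpreting the schema on every request.

```python
def compile_validator(schema):
    """Codegen-style validation: emit validator source specialized to the
    schema, compile it once, then call the compiled function per request."""
    checks = [
        f"    if not isinstance(d.get({key!r}), {typ.__name__}): return False"
        for key, typ in schema.items()
    ]
    src = "def validate(d):\n" + "\n".join(checks) + "\n    return True\n"
    namespace = {}
    exec(src, namespace)  # one-time compile, at "route registration" time
    return namespace["validate"]

validate = compile_validator({"name": str, "age": int})
print(validate({"name": "ada", "age": 36}), validate({"name": "ada", "age": "36"}))
```

The per-request cost is a flat run of isinstance checks, with no schema walking or dispatch.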
Elysia basically does the same, but scales that into a full backend framework
This gave Elysia an unfair advantage in the performance game, making Elysia the fastest framework on Bun runtime, but also faster than most on Node, Deno, and Cloudflare Worker as well, when using the same underlying HTTP adapter
There is an escape hatch if necessary, but for the past 3 years, there have been no critical reports about the JIT "compiler"
What do you think?
Show HN: Total Recall – write-gated memory for Claude Code
Built this because I got tired of re-teaching Claude Code the same context every session. Preferences, decisions, “we already tried X,” “don’t touch this file,” etc. After a few days it starts to feel like onboarding the same coworker every morning.
Most “agent memory” tools auto-save everything. That feels good briefly, then memory turns into a junk drawer and retrieval gets noisy. Total Recall takes the opposite approach: a write gate. Before anything gets promoted, it asks one question: “Will this change future behavior?” If not, it doesn’t get saved.
How it works:
Daily log first (raw notes)
Promote durable stuff into registers (decisions, preferences, people, projects)
Small working memory loads every session (kept intentionally lean)
Hooks fail open. SessionStart can surface open loops + recent context. PreCompact writes to disk (not model-visible stdout)
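Mechanically, the write gate is just a filter between the daily log and the registers. In Total Recall the "will this change future behavior?" question is answered by the model; in this Python sketch a hypothetical category taxonomy stands in for that judgment call:

```python
DURABLE_KINDS = {"decision", "preference", "constraint"}  # hypothetical taxonomy

def write_gate(entry):
    """Promote an entry only if it would change future behavior.
    (A category flag stands in for the model's judgment here.)"""
    return entry["kind"] in DURABLE_KINDS

daily_log = [
    {"kind": "preference", "text": "always use tabs in this repo"},
    {"kind": "status", "text": "ran the test suite, all green"},
    {"kind": "decision", "text": "we migrate to Postgres, not MySQL"},
]
registers = [e for e in daily_log if write_gate(e)]
print(len(registers))  # the status note stays in the daily log
```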
The holy shit moment is simple: tell Claude one important preference or decision once, come back tomorrow, and it behaves correctly without you repeating yourself.
Would love feedback from heavy Claude Code users:
Does the write gate feel right or too strict?
Does this actually reduce repetition over multiple days?
Any workflow/privacy footguns I’m missing?
Show HN: Open-Source SDK for AI Knowledge Work
GitHub: https://github.com/ClioAI/kw-sdk
Most AI agent frameworks target code. Write code, run tests, fix errors, repeat. That works because code has a natural verification signal. It works or it doesn't.
This SDK treats knowledge work like an engineering problem:
Task → Brief → Rubric (hidden from executor) → Work → Verify → Fail? → Retry → Pass → Submit
The orchestrator coordinates subagents, web search, code execution, and file I/O, then checks its own work against criteria it can't game (the rubric is generated in a separate call and the executor never sees it directly).
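The pipeline above can be sketched in a few lines of Python. The stand-ins here are toys; in the SDK, brief, rubric, work, and verify would each be model calls:

```python
def run_task(task, brief_fn, rubric_fn, work_fn, verify_fn, max_retries=3):
    """Brief -> hidden rubric -> work -> verify -> retry loop. The rubric
    comes from a separate call and is never shown to the executor."""
    brief = brief_fn(task)
    rubric = rubric_fn(task)  # executor never sees this
    feedback = None
    for _ in range(max_retries):
        draft = work_fn(brief, feedback)  # only the brief (+ feedback) goes in
        ok, feedback = verify_fn(draft, rubric)
        if ok:
            return draft
    raise RuntimeError("failed verification after retries")

# toy stand-ins: the rubric requires the answer to mention "tradeoffs"
result = run_task(
    "compare A vs B",
    brief_fn=lambda t: f"Write a short comparison: {t}",
    rubric_fn=lambda t: ["mentions tradeoffs"],
    work_fn=lambda b, fb: b + (" tradeoffs included" if fb else ""),
    verify_fn=lambda d, r: ("tradeoffs" in d, "add tradeoffs"),
)
print("tradeoffs" in result)
```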
We originally built this as a harness for RL training on knowledge tasks. The rubric is the reward function. If you're training models on knowledge work, the brief→rubric→execute→verify loop gives you a structured reward signal for tasks that normally don't have one.
What makes knowledge work different from code, apart from the feedback loop? I believe there is some functionality missing from today's agents when it comes to knowledge work. I tried to include that in this release. Example:
Explore mode: mapping the solution space, identifying set-level gaps, and giving options.
Most agents optimize for a single answer, and end up with a median one. For strategy, design, and creative problems, you want to see the options, the tradeoffs, and what you could do. Explore mode generates N distinct approaches, each with explicit assumptions and counterfactuals ("this works if X, breaks if Y"). The output ends with set-level gaps, i.e. what angles the entire set missed. The gaps are often more valuable than the takes. I think this is what many of us do on a daily basis, but no agent directly captures it today. See https://github.com/ClioAI/kw-sdk/blob/main/examples/explore_... and the output for a sense of how this is different.
Checkpointing: With many AI agents, and especially multi-agent systems, I can see where a run went wrong but can't rerun inference from that same stage. (Or you may want multiple explorations once an agent has done some tasks, like search, and is now looking at ideas.) I used this a lot for rollouts, and I think it's a great feature to rerun, or fork from, a specific checkpoint.
A note on Verification loop: The verify step is where the real leverage is. A model that can accurately assess its own work against a rubric is more valuable than one that generates slightly better first drafts. The rubric makes quality legible — to the agent, to the human, and potentially to a training signal.
Some things I like about this:
- You can pass a remote execution environment (including your browser as a sandbox) and it will work. It can be Docker, E2B, your local env, anything; the model will execute commands in your context and iterate based on the feedback loop. Code execution is a protocol here.
- Tool calling: I realized you don't need complex functions. Models are good at writing terminal code and can iterate on feedback, so you can either pass functions in context for the model to execute, or pass docs and let the model write the code (same as Anthropic's programmatic tool calling). Details: https://github.com/ClioAI/kw-sdk/blob/main/TOOL_CALLING_GUID...
Lastly, some guides:
- SDK guide: https://github.com/ClioAI/kw-sdk/blob/main/SDK_GUIDE.md
- Extensible. See the bizarro example where I add a new mode: https://github.com/ClioAI/kw-sdk/blob/main/examples/custom_m...
- Working with files: https://github.com/ClioAI/kw-sdk/blob/main/examples/with_fil...
- This is simple but I love the CSV example: https://github.com/ClioAI/kw-sdk/blob/main/examples/csv_rese...
- Remote execution: https://github.com/ClioAI/kw-sdk/blob/main/examples/with_cus...
And a lot more. This was completely refactored by Opus; without that rework, it probably would have taken a lot more time to release.
MIT licensed. Would love your feedback.
Show HN: Creature – Desktop Client for Building and Sharing MCP Apps Within Orgs
Show HN: Deidentify data before LLM with Go
A Go library for detecting and removing personally identifiable information (PII) from text and structured data.
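The library itself is Go; as a hedged illustration of the general technique, here's a minimal Python version with two toy patterns (real detectors cover many more PII types, formats, and locales):

```python
import re

# toy patterns only; a real detector needs far broader coverage
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def deidentify(text):
    """Replace detected PII with typed placeholders before the text
    is sent to an LLM."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

print(deidentify("Mail jane.doe@example.com or call 555-123-4567."))
```

Typed placeholders (rather than blanket deletion) keep the redacted text usable as LLM input.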
Show HN: Algorithmically finding the longest line of sight on Earth
We're Tom and Ryan and we teamed up to build an algorithm with Rust and SIMD to exhaustively search for the longest line of sight on the planet. We can confirm that a previously speculated view between Pik Dankova in Kyrgyzstan and the Hindu Kush in China is indeed the longest, at 530km.
We go into all the details at https://alltheviews.world
And there's an interactive map with over 1 billion longest lines, covering the whole world at https://map.alltheviews.world Just click on any point and it'll load its longest line of sight.
Some of you may remember Tom's post[1] from a few months ago about how to efficiently pack visibility tiles for computing the entire planet. Well now it's done. The compute run itself took 100s of AMD Turin cores, 100s of GBs of RAM, a few TBs of disk and 2 days of constant runtime on multiple machines.
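The core visibility test is simple to sketch. This Python toy assumes a spherical Earth and ignores atmospheric refraction (the real pipeline, covered in the write-ups linked below, is far more involved): each terrain sample along the path is dropped by d²/(2R) for curvature, then compared against the slope of the sight line to the target.

```python
EARTH_R = 6_371_000  # mean Earth radius in metres; refraction is ignored here

def visible(profile, observer_height=2.0):
    """Is the far end of an elevation profile [(distance_m, elevation_m), ...]
    visible from the near end? Each sample is lowered by d^2/(2R) to account
    for Earth curvature, then checked against the sight line's slope."""
    eye = profile[0][1] + observer_height
    dist_n, elev_n = profile[-1]
    drop = lambda d: d * d / (2 * EARTH_R)
    target_slope = (elev_n - drop(dist_n) - eye) / dist_n
    for d, e in profile[1:-1]:
        if (e - drop(d) - eye) / d > target_slope:
            return False  # terrain pokes above the sight line
    return True

ridge = [(0, 1000), (50_000, 2000), (100_000, 1000)]  # 2 km ridge halfway
clear = [(0, 3000), (50_000, 500), (100_000, 2500)]   # high peaks, low valley
print(visible(ridge), visible(clear))  # False True
```

Doing this exhaustively for every pair of points on Earth is the hard part, hence the SIMD work.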
If you are interested in the technical details, Ryan and I have written extensively about the algorithm and pipeline that got us here:
* Tom's blog post: https://tombh.co.uk/longest-line-of-sight
* Ryan's technical breakdown: https://ryan.berge.rs/posts/total-viewshed-algorithm
This was a labor of love and we hope it inspires you, both technically and in nature, to get out and see some of these vast views for yourselves!
1. https://news.ycombinator.com/item?id=45485227
Show HN: Kanban-md – File-based CLI Kanban built for local agents collaboration
I built kanban-md because I wanted a simple local task tracker that works well for the agent loop: drop tasks in, run multiple agents in parallel, avoid collisions, and observe progress easily.
Tasks are just Markdown files (with YAML frontmatter) in a `kanban/` directory next to your code — no server, no DB, no API tokens. Simple, transparent, future-proof.
What makes it useful for multi-agent workflows:
- *Atomic `pick --claim`* so two agents don’t grab the same task.
- *Token-efficient `--compact` output* (one-line-per-task) for cheap polling in agent loops.
- *Skills included* -- just run `kanban-md skill install --global`; there is a skill for CLI use, and a skill for the development loop using the CLI (it might need some additional work to be more general, but it works quite well)
- *Live TUI (`kanban-md-tui`)* for control ~~and dopamine hits~~.
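One standard way to get atomic claims from plain files, sketched in Python (kanban-md is Go, and its actual claim mechanism may differ): O_CREAT|O_EXCL guarantees that exactly one of several racing agents creates the claim marker.

```python
import os
import tempfile

def claim(task_path, agent):
    """Atomically claim a task file: O_CREAT|O_EXCL means the open
    fails for everyone except the first agent to create the marker."""
    try:
        fd = os.open(task_path + ".claim", os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # someone else got there first
    with os.fdopen(fd, "w") as f:
        f.write(agent)
    return True

task = os.path.join(tempfile.mkdtemp(), "T-001.md")
with open(task, "w") as f:
    f.write("---\nstatus: todo\n---\nFix the flaky login test\n")
print(claim(task, "agent-a"), claim(task, "agent-b"))  # True False
```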
I'd love feedback from anyone running multi-agent coding workflows (especially around claim semantics, dependencies, and what makes you feel in control).
I had a blast using it myself for the last few days.
Tech stack: Go, Cobra, Bubbletea (TUI), fsnotify (file watching). ~85% test coverage across unit + e2e tests. After developing webapps, the simplicity of testing CLI and TUI was so freeing.
Show HN: Hyperspectra – Desktop tool for exploring AVIRIS-3 hyperspectral images
I've been working in GIS/mapping for a few years and found myself increasingly adjacent to machine learning and computer vision, which got me thinking about what I've been calling "broad spectrum" computer vision: object and anomaly identification beyond the visible range of light.
This is my first pass at building a tool to understand the physics involved, from electromagnetic absorption and reflectance of sunlight through to corrected sensor observation. I've been focused on building out and validating existing spectral indices to understand the fundamentals before exploring my own based on molecular properties of materials from first principles.
So far the tool includes:
- An atmospheric correction processor with three methods: empirical band-ratio, Py6S radiative transfer, and ISOFIT optimal estimation
- An interactive viewer for both radiance and reflectance data with RGB composites, 23 spectral indices, and ROI-based spectral signature extraction with reference material matching
- A learning suite that explains each stage of the observation chain from solar irradiance to sensor capture
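For readers unfamiliar with spectral indices, here is a minimal sketch of how one of the classics (NDVI, a vegetation index) is computed from a reflectance cube. The function name, band wavelengths, and cube layout are illustrative assumptions, not taken from the tool:

```python
import numpy as np

def ndvi(cube: np.ndarray, wavelengths: np.ndarray) -> np.ndarray:
    """Compute NDVI from a (rows, cols, bands) reflectance cube by
    picking the bands nearest 670 nm (red) and 860 nm (NIR)."""
    red = cube[..., np.argmin(np.abs(wavelengths - 670.0))]
    nir = cube[..., np.argmin(np.abs(wavelengths - 860.0))]
    # Small epsilon avoids division by zero over dark pixels
    return (nir - red) / (nir + red + 1e-10)
```

Each of the tool's 23 indices is a similar band-arithmetic recipe, which is why validating them against published definitions is a sensible way to learn the fundamentals.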
So far I've tested on AVIRIS-3 data from Santa Barbara Island, San Joaquin Valley and Cuprite, NV. I'd love a sanity check on the direction and general utility. If anyone works with hyperspectral data and wants to take a crack at stress testing, install requires Python 3.9+ and optionally conda for Py6S.
Show HN: Fix your CSV files' problems
As a data-analysis student, I work with CSV and Excel sheets constantly. During the cleaning phase you face several problems that can break the process, so I've built a free tools website plus a Chrome extension to solve them. Give it a try!
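For context, a typical first pass at the kind of CSV cleanup being described looks something like this (an illustrative sketch, not the author's tool; the function name and fallback choices are assumptions):

```python
import csv
import io

def sniff_and_read(raw: bytes) -> list[dict]:
    """Common CSV fixes: decode with a fallback encoding, strip any
    BOM, sniff the delimiter, and trim stray whitespace from headers
    and values."""
    try:
        text = raw.decode("utf-8-sig")  # utf-8-sig also removes a BOM
    except UnicodeDecodeError:
        text = raw.decode("latin-1")    # permissive fallback
    dialect = csv.Sniffer().sniff(text)
    rows = csv.DictReader(io.StringIO(text), dialect=dialect)
    return [{k.strip(): (v or "").strip() for k, v in r.items()} for r in rows]
```

Encoding surprises, inconsistent delimiters, and padded fields are exactly the breakages that derail the cleaning phase.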
Show HN: Track your input data and create colourful renders with it
I made the first version of MouseTracks all the way back in 2017. It got a lot of interest, but I never had the skill to actually complete it. I finally made a start on 2.0 a bit over a year ago, and I've been chipping away at it for fun ever since.
Some key features:
- Track mouse movements, clicks, keyboard, and controller inputs (you can optionally disable any of these).
- Switch profiles depending on what game / application is loaded.
- Live render preview in the GUI.
- Designed to flawlessly handle multiple monitors and different resolutions.
- Older mouse movements gradually fade to keep focus on the most recent activity.
- Data can be recorded for years and rendered at any time.
It's designed as a "run and forget" type application, where you tick an option to load in the background on startup, and it'll silently keep recording until you're ready to render (it doesn't do that by default though - it just acts as a normal portable application if you don't change any settings).
It's all open source and compatible with Windows/Linux/macOS. The executables are built automatically by GitHub Actions, and there's also instructions on how to build or run locally with Python.
Feel free to ask any questions. I've got a bunch of example renders on the GitHub page which should hopefully demonstrate it properly.
Show HN: I spent 3 years reverse-engineering a 40-year-old stock market sim from 1986
Hello, my name is Ben Ward. For the past 3 years I have been remastering the financial game Wall Street Raider, created by Michael Jenkins and originally released for DOS in 1986.
It has been a rough journey but I finally see the light at the end of the tunnel. I just recently redid the website and thought maybe the full story of how this project came to be would interest you all. Thank you for reading.
Show HN: VillageSQL = MySQL + Extensions
`INSTALL EXTENSION vsql-complex; CREATE TABLE t (val COMPLEX);`
Look, MySQL is awesome [flamewar incoming?]. But the ecosystem has stagnated. Why?
No permissionless innovation. Postgres has flourished because people can change the core of the database (look at pgvector and pg_textsearch), without having to get their changes accepted upstream.
(This, btw, is what powered GitHub's early success: you can fork a repo and make changes without needing the owners' approval)
VillageSQL is a tracking fork of MySQL (open source, ofc) that adds an extension framework:
* Drop-in replacement
* Add custom data types and functions (with indexes coming soon)
* We wrote example extensions (vsql-ai, -uuid, -crypto, etc.)
* Maybe you have a better idea for an extension?
* My CEO submitted a Show HN post but linked to the announcement blog; help me show him hackers want code first
* I'm particularly proud of the friendly C++ API for adding custom functions (in func_builder.h)
That link again is https://github.com/villagesql/villagesql-server
(Oh, and I get to work with the former TL of Google's BigTable and Colossus, so we care about doing databases Right)
Show HN: I just want *one page* to see all investments, so that's what I built
I wasn't able to find any other app that successfully combined TradFi and DeFi into a single page. So I built this for myself, and I hope you folks can find it useful as well!
My app supports anything in the Yahoo Finance API or in a Bitcoin, Ethereum & EVM, or Solana wallet. Can you think of anything else it should track?