Show stories

keepamovin about 1 hour ago

Show HN: Hacker Backlinks – Discover which HN stories are cited most in comments

Hacker Backlinks analyzes Hacker News comments to surface which HN stories are cited most often as links in other discussions, so you can discover the most-referenced submissions.

hacker-backlinks.browserbox.io
2 1
Katherine603 about 5 hours ago

Show HN: A free online British accent generator for instant voice conversion

I've developed a simple AI-powered British accent generator. Enter or paste your text, select the voice that best fits your project's tone, and generate speech for free. It supports up to 500 characters and offers 8 distinct, lifelike voices. Everything runs entirely within your browser. I'm primarily seeking feedback on output quality, user experience, and any technical improvements worth exploring.

audioconvert.ai
22 39
shivaodin about 2 hours ago

Show HN: A segmentation model client-side via WASM – free background removal

Built a background removal tool that loads a ~40MB segmentation model into the browser via WASM/WebGPU and runs inference client-side.

No upload step, no API call, no queue. Drop an image, get the result in 2-3 seconds. No per-image charges because there's no server doing the work.

The same cached model powers 6 derivative tools — background changer, passport photo maker, product photo whitener, portrait blur, sticker maker — each just different post-processing on the same mask output.
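
A rough numpy analogue of that post-processing step (my illustration of the idea, not qtoolkit's actual code; it assumes you already have the model's mask as a grayscale image):

  import numpy as np
  from PIL import Image, ImageFilter

  img  = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
  mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32) / 255.0
  m = mask[..., None]  # broadcast the single-channel mask over RGB

  # Background removal: subject over transparency
  cutout = np.dstack([img, mask * 255.0]).astype(np.uint8)

  # Background changer / product whitener: subject over solid white
  white = np.full_like(img, 255.0)
  whitened = (img * m + white * (1.0 - m)).astype(np.uint8)

  # Portrait blur: sharp subject composited over a blurred copy
  blurred = np.asarray(Image.fromarray(img.astype(np.uint8))
                       .filter(ImageFilter.GaussianBlur(8)), dtype=np.float32)
  portrait = (img * m + blurred * (1.0 - m)).astype(np.uint8)

  Image.fromarray(cutout, "RGBA").save("cutout.png")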

qtoolkit.dev
2 0
pattle about 4 hours ago

Show HN: Geo Racers – Race from London to Tokyo on a single bus pass

Geo Racers is a mobile game that combines geography and racing, allowing players to explore real-world locations and compete in fast-paced races. The game aims to make learning about different countries and landmarks engaging and fun.

geo-racers.com
23 18
dRuivo about 3 hours ago

Show HN: Camera Follow Focus Ring Generator

A few months ago I met a professional photographer who needed custom Follow Focus Rings for his lenses. I tried to find a generator online, but there was nothing, so I made one.

Free to use and easy to share.

The exported STL will show open manifolds in the slicer, but it will print fine.

I also want to make an open source follow focus mechanism (both manual and automated) to go along with it.

Thank you for trying it and I'm happy to hear what you think!

followyourfocus.xyz
2 0
jared_stewart 1 day ago

Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents

I've been building a tool that changes how LLM coding agents explore codebases, and I wanted to share it along with some early observations.

Typically, Claude Code globs directories, greps for patterns, and reads files with minimal guidance. It works a bit like learning to navigate a city by walking every street: you'd eventually build a mental map, but Claude never does - at least not one that persists across contexts.

The Recursive Language Models paper from Zhang, Kraska, and Khattab at MIT CSAIL introduced a cleaner framing. Instead of cramming everything into context, the model gets a searchable environment; it can then query for just what it needs and drill deeper where necessary.

coderlm is my implementation of that idea for codebases. A Rust server indexes a project with tree-sitter, builds a symbol table with cross-references, and exposes an API. The agent queries for structure, symbols, implementations, callers, and grep results — getting back exactly the code it needs instead of scanning for it.

The agent workflow looks like:

1. `init` — register the project, get the top-level structure

2. `structure` — drill into specific directories

3. `search` — find symbols by name across the codebase

4. `impl` — retrieve the exact source of a function or class

5. `callers` — find everything that calls a given symbol

6. `grep` — fall back to text search when you need it

This replaces the glob/grep/read cycle with index-backed lookups. The server currently supports Rust, Python, TypeScript, JavaScript, and Go for symbol parsing, though all file types show up in the tree and are searchable via grep.
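
To make that concrete, here's a rough sketch of what an index-backed exploration pass could look like from Python, assuming the server exposes one JSON-over-HTTP endpoint per command (the endpoint names, parameters, and port here are hypothetical, not coderlm's actual API):

  import json
  from urllib.parse import urlencode
  from urllib.request import urlopen

  BASE = "http://localhost:8080"  # hypothetical port for the local index server

  def query(endpoint, **params):
      # GET BASE/endpoint?params and decode the JSON response
      with urlopen(f"{BASE}/{endpoint}?{urlencode(params)}") as resp:
          return json.load(resp)

  query("init", path="/home/me/project")             # register + top-level structure
  query("structure", dir="src/auth")                 # drill into one directory
  hits = query("search", symbol="validate_token")    # find a symbol by name
  sym = hits["symbols"][0]["id"]
  query("impl", symbol=sym)                          # exact source of the match
  query("callers", symbol=sym)                       # everything that calls it
  query("grep", pattern="TODO", dir="src")           # text-search fallback

The point is that each call returns exactly the code or structure asked for, so the agent's context holds answers rather than raw file dumps.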

It ships as a Claude Code plugin with hooks that guide the agent to use indexed lookups instead of native file tools, plus a Python CLI wrapper with zero dependencies.

As an anecdotal comparison, I ran the same prompt against a codebase: "explore and identify opportunities to clarify the existing structure".

Using coderlm, Claude generated a plan in about 3 minutes. The coderlm-enabled instance found a genuine bug (duplicated code with identical names), orphaned code to clean up, mismatched naming conventions crossing module boundaries, and overlapping vocabulary. These are all semantic issues that clearly benefit from the tree-sitter-centric approach.

Using the native tools, Claude identified assorted file clutter in the project root, out-of-date references, and a migration timestamp collision. These findings are more consistent with a methodical walk of the filesystem, and took about 8 minutes to produce.

The indexed approach was better at catching semantic issues than the native tools, with the added benefit of being faster.

I've spent some effort streamlining the installation process, but it isn't turnkey yet. You'll need the Rust toolchain to build the server, which runs as a separate process. Installing the plugin from a Claude marketplace is possible, but the skill isn't added to your .claude yet, so a few manual steps remain before Claude can use it.

Claude continues to show significant resistance to using CodeRLM in exploration tasks; you will typically need to direct it explicitly to use the tool.

---

Repo: github.com/JaredStewart/coderlm

Paper: Recursive Language Models https://arxiv.org/abs/2512.24601 — Zhang, Kraska, Khattab (MIT CSAIL, 2025)

Inspired by: https://github.com/brainqub3/claude_code_RLM

github.com
68 23
mwaddip about 3 hours ago

Show HN: BlockHost OS – Autonomous VM provisioning through smart contracts

Requirements for testing:

  - Metamask and some Sepolia testnet ETH (can provide, or use the faucet: https://sepolia-faucet.pk910.de/)  
  - An old PC (with virtualization support) you have lying around, or a VM if your setup supports nested virtualization.  
  - IPv6 connectivity
The ISO will install Debian on boot to the first detected hard drive without confirmation. After installation, a setup wizard can be accessed with a browser (link + OTP code shown on the console).

On completing the wizard, the system automatically deploys the needed smart contracts (point of sale + access credential NFT) and acquires a free IPv6 prefix from a decentralized tunnel broker. On reboot you have a fully working VPS hosting provider: a signup page is served on the public IPv6 address assigned to the Blockhost machine.

Customer flow:

  - Connect wallet, sign message
  - Choose package, amount of days, and submit
  - Server picks up the order, provisions the VM, assigns an IPv6 address, and sends the user an access-credential NFT containing encrypted connection info
  - User decrypts info in the signup page
  - On SSH login, the user is presented with a link to the signing page and an OTP code to sign with their wallet.
  - Paste the resulting signature; the server verifies that the wallet address owns the NFT tied to this VM and grants access if so (see the sketch below).
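
A rough sketch of that last verification step, assuming the access credential is a standard ERC-721 and using eth_account/web3 (my illustration, not Blockhost's actual code):

  from eth_account import Account
  from eth_account.messages import encode_defunct
  from web3 import Web3

  w3 = Web3(Web3.HTTPProvider("https://rpc.sepolia.org"))  # Sepolia testnet RPC
  ERC721_ABI = [{"name": "ownerOf", "type": "function", "stateMutability": "view",
                 "inputs": [{"name": "tokenId", "type": "uint256"}],
                 "outputs": [{"name": "", "type": "address"}]}]

  def grant_access(otp: str, signature: str, nft_addr: str, token_id: int) -> bool:
      # Recover the wallet address that signed the OTP message
      signer = Account.recover_message(encode_defunct(text=otp), signature=signature)
      # Grant access only if that wallet currently owns the NFT tied to this VM
      nft = w3.eth.contract(address=Web3.to_checksum_address(nft_addr), abi=ERC721_ABI)
      return nft.functions.ownerOf(token_id).call() == signer
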
Build steps:

  git clone --recurse-submodules git@github.com:mwaddip/blockhost.git 
  ./scripts/check-build-deps.sh
  ./scripts/build-iso --testing --backend [libvirt,proxmox]
Still missing for now:

  - Admin panel
  - Health monitoring
  - Limits

github.com
3 0
nickvec about 18 hours ago

Show HN: Agent Alcove – Claude, GPT, and Gemini debate across forums

agentalcove.ai
58 21
aed 3 days ago

Show HN: AI agents play SimCity through a REST API

This is a weekend project that spiraled out of control. I was originally trying to get Claude to play a ROM of the SNES SimCity. I struggled with it and that led me to Micropolis (the open-sourced SimCity engine) and was able to get it to work by bolting on an API.

The weekend hack turned into a headless city simulation platform where anyone can get an API key (no signup) and have their AI agent play mayor. The simulation runs the real Micropolis engine inside Cloudflare Durable Objects, one per city. Every city is public and browsable on the site.

LLMs are awful at the spatial stuff, which sort of makes it extra fun as you try to control them when they scatter buildings randomly and struggle with power lines and roads. A little like dealing with a toddler.

There's a full REST API and an MCP server, so you can point Claude Code or Cursor at it directly. You can usually get agents building in seconds.

Website: https://hallucinatingsplines.com

API docs: https://hallucinatingsplines.com/docs

GitHub: https://github.com/andrewedunn/hallucinating-splines

Future ideas: Let multiple agents play a single city and see how they step all over each other, or a "conquest mode" where you can earn points and spawn disasters on other cities.

hallucinatingsplines.com
198 69
anulum about 4 hours ago

Show HN: SCPN Fusion Core – Tokamak plasma SIM and neuromorphic SNN control

SCPN Fusion Core is an open-source Python/Rust suite for tokamak plasma simulation with neuro-symbolic compilation to stochastic spiking neural networks for real-time, fault-tolerant control.

Key features:

- 26 simulation modes (equilibrium, transport, optimizer, flight simulator, neuro-control, etc.)
- Neuro-symbolic compiler: Petri nets → stochastic LIF neurons (sub-ms latency, 40%+ bit-flip resilience)
- Validation: SPARC high-field equilibria + ITPA H-mode database (20 entries, 10 machines) + IPB98(y,2) scaling
- Multigrid solvers, property-based testing, Rust acceleration, Streamlit dashboard
- Install: pip install scpn-fusion

GitHub: https://github.com/anulum/scpn-fusion-core

Built to explore neuromorphic approaches to fusion reactor control. Happy to answer questions about the models, compiler, validation, or performance.

github.com
2 0
rishi_blockrand about 13 hours ago

Show HN: Double blind entropy using Drand for verifiably fair randomness

The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy.

In the demo above, the moment you commit (Roll-Dice), a commitment containing the hash of a player secret is sent to the server; the server accepts it and sends back the hash of its own secret plus the "future" drand round number at which the randomness will resolve. The demo uses a future round 10 seconds ahead.

When the reveal happens (after that drand round), all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature".

All the verification is pure math, so it's truly trustless:

1. The player seed must match the committed player hash.

2. The server seed must match the committed server hash.

3. The drand signature is publicly unavailable at commit time and only becomes available at reveal time (time-locked).

4. The generated random number is deterministic after the event, and unknown and unpredictable before it.

5. No party can influence the final outcome; in particular, nobody gets a "last-look" advantage.
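
A toy sketch of the whole scheme (my illustration, assuming SHA-256 commitments and the seed format above; the drand signature is stubbed out):

  import hashlib, secrets

  def commit(seed: bytes) -> str:
      return hashlib.sha256(seed).hexdigest()

  # Commit phase: both sides publish hashes, not seeds
  player_seed = secrets.token_bytes(32)
  server_seed = secrets.token_bytes(32)
  player_hash = commit(player_seed)  # sent to the server on Roll-Dice
  server_hash = commit(server_seed)  # returned with the future drand round number

  # Reveal phase, after the drand round resolves (stand-in for the BLS signature)
  drand_signature = secrets.token_bytes(96)

  assert commit(player_seed) == player_hash  # check 1
  assert commit(server_seed) == server_hash  # check 2
  material = f"{player_seed.hex()}:{server_seed.hex()}:{drand_signature.hex()}"
  roll = int.from_bytes(hashlib.sha256(material.encode()).digest(), "big") % 6 + 1

Neither side can change its seed after committing, and neither knows the drand signature until the round resolves, so nobody can steer the roll.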

I think this should be used in all games, online lotteries/gambling, and other systems that want to be fair by design, not by trust.

blockrand.net
20 15
eigenvalue about 5 hours ago

Show HN: FrankenTUI in the Browser

Also see the react widget here:

https://frankentui.com/web_react

frankentui.com
6 5
mohammedsunasra about 5 hours ago

Show HN: Rawkit – Free, no-ads developer tools that run in the browser

Hey HN,

I built rawkit.dev, a collection of browser-based developer utilities. No ads, no signups, no tracking. Everything processes client-side — your data never touches a server.

The tools:

- JSONForge: JSON editor with tree/graph views, diff, transform, JQ-style queries, format conversion

- SQLSandbox: SQLite via WASM — import CSVs, write SQL, join across files

- Regexplorer: Regex builder with live matching, plain English mode, multi-language export

- SiftLog: Log file viewer with auto-detection, severity filtering, regex search, timeline

- Tabulate: CSV/TSV viewer with spreadsheet-style filtering and sorting

Tech: Vanilla HTML/CSS/JS. No frameworks, no build step.

Each tool is essentially three files (index.html, a CSS file, and a JS file).

I built these because I was sick of ad-ridden, upload-your-data-to-our-server alternatives for tasks I do daily. The goal is to keep adding tools that developers actually need.

Curious what tools you'd want to see next.

rawkit.dev
2 0
franze 1 day ago

Show HN: Triclock – A Triangular Clock

TriClock is an experimental clock that displays the time in a triangular layout rather than on a conventional round face.

triclock.franzai.com
48 14
segmenta 2 days ago

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

github.com
195 56
JanLepsky 1 day ago

Show HN: Renovate – The Kubernetes-Native Way

Hey folks, we built a Kubernetes operator for Renovate and wanted to share it. Instead of running Renovate as a cron job or relying on hosted services, this operator lets you manage it as a native Kubernetes resource with CRDs. You define your repos and config declaratively, and the operator handles scheduling and execution inside your cluster. No external dependencies, no SaaS lock-in, no webhook setup. The whole thing is open source and will stay that way – there's no paid tier or monetization plan behind it, we just needed this ourselves and figured others might too.

Would love to hear feedback or ideas if you give it a try: https://github.com/mogenius/renovate-operator

github.com
41 15
n1sni 2 days ago

Show HN: I built a macOS tool for network engineers – it's called NetViews

Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:

- ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
- Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.

Wireless Visibility:

- Detailed Wi-Fi connection performance and signal data.
- Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:

- Connection and ping timelines over 1, 2, 4, or 8 hours.
- Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):

- Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
- mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.

I’d love feedback from anyone who builds or uses network diagnostic tools:

- Does this fill a gap you’ve personally hit on macOS?
- Are there better approaches to scan speed or event visualization that you’ve used?
- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app. There’s a free trial and paid licenses; I’m funding development directly rather than through ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

netviews.app
239 60
louis_w_gk 2 days ago

Show HN: Distr 2.0 – A year of learning how to ship to customer environments

A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.

It turned out things get messy when your software is running in places you can't simply SSH into.

Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.

We waited a year before making our first breaking change, which led to a major SemVer update—but it was eventually necessary. We needed to completely rewrite how we manage customer organizations. In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application that wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations. Vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.

Other notable features we’ve implemented since our first launch:

- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.

- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.

- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.

- Secret Management, so database passwords don’t show up in configuration steps or logs.

Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.

For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.

Distr is fully open source and self-hostable: https://github.com/distr-sh/distr

Docs: https://distr.sh/docs

We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.

github.com
96 29
thisisjedr 3 days ago

Show HN: JavaScript-first, open-source WYSIWYG DOCX editor

We needed a JS-first WYSIWYG DOCX editor and couldn't find a solid OSS option, most were either commercial or abandoned.

As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.

Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.

github.com
124 44
ManuelGomes about 7 hours ago

Show HN: Detecting coordinated financial narratives with embeddings and AVX2

I built an open-source system called Horaculo that analyzes coordination and divergence across financial news sources. The goal is to quantify narrative alignment, entropy shifts, and historical source reliability.

Pipeline:

- Fetch 50–100 articles (NewsAPI)
- Extract claims (NLP preprocessing)
- Generate sentence embeddings (HuggingFace)
- Compute cosine similarity in C++ (AVX2 + INT8 quantization)
- Cluster narratives
- Compute entropy + coordination metrics
- Weight results using historical source credibility
- Output structured JSON signals

Example output (query: “oil”):

  {
    "verdict": { "winner_source": "Reuters", "intensity": 0.85, "entropy": 1.92 },
    "psychology": { "mood": "Fear", "is_trap": true, "coordination_score": 0.72 }
  }

What it measures:

- Intensity → narrative divergence
- Entropy → informational disorder
- Coordination score → cross-source alignment
- Credibility weighting → historical consensus accuracy per source

Performance: 1.4s per query (~10 sources), ~100 queries/min, ~150MB memory footprint. The Python-only version was ~12s. C++ optimizations: INT8 embedding quantization (4x size reduction), AVX2 SIMD vectorized cosine similarity, and a PyBind11 integration layer.

Storage: SQLite (local memory), optional Postgres. Each source builds a rolling credibility profile:

  {
    "source": "Reuters",
    "total_scans": 342,
    "consensus_hits": 289,
    "credibility": 0.85
  }

Open source (MIT). GitHub: https://github.com/ANTONIO34346/HORACULO

I'm particularly interested in feedback on:

- The entropy modeling approach
- Coordination detection methodology
- Whether FAISS would be a better fit than the current SIMD engine
- Scalability strategies for 100k+ embeddings
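
For the curious, a minimal numpy sketch of the INT8 quantized cosine-similarity idea above (my illustration, not Horaculo's actual kernel; the symmetric scaling scheme is an assumption):

  import numpy as np

  def quantize_int8(v):
      # Symmetric linear quantization: map floats onto [-127, 127]
      scale = float(np.abs(v).max()) / 127.0
      return np.round(v / scale).astype(np.int8), scale

  def cosine_int8(a, b):
      qa, sa = quantize_int8(a)
      qb, sb = quantize_int8(b)
      ia, ib = qa.astype(np.int32), qb.astype(np.int32)  # widen before multiply
      dot = np.dot(ia, ib) * sa * sb
      na = np.sqrt(np.dot(ia, ia)) * sa
      nb = np.sqrt(np.dot(ib, ib)) * sb
      return dot / (na * nb)

  a, b = np.random.randn(384), np.random.randn(384)  # sentence-embedding-sized vectors
  print(cosine_int8(a, b))  # close to the float32 cosine, at a quarter of the memory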

5 0
luskira about 3 hours ago

Show HN: Analog Reader – Chrome Extension

A bit of context: Analog Reader is a tool I built that takes any RSS feed (Substack, Ghost, etc.) and formats it as a printable newspaper.

I've launched analogreader.com here before; now I'm just sharing some updates.

I've since created a Chrome extension that lets you send any article you're currently looking at to analogreader.com with one click. That way, it's much easier to turn digital into paper. It's a very simple extension; I don't capture any data from you - it literally just appends the current post URL to analogreader.com.

Let me know if you have any issues with it!

How I use it these days: I just send the PDF to my reMarkable. But I'm curious if there's interest in getting a personalized newspaper like this actually delivered to your door.

I've asked this before but hell, I'm asking it again: how do you handle the "too many newsletters" problem?

chromewebstore.google.com
2 0
jettfu about 8 hours ago

Show HN: Global Solo – Structural risk diagnostic for cross-border solo founders

Hi HN — I'm Jett, a solo founder operating across US/China/global markets.

I built Global Solo because I kept running into the same problem: as a solo founder with income from multiple countries, an LLC in one jurisdiction, and time spent in another — I had no idea what my actual structural risk looked like. My CPA handled US filing, but nobody was mapping the full picture across entity structure, tax residency, banking, and documentation.

So I built a diagnostic tool that does exactly that.

*What it is:* A structured risk assessment across 4 dimensions — Money, Entity, Tax, and Accountability (the META framework). You answer questions about your setup, and it maps where your structural exposure actually sits. Not advice, not recommendations — just visibility into what exists.

*What's free:*

- 35+ guides on cross-border structure, tax residency, entity formation, banking, and compliance
- A 7-question risk screening tool (instant results, no signup): globalsolo.global/tools/risk-check
- A sample report so you can see what the output looks like: globalsolo.global/sample-report

*What's paid:*

- Full L1 diagnostic report: $29 (vs. $1,200+ for a CPA to do the same mapping)
- Deeper tiers at $149 and $349 for structural analysis and judgment layers

*Tech:* Next.js 16, React 19, Supabase, Stripe. The scoring is deterministic — same input always produces same output. LLM (Claude/GPT) is used only for narrative generation in the paid reports, not for risk assessment logic.

I'd love feedback on:

1. Does the free risk check feel useful?

2. Is the sample report convincing enough to pay $29?

3. Any cross-border founders here — does the META framework cover your blind spots?

Thanks for looking.

globalsolo.global
2 0
vincentjiang about 19 hours ago

Show HN: Agent framework that generates its own topology and evolves at runtime

Hi HN,

I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: Chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools.

Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:

1. The "Toy App" Ceiling & GCU Trap

Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session.

The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.

2. Inversion of Control: OODA > DAGs

Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:

- Observe: Exceptions are observations (FileNotFound = new state), not crashes.

- Orient: Adjust strategy based on Memory and Traits.

- Decide: Generate new code at runtime.

- Act: Execute.

The topology shouldn't be hardcoded; it should emerge from the task's entropy.

3. Reliability: The "Synthetic" SLA

You can't guarantee one inference ($k=1$) is correct, but you can guarantee a System of Inference ($k=n$) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down—trading Latency/Tokens for Certainty.
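
As a back-of-the-envelope check of that claim (my arithmetic, assuming independent attempts and simple majority voting rather than whatever verifier Aden actually uses):

  from math import comb

  def majority_error(p_err, k):
      # Probability that a strict majority of k independent attempts is wrong
      need = k // 2 + 1
      return sum(comb(k, i) * p_err**i * (1 - p_err)**(k - i)
                 for i in range(need, k + 1))

  print(majority_error(0.20, 1))  # 0.200 -> single inference, 80% accurate
  print(majority_error(0.20, 3))  # 0.104 -> best-of-3 roughly halves the error
  print(majority_error(0.20, 5))  # 0.058 -> more compute buys more certainty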

4. Biology & Psychology in Code

"Hard Logic" can't solve "Soft Problems." We map cognition to architectural primitives:

- Homeostasis: Solving "Perseveration" (infinite loops) via a "Stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift.

- Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.

For the industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roast my code and share feedback.

Repo: https://github.com/adenhq/hive

github.com
99 33
misker1 about 17 hours ago

Show HN: Send Claude Code tasks to the Batch API at 50% off

Hey HN. I built this because my Anthropic API bills were getting out of hand (spoiler: they remain high even with this, batch is not a magic bullet).

I use Claude Code daily for software design and infra work (Terraform, code reviews, docs). Many terminal tabs, many questions. I realised some questions are OK to wait on, and with that comes some cost savings. So here is a small MCP that lets you send work directly to Anthropic's Batch API from inside Claude Code: the same quality responses, just 50% cheaper, with results coming back in ~30min-1hr.

How it works: you type /batch review this codebase for security issues, Claude gathers all the context, builds a self-contained prompt, ships it to the Batch API via an MCP server, and you get notified in the status bar when it's done (optional).
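
For reference, a batch submission through the Anthropic Python SDK looks roughly like this (a simplified sketch of the Message Batches API, not this MCP server's actual code; the model id is just an example):

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  batch = client.messages.batches.create(
      requests=[{
          "custom_id": "security-review-1",
          "params": {
              "model": "claude-sonnet-4-5",
              "max_tokens": 4096,
              "messages": [{"role": "user",
                            "content": "Review this codebase for security issues: ..."}],
          },
      }]
  )
  print(batch.id, batch.processing_status)  # poll later; batches finish within hours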

The README has installation instructions, which were mainly generated by Claude. I removed the curl | bash setup; at this stage of the project I feel more confident sharing the manual setup instructions.

My main hope with this project is to "monetize" it - not by asking for money, but by having others contribute ideas or improvements that save even more on cost.

github.com
20 1
seansh 4 days ago

Show HN: CodeMic

With CodeMic you can record and share coding sessions directly inside your editor.

Think Asciinema, but for full coding sessions with audio, video, and images.

While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video.

Local first, and open source.

p.s. I’ve been working on this for a little over two years and would love to hear your thoughts.

codemic.io
51 28
mathgladiator about 9 hours ago

Show HN: Crank – The SSH Terminal Manager for Engineers Who Refuse to Close Tabs

I've gone full vibe coder, and in doing so I replaced how I used to work with my own (and very buggy) SSH window manager. The world shifted for me, and it's unsettling... I haven't read the code yet, but I'm using it to manage 25 projects all running claude code on a very beefy server.

Every single facet of this project came from Claude.

github.com
5 2
pablojamjam 1 day ago

Show HN: ClawPool – Pool Claude tokens to make $$$ or crazy cheap Claude Code

I built a pool-based proxy that hacks Claude Code's pricing tiers. To actually use Claude Code you need Max at $200/mo, and then most of that capacity sits idle anyway.

So ClawPool lets subscribers pool their OAuth tokens and earn up to $120/mo from the spare capacity. Everyone else gets Opus, Sonnet, all models for $8/mo.

Setup — they actually support proxies themselves via standard env params:

    export ANTHROPIC_AUTH_TOKEN="your-pool-key"
    export ANTHROPIC_BASE_URL="https://proxy.clawpool.ai"
    claude

clawpool.ai
18 7
prasoonds 2 days ago

Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB

Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks.

Here's a demo video: https://youtu.be/cyEgW7wElcs

Why is this useful? (1) You don't have to figure out which webhooks you need or write listeners for each one. The library handles all of that. This follows the approach of libraries like dj-stripe in the Django world (https://dj-stripe.dev/). (2) Stripe's API has a 100 rpm rate limit. If you're checking subscription status frequently or building internal tools, you'll hit it. Querying your own Postgres doesn't have this problem. (3) You can give an AI agent read access to the stripe.* schema to debug payment issues—failed charges, refunds, whatever—without handing over Stripe dashboard access. (4) You can join Stripe data with your own tables for custom analytics, LTV calculations, etc.

It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.

It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:

  billing.subscriptions.get({ userId });
  billing.credits.consume({ userId, key: "api_calls", amount: 1 });
  billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover ~most subscription payment use-cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.

You define your plan in TypeScript:

  {
    name: "Pro",
    description: "Cursor Pro plan",
    price: [{ amount: 2000, currency: "usd", interval: "month" }],
    features: {
      api_completion: {
        pricePerCredit: 1,              // 1 cent per unit
        trackUsage: true,               // Enable usage billing
        credits: { allocation: 500 },
        displayName: "API Completions",
      },
      tab_completion: {
        credits: { allocation: 2000 },
        displayName: "Tab Completions",
      },
    },
  }
Then on the CLI, you run the `init` command which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consume code would look like this:

  await billing.credits.consume({
    userId: user.id,
    key: "api_completion",
    amount: 1,
  });
And if they want to allow manual top-ups by the user:

  await billing.credits.topUp({
    userId: user.id,
    key: "api_completion",
    amount: 500,     // buy 500 credits, charges $5.00
  });
Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement by yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.

This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.

I vibe-coded a little toy app for testing: https://snw-test.vercel.app. There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.

Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx

Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.

github.com
62 30
yethiel 2 days ago

Show HN: I made paperboat.website, a platform for friends and creativity

paperboat.website
77 29
rsaxvc about 11 hours ago

Show HN: Floating-Point JPEG Decoder

I modified STB-Image's JPEG codec to render JPEG files directly to 32-bit floating-point pixels to reduce color banding when editing images.

Coincidentally, this can recompress JPEGs much more consistently, eventually stabilizing when recompression yields the exact same compressed file.
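
You can watch for that fixed point yourself with a quick loop (my sketch using Pillow's standard integer decode path for comparison, not the float decoder in the repo):

  import io
  from PIL import Image

  def rounds_until_stable(path, quality=90, max_rounds=50):
      data = open(path, "rb").read()
      for i in range(max_rounds):
          img = Image.open(io.BytesIO(data)).convert("RGB")  # decode
          buf = io.BytesIO()
          img.save(buf, format="JPEG", quality=quality)      # re-encode
          new = buf.getvalue()
          if new == data:          # byte-identical output: a fixed point
              return i
          data = new
      return None  # never stabilized within max_rounds

  print(rounds_until_stable("photo.jpg"))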

github.com
3 0