Show HN: Guts – convert Golang types to TypeScript
Show HN: RowboatX – open-source Claude Code for everyday automations
Claude Code is great, but it’s focused on coding. The missing piece is a native way to build and run custom background agents for non-code tasks. We built RowboatX as a CLI tool modeled after Claude Code that lets you do that. It uses the file system and unix tools to create and monitor background agents for everyday tasks, connect them to any MCP server for tools, and reason over their outputs.
Because RowboatX runs locally with shell access, the agents can install tools, execute code, and automate anything you could do in a terminal with your explicit permission. It works with any compatible LLM, including open-source ones.
Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://youtu.be/cyPBinQzicY
For example, you can connect RowboatX to the ElevenLabs MCP server and create a background workflow that produces a NotebookLM-style podcast every day from recent AI-agent papers on arXiv. Or you can connect it to Google Calendar and Exa Search to research meeting attendees and generate briefs before each event.
You can try these with: `npx @rowboatlabs/rowboatx`
We combined three simple ideas:
1. File system as state: Each agent’s instruction, memory, logs, and data are just files on disk: grepable, diffable, and local. For instance, you can just run `grep -rl '"agent":"<agent-name>"' ~/.rowboat/runs` to list every run for a particular workflow.
2. The supervisor agent: A Claude Code style agent that can create and run background agents. It predominantly uses Unix commands to monitor, update, and schedule agents. LLMs handle Unix tools better than backend APIs [1][2], so we leaned into that. It can also probe any MCP server and attach the tools to the agents.
3. Human-in-the-loop: Each background agent can emit a human_request message when needed (e.g. drafting a tricky email or installing a tool) that pauses execution and waits for input before continuing. The supervisor coordinates this.
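The "file system as state" idea (1) is simple enough to sketch. This is an illustrative Python version of what the grep above does from the shell; the run-file layout and field names are assumptions for illustration, not RowboatX's actual schema:

```python
import json
from pathlib import Path

# Hypothetical sketch: each run is a JSON file under a runs/ directory,
# so listing an agent's runs is just a directory scan. The "agent" field
# name mirrors the grep example above but is an assumption here.
def runs_for_agent(runs_dir: str, agent: str) -> list:
    matches = []
    for path in Path(runs_dir).rglob("*.json"):
        record = json.loads(path.read_text())
        if record.get("agent") == agent:
            matches.append(path.name)
    return sorted(matches)
```

Because state is plain files, any Unix tool (grep, diff, cron) composes with it for free.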
I started my career over a decade ago building spam detection models at Twitter, spending a lot of my time in the terminal with Unix commands for data analysis [0] and Vowpal Wabbit for modeling. When Claude Code came along, it felt familiar and amazing to work with. But trying to use it beyond code always felt a bit forced. We built RowboatX to bring that same workflow to everyday tasks. It is Apache-2.0 licensed and easily extendable.
While there are many agent builders, running on the user's terminal enables unique use cases like computer and browser automation that cloud-based tools can't match. This power requires careful safety design. We implemented command-level allow/deny lists, with containerization coming next. We’ve tried to design for safety from day one, but we’d love to hear the community’s perspective on what additional safeguards or approaches you’d consider important here.
We’re excited to share RowboatX with everyone here. We’d love to hear your thoughts and welcome contributions!
—
[0] https://web.stanford.edu/class/cs124/kwc-unix-for-poets.pdf [1] https://arxiv.org/pdf/2405.06807 [2] https://arxiv.org/pdf/2501.10132
Show HN: Tokenflood – simulate arbitrary loads on instruction-tuned LLMs
Hi everyone, I just released an open source load testing tool for LLMs:
https://github.com/twerkmeister/tokenflood
=== What is it and what problems does it solve? ===
Tokenflood is a load testing tool for instruction-tuned LLMs that can simulate arbitrary LLM loads in terms of prompt, prefix, and output lengths and requests per second. Instead of first collecting prompt data for different load types, you can configure the desired parameters for your load test and you are good to go. It also lets you assess the latency effects of potential prompt parameter changes before spending the time and effort to implement them.
I believe it's really useful for developing latency-sensitive LLM applications and for:
* Load testing self-hosted LLM model setups
* Assessing the latency benefit of changes to prompt parameters before implementing those changes
* Assessing latency, and its intraday variation, on hosted LLM services before sending your traffic there
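The core of such a load test can be sketched in a few lines. This is not Tokenflood's actual config format or code; token counts are approximated as whitespace-separated words here, where a real tool would use the model's tokenizer:

```python
import time

# Build synthetic prompts with a chosen prefix length (to exercise
# prefix caching) and prompt length, then pace requests at a target rate.
def make_prompt(prefix_tokens: int, prompt_tokens: int) -> str:
    prefix = " ".join(["cache"] * prefix_tokens)   # shared prefix across requests
    body = " ".join(["word"] * prompt_tokens)      # per-request payload
    return f"{prefix} {body}"

def paced_requests(n: int, requests_per_second: float, send) -> None:
    interval = 1.0 / requests_per_second
    for _ in range(n):
        start = time.monotonic()
        send(make_prompt(prefix_tokens=32, prompt_tokens=128))
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval - elapsed))
```

`send` would wrap the actual HTTP call to the model endpoint and record per-request latency.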
=== Why did I build it? ===
Over the course of the past year, part of my work has been helping my clients meet their latency, throughput, and cost targets for LLMs (PTUs, anyone?). That process involved making numerous choices about cloud providers, hardware, inference software, models, configurations, and prompt changes. During that time I found myself doing similar tests over and over with a collection of ad-hoc scripts. I finally had some time on my hands and wanted to put it together properly in one tool.
=== What am I looking for? ===
I am sharing this for three reasons: hoping this can make others' work on latency-sensitive LLM applications simpler, learning and improving from feedback, and finding new projects to work on.
So please check it out on GitHub (https://github.com/twerkmeister/tokenflood), comment, and reach out at thomas@werkmeister.me or on LinkedIn (https://www.linkedin.com/in/twerkmeister/) for professional inquiries.
=== Pics ===
image of cli interface: https://github.com/twerkmeister/tokenflood/blob/main/images/...
result image: https://github.com/twerkmeister/tokenflood/blob/main/images/...
Show HN: We built a generator for Vue+Laravel that gives you a clean codebase
Hey HN, My team and I built a tool to scratch our own itch. We were tired of spending the first few days of every new project setting up the same Vue + Laravel boilerplate: writing migrations, models, basic CRUD controllers, and wiring up forms and tables on the frontend.
So we built Codecannon. It’s a web app where you define your data models, columns, and relationships, and it generates a full-stack application for you.
To be clear, the code isn't AI-generated. It's produced deterministically by our own code generators, so the output is always predictable, clean, and follows conventional best practices.
The key difference from other tools is that it’s not a no-code platform you get locked into. When you're done, it pushes a well-structured codebase to your GitHub repo (or you can download a .zip file). You own it completely and can start building your real features on top of it right away.
What it generates:
- Laravel Backend: Migrations, models with relationships, factories, seeders, and basic CRUD API endpoints.
- Vue Frontend: A SPA with PrimeVue components. It includes auth pages, data tables, and create/edit forms for each of your models, with all the state management wired up.
- Dev Stuff: Docker configs, a CI/CD pipeline starter, linters, and formatters are all included.
The idea is to skip the repetitive work and get straight to the interesting parts of a project.
It's free to use the builder, see a live preview, and download the full codebase for apps up to 5 modules. For larger apps, you only pay if you decide you want the source code. We're in an early alpha and would love to get some honest feedback from the community. Does the generated code look sensible? Are we missing any obvious features? Is this something you would find useful, or do you know anyone who might? Let me know what you think.
Show HN: I built a synth for my daughter
The article discusses the author's decision to build a custom synthesizer for their young daughter, focusing on the educational and bonding benefits of introducing her to music and electronics at an early age.
Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files
I built a Rust-based CLI/terminal UI for inspecting Parquet files—data, metadata, and row-group-level structure—right from the terminal. If someone sent me a Parquet file, I used to open DuckDB or Polars just to see what was inside. Now I can do it with one command.
Repo: https://github.com/kaushiksrini/parqeye
Show HN: Copus – Internet gem marketplace for bookmark collectors (x402-powered)
Hey HN!
We’re a small team of artists, developers, and coffee lovers who’ve watched a lot of websites we love shut down over the years. We’ve been looking for a way to support them with income and exposure.
We see that more people are interacting with the web through AI instead of visiting sites directly, so the ad-based model is breaking. The open web needs a new business model.
Our take is to incentivize people (and, in the future, AI agents) to find and share valuable content (links), with both the finder and the original creator rewarded.
Along the way we were inspired by discussions like:
Pocket shut down: https://news.ycombinator.com/item?id=44063662
x402 protocol: https://news.ycombinator.com/item?id=45347335
“To survive the AI age, the web needs a new business model”: https://news.ycombinator.com/item?id=44598248
Key features
Social bookmarking: It’s like a decentralized Digg or a Pinterest-for-websites. You can share (curate) any URI (URL) through the site or the browser extension. Others can collect and build on your collections.
Pay-to-visit: Finding valuable content is valuable. You can set a stablecoin price for visiting a link you shared. Payments are powered by the x402 protocol.
Support sites/content you love: Half of the pay-to-visit revenue goes to the author of the original content, claimable after they opt into x402 or register a Copus account.
Permanent storage: Your collections (bookmarks) are automatically stored on the Arweave blockchain. We pay the storage fees so you’ll never lose them.
Other features we have in mind
Spaces: Like Pinterest boards, to organize your collections and collaborate with others.
Weave: If a link reminds you of another link, you can “weave” them together in a “you may also like” section. It’s a bit like a collective Obsidian graph where standalone websites become a connected map and every site is a rabbit hole.
AI agent support: You can train agents to curate and purchase for you.
Social features: Follow accounts with great taste.
Who we imagine this is for
If you’ve been bookmarking over the years, you already have tons of internet gems in hand! Please pick the best ones to share with the world. They’re valuable for both readers and original creators.
Were you a Pocket user? Save your best bookmarks here and never lose them. (We plan to support putting a copy of the whole website on-chain once the project scales. Right now we put the link, category info, and your recommendation notes on-chain for free.)
Some other things
Copus is open source, with the frontend built using Claude Code.
We plan to launch a governance token to put ownership of the project into the hands of the people who use it.
We don’t mess with rights and privacy. Aside from some essential terms needed to keep the project running, your rights remain yours.
Copus has a Chinese version (Copus.io), which is a haven for around 150k Chinese fan-fiction lovers right now. We might merge the two sites once the English content reaches scale, or we might not.
How we plan to make money
We’re still figuring it out. The first idea is:
Take a 10% fee on each payment.
Put unclaimed creator earnings into low-risk investments (similar to how stablecoins earn yield).
Hope you enjoy Copus, and thank you in advance for trying it out early!
Show HN: ESPectre – Motion detection based on Wi-Fi spectre analysis
Hi everyone, I'm the author of ESPectre.
This is an open-source (GPLv3) project that uses Wi-Fi signal analysis to detect motion using CSI data, and it has already garnered almost 2,000 stars in two weeks.
Key technical details:
- The system does NOT use machine learning; it relies purely on math.
- Runs in real time on a super affordable chip like the ESP32.
- It integrates seamlessly with Home Assistant via MQTT.
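The "no ML, just math" approach can be illustrated with a toy detector: motion disturbs the Wi-Fi channel, so the variance of CSI subcarrier amplitudes over a short window jumps when someone moves. The window size and threshold below are made-up illustration values, not ESPectre's actual parameters:

```python
from statistics import pvariance

# Hedged sketch: flag motion when the amplitude variance over a window
# of CSI samples exceeds a calibrated threshold. Real CSI processing
# also filters noise and aggregates across subcarriers.
def motion_detected(amplitudes: list, threshold: float = 1.0) -> bool:
    return pvariance(amplitudes) > threshold
```

On an ESP32, the same computation runs per CSI callback in plain C with no model weights at all.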
Show HN: Continuous Claude – run Claude Code in a loop
Continuous Claude is a CLI wrapper I made that runs Claude Code in an iterative loop with persistent context, automatically driving a PR-based workflow. Each iteration creates a branch, applies a focused code change, generates a commit, opens a PR via GitHub's CLI, waits for required checks and reviews, merges if green, and records state into a shared notes file.
This avoids the typical stateless one-shot pattern of current coding agents and enables multi-step changes without losing intermediate reasoning, test failures, or partial progress.
The tool is useful for tasks that require many small, serial modifications: increasing test coverage, large refactors, dependency upgrades guided by release notes, or framework migrations.
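The control flow described above can be sketched with the external commands (claude, git, gh) abstracted behind a `run` callable; the command strings are illustrative, not the tool's exact invocations:

```python
# One iteration: branch, apply a focused change, open a PR, merge if green.
def iteration(run, n: int) -> bool:
    run(f"git checkout -b iteration-{n}")
    run("claude -p 'make one focused change; update NOTES.md'")
    run("git commit -am 'focused change'")
    run("gh pr create --fill")
    checks_green = run("gh pr checks --watch")   # wait for required checks
    if checks_green:
        run("gh pr merge --squash")
    return bool(checks_green)

# The outer loop: persistent context lives in the repo (notes file, PR
# history), so each iteration builds on the last instead of starting cold.
def continuous_loop(run, max_iterations: int = 3) -> int:
    merged = 0
    for n in range(1, max_iterations + 1):
        if iteration(run, n):
            merged += 1
    return merged
```

Keeping state in the repository (branches, PRs, a notes file) is what lets the loop survive restarts of the agent itself.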
Blog post about this: https://anandchowdhary.com/blog/2025/running-claude-code-in-...
Show HN: Reversing a Cinema Camera's Peripherals Port
The article explores the process of reversing the communication protocol used by the FS7 camera system, providing insights into the technical implementation and potential applications for developers and enthusiasts in the field of camera control and automation.
Show HN: PrinceJS – 19,200 req/s Bun framework in 2.8 kB (built by a 13yo)
Hey HN,
I'm 13, from Nigeria, and I just released PrinceJS — the fastest web framework for Bun right now.
• 19,200 req/s (beats Hono/Elysia/Express)
• 2.8 kB gzipped
• Tree-shakable (cache, AI, email, cron, SSE, queue, test, static...)
• Zero deps. Zero config.
Built in < 1 week. No team. Just me and Bun.
Try it: `bun add princejs`
GitHub: https://github.com/MatthewTheCoder1218/princejs
Docs: https://princejs.vercel.app
Brutal feedback welcome. What's missing?
– @Lil_Prince_1218
Show HN: Strawk – I implemented Rob Pike's forgotten Awk
Rob Pike wrote a paper, Structural Regular Expressions (https://doc.cat-v.org/bell_labs/structural_regexps/se.pdf), that criticized the Unix toolset for being excessively line oriented. Tools like awk and grep assume a regular record structure usually denoted by newlines. Unix pipes just stream the file from one command to another, and imposing the newline structure limits the power of the Unix shell.
In the paper, Mr. Pike proposed an awk of the future that used structural regular expressions to parse input instead of line-by-line processing. As far as I know, it was never implemented. So I implemented it. I attempted to imitate AWK and its standard library as much as possible, but some things are different because I used Golang under the hood.
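The structural idea can be illustrated outside of strawk's own syntax (shown here in Python, not strawk code): instead of splitting input into lines, a regex defines what a record is, and the "program" runs once per structural match, even when a record spans newlines.

```python
import re

# Run an action once per structural match rather than once per line.
# re.DOTALL lets a record span newlines, which line-oriented awk/grep cannot.
def for_each_structure(pattern: str, text: str, action):
    return [action(m.group(0)) for m in re.finditer(pattern, text, re.DOTALL)]

text = 'a = "first\nvalue"; b = "second"'
# Records are double-quoted strings, which may span newlines:
strings = for_each_structure(r'"[^"]*"', text, lambda s: s.strip('"'))
```

A line-oriented tool would mangle the first string because it crosses a newline; the structural match extracts it whole.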
Live Demo: https://ahalbert.github.io/strawk/demo/strawk.html
Github: https://github.com/ahalbert/strawk
Show HN: Kalendis – Scheduling API (keep your UI, we handle timezones/DST)
Kalendis is an API-first scheduling backend. You keep your UI; we handle the gnarly parts (recurrence, time zones, DST, conflict-safe bookings).
What it does:
• MCP tool: generates typed clients and API route handlers (Next.js/Express/Fastify/Nest) so you can scaffold calls straight from your IDE/agent tooling.
• Availability engine: recurring rules + one-off exceptions/blackouts, returned in a clean, queryable shape.
• Bookings: conflict-safe endpoints for creating/updating/canceling slots.
Why we built it: We kept rebuilding the same "hard parts" of scheduling: time zones/DST edge cases, recurring availability, conflict-aware booking, etc. We wanted a boring, reliable backend so we could ship product features without adopting a hosted scheduling UI.
How it's helped: We stopped re-implementing DST/recurrence math and shipped booking flows faster. One small team (just 2 developers) built a robust booking platform for their business using Kalendis—they kept full control of their UX without spending lots of cycles on scheduling infrastructure. The MCP generator cut the glue code: drop in a typed client or route, call the API, move on.
Some tech details: • REST API with ISO-8601 timestamps and IANA time zones • Recurring availability + one-off exceptions (designed to compose cleanly) • Focused scope: users, availability, exceptions, bookings (not a monolithic suite)
The MCP server exposes tools like generate-frontend-client, generate-backend-client, generate-api-routes, and list-endpoints. Add to your MCP settings:
{
"mcpServers": {
"kalendis": {
"command": "npx",
"args": ["-y", "@kalendis/mcp"]
}
}
}
How to try it: Create a free account and get an API key (https://kalendis.dev). Then hit an endpoint:
curl -H "x-api-key: $KALENDIS_API_KEY" \
"https://api.kalendis.dev/v1/availability/getAvailability?userId=<user-id>&start=2025-10-07T00:00:00Z&end=2025-10-14T00:00:00Z&includeExceptions=true"
Happy to answer questions and post example snippets in the thread. Thanks for taking a look!
Show HN: My hobby OS that runs Minecraft
Astral OS, a newly launched operating system, has announced support for running Minecraft on its platform. This integration aims to provide a seamless and optimized Minecraft experience for users of the Astral OS ecosystem.
Show HN: Building WebSocket in Apache Iggy with Io_uring and Completion Based IO
The article discusses the use of IO_URING, a modern Linux kernel feature, to improve the performance of WebSocket communication. It explores how IO_URING can be leveraged to provide efficient and scalable WebSocket handling, addressing the challenges of traditional socket-based approaches.
Show HN: Bsub.io – zero-setup batch execution for command-line tools
I built bsub because I was tired of wiring up Docker images, Python environments, GPUs, sandboxing, and resource limits every time I needed to run heavy command-line tools from web apps. I wanted: send files -> run job in the cloud -> get output -> done.
https://www.bsub.io
bsub lets you execute tools like Whisper, Typst, Pandoc, Docling, and FFmpeg as remote batch jobs with no environment setup. You can try them locally via the CLI or integrate via a simple REST API.
Example (PDF extraction):
bsubio submit -w pdf/extract *.pdf
Works like running the tool locally, but the compute and isolation happen in the cloud.
Technical details:
- Each job runs in an isolated container with defined CPU/GPU/RAM limits.
- Files are stored ephemerally for the duration of the job and deleted after completion.
- REST API returns job status, logs, and results.
- Cold start for light processors (Typst, Pandoc) is low; Whisper/FFmpeg take longer due to model load/encoding time.
- Backend scales horizontally; more workers can be added during load spikes.
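A typical integration is a submit-and-poll loop. The endpoint names and response fields below are assumptions for illustration (the API is abstracted behind an `api` object here), not bsub's documented shapes:

```python
import time

# Hypothetical flow against a batch API: submit files, poll status,
# fetch results when done. `api` wraps the real REST calls.
def run_job(api, workflow: str, files: list, poll_interval: float = 0.01):
    job_id = api.submit(workflow, files)          # e.g. POST a new job
    while True:
        status = api.status(job_id)               # e.g. GET job status
        if status in ("done", "failed"):
            break
        time.sleep(poll_interval)
    return api.results(job_id) if status == "done" else None
```

The injectable `api` keeps the control flow testable and lets you swap in the real client once you have an API key.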
Current processors:
STT/Whisper -- speech-to-text
Typography -- Typst, Pandoc
PDF extraction -- Docling
Video transcoding -- FFmpeg
More coming; suggestions welcome for tools that are painful to set up locally.
Looking for testers! The CLI is open source: https://github.com/bsubio/cli. Installers are available for Linux/macOS; Windows testing is in progress. Free during early testing; pricing TBD.
If you’re on Windows, feedback is especially helpful: contact@bsub.io
If you try it, I’d appreciate feedback on API design, latency, missing processors, or anything rough around the edges.
Show HN: Octopii, a framework for building distributed applications in Rust
It won't let me put the URL in for some reason, so here it is: https://github.com/octopii-rs/octopii
Show HN: Agfs – Aggregated File System, a modern tribute to the spirit of Plan9
The article describes AGFS, a secure, decentralized file storage system built on top of the Ethereum blockchain. AGFS aims to provide a censorship-resistant and transparent alternative to traditional cloud storage services, allowing users to store and share files while maintaining control over their data.
Show HN: How are Markov chains so different from tiny LLMs?
I polished a Markov chain generator and trained it on an article by Uri Alon et al. [0].
It generates text that seems to me at least on par with that of tiny LLMs, such as those demonstrated by NanoGPT. Here is an example:
jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM10b_train UriAlon.txt 3
Training model with order 3...
Skip-gram detection: DISABLED (order < 5)
Pruning is disabled
Calculating model size for JSON export...
Will export 29832 model entries
Exporting vocabulary (1727 entries)...
Vocabulary export complete.
Exporting model entries...
Processed 12000 contexts, written 28765 entries (96.4%)...
JSON export complete: 29832 entries written to model.json
Model trained and saved to model.json
Vocabulary size: 1727
jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM9_gen model.json
Aging cell model requires comprehensive incidence data. To obtain such a large medical database of the joints are risk factors. Therefore, the theory might be extended to describe the evolution of atherosclerosis and metabolic syndrome. For example, late‐stage type 2 diabetes is associated with collapse of beta‐cell function. This collapse has two parameters: the fraction of the senescent cells are predicted to affect disease threshold . For each individual, one simulates senescent‐cell abundance using the SR model has an approximately exponential incidence curve with a decline at old ages In this section, we simulated a wide range of age‐related incidence curves. The next sections provide examples of classes of diseases, which show improvement upon senolytic treatment tends to qualitatively support such a prediction. model different disease thresholds as values of the disease occurs when a physiological parameter ϕ increases due to the disease. Increasing susceptibility parameter s, which varies about 3‐fold between BMI below 25 (male) and 54 (female) are at least mildly age‐related and 25 (male) and 28 (female) are strongly age‐related, as defined above. Of these, we find that 66 are well described by the model as a wide range of feedback mechanisms that can provide homeostasis to a half‐life of days in young mice, but their removal rate slows down in old mice to a given type of cancer have strong risk factors should increase the removal rates of the joint that bears the most common biological process of aging that governs the onset of pathology in the records of at least 104 people, totaling 877 disease category codes (See SI section 9), increasing the range of 6–8% per year. The two‐parameter model describes well the strongly age‐related ICD9 codes: 90% of the codes show R 2 > 0.9) (Figure 4c). 
This agreement is similar to that of the previously proposed IMII model for cancer, major fibrotic diseases, and hundreds of other age‐related disease states obtained from 10−4 to lower cancer incidence. A better fit is achieved when allowing to exceed its threshold mechanism for classes of disease, providing putative etiologies for diseases with unknown origin, such as bone marrow and skin. Thus, the sudden collapse of the alveoli at the outer parts of the immune removal capacity of cancer. For example, NK cells remove senescent cells also to other forms of age‐related damage and decline contribute (De Bourcy et al., 2017). There may be described as a first‐passage‐time problem, asking when mutated, impair particle removal by the bronchi and increase damage to alveolar cells (Yang et al., 2019; Xu et al., 2018), and immune therapy that causes T cells to target senescent cells (Amor et al., 2020). Since these treatments are predicted to have an exponential incidence curve that slows at very old ages. Interestingly, the main effects are opposite to the case of cancer growth rate to removal rate We next consider the case of frontline tissues discussed above.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7963340/
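For readers unfamiliar with the technique, a minimal order-k word-level Markov generator (not the author's polished implementation) fits in a few lines: the model maps each k-word context to the words that followed it in the training text, and generation samples from those counts.

```python
import random
from collections import defaultdict

# Train: map every k-word context to the list of words that followed it.
def train(text: str, order: int = 3) -> dict:
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

# Generate: repeatedly sample a follower of the current context.
def generate(model: dict, seed: tuple, length: int = 50, rng=random) -> str:
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

With a small corpus and a high order (like the order-3 run shown above), most contexts have a single follower, which is why the output reads as fluent stitched-together source text.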
Show HN: Unflip – a puzzle game about XOR patterns of squares
UnFlip is a unique puzzle game where players must flip and rotate tiles to uncover hidden images. The game features a minimalist design, challenging levels, and an addictive gameplay loop that encourages players to keep playing and solving increasingly complex puzzles.
Show HN: I have created an alternative for Miro
Hey HN
This project took almost two years and is probably one of the best alternatives to tools like Miro and MindMeister.
Let me know what you think
Show HN: Blindfold Chess App
I am building a chess app solely dedicated to practicing and mastering blindfold chess. The first version of the app is now in the stores; I will be adding puzzles and training exercises to teach blindfold play next.
Show HN: I built a strace clone for macOS
Ever since I started testing software on macOS, I have deeply missed my beloved strace, which I use when programs are misbehaving. macOS has dtruss, but it's getting locked down and more unusable with every machine. My approach uses the signed lldb binary on the system and re-implements the output you know from the wonderful strace tool. I just created the tool yesterday evening, so it may have a few bugs, but I already have quite a few integration tests and I am happy with it so far.
Show HN: Discussion of ICT Model – Linking Information, Consciousness and Time
Hi HN,
I’ve been working on a conceptual framework that tries to formalize the relationship between:
– informational states,
– their minimal temporal stability (I_fixed),
– the rate of informational change (dI/dT),
– and the emergence of time, processes, and consciousness-like dynamics.
This is not a final theory, and it’s not metaphysics. It’s an attempt to define a minimal, falsifiable vocabulary for describing how stable patterns persist and evolve in time.
Core ideas:
– I_fixed = any pattern that remains sufficiently stable across time to allow interaction/measurement.
– dI/dT = the rate at which such patterns change.
Time is defined as a relational metric of informational change (dI/dT), but the arrow of time does not arise from within the system — it emerges from an external temporal level, a basic temporal background.
The model stays strictly physicalist: it doesn’t require spatial localization of information and doesn’t assume any “Platonic realm.” It simply reformulates what it means for a process to persist long enough to be part of reality.
Why I’m posting here
I’m looking for rigorous critique from physicists, computer scientists, mathematicians, and anyone interested in foundational models. If you see flaws, ambiguities, or missing connections — I’d really appreciate honest feedback.
A full preprint (with equations, phenomenology, and testable criteria) and discussion is here:
https://www.academia.edu/s/8924eff666
DOI: 10.5281/zenodo.17584782
Thanks in advance to anyone willing to take a look.
Show HN: UltraLocked – iOS file vault using Secure Enclave and PFS
Show HN: Model-agnostic cognitive architecture for LLMs
Hi HN,
A couple of weeks ago I shared an early version of a side project I’ve been tinkering with called Persistent Mind Model. I built it at home on an i7-10700K / 32GB RAM / RTX 3080 because I was curious whether an AI could keep a stable “mind” over time, one that could “think” about its own identity as an LLM instead of resetting every session.
After a lot more tinkering, I think the architecture is finally in a solid place.
Basically, it saves everything the AI does (thoughts, decisions, updates) as a chain of events in a local SQLite database. Because the “identity” is stored in that ledger (and not inside the model), you can swap between OpenAI, Ollama, or other backends and it just keeps going from where it left off, reasoning about its own history and identity development.
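The ledger idea itself is small. This is a hedged sketch of an append-only SQLite event log (the schema and event shapes are illustrative, not the project's actual ones): identity lives in the rows, so any backend can replay them.

```python
import json
import sqlite3

# Append-only event ledger: every thought/decision/update is a row.
def open_ledger(path: str = ":memory:") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS events"
               " (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)")
    return db

def append_event(db, kind: str, payload: dict) -> None:
    db.execute("INSERT INTO events (kind, payload) VALUES (?, ?)",
               (kind, json.dumps(payload)))
    db.commit()

# Replay the chain in order to reconstruct state for any model backend.
def replay(db) -> list:
    rows = db.execute("SELECT kind, payload FROM events ORDER BY id").fetchall()
    return [(kind, json.loads(payload)) for kind, payload in rows]
```

Swapping the LLM backend doesn't touch the ledger; the new model simply replays the same events.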
I cleaned up the runtime and added things like a better control loop, a simple concept system for organizing ideas, graph-based telemetry so you can inspect how it evolves, a draft whitepaper (for now), and several full sessions you can replay to see how the behavior develops.
It's basically my experiment to develop persistent memories, and self-evolving "identities" for LLMs.
The whole system is only a few MB of code, plus about 1.3 MB for the example ledger I’m sharing.
If you’re interested in AI systems that can grow over time, or you want to experiment with persistent reasoning, memory, or verifiable mechanical cognition, I’d love feedback.
Repo: https://github.com/scottonanski/persistent-mind-model-v1.0
It’s open-source, free to use, and still early, but it’s already producing some surprisingly interesting results.
Would love to see what others do with it.
Show HN: I ditched Grafana for my home server and built this instead
Frustrated by the complexity and resource drain of multi-service monitoring stacks, I built Simon. I wanted a single, lightweight dashboard to replace the heavy stack and the constant need for an SSH client for routine tasks. The result is a resource-efficient dashboard in a single Rust binary, just a couple of megabytes in size. Its support for various architectures on Linux also makes it ideal for embedded systems and lightweight SBCs.
It integrates:
- Comprehensive monitoring: realtime and historical metrics for the host system and Docker containers (CPU, memory, disk usage, and network activity).
- Integrated file & log management: a web UI for file operations and for viewing container logs, right where you need them.
- Flexible alerting: a system to set rules on any metric, with templates for sending notifications to Telegram, ntfy, and webhooks.
My goal was to create a cohesive, lightweight tool for self-hosters and resource-constrained environments. I'd love to get your feedback.
https://github.com/alibahmanyar/simon
Show HN: Encore – Type-safe back end framework that generates infra from code
Show HN: DBOS Java – Postgres-Backed Durable Workflows
Hi HN - I’m Peter, here with Harry (devhawk), and we’re building DBOS Java, an open-source Java library for durable workflows, backed by Postgres.
https://github.com/dbos-inc/dbos-transact-java
Essentially, DBOS helps you write long-lived, reliable code that can survive failures, restarts, and crashes without losing state or duplicating work. As your workflows run, it checkpoints each step they take in a Postgres database. When a process stops (fails, restarts, or crashes), your program can recover from those checkpoints to restore its exact state and continue from where it left off, as if nothing happened.
In practice, this makes it easier to build reliable systems for use cases like AI agents, payments, data synchronization, or anything that takes hours, days, or weeks to complete. Rather than bolting on ad-hoc retry logic and database checkpoints, durable workflows give you one consistent model for ensuring your programs can recover from any failure from exactly where they left off.
This library contains all you need to add durable workflows to your program: there's no separate service or orchestrator or any external dependencies except Postgres. Because it's just a library, you can incrementally add it to your projects, and it works out of the box with frameworks like Spring. And because it's built on Postgres, it natively supports all the tooling you're familiar with (backups, GUIs, CLI tools) and works with any Postgres provider.
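The checkpoint-and-resume idea is language-agnostic; here is a minimal sketch in Python (DBOS Java's actual API differs, and DBOS stores checkpoints in Postgres where a dict stands in here): each step's result is recorded under (workflow ID, step index), and on re-execution after a crash, completed steps return their saved results instead of running again.

```python
# Durable-workflow sketch: checkpoint each step's result so recovery
# resumes from the last incomplete step instead of re-running everything.
def durable_run(workflow_id: str, steps: list, checkpoints: dict) -> list:
    results = []
    for index, step in enumerate(steps):
        key = (workflow_id, index)
        if key not in checkpoints:      # only run steps not yet checkpointed
            checkpoints[key] = step()
        results.append(checkpoints[key])
    return results
```

Re-running the same workflow against the same checkpoint store is idempotent, which is exactly the property that makes side-effecting steps (payments, API calls) safe to recover.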
If you want to try it out, check out the quickstart:
https://docs.dbos.dev/quickstart?language=java
We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions.