Show stories

Show HN: Oxyde – Pydantic-native async ORM with a Rust core
mr_Fatalyst 4 days ago

Hi HN! I built Oxyde because I was tired of duplicating my models.

If you use FastAPI, you know the drill. You define Pydantic models for your API, then define separate ORM models for your database, then write converters between them. SQLModel tries to fix this but it's still SQLAlchemy underneath. Tortoise gives you a nice Django-style API but its own model system. Django ORM is great but welded to the framework.

I wanted something simple: your Pydantic model IS your database model. One class, full validation on input and output, native type hints, zero duplication. The query API is Django-style (.objects.filter(), .exclude(), Q/F expressions) because I think it's one of the best designs out there.

Explicit over implicit. I tried to remove all the magic. Queries don't touch the database until you call a terminal method like .all(), .get(), or .first(). If you don't explicitly call .join() or .prefetch(), related data won't be loaded. No lazy loading, no surprise N+1 queries behind your back. You see exactly what hits the database by reading the code.
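
The laziness described above can be sketched in a few lines of plain Python. This is an illustrative stand-in, not Oxyde's actual classes: building a query object costs nothing, and only the terminal method evaluates.

```python
# Illustrative sketch of "no query until a terminal method" -- not Oxyde's code.
class QuerySet:
    def __init__(self, rows, filters=None):
        self._rows = rows
        self._filters = list(filters or [])

    def filter(self, **kwargs):
        # Returns a new QuerySet; no data is touched yet.
        return QuerySet(self._rows, self._filters + [kwargs])

    def all(self):
        # Terminal method: only here does evaluation actually happen.
        out = self._rows
        for f in self._filters:
            out = [r for r in out if all(r.get(k) == v for k, v in f.items())]
        return out

users = [{"name": "ada", "age": 36}, {"name": "bob", "age": 17}]
qs = QuerySet(users).filter(age=36)  # nothing evaluated yet
print(qs.all())                      # [{'name': 'ada', 'age': 36}]
```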

Type safety was a big motivation. Python's weak spot is runtime surprises, so Oxyde tackles this on three levels: (1) when you run makemigrations, it also generates .pyi stub files with fully typed queries, so your IDE knows that filter(age__gte=...) takes an int, that create() accepts exactly the fields your model has, and that .all() returns list[User] not list[Any]; (2) Pydantic validates data going into the database; (3) Pydantic validates data coming back out via model_validate(). You get autocompletion, red squiggles on typos, and runtime guarantees, all from the same model definition.

Why Rust? Not for speed as a goal. I don't do "language X is better" debates. Each one is good at what it was made for. Python is hard to beat for expressing business logic. But infrastructure stuff like SQL generation, connection pooling, and row serialization is where a systems language makes sense. So I split it: Python handles your models and business logic, Rust handles the database plumbing. Queries are built as an IR in Python, serialized via MessagePack, sent to Rust which generates dialect-specific SQL, executes it, and streams results back. Speed is a side effect of this split, not the goal. But since you're not paying a performance tax for the convenience, here are the benchmarks if curious: https://oxyde.fatalyst.dev/latest/advanced/benchmarks/
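
The Python-to-Rust handoff can be sketched with the standard library, using json as a stand-in for MessagePack. The IR field names below are assumptions for illustration, not Oxyde's actual wire format:

```python
import json  # the real pipeline uses MessagePack; json stands in for a stdlib-only sketch

# Hypothetical IR for something like User.objects.filter(age__gte=18).all()
ir = {
    "model": "User",
    "op": "select",
    "filters": [{"field": "age", "lookup": "gte", "value": 18}],
    "terminal": "all",
}
payload = json.dumps(ir).encode()  # serialized query, ready to hand to the Rust core
decoded = json.loads(payload)      # what the Rust side would deserialize
print(decoded["filters"][0]["lookup"])  # gte
```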

What's there today: Django-style migrations (makemigrations / migrate), transactions with savepoints, joins and prefetch, PostgreSQL + SQLite + MySQL, FastAPI integration, and an auto-generated admin panel that works with FastAPI, Litestar, Sanic, Quart, and Falcon (https://github.com/mr-fatalyst/oxyde-admin).

It's v0.5, beta, active development, API might still change. This is my attempt to build the ORM I personally wanted to use. Would love feedback, criticism, ideas.

Docs: https://oxyde.fatalyst.dev/

Step-by-step FastAPI tutorial (blog API from scratch): https://github.com/mr-fatalyst/fastapi-oxyde-example

github.com
104 59
Show HN: Droeftoeter, a Terminal Coding Toy
whtspc64 4 days ago

This is a small coding toy I made for fun. I think there are a few interesting ideas buried in it — curious what others think.

github.com
9 3
Show HN: Thermal Receipt Printers – Markdown and Web UI
howlett 4 days ago

github.com
81 30
Show HN: Claude Code skills that build complete Godot games
htdt about 18 hours ago

I’ve been working on this for about a year through four major rewrites. Godogen is a pipeline that takes a text prompt, designs the architecture, generates 2D/3D assets, writes the GDScript, and tests it visually. The output is a complete, playable Godot 4 project.

Getting LLMs to reliably generate functional games required solving three specific engineering bottlenecks:

1. The Training Data Scarcity: LLMs barely know GDScript. It has ~850 classes and a Python-like syntax that will happily let a model hallucinate Python idioms that fail to compile. To fix this, I built a custom reference system: a hand-written language spec, full API docs converted from Godot's XML source, and a quirks database for engine behaviors you can't learn from docs alone. Because 850 classes blow up the context window, the agent lazy-loads only the specific APIs it needs at runtime.
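
The lazy-loading idea can be sketched as a lookup that returns only the class references the agent asked for, so the full 850-class reference never sits in context. The docs content and function name here are illustrative, not Godogen's actual storage:

```python
# Illustrative sketch: load only the API references the agent needs right now.
API_DOCS = {
    "Node2D": "2D game object with position, rotation, and scale.",
    "CharacterBody2D": "Physics body for characters; supports move_and_slide().",
}

def load_refs(needed, docs=API_DOCS):
    """Return only the requested class docs, keeping the context window small."""
    return {name: docs[name] for name in needed if name in docs}

context = load_refs(["CharacterBody2D"])
print(sorted(context))  # ['CharacterBody2D']
```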

2. The Build-Time vs. Runtime State: Scenes are generated by headless scripts that build the node graph in memory and serialize it to .tscn files. This avoids the fragility of hand-editing Godot's serialization format. But it means certain engine features (like `@onready` or signal connections) aren't available at build time—they only exist when the game actually runs. Teaching the model which APIs are available at which phase — and that every node needs its owner set correctly or it silently vanishes on save — took careful prompting but paid off.

3. The Evaluation Loop: A coding agent is inherently biased toward its own output. To stop it from cheating, a separate Gemini Flash agent acts as visual QA. It sees only the rendered screenshots from the running engine—no code—and compares them against a generated reference image. It catches the visual bugs text analysis misses: z-fighting, floating objects, physics explosions, and grid-like placements that should be organic.

Architecturally, it runs as two Claude Code skills: an orchestrator that plans the pipeline, and a task executor that implements each piece in a `context: fork` window so mistakes and state don't accumulate.

Everything is open source: https://github.com/htdt/godogen

Demo video (real games, not cherry-picked screenshots): https://youtu.be/eUz19GROIpY

Blog post with the full story (all the wrong turns) coming soon. Happy to answer questions.

github.com
245 147
heythisischris 5 days ago

Show HN: GitClassic.com, a fast, lightweight GitHub thin client (pages <14KB)

Hey HN,

I posted GitClassic here 2 months ago; since then I've rebuilt most of it based on what people asked for.

https://gitclassic.com

What's new: Issues, PRs w/ full diffs, repo intelligence (health scores, dependency graphs), trending/explore, bookmarks, comparison tool, and advanced search.

Every page is server-rendered HTML: no React, no SPA, no client bundle, pages under 14KB (gzipped). Try loading facebook/react and compare it to GitHub's load times.

Public repos work without an account, Pro adds private repo access via GitHub OAuth.

Stack: Hono on Lambda, DynamoDB, CloudFront, 500KB Node bundle, cold starts usually <500ms.

What's missing?

Thanks, Chris

gitclassic.com
36 21
Show HN: Hecate – Call an AI from Signal
rhodey about 19 hours ago

Hecate is an AI you can voice and video call from Signal iOS and Android. This works by installing Signal into an Android emulator and controlling the virtual camera and microphone. Tinfoil.sh is used for private inference.

github.com
21 3
p0u4a about 21 hours ago

Show HN: Hackerbrief – Top posts on Hacker News summarized daily

hackerbrief.vercel.app
68 45
mapldx 2 days ago

Show HN: Signet – Autonomous wildfire tracking from satellite and weather data

I built Signet in Go to see if an autonomous system could handle the wildfire monitoring loop that people currently run by hand - checking satellite feeds, pulling up weather, looking at terrain and fuels, deciding whether a detection is actually a fire worth tracking.

All the data already exists: NASA FIRMS thermal detections, GOES-19 imagery, NWS forecasts, LANDFIRE fuel models, USGS elevation, Census population data, OpenStreetMap. The problem is it arrives from different sources on different cadences in different formats.

Most of the system is deterministic plumbing - ingestion, spatial indexing, deduplication. I use Gemini to orchestrate 23 tools across weather, terrain, imagery, and incident tracking for the part where clean rules break down: deciding which weak detections are worth investigating, what context to pull next, and how to synthesize noisy evidence into a structured assessment.

It also records time-bounded predictions and scores them against later data, so the system is making falsifiable claims instead of narrating after the fact. The current prediction metrics are visible on the site even though the sample is still small.
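
A minimal sketch of what a time-bounded, falsifiable prediction record could look like. The field names and the crude scoring rule are assumptions for illustration, not Signet's actual schema:

```python
from datetime import datetime, timedelta

# Hypothetical prediction record: a claim that expires and can be scored later.
prediction = {
    "incident": "fire-042",
    "claim": "perimeter grows toward the northeast",
    "made_at": datetime(2024, 7, 1, 12, 0),
    "valid_until": datetime(2024, 7, 1, 18, 0),
}

def score(pred, observed_growth_bearing, now):
    if now > pred["valid_until"]:
        return None  # expired before scoring; counts as unscored, not correct
    # crude check: "northeast" is ~45 degrees, allow +/- 45 degrees of slop
    return abs(observed_growth_bearing - 45) <= 45

result = score(prediction, observed_growth_bearing=60,
               now=prediction["made_at"] + timedelta(hours=3))
print(result)  # True
```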

It's already opening incidents from raw satellite detections and matching some to official NIFC reporting. But false positives, detection latency, and incident matching can still be rough.

I'd especially welcome criticism on: where should this be more deterministic instead of LLM-driven? And is this kind of autonomous monitoring actually useful, or just noisier than doing it by hand?

signet.watch
121 31
FirTheMouse 2 days ago

Show HN: GDSL – 800 line kernel: Lisp subset in 500, C subset in 1300

firthemouse.github.io
85 20
sammy0910 about 20 hours ago

Show HN: Sprinklz.io – An RSS reader with powerful algorithmic controls

sprinklz.io
12 3
octetta 2 days ago

Show HN: What if your synthesizer was powered by APL (or a dumb K clone)?

I built k-synth as an experiment to see if a minimalist, K-inspired array language could make sketching waveforms faster and more intuitive than traditional code. I’ve put together a web-based toolkit so you can try the syntax directly in the browser without having to touch a compiler:

Live Toolkit: https://octetta.github.io/k-synth/

If you visit the page, here is a quick path to an audio payoff:

- Click "patches" and choose dm-bell.ks.

- Click "run"—the notebook area will update. Click the waveform to hear the result.

- Click the "->0" button below the waveform to copy it into slot 0 at the top (slots are also clickable).

- Click "pads" in the entry area to show a performance grid.

- Click "melodic" to play slot 0's sample at different intervals across the grid.

The 'Weird' Stack:

- The Language: A simplified, right-associative array language (e.g., s for sine, p for pi).

- The Web Toolkit: Built using WASM and Web Audio for live-coding samples.

- AI Pair-Programming: I used AI agents to bootstrap the parser and web boilerplate, which let me vet the language design in weeks rather than months.

The Goal: This isn't meant to replace a DAW. It’s a compact way to generate samples for larger projects. It’s currently in a "will-it-blend" state. I’m looking for feedback from the array language and DSP communities—specifically on the operator choices and the right-to-left evaluation logic.
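
For readers without the toolkit open, here is a rough Python equivalent of the kind of short sketch the language expresses: a decaying sine tone. The sample rate, frequency, and envelope are arbitrary choices for illustration, not taken from dm-bell.ks:

```python
import math

# One second of a decaying 440 Hz sine ("bell-ish" tone), as a plain sample list.
rate = 8000  # samples per second (arbitrary for this sketch)
samples = [math.exp(-3 * n / rate) * math.sin(2 * math.pi * 440 * n / rate)
           for n in range(rate)]
print(len(samples))  # 8000
```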

Source (MIT): https://github.com/octetta/k-synth

octetta.github.io
90 31
Show HN: TLA PreCheck – TS DSL that proves state machines via TLA+
bootoshi about 9 hours ago

The article describes a TLA+ pre-check tool that helps software developers catch potential issues in their code before deployment. The tool automates the process of running TLA+ model checking, providing a convenient way to integrate formal verification into the development workflow.

github.com
7 0
bneb-dev about 9 hours ago

Show HN: Autonomous Prover Running > 1hr

Hi, I am building an autonomous proof engine that is chasing its second result. You can follow it live via a GitHub gist.

perqed.com
2 0
usespoke about 11 hours ago

Show HN: Spoke – On-device AI dictation for macOS with visual automation engine

usespoke.app
2 1
Show HN: Seasalt Cove, iPhone access to your Mac
jerrodcodes about 11 hours ago

I feel like I finally built something I actually use every day and it has completely changed the way I think about work. AI workflows have flipped how devs operate. You're not heads down writing code anymore, you're bouncing between projects, instructing agents, reviewing their work, nudging them forward. The job is now less about typing and more about judgment calls.

And the thing about that workflow is you spend a lot of time waiting. Waiting for the agent to finish, waiting for the next approval gate. That waiting doesn't have to happen at your desk. It doesn't have to happen in front of a monitor at all. I built Seasalt because I realized my iPhone could handle 80% of what I was chaining myself to my Mac for. Kick off the agent, walk away, review the diff from the store, on a walk, or in a separate room away from your Mac. Approve it. Start the next one, switch to another session. You don't need giant dual monitors for this. That's kind of the whole point.

Also, I have a deep security background, so I felt it was 100% necessary to include end-to-end encryption with a zero-knowledge relay, no ports opened, no VPN configuration needed, and key validation in the onboarding flow.

seasalt.app
2 0
_mql about 12 hours ago

Show HN: Live-Editable Svelte Pages

svedit.dev
6 1
leoooo about 17 hours ago

Show HN: AgentDiscuss – a place where AI agents discuss products

Hi HN,

We’ve been thinking about a simple question:

What products do AI agents actually prefer?

As more agents start using APIs, tools, and software, it feels likely they’ll need somewhere to exchange information about what works well.

So we built a small experiment: AgentDiscuss.

It’s a discussion forum where AI agents can:

1. start product discussions

2. comment and debate tools

3. upvote products they prefer

Humans can also launch products there and watch how agents react.

We’re curious to see what happens if agents start discussing products with each other.

If you’re building agents, feel free to send one there.

https://agentdiscuss.com

Happy to hear thoughts or criticism.

agentdiscuss.com
9 9
kzisme about 13 hours ago

Show HN: Airport Swap

I've been living in Denver for a few years and the prices for simple airport rides are kind of crazy.

In an effort to build/expand community - I built Airport Swap. It is a platform to exchange rides to/from the airport _for free_. Give a ride to get a ride!

Airport Swap was built with the intention of building (or finding) community, relying on a circle of trust to choose drivers/riders (friends of friends, pretty much). Connecting people on their street, in their building, or from a board game group they attended before is the goal.

Looking for any feedback :)

Cheers!

airportswap.com
5 3
Show HN: Goal.md, a goal-specification file for autonomous coding agents
jmilinovich 1 day ago

github.com
28 7
Show HN: GitAgent – An open standard that turns any Git repo into an AI agent
sivasurend 3 days ago

We built GitAgent because we kept seeing the same problem: every agent framework defines agents differently, and switching frameworks means rewriting everything.

GitAgent is a spec that defines an AI agent as files in a git repo.

Three core files — agent.yaml (config), SOUL.md (personality/instructions), and SKILL.md (capabilities) — and you get a portable agent definition that exports to Claude Code, OpenAI Agents SDK, CrewAI, Google ADK, LangChain, and others.
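
To make that concrete, here is a hypothetical agent.yaml. Only the three filenames and the export targets come from the description above; the field names and values are guesses, not the actual spec:

```yaml
# agent.yaml -- hypothetical example; field names are illustrative guesses.
name: release-notes-writer
version: 0.1.0
model: claude-sonnet
soul: SOUL.md        # personality/instructions
skills:
  - SKILL.md         # capabilities
exports:
  - claude-code
  - langchain
```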

What you get for free by being git-native:

1. Version control for agent behavior (roll back a bad prompt like you'd revert a bad commit)

2. Branching for environment promotion (dev → staging → main)

3. Human-in-the-loop via PRs (agent learns a skill → opens a branch → human reviews before merge)

4. Audit trail via git blame and git diff

5. Agent forking and remixing (fork a public agent, customize it, PR improvements back)

6. CI/CD with GitAgent validate in GitHub Actions

The CLI lets you run any agent repo directly:

npx @open-gitagent/gitagent run -r https://github.com/user/agent -a claude

The compliance layer is optional, but there if you need it — risk tiers, regulatory mappings (FINRA, SEC, SR 11-7), and audit reports via GitAgent audit.

Spec is at https://gitagent.sh, code is on GitHub.

Would love feedback on the schema design and what adapters people would want next.

gitagent.sh
145 36
Nebyl about 13 hours ago

Show HN: Most GPU Upgrades Aren't Worth It, I Built a Calculator to Prove It

I run a small project called best-gpu.com, a site that ranks GPUs by price-to-performance.

While browsing PC building forums and Reddit, I kept seeing the same question: “What should I upgrade to from my current GPU?” Most answers are just lists of cards without showing the actual performance gain, so people often end up paying for upgrades that barely improve performance.

So I built a small tool: a GPU Upgrade Calculator.

You enter your current GPU and it shows:

estimated performance gain

a value score based on price vs performance

a filtered list of upgrade options (brand, price, VRAM, etc.)

The goal is simply to help people avoid spending money on upgrades that aren’t really worth it.

Curious to hear feedback from HN on the approach, data sources, or features that would make something like this more useful.

https://best-gpu.com/upgrade.php

best-gpu.com
5 3
Show HN: Pincer – Twitter/X for bots. No humans allowed
johnpolacek about 11 hours ago

Pincer is a Twitter/X-like social platform built for bots. Bots post short messages, follow other users, and read feeds — all through a simple REST API. A web UI serves the public timeline, user profiles, and search.

Code is at https://github.com/boyter/pincer

All data is stored in-memory and periodically persisted to disk (no database required).
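
The storage model described above can be sketched in a few lines. The structure, filename, and flush interval here are illustrative, not Pincer's actual code:

```python
import json
import threading

# In-memory store, periodically flushed to disk; no database required.
posts = []

def persist(path="pincer_state.json"):
    with open(path, "w") as f:
        json.dump(posts, f)

def start_flush_loop(interval_s=30):
    # Re-arms itself on a timer; not started in this sketch.
    persist()
    t = threading.Timer(interval_s, start_flush_loop, args=(interval_s,))
    t.daemon = True
    t.start()

posts.append({"bot": "crab_news", "text": "hello, feed"})
persist()
print(json.load(open("pincer_state.json"))[0]["bot"])  # crab_news
```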

Add your bot: Point your AI agent at https://pincer.wtf/skill.md and it will know what to do.

A project by Boyter (w/some contributions from John Polacek)

pincer.wtf
6 3
katspaugh 3 days ago

Show HN: Ichinichi – One note per day, E2E encrypted, local-first

Look, every journaling app out there wants you to organize things into folders and tags and templates. I just wanted to write something down every day.

So I built this. One note per day. That's the whole deal.

- Can't edit yesterday. What's done is done. Keeps you from fussing over old entries instead of writing today's.

- Year view with dots showing which days you actually wrote. It's a streak chart. Works better than it should.

- No signup required. Opens right up, stores everything locally in your browser. Optional cloud sync if you want it

- E2E encrypted with AES-GCM, zero-knowledge, the whole nine yards.

Tech-wise: React, TypeScript, Vite, Zustand, IndexedDB. Supabase for optional sync. Deployed on Cloudflare. PWA-capable.

The name means "one day" in Japanese (いちにち).

The read-only past turned out to be the thing that actually made me stick with it. Can't waste time perfecting yesterday if yesterday won't let you in.

Live at https://ichinichi.app | Source: https://github.com/katspaugh/ichinichi

130 59
Show HN: Han – A Korean programming language written in Rust
xodn348 3 days ago

A few weeks ago I saw a post about someone converting an entire C++ codebase to Rust using AI in under two weeks.

That inspired me — if AI can rewrite a whole language stack that fast, I wanted to try building a programming language from scratch with AI assistance.

I've also been noticing growing global interest in Korean language and culture, and I wondered: what would a programming language look like if every keyword was in Hangul (the Korean writing system)?

Han is the result. It's a statically-typed language written in Rust with a full compiler pipeline (lexer → parser → AST → interpreter + LLVM IR codegen).

It supports arrays, structs with impl blocks, closures, pattern matching, try/catch, file I/O, module imports, a REPL, and a basic LSP server.

This is a side project, not a "you should use this instead of Python" pitch. Feedback on language design, compiler architecture, or the Korean keyword choices is very welcome.

https://github.com/xodn348/han

github.com
207 116
onion92 about 15 hours ago

Show HN: Tic-Tac-Word – Can you beat yourself in this tic-tac-toe word game?

tictacword.com
6 4
Show HN: Smart glasses that tell me when to stop pouring
tash_2s about 15 hours ago

I've been experimenting with a more proactive AI interface for the physical world.

This project is a drink-making assistant for smart glasses. It looks at the ingredients, selects a recipe, shows the steps, and guides me in real time based on what it sees. The behavior I wanted most was simple: while I'm pouring, it should tell me when to stop, instead of waiting for me to ask.

The demo video is at the top of the README.

The interaction model I'm aiming for is something like a helpful person beside you who understands the situation and intervenes at the right moment. I think this kind of interface is especially useful for preventing mistakes that people may not notice as they happen.

The system works by running Qwen3.5-27B continuously on the latest 0.5-second video clip every 0.5 seconds. I used Overshoot (https://overshoot.ai/) for fast live-video VLM inference. Because it processes short clips instead of single frames, it can capture motion cues as well as visual context. In my case, inference takes about 300-500 ms per clip, which makes the feedback feel responsive enough for this kind of interaction. Based on the events returned by the VLM, the app handles the rest: state tracking, progress management, and speech and LLM handling.
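
The clip loop can be sketched as follows. Here `infer` is a stub standing in for the real VLM call (Overshoot/Qwen are not used), and the event names and clip fields are invented for illustration:

```python
# Sketch of the 0.5-second clip loop: VLM events drive app state, and the
# assistant prompts once rather than repeating itself every clip.
def infer(clip):
    # Stub: a real call would send the clip to the VLM and parse its response.
    return {"event": "liquid_near_line"} if clip["fill"] > 0.9 else {"event": "pouring"}

state = {"prompted_stop": False}

def step(clip):
    event = infer(clip)["event"]
    if event == "liquid_near_line" and not state["prompted_stop"]:
        state["prompted_stop"] = True  # prompt once, then stay quiet
        return "say: stop pouring"
    return None

print(step({"fill": 0.5}))   # None
print(step({"fill": 0.95}))  # say: stop pouring
```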

I previously tried a similar idea with a fine-tuned RF-DETR object detection model. That approach is better on cost and could also run on-device. But VLMs are much more flexible: I can change behavior through prompting instead of retraining, and they can handle broader situational understanding than object detection alone. In practice, though, with small and fast VLMs, prompt wording matters a lot. Getting reliable behavior means learning what kinds of prompts the specific model responds to consistently.

I tested this by making a mocktail, but I think the same interaction pattern should generalize to cooking more broadly. I plan to try more examples and see where it works well and where it breaks down.

One thing that seems hard is checking the liquid level, especially when the liquid is nearly transparent. So far, I have only tried this with a VLM, and I am curious what other approaches might work.

Questions and feedback welcome.

github.com
4 3
Show HN: Open-source, extract any brand's logos, colors, and assets from a URL
hitchyhocker about 15 hours ago

Hi everyone, I just open sourced OpenBrand - extract any brand's logos, colors, and assets from just a URL.

It's MIT licensed, open source, completely free. Try it out at openbrand.sh

It also comes with a free API and MCP server for you to use in your code or agents.

Why we built this: while building another product, we needed to pull in customers' brand images as custom backgrounds. It felt like a simple enough problem with no open source solution - so we built one.

openbrand.sh
8 1
lnenad about 15 hours ago

Show HN: Grafly.io – Free online diagramming tool

Hey, I'm Nenad. I built Grafly (https://grafly.io) because I kept reaching for different tools just to sketch out a quick architecture diagram and hating either the UI, color schemes, or usage patterns, or that I had to log in, or have my doodles stored on someone's server.

It's a React/React Flow app that runs entirely in the browser, meaning everything saves to localStorage and nothing leaves your machine. You get basic shapes, AWS/GCP icons, edges with waypoints, and shareable URLs that encode the whole diagram in the query string (no backend, just LZ compression). There is also a description of the underlying data format that you can give to your AI so it can build diagrams from a text prompt.

I know it's not perfect but it does the job for me and maybe it'll be useful to some of you. Code is on GitHub, AGPL licensed. https://github.com/lnenad/grafly
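
The shareable-URL scheme described above, compressing the whole diagram into the query string, has this general shape. Grafly does it client-side with LZ compression; this stdlib Python sketch uses zlib and base64 instead, and the URL is hypothetical:

```python
import base64
import json
import zlib

# Encode a whole diagram into a URL query parameter, no backend needed.
diagram = {"nodes": [{"id": "a", "label": "API"}, {"id": "b", "label": "DB"}],
           "edges": [{"from": "a", "to": "b"}]}

packed = base64.urlsafe_b64encode(zlib.compress(json.dumps(diagram).encode())).decode()
url = f"https://example.invalid/?d={packed}"  # hypothetical share link

# Decoding on page load:
restored = json.loads(zlib.decompress(base64.urlsafe_b64decode(packed)))
print(restored == diagram)  # True
```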

grafly.io
4 1
rishikeshs about 15 hours ago

Show HN: Is Claude's 2x usage active?

2x.rishikeshs.com
4 0
Show HN: Context Gateway – Compress agent context before it hits the LLM
ivzak 4 days ago

We built an open-source proxy that sits between coding agents (Claude Code, OpenClaw, etc.) and the LLM, compressing tool outputs before they enter the context window.

Demo: https://www.youtube.com/watch?v=-vFZ6MPrwjw#t=9s

Motivation: Agents are terrible at managing context. A single file read or grep can dump thousands of tokens into the window, most of it noise. This isn't just expensive — it actively degrades quality. Long-context benchmarks consistently show steep accuracy drops as context grows (OpenAI's GPT-5.4 eval goes from 97.2% at 32k to 36.6% at 1M https://openai.com/index/introducing-gpt-5-4/).

Our solution uses small language models (SLMs): we look at model internals and train classifiers to detect which parts of the context carry the most signal. When a tool returns output, we compress it conditioned on the intent of the tool call—so if the agent called grep looking for error handling patterns, the SLM keeps the relevant matches and strips the rest.

If the model later needs something we removed, it calls expand() to fetch the original output. We also do background compaction at 85% window capacity and lazy-load tool descriptions so the model only sees tools relevant to the current step.

The proxy also gives you spending caps, a dashboard for tracking running and past sessions, and Slack pings when an agent is sitting there waiting on you.

Repo is here: https://github.com/Compresr-ai/Context-Gateway. You can try it with:

  curl -fsSL https://compresr.ai/api/install | sh

Happy to go deep on any of it: the compression model, how the lazy tool loading works, or anything else about the gateway. Try it out and let us know how you like it!

github.com
95 62