Show HN: My high school team’s space probe
A few friends and I made this design document as part of our entry to the UK CanSat competition, in which a high school team is required to build a probe to be launched. The probe must serve some purpose; ours was to map the temperature and pressure of the air at different altitudes.
We had the opportunity to launch it a week ago and you can find the video of our launch here: https://drive.google.com/file/d/16bsLzxjP7OWRqVvCB62cLv7QYLR...
During the launch we reached 400m above sea level, and the can successfully pulled 70g. The parachute and can stayed intact. Unfortunately, we were unable to get GPS working on the day.
The raw results are here: https://drive.google.com/file/d/1oK1vukjcNcsaXMAPeFlzZ66aHlR... and a slightly cleaned-up version is here: https://drive.google.com/file/d/1xYhkp3sWoJF0bCkkvFs1AygSdLU...
I used my data presentation software to present our results here: https://drive.google.com/file/d/1-r7lT0J4MDLiYfuaasDXJsr5rCA... The software (in the form of a Python script to be executed in Blender) can be found here: https://drive.google.com/file/d/1LHP7OwgI_O8t6-NBI0ZPn9JUt2G... It's not pretty, but it works.
The differences in the temperature and pressure results were exaggerated in the presentation so that the gradient could be seen clearly.
Unfortunately, we did not get into the final (which was judged on this document), but it was an awesome experience nevertheless. The judges used this form to mark us: https://drive.google.com/file/d/1eZnum5zuJvkLzY7RLtm9A-NNzxw... We would love to get any feedback from more experienced people, as we intend to do similar projects in the future and at least two of us want to be professional engineers. I'm happy to reply to any comments.
Show HN: Nash, I made a standalone note with single HTML file
Hello HN, I hope this posts well. I made a note-taking app that lives in a single HTML file. It requires no account and no software installation: download the empty file once, and you can edit and read it at any time, online or offline. Because it is a single file, it can be shared through messengers such as Telegram, which also makes it suitable for sharing long articles with images. And since it is static HTML, it can be hosted as a blog as well.
Show HN: Computer – Build Your Manus AI Agent with an OSS macOS Sandbox
We just open-sourced Computer, a Computer-Use Interface (CUI) framework that enables AI agents to interact with isolated macOS and Linux sandboxes, with near-native performance on Apple Silicon. Computer provides a PyAutoGUI-compatible interface that can be plugged into any AI agent system (OpenAI Agents SDK, LangChain, CrewAI, AutoGen, etc.).
Why Computer?
As CUA AI agents become more capable, they need secure environments to operate in. Computer solves this with:
• Isolation: Run agents in sandboxes completely separate from your host system.
• Reliability: Create reproducible environments for consistent agent behaviour.
• Safety: Protect your sensitive data and system resources.
• Control: Easily monitor and terminate agent workflows when needed.
How it works:
Computer uses Lume Virtualization framework under the hood to create and manage virtual environments, providing a simple Python interface:
from computer import Computer

computer = Computer(os="macos", display="1024x768", memory="8GB", cpu="4")
try:
    await computer.run()

    # Take screenshots
    screenshot = await computer.interface.screenshot()

    # Control mouse and keyboard
    await computer.interface.move_cursor(100, 100)
    await computer.interface.left_click()
    await computer.interface.type("Hello, World!")

    # Access clipboard
    await computer.interface.set_clipboard("Test clipboard")
    content = await computer.interface.copy_to_clipboard()
finally:
    await computer.stop()

Features:
• Full OS interaction: Control mouse, keyboard, screen, clipboard, and file system
• Accessibility tree: Access UI elements programmatically
• File sharing: Share directories between host and sandbox
• Shell access: Run commands directly in the sandbox
• Resource control: Configure memory, CPU, and display resolution
Installation:
pip install cua-computer
GitHub repo: https://github.com/trycua/computer Discord for feedback: https://discord.com/invite/mVnXXpdE85
We're excited to see you build the next Manus-style general agents with Computer!
We'd love to hear your thoughts, feedback, and any questions you might have. What use cases do you see for AI agents running in sandboxes? How do you see Computer being useful in your workflow?
Show HN: I Built a Customer Feedback Tool
Hey HN,
I've been making products for almost a year now. I always started projects and stopped after 2 weeks because I lost motivation. But this time, I’m determined to release it – even if it’s not perfect yet!
Let me introduce Feedlyst: a customer feedback tool where you can create boards, let customers submit & upvote feedback, and turn ideas into action.
I hope this tool will be helpful for you! Would love your feedback!
Raphael
Show HN: Aiopandas – Async .apply() and .map() for Pandas, Faster API/LLMs Calls
aiopandas is a lightweight extension to the popular Pandas data analysis library that adds async versions of .apply() and .map(), so rows can be processed concurrently. This is especially useful when each element triggers a slow I/O-bound call, such as an API request or an LLM call, where blocking on one row at a time wastes most of the wall-clock time.
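The core idea can be sketched as follows. Note that the function name and signature here are illustrative, not necessarily aiopandas's actual API; the point is concurrent execution with bounded parallelism and order-preserving results, like Series.map.

```python
import asyncio

async def async_map(items, fn, max_concurrency=8):
    # Bound concurrency so thousands of rows don't open thousands of
    # simultaneous connections.
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(item):
        async with sem:
            return await fn(item)

    # gather() preserves input order regardless of completion order.
    return await asyncio.gather(*(run_one(i) for i in items))

async def fake_llm_call(x):
    await asyncio.sleep(0.01)  # stand-in for API latency
    return x * 2

results = asyncio.run(async_map([1, 2, 3, 4], fake_llm_call))
```

With plain .apply() the four calls would take four latencies back to back; here they overlap, which is where the speedup comes from.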
Show HN: Cross-platform native UI library for all OS
Show HN: A personal YouTube frontend based on yt-dlp
Show HN: Metacheck – preview how any link appears on social media and chat apps
Hey HN,
I’ve been an indie hacker for a while, but I haven’t had much success with my past projects. Recently, I came across Marc Lou’s advice about building free tools just for fun, so I decided to give it a shot.
I built Metacheck, a simple tool that lets you preview how any link will appear on Twitter/X, LinkedIn, WhatsApp, Telegram, and other platforms. No API keys, no setup: just paste a link and see the preview.
Why I built this: I often ran into issues where social platforms displayed broken or unexpected link previews. Debugging Open Graph meta tags was annoying, so I made a tool to make it easier.
How it works: it fetches metadata from any URL, parses the Open Graph & Twitter Card tags, and shows a real-time preview of how the link will look when shared.
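Extracting those tags needs nothing beyond the standard library; here is a minimal sketch (not Metacheck's actual code) that collects Open Graph and Twitter Card meta tags from an HTML page:

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect Open Graph and Twitter Card <meta> tags."""

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        # OG tags use property="og:..."; Twitter tags use name="twitter:..."
        key = a.get("property") or a.get("name") or ""
        if key.startswith(("og:", "twitter:")):
            self.tags[key] = a.get("content", "")

page = ('<head><meta property="og:title" content="Hello">'
        '<meta name="twitter:card" content="summary"></head>')
p = OGParser()
p.feed(page)
```

A preview tool then renders `og:title`, `og:description`, and `og:image` in each platform's card layout.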
Try it out: https://metacheck.appstate.co/
Show HN: Fashion Shopping with Nearest Neighbors
I made this website with my wife in mind; it makes it possible to browse for similar fashion products over many different retailers at once.
The backend is written in Swift, and is hosted on a single Mac Mini. It performs nearest neighbors on the GPU over ~3M product images.
No vector DB, just pure matrix multiplications. Since we aren't just doing approximate nearest neighbors but rather sorting all results by distance, it's possible to show different "variety" levels by changing the stride over the sorted search results.
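Because the search is exact and fully sorted, the "variety" trick falls out naturally. A toy sketch of the idea, with pure Python standing in for the GPU matrix multiply:

```python
# For unit-norm vectors, cosine similarity is just a dot product, so
# scoring every product against a query is one matrix-vector multiply.
# Pure Python here; the real backend does this on the GPU over ~3M vectors.

def search(query, vectors, variety_stride=1, k=3):
    # Score ALL items (no approximation), then fully sort by similarity.
    scores = [
        (sum(q * v for q, v in zip(query, vec)), idx)
        for idx, vec in enumerate(vectors)
    ]
    scores.sort(reverse=True)
    # A stride > 1 over the fully sorted list trades closeness for
    # variety - something approximate-NN indexes can't easily offer.
    return [idx for _, idx in scores[::variety_stride][:k]]

vectors = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.5, 0.5)]
closest = search((1.0, 0.0), vectors)
varied = search((1.0, 0.0), vectors, variety_stride=2)
```

With stride 1 you get the strict nearest neighbors; with stride 2 every other result is skipped, spreading the picks across the similarity range.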
Nearest neighbors are computed in a latent vector space. The model which produces the vectors is also something I trained in pure Swift.
The underlying data is about 2TB scraped from https://www.shopltk.com/.
All the code is at https://github.com/unixpickle/LTKlassifier
Show HN: Aracno – a distributed web crawler in Go
Aracno is a polite, distributed web crawler. The goal was to make it simple and user-friendly. There are much more powerful crawlers, but they can be excessively complex for simple tasks, especially in distributed mode.
It uses a slightly modified version of the frontier algorithm from Heritrix3, the Internet Archive's crawler. It is quite elegant and suits a crawler's needs, although Aracno is not an incremental crawler like Heritrix.
Aracno is fully distributed, based on the Chord DHT protocol, which means zero additional infrastructure is needed. You can join as many nodes as you want and they will just work out of the box. There is also failure tolerance built into the Chord protocol, so nodes can leave at any time. The system uses key partitioning (where the key is the hostname of a URL) to distribute the crawling workload. The queue-based design of the Heritrix frontier algorithm made it easy to repartition queues between nodes.
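The hostname-based partitioning can be sketched as consistent hashing on a Chord-style ring. This is a simplified illustration, not Aracno's actual code:

```python
import hashlib

def ring_position(key):
    # Chord places both keys and nodes on the same circular hash space.
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big")

def owner(hostname, node_positions):
    # A key belongs to its successor: the first node at or after the
    # key's position, wrapping around the ring.
    pos = ring_position(hostname)
    for node_pos in sorted(node_positions):
        if node_pos >= pos:
            return node_pos
    return min(node_positions)  # wrapped past the end of the ring

# Three hypothetical nodes join the ring; every hostname deterministically
# maps to exactly one of them, with no coordinator involved.
nodes = [ring_position(f"node-{i}") for i in range(3)]
assignment = {h: owner(h, nodes) for h in ["example.com", "news.ycombinator.com"]}
```

Keying on the hostname keeps all URLs of one site on one node, which is what makes per-host politeness and per-host queues workable in a distributed setting.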
Persistence is handled via RocksDB, so you can stop the crawler at any point and resume where it left off.
Aracno saves crawled pages and relevant metadata as archived WARC files, which are just saved on the disk. There is intentionally no program-specific API involved, so it can be easily plugged into any system, although an endpoint for retrieving these files is planned.
Show HN: Web Audio Spring-Mass Synthesis
Hi, I'm the author of this little Web Audio toy which does physical modeling synthesis using a simple spring-mass system.
My current area of research is in sparse, event-based encodings of musical audio (https://blog.cochlea.xyz/sparse-interpretable-audio-codec-pa...). I'm very interested in decomposing audio signals into a description of the "system" (e.g., room, instrument, vocal tract, etc.) and a sparse "control signal" which describes how and when energy is injected into that system. This toy was a great way to start learning about physical modeling synthesis, which seems to be the next stop in my research journey. I was also pleasantly surprised at what's possible these days writing custom Audio Worklets!
Show HN: I built a Slack app that helped improve our PR review workflow
PullPro is a Slack app that streamlines the pull request review process for software developers. It provides features such as personalized review assignments, automated reminders, and insights to help teams improve their code review workflow.
Show HN: Browser-Use MCP for Claude that works without an API key
Show HN: OCR Benchmark Focusing on Automation
The OCR/document extraction field has seen a lot of action recently, with releases like Mistral OCR and Andrew Ng's agentic document processing. There are also several benchmarks for OCR, but they all test something slightly different, which makes good comparisons between models very hard.
To give an example, some models like Mistral OCR only try to convert a document to markdown format; you have to use another LLM on top to get the final result. Some VLMs directly give structured information, like key fields from documents such as invoices, but you then have to either add business rules on top or use an LLM-as-a-judge system to get a sense of which outputs need manual review and which can be trusted. No benchmark attempts to measure the actual rate of automation you can achieve.
We have tried to solve this problem with a benchmark that applies only to documents/use cases where you are looking for automation, and that tries to measure the end-to-end automation level of different models or systems.
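As an illustration of what such a metric could look like (my own formulation, not necessarily the benchmark's exact definition): a document only counts toward automation if every extracted field is correct and the model's confidence clears the human-review threshold.

```python
def automation_rate(predictions, threshold=0.9):
    # A document is "automated" only if every extracted field matches the
    # ground truth AND confidence is high enough to skip human review.
    automated = sum(
        1
        for p in predictions
        if p["confidence"] >= threshold and p["fields"] == p["ground_truth"]
    )
    return automated / len(predictions)

docs = [
    {"confidence": 0.97, "fields": {"total": "42.00"}, "ground_truth": {"total": "42.00"}},
    {"confidence": 0.95, "fields": {"total": "13.37"}, "ground_truth": {"total": "13.36"}},  # wrong value
    {"confidence": 0.60, "fields": {"total": "99.99"}, "ground_truth": {"total": "99.99"}},  # flagged for review
]
rate = automation_rate(docs)
```

This captures the end-to-end view: a model that is accurate but never confident, or confident but wrong, scores poorly either way.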
We have collected a dataset of documents like invoices, which show up in processes where automation is needed (as opposed to more copilot-style use cases where you chat with a document). We have also annotated these documents and published the dataset and repo so the benchmark can be extended.
Here is writeup: https://nanonets.com/automation-benchmark Dataset: https://huggingface.co/datasets/nanonets/nn-auto-bench-ds Github: https://github.com/NanoNets/nn-auto-bench
Looking for suggestions on how this benchmark can be improved further.
Show HN: LLM-docs, software documentation intended for consumption by LLMs
I was inspired by a recent tweet by Andrej Karpathy, as well as my own experience copying and pasting a bunch of html docs into Claude yesterday and bemoaning how long-winded and poorly formatted it was.
I’m trying to decide if I should make it into a full-fledged service and completely automate the process of generating the distilled documentation.
Problem is that it would cost a lot in API tokens and wouldn’t generate any revenue (plus it would have to be updated as documentation changes significantly). Maybe Anthropic wants to fund it as a public good? Let me know!
Show HN: Time Portal – Get dropped into history, guess where you landed
Hi HN! I love imagining the past, so I made Time Portal, a game where you are dropped into a historical event and see AI video footage from that moment. You have to guess where you are in time and on the map. It’s like GeoGuessr (and heavily inspired by it!) but for historical events.
The videos are all created with AI. It’s a pipeline of Flux (images), Kling (video), and mmaudio (audio). The videos aren’t always historically accurate to the last detail. They might incorporate elements of folklore or have details from popular beliefs about the way things looked rather than the latest academic research on how they looked.
I’m thinking a lot about how to make the game more interactive. One thing that makes Geoguessr so fun for me is that you can move infinitely and always find more details to help you pinpoint the location. I want Time Portal to have a similar quality. I have a few ideas to try soon that will hopefully make the game more interactive and infinite.
Show HN: CodeVideo – Two years in the making to build an event-sourced IDE
Hi everyone! I originally created CodeVideo as a little side project using FFMPEG WASM in the browser as an experiment, but it's since grown into my vision for a completely automated software educational course production system.
The idea is that you create the educational content once, then can export the course to multiple formats - as a video (of course!), but also as an interactive webpage, a blog post, or even a book, PDF, or PowerPoint! Basically a "create once, ship everywhere" concept.
Things will get more interesting as I incorporate stuff like spell check (for speech) and abstract syntax tree checking (for code), so you can quite literally check the validity of your software course in realtime as you build the course.
You can read more about the technical details and history on my Substack launch post:
https://codevideo.substack.com/p/launching-codevideo-after-t...
And here's the intro video about how to use the studio:
https://youtu.be/4nyuhWF6SS0
EDIT: added link to the mp4 created in the demo video:
https://coffee-app.sfo2.cdn.digitaloceanspaces.com/codevideo...
From an intellectual and software standpoint this product has been (and still is) an absolute blast to build - and as always, I've learned a TON along the way. Very excited to get feedback from the Hacker community - even (maybe especially?) the classic skeptical feedback ;)
As an engineer, I always suck at monetization and things like that - I already am wondering if the whole token system is too complex and perhaps a different model would be better. Again, waiting for feedback from everyone. Until then, enjoy the studio!
Show HN: I built a no-hassle Emoji search tool
Tired of clunky emoji pickers? I built a fast, minimalistic emoji search webpage—no ads, no bloat, just instant results.
Show HN: JobMatchAI reads job descriptions for you and filters out bad ones
When I was searching for jobs on LinkedIn/ZipRecruiter/Indeed, they only gave me a keyword search, so I still had to read a bunch of job descriptions for roles I wasn't qualified for or didn't like. That wastes a lot of time. This program puts all the desirable jobs at the top. I think it's most valuable for workers early in their career, who don't have recruiters messaging them and have to search and apply manually. If you want, I can run the program for you: https://docs.google.com/forms/d/e/1FAIpQLSewvVIp3ElZeRydXR8i...
Show HN: Daylight – track sunrise / sunset times in your terminal
I love the sunlight and dread the long, dark winter evenings of Northern Europe. I often look up sunrise / sunset times and count off the days until the darkness is gone.
Now I've written a terminal app for this (Mac/Linux).
Features: a colorful summary of daylight times for your location; projected change over the coming days; support for NO_COLOR and a --short flag if you dislike the default output format.
The location is IP-based but you can override this if you're on a VPN. Just create a terminal alias with the --loc flag. The app supports areas in the arctic / antarctic circle too.
Check out the repository for a preview and instructions on how to install it with Homebrew.
(There is a Windows build but it's not yet tested)
Show HN: A website that makes your text look cool anywhere online using Unicode
FontGenerator.cool is a free online tool that restyles your text using Unicode characters: it maps ordinary letters to lookalike symbols (bold, italic, script, and so on), producing "fonts" you can copy and paste anywhere plain text is accepted, from social media bios to chat messages.
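For context, the usual trick behind tools like this (a sketch, not necessarily this site's implementation) is to remap ASCII letters into Unicode's Mathematical Alphanumeric Symbols block, so the styling survives anywhere plain text does:

```python
def to_bold(text):
    # Mathematical Bold capitals start at U+1D400, lowercase at U+1D41A.
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(0x1D400 + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(0x1D41A + ord(ch) - ord("a")))
        else:
            out.append(ch)  # digits, spaces, punctuation pass through
    return "".join(out)

styled = to_bold("Hello HN")
```

The result renders as bold in most fonts, yet it is still just a string, which is why it works in places that forbid rich formatting.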
Show HN: Factorio Learning Environment – Agents Build Factories
I'm Jack, and I'm excited to share a project that has channeled my Factorio addiction recently: the Factorio Learning Environment (FLE).
FLE is an open-source framework for developing and evaluating LLM agents in Factorio. It provides a controlled environment where AI models can attempt complex automation, resource management, and optimisation tasks in a grounded world with meaningful constraints.
A critical advantage of Factorio as a benchmark is its unbounded nature. Unlike many evals that are quickly saturated by newer models, Factorio's geometric complexity scaling means it won't be "solved" in the next 6 months (or possibly even years). This allows us to meaningfully compare models by the order-of-magnitude of resources they can produce - creating a benchmark with longevity.
The project began 18 months ago after years of playing Factorio, recognising its potential as an AI research testbed. A few months ago, our team (myself, Akbir, and Mart) came together to create a benchmark that tests agent capabilities in spatial reasoning and long-term planning.
Two technical innovations drove this project forward: First, we discovered that piping Lua into the Factorio console over TCP enables running (almost) arbitrary code without directly modding the game. Second, we developed a first-class Python API that wraps these Lua programs to provide a clean, type-hinted interface for AI agents to interact with Factorio through familiar programming paradigms.
Agents interact with FLE through a REPL pattern:
1. Observe the world (seeing the output of their last action)
2. Generate Python code to perform their next action
3. Receive detailed feedback (including exceptions and stdout)
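The loop above can be sketched with stand-in objects. This is illustrative only: FakeEnv and FakeAgent stand in for FLE's real interfaces, and the observe/generate/execute/feedback cycle is the point.

```python
def repl_loop(agent, env, steps=3):
    observation = env.reset()
    for _ in range(steps):
        code = agent.generate(observation)   # agent proposes Python code
        try:
            stdout = env.execute(code)       # run it against the game
            observation = f"OK: {stdout}"    # stdout becomes the next observation
        except Exception as exc:
            observation = f"Error: {exc}"    # exceptions are feedback too
    return observation

class FakeEnv:
    def reset(self):
        return "iron ore nearby"

    def execute(self, code):
        if "bad" in code:
            raise ValueError("entity not found")
        return "placed burner-mining-drill"

class FakeAgent:
    def generate(self, observation):
        # A real agent would call an LLM here with the observation.
        return "place_drill()"

final = repl_loop(FakeAgent(), FakeEnv())
```

Feeding exceptions back as observations (rather than halting) is what lets the agent iterate toward a working factory.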
We provide two main evaluation settings:
- Lab-play: 24 structured tasks with fixed resources
- Open-play: an unbounded task of building the largest possible factory on a procedurally generated map
We found that while LLMs show promising short-horizon skills, they struggle with spatial reasoning in constrained environments. They can discover basic automation strategies (like electric-powered drilling) but fail to achieve more complex automation (like electronic circuit manufacturing). Claude Sonnet 3.5 is currently the best model (by a significant margin).
The code is available at https://github.com/JackHopkins/factorio-learning-environment.
You'll need: - Factorio (version 1.1.110) - Docker - Python 3.10+
The README contains detailed installation instructions and examples of how to run evaluations with different LLM agents.
We would love to hear your thoughts and see what others can do with this framework!
Show HN: ClanPlan – Modern Family Planner
ClanPlan is a web-based tool that helps families organize and manage their activities, events, and member information in one centralized place. The app offers features such as event scheduling, member profiles, and shared planning tools to make coordination within the family easier.
Show HN: Open-Source MCP Server for Context and AI Tools
Large Language Models (LLMs) are powerful, but they’re limited by fixed context windows and outdated knowledge. What if your AI could access live search, structured data extraction, OCR, and more—all through a standardized interface?
We built the JigsawStack MCP Server, an open-source implementation of the Model Context Protocol (MCP) that lets any AI model call external tools effortlessly.
Here’s what it unlocks:
- Web Search & Scraping: Fetch live information and extract structured data from web pages.
- OCR & Structured Data Extraction: Process images, receipts, invoices, and handwritten text with high accuracy.
- AI Translation: Translate text and documents while maintaining context.
- Image Generation: Generate images from text prompts in real time.
Instead of stuffing prompts with static data or building custom integrations, AI models can now query MCP servers on demand—extending memory, reducing token costs, and improving efficiency.
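For reference, an MCP tool invocation is a JSON-RPC 2.0 message; tools are called via the "tools/call" method. The tool name and arguments below are illustrative, not JigsawStack's actual schema:

```python
import json

# Shape of an MCP tool-call request on the wire (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search",                          # hypothetical tool name
        "arguments": {"query": "model context protocol"},
    },
}
wire = json.dumps(request)
```

Because every MCP server speaks this same protocol, a model integrated once can call any server's tools without custom glue code.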
Read the full breakdown here: https://jigsawstack.com/blog/jigsawstack-mcp-servers
If you’re working on AI-powered applications, try it out and let us know how it works for you.
Show HN: Bubbles, a vanilla JavaScript web game
Hey everybody, you might remember my older game, Lander! It made a big splash on Hacker News about 2 years ago. I'm still enjoying writing games with no dependencies. I've been working on Bubbles for about 6 months and would love to see your scores.
If you like it, you can build your own levels with my builder tool: https://ehmorris.com/bubbles/builder/ and share the levels here or via Github.
Show HN: MCPGod: Fine-grained control over MCP clients, servers, and tools
Hey everyone, I've wanted an easy way to control which MCP server tools are available to clients. So for example, I might want a Gmail server to only expose the read tool (but not send, delete, etc.).
I figured that if I created a CLI for spawning MCP servers, I could intercept stdin, stdout, stderr, etc. and modify what clients see when they make calls to list tools, resources, and prompts.
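The interception idea can be sketched like this (a simplified illustration, not MCPGod's actual code): sit between client and server and filter the JSON-RPC tools/list response before the client ever sees it.

```python
import json

def filter_tools_response(raw, allowed):
    # Parse a JSON-RPC tools/list response and drop every tool that
    # isn't on the allow-list; the client never learns the rest exist.
    msg = json.loads(raw)
    tools = msg.get("result", {}).get("tools")
    if tools is not None:
        msg["result"]["tools"] = [t for t in tools if t["name"] in allowed]
    return json.dumps(msg)

# A server advertising three tools, proxied with an allow-list of two.
raw = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "echo"}, {"name": "add"}, {"name": "delete"}]},
})
filtered = filter_tools_response(raw, {"echo", "add"})
```

Since MCP servers speak JSON-RPC over stdio, a spawning proxy like this can rewrite any message in either direction, which is also what makes the logging possible.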
Well it worked!
In the initial version you can easily add a server to claude with a safe list of tools:
npx -y mcpgod add @modelcontextprotocol/server-everything --client claude --tools=echo,add
Now when you load Claude Desktop, it will only discover the echo and add tools from that server. It's a nice way to keep the agents in line :)
You can check it out here: https://github.com/mcpgod/cli
It will also log everything that a client is doing to ~/mcpgod/logs.
Currently it only supports Claude, but it will be easy to add Cursor, Cline, Windsurf, etc.
With the `tools` command you can list all of a server's tools, and even call a tool directly from the command line, which is pretty fun.
I was thinking it would be nice to create a UI for it to easily enable/disable servers and tools for each client, inspect logs, view analytics, etc.
Thanks for reading!
Show HN: AliasVault – Open-source password manager with built-in email aliases
AliasVault (https://aliasvault.net) is an open-source, self-hostable, end-to-end encrypted password and (email) alias manager that protects your privacy by creating alternative identities, passwords, and email addresses for every website you use, keeping your personal information private.
My name is Lanedirt and I’m a software developer with over 15 years of experience and a privacy enthusiast. Since 2013, I've been running a public temporary email service (https://spamok.com), but I wanted to build something more privacy-centric and fully self-hostable. That's why I've spent the last year developing AliasVault from scratch. The idea behind AliasVault is simple: create unique, random identities for every website, protecting your privacy and reducing online tracking and profiling.
Key Features:
- Unique identities & passwords: Generate individual aliases and strong passwords for every site.
- Built-in email server: Create email aliases with your own domains, receive and read emails directly in AliasVault—no external dependencies.
- Zero-knowledge encryption: All data encrypted locally (using Argon2Id and AES-256-GCM); your master password never leaves your device.
- Flexible installation: Docker-based self-hosting, supports Linux VMs and ARM devices (like Raspberry Pi).
- Fully Open-source: Free to use, audit, modify, under the MIT license.
I've just released v0.14.0, which adds:
- Built-in support for Google Authenticator-compatible TOTP code generation.
- Official browser extensions now approved and live in the Chrome, Firefox, Edge, Safari (macOS), and Brave stores, giving easy access to your credentials and email aliases and allowing one-click alias creation.
Try the official supported cloud version: https://aliasvault.net
Github and quick self install guide: https://github.com/lanedirt/AliasVault
Full documentation including architecture: https://docs.aliasvault.net
I'd love to hear your feedback and suggestions, happy to answer any questions! Thanks for checking out AliasVault, I appreciate it a lot! :-)
Show HN: Swig – A PostgreSQL-powered job queue system for Go
I built Swig, a job queue system for Go that leverages PostgreSQL's advanced features for distributed processing. It's currently in alpha, and I'd love feedback from the community.
What is Swig? Swig is a robust job queue system for Go applications that uses PostgreSQL as its backend. Unlike many job queues that require separate infrastructure, Swig leverages your existing PostgreSQL database, making it simpler to deploy and maintain.
Key Features:
- Race-free job distribution using SELECT FOR UPDATE SKIP LOCKED
- Real-time job processing with LISTEN/NOTIFY
- Leader election via advisory locks
- Priority queues and scheduled jobs
- Transactional job enqueueing (jobs can be part of your application transactions)
- Multiple database driver support (pgx and database/sql)
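For readers unfamiliar with the pattern, the race-free fetch typically looks like the query below (table and column names are illustrative, not Swig's actual schema):

```python
# SKIP LOCKED lets many workers poll the same table concurrently: rows
# locked by another worker are skipped rather than waited on, so no two
# workers ever claim the same job and none of them block.
FETCH_JOB_SQL = """
UPDATE jobs
SET status = 'running', started_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending' AND run_at <= now()
    ORDER BY priority DESC, run_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, kind, payload;
"""
```

Combined with LISTEN/NOTIFY to wake idle workers, this gives queue semantics from the database alone, with no extra broker to run.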
Why I Built It: I wanted to deepen my understanding of PostgreSQL's concurrency features and distributed systems patterns. While there are other PostgreSQL-backed queues, I wanted to build something specifically for Go that embraces idiomatic patterns and provides a clean, type-safe API while fully leveraging PostgreSQL's powerful features for distributed coordination.
Current Status: This is an alpha release and a passion project. The core functionality works, but there are still rough edges and missing features. I'm actively working on improvements and would appreciate feedback, issues, and contributions; or shoot me an email at ogbemudiatimothy@gmail.com
Show HN: Seven39, a social media app that is only open for 3 hours every evening
I built this site as a quick test of whether a time-boxed social media experience feels better than an endless one. So far I've just been using it with friends and it feels nice, but it seems like it's time to bring it to a larger audience.
Let me know what you think! It is just based on EST for now, sorry.
Show HN: Pi Labs – AI scoring and optimization tools for software engineers
Hey HN, after years building some of the core AI and NLU systems in Google Search, we decided to leave and build outside. Our goal was to put the advanced ML and DS techniques we’ve been using in the hands of all software engineers, so that everyone can build AI and Search apps at the same level of performance and sophistication as the big labs.
This was a hard technical challenge but we were very inspired by the MVC architecture for Web development. The intuition there was that when a data model changes, its view would get auto-updated. We built a similar architecture for AI. On one side is a scoring system, which encapsulates in a set of metrics what’s good about the AI application. On the other side is a set of optimizers that “compile” against this scorer - prompt optimization, data filtering, synthetic data generation, supervised learning, RL, etc. The scoring system can be calibrated using developer, user or rater feedback, and once it’s updated, all the optimizers get recompiled against it.
The result is a setup that makes it easy to incrementally improve the quality of your AI in a tight feedback loop: You update your scorers, they auto-update your optimizers, your app gets better, you see that improvement in interpretable scores, and then you repeat, progressing from simpler to more advanced optimizers and from off-the-shelf to calibrated scorers.
We would love your feedback on this approach. https://build.withpi.ai has a set of playgrounds to help you quickly build a scorer and multiple optimizers. No sign in required. https://code.withpi.ai has the API reference and Notebook links. Finally, we have a Loom demo [1].
More technical details
Scorers: Our scoring system has three key differences from the common LLM-as-a-judge pattern.
First, rather than a single label or metric from an LLM judge, our scoring system is represented as a tunable tree of metrics, with 20+ dimensions which get combined into a final (non-linear) weighted score. The tree structure makes scores easily interpretable (just look at the breakdown by dimension), extensible (just add/remove a dimension), and adjustable (just re-tune the weights). Training the scoring system with labeled/preference data adjusts the weights. You can automate this process with user feedback signals, resulting in a tight feedback loop.
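A toy version of such a metric tree might look like this (structure, dimensions, and weights are all illustrative): leaves score one dimension each, and an inner node combines children with tunable weights before squashing the sum.

```python
import math

def leaf(fn):
    # A leaf scores one dimension of the response in [0, 1].
    return fn

def node(weighted_children):
    # An inner node combines child scores with trainable weights and
    # squashes the sum, so the final score is non-linear in the dimensions.
    def combined(text):
        total = sum(w * child(text) for w, child in weighted_children)
        return 1 / (1 + math.exp(-total))
    return combined

# Two toy dimensions: a quantitative check (length) and a stand-in for a
# natural-language dimension (topicality).
length_ok = leaf(lambda t: 1.0 if len(t) < 80 else 0.0)
on_topic = leaf(lambda t: 1.0 if "python" in t.lower() else 0.0)

scorer = node([(2.0, length_ok), (3.0, on_topic)])
score = scorer("A short answer about Python.")
```

"Training the scorer" in this picture means adjusting the weights from labeled or preference data, while the per-dimension breakdown stays readable.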
Second, our scoring system handles natural language dimensions (great for free-form, qualitative questions requiring NLU) alongside quantitative dimensions (like computations over dates or doc length, which can be provided in Python) in the same tree. When calibrating with your labeled or preference data, the scorer learns how to balance these.
Third, for natural language scoring, we use specialized smaller encoder models rather than autoregressive models. Encoders are a natural fit for scoring as they are faster and cheaper to run, easier to fine-tune, and more suitable architecturally (bi-directional attention with regression or classification head) than similar sized decoder models. For example, we can score 20+ dimensions in sub-100ms, making it possible to use scoring everywhere from evaluation to agent orchestration to reward modeling.
Optimizers: We took the most salient ML techniques and reformulated them as optimizers against our scoring system e.g. for DSPy, the scoring system acts as its validator. For GRPO, the scoring system acts as its reward model. We’re keen to hear the community’s feedback on which techniques to add next.
Overall stack: playgrounds on Next.js and Vercel; AI on RunPod and GCP for training GPUs, TRL for training algorithms, and ModernBERT & Llama as base models; GCP and Azure for 4o and Anthropic calls.
We’d love your feedback and perspectives: Our team will be around to answer questions and discuss. If there’s a lot of interest, happy to host a live session!
- Achint, co-founder of Pi Labs
[1] http://loom.com/share/c09a1fda8cdf4003a5664fa9cfbf7804