Zig – Type Resolution Redesign and Language Changes
Zig, a systems programming language, has announced its 2026 release plan, which includes a new standard library, improved compatibility, and a focus on simplicity and developer productivity.
U+237C ⍼ Is Azimuth
Julia Snail – An Emacs Development Environment for Julia Like Clojure's Cider
Julia Snail is an Emacs package that provides a development environment for the Julia programming language, pairing a REPL with editor integration in the spirit of CIDER for Clojure.
Cloudflare crawl endpoint
Cloudflare announces the launch of a new BR (Brotli Compression) Crawl Endpoint, which allows users to test and validate their website's Brotli compression settings, helping to optimize website performance and reduce data usage.
Agents that run while I sleep
The article discusses the author's journey in developing autonomous software agents that can run and perform tasks while the user is away, allowing for more efficient and hands-off workflow management.
Tony Hoare has died
The article pays tribute to Tony Hoare, a pioneering computer scientist who made significant contributions to the field of programming languages, algorithms, and the theory of computation. It highlights Hoare's influential work, including the development of Quicksort and Communicating Sequential Processes, and his lasting impact on the computer science community.
Yann LeCun raises $1B to build AI that understands the physical world
https://web.archive.org/web/20260310153721/https://www.wired...
https://www.ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259a... (https://archive.md/5eZWq)
Create value for others and don’t worry about the returns
The article describes the author's experience running 69 agents at once, and the practical challenges of deploying, coordinating, and maintaining a system at that scale to achieve the desired outcomes.
RISC-V Is Sloooow
The article discusses the performance of RISC-V processors, noting that they can be significantly slower than other architectures, particularly in certain workloads. The author provides insights into the factors that can contribute to this performance difference and suggests areas for further optimization.
Writing my own text editor, and daily-driving it
The article describes the author's experience writing their own text editor and using it as a daily driver, covering what it takes to make a homegrown editor practical for day-to-day development work.
SSH Secret Menu
https://xcancel.com/rebane2001/status/2031037389347406054
Standardizing source maps
Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon
Hi HN, we're Sanchit and Shubham (YC W26). We built a fast inference engine for Apple Silicon. LLMs, speech-to-text, text-to-speech – MetalRT beats llama.cpp, Apple's MLX, Ollama, and sherpa-onnx on every modality we tested. Custom Metal shaders, no framework overhead.
Also, we've open-sourced RCLI, the fastest end-to-end voice AI pipeline on Apple Silicon. Mic to spoken response, entirely on-device. No cloud, no API keys.
To get started:
brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
brew install rcli
rcli setup # downloads ~1 GB of models
rcli # interactive mode with push-to-talk
Or: curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
The numbers (M4 Max, 64 GB, reproducible via `rcli bench`):

LLM decode – 1.67x faster than llama.cpp, 1.19x faster than Apple MLX (same model files):
- Qwen3-0.6B: 658 tok/s (vs mlx-lm 552, llama.cpp 295)
- Qwen3-4B: 186 tok/s (vs mlx-lm 170, llama.cpp 87)
- LFM2.5-1.2B: 570 tok/s (vs mlx-lm 509, llama.cpp 372)
- Time-to-first-token: 6.6 ms
STT – 70 seconds of audio transcribed in *101 ms*. That's 714x real-time. 4.6x faster than mlx-whisper.
TTS – 178 ms synthesis. 2.8x faster than mlx-audio and sherpa-onnx.
We built this because demoing on-device AI is easy but shipping it is brutal. Voice is the hardest test: you're chaining STT, LLM, and TTS sequentially, and if any stage is slow, the user feels it. Most teams fall back to cloud APIs not because local models are bad, but because local inference infrastructure is.
The thing that's hard to solve is latency compounding. In a voice pipeline, you're stacking three models in sequence. If each adds 200ms, you're at 600ms before the user hears a word, and that feels broken. You can't optimize one stage and call it done. Every stage needs to be fast, on one device, with no network round-trip to hide behind.
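To make the compounding concrete, here is a minimal sketch of how per-stage latencies stack into time-to-first-audio in a sequential voice pipeline (the stage timings are illustrative numbers, not RunAnywhere's measurements):

```python
# Latency-budget sketch for a sequential STT -> LLM -> TTS pipeline.
# Stages run back-to-back, so their latencies simply add up; there is
# no network round-trip to hide any single slow stage behind.

def time_to_first_audio_ms(stages):
    """Total delay before the user hears the first spoken word."""
    return sum(stages.values())

# Hypothetical budgets: a pipeline where every stage adds ~200 ms
# versus one where every stage has been driven down individually.
sluggish = {"stt": 200.0, "llm_first_token": 200.0, "tts_first_chunk": 200.0}
tight = {"stt": 100.0, "llm_first_token": 50.0, "tts_first_chunk": 50.0}

print(time_to_first_audio_ms(sluggish))  # 600.0 -> feels broken
print(time_to_first_audio_ms(tight))     # 200.0 -> conversational
```

The point of the arithmetic: optimizing one stage to near-zero still leaves you hostage to the slowest remaining stage, which is why every stage has to be fast.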
We went straight to Metal. Custom GPU compute shaders, all memory pre-allocated at init (zero allocations during inference), and one unified engine for all three modalities instead of stitching separate runtimes together.
MetalRT is the first engine to handle all three modalities natively on Apple Silicon. Full methodology:
LLM benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
Speech benchmarks: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...
How: Most inference engines add layers between you and the GPU: graph schedulers, runtime dispatchers, memory managers. MetalRT skips all of it. Custom Metal compute shaders for quantized matmul, attention, and activation - compiled ahead of time, dispatched directly.
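As a rough illustration of what a "quantized matmul" computes at the kernel level, here is a plain-Python sketch of per-row symmetric int8 quantization — the math only, not MetalRT's actual Metal shaders:

```python
# Toy symmetric int8 weight quantization + matrix-vector product.
# Real engines fuse this into a single GPU kernel; this shows the idea.

def quantize_rows(w, bits=8):
    """Per-row symmetric quantization: row ~= scale * q, q in int8 range."""
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    out = []
    for row in w:
        scale = max(abs(v) for v in row) / qmax or 1.0  # avoid div-by-zero
        q = [round(v / scale) for v in row]
        out.append((scale, q))
    return out

def qmatvec(qw, x):
    """y = W @ x using quantized rows: integer accumulate, rescale once."""
    return [scale * sum(qi * xi for qi, xi in zip(q, x)) for scale, q in qw]

w = [[0.5, -1.0], [2.0, 0.25]]
x = [1.0, 2.0]
exact = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]  # [-1.5, 2.5]
approx = qmatvec(quantize_rows(w), x)
print(exact, approx)  # approx tracks exact to within quantization error
```

The payoff on real hardware is that the inner loop runs on small integers with one float rescale per row, which is what makes quantized decode fast.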
Voice pipeline optimization details: https://www.runanywhere.ai/blog/fastvoice-on-device-voice-ai...
RAG optimizations: https://www.runanywhere.ai/blog/fastvoice-rag-on-device-retr...
RCLI is the open-source voice pipeline (MIT) built on MetalRT: three concurrent threads with lock-free ring buffers, double-buffered TTS, 38 macOS actions by voice, local RAG (~4 ms over 5K+ chunks), 20 hot-swappable models, and a full-screen TUI with per-op latency readouts. Falls back to llama.cpp when MetalRT isn't installed.
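The ring-buffer idea can be sketched in a few lines — a generic single-producer/single-consumer ring inferred from the description above, not RCLI's actual implementation:

```python
# Minimal single-producer/single-consumer (SPSC) ring buffer sketch.
# With exactly one writer and one reader, each index has a single
# mutator, so no lock is needed (a C/Metal version would use atomic
# loads and stores for head/tail).

class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # full: caller drops the chunk or applies backpressure
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

ring = SpscRing(4)  # one slot reserved, so it holds 3 items
for chunk in ("a", "b", "c"):
    ring.push(chunk)
print(ring.push("d"))  # False: buffer full
print(ring.pop())      # 'a'
```

One slot is deliberately left empty so that `head == tail` unambiguously means "empty" rather than "full".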
Source: https://github.com/RunanywhereAI/RCLI (MIT)
Demo: https://www.youtube.com/watch?v=eTYwkgNoaKg
What would you build if on-device AI were genuinely as fast as cloud?
Debian decides not to decide on AI-generated contributions
The article covers Debian's long-running debate over how to handle AI-generated contributions to the project. After extensive discussion, the project chose not to adopt a formal policy for now — in effect, deciding not to decide.
Levels of Agentic Engineering
The article proposes a set of levels for agentic engineering, describing how developers progress from closely supervising AI coding agents toward increasingly autonomous workflows, and how oversight and process change at each level.
Universal vaccine against respiratory infections and allergens
Researchers at Stanford University have developed a vaccine candidate intended to protect against a wide range of respiratory infections and allergens, potentially providing broad, long-lasting immunity from a single approach.
Mesh over Bluetooth LE, TCP, or Reticulum
Columba is an open-source, privacy-focused messaging project that forms a decentralized mesh over Bluetooth LE, TCP, or Reticulum, with end-to-end encryption and user privacy as core design principles.
FFmpeg-over-IP – Connect to remote FFmpeg servers
Pike: To Exit or Not to Exit
The article explores the 'should we stop here or gamble on the next exit' problem, a classic case of decision-making under uncertainty. It presents Pike, an algorithm that provides a framework for weighing the risks and rewards of continuing versus stopping.
EQT eyes potential $6B sale of Linux pioneer SUSE, sources say
EQT, a private equity firm, is reportedly considering a potential sale of SUSE, a leading Linux software provider, for an estimated value of around $6 billion. The sources indicate that EQT is exploring strategic options for SUSE, which it acquired in 2019 for $2.5 billion.
Meta acquires Moltbook
https://web.archive.org/web/20260310154640/https://www.axios..., https://archive.ph/igqsh
https://www.reuters.com/business/meta-acquires-ai-agent-soci...
https://techcrunch.com/2026/03/10/meta-acquired-moltbook-the...
Invoker Commands API
The Invoker Commands API adds declarative `command` and `commandfor` attributes to HTML buttons, letting a button invoke built-in behavior on another element — such as opening a dialog or toggling a popover — without custom JavaScript event handlers.
Launch HN: Didit (YC W26) – Stripe for Identity Verification
Hi HN, I’m Alberto. I co-founded Didit (https://didit.me) with my identical twin brother Alejandro. We are building a unified identity layer—a single integration that handles KYC, AML, biometrics, authentication, and fraud prevention globally. Here’s a demo: https://www.youtube.com/watch?v=eTdcg7JCc4M&t=7s.
Being identical twins, we’ve spent our whole lives dealing with identity confusion, so it is a bit ironic that we ended up building a company to solve it for the internet.
Growing up in Barcelona, we spent years working on products where identity issues were a massive pain. We eventually realized that for most engineering teams, "global identity" is a fiction—in reality it is a fragmented mess. You end up stitching together one provider for US driver's licenses, another for NFC chip extraction in Europe, a third for AML screening, a fourth for government database validation in Brazil, a fifth for liveness detection on low-end Android devices, and yet another for biometric authentication and age estimation. Orchestrating these into a cohesive flow while adapting to localized regulations like GDPR or CCPA is a nightmare that makes no sense for most teams to be working on.
When we looked at the existing "enterprise" solutions, we were baffled. Most require a three-week sales cycle just to see a single page of documentation. Pricing is hidden behind "Contact Us" buttons, and the products themselves are often bloated legacy systems with high latency and abysmal accuracy.
We also noticed a recurring pattern: these tools are frequently optimized only for the latest iOS hardware, performing poorly on the mid-range or older Android devices that make up a huge percentage of the market. This results in a "leaky" funnel where legitimate users drop off due to technical friction and fraud goes undetected because data points are spread across disparate systems. Also, these systems are expensive, often requiring massive annual commits that price out early-stage startups.
We wanted to build a system that is accessible to everyone—a tool that works like Stripe for identity, where you can get a sandbox key in thirty seconds and start running real verifications with world-class UX and transparent pricing.
To solve this, we took the "delusional" path of full vertical integration. Rather than just wrapping existing APIs, we built our own ID verification and biometric AI models—from classification and fraud detection to OCR models for almost every language. This vertical integration is fundamental to how we handle user data. Because we own the entire stack, we control the flow of sensitive information from end-to-end. Your users' data doesn't get bounced around through a chain of third-party black boxes or regional middle-men. This allows us to provide a level of security and privacy that is impossible when you are just an orchestration layer for other people's APIs.
We believe that identity verification is one of the most critical problems on the internet, and must be solved correctly and ethically. Many people are rightfully skeptical, especially given recent news about projects that have turned identity into a tool for mass data collection or surveillance. We don’t do anything of the sort, but we also don’t want to be coerced in the future, so we facilitate data minimization on the customer side. Instead of a business asking for a full ID scan, we allow them to simply verify a specific attribute—like "is this person over 18?"—without ever seeing the document itself. Our goal is to move the industry away from data hoarding and toward zero knowledge, or at least minimal knowledge, verification.
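The data-minimization idea can be sketched abstractly — a hypothetical illustration of predicate-only disclosure, not Didit's actual API:

```python
# Hypothetical sketch of attribute-based (minimal-disclosure) verification:
# the relying party receives only the predicate result, never the document
# or the underlying date of birth.

from datetime import date

def verify_over_18(date_of_birth, today=None):
    """Answer 'is this person over 18?' without revealing the birth date."""
    today = today or date.today()
    cutoff = date(today.year - 18, today.month, today.day)
    return date_of_birth <= cutoff

# The business sees only this boolean, not the ID scan or DOB.
print(verify_over_18(date(2000, 1, 1), today=date(2026, 3, 10)))  # True
print(verify_over_18(date(2010, 1, 1), today=date(2026, 3, 10)))  # False
```

In a real deployment the predicate would be evaluated inside the verification provider (or a zero-knowledge proof), so the boolean is all that ever crosses the trust boundary.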
The result of our all-in-one approach is a platform that increases onboarding rates while lowering identity costs. We’ve focused on building a high-confidence automated loop that reduces the need for manual review by up to 90%, catching sophisticated deepfakes and spoofing attempts that standard vision models miss. Our SDK is optimized for low bandwidth connections, ensuring it works on spotty 3G networks where legacy providers usually fail.
We are fully live, and you can jump into the dashboard at https://business.didit.me to see the workflow orchestration immediately. Our pricing is transparent and success-based; we don’t believe in hiding costs behind a sales call.
We’re here all day to answer any question—whether it’s about how we handle NFC verification, our approach to deepfake detection, the general ethics behind biometric data retention, or how we think about the future of identity. We’d love your brutal HN feedback on our APIs, platform, and integration flow!
Roblox is minting teen millionaires
https://archive.ph/6V4RI
Bippy: React Internals Toolkit
Exploring the ocean with Raspberry Pi–powered marine robots
This article explores how Raspberry Pi-powered marine robots are being used to study and monitor the ocean environment, collecting data on factors such as water temperature, pH, and currents to help understand and protect the world's oceans.
After outages, Amazon to make senior engineers sign off on AI-assisted changes
https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77... (https://archive.ph/wXvF3)
https://twitter.com/lukolejnik/status/2031257644724342957 (https://xcancel.com/lukolejnik/status/2031257644724342957)
Mother of All Grease Fires (1994)
Open Weights isn't Open Training
The article argues that releasing a model's weights ("open weights") falls short of genuinely open AI: without the training data, code, and full recipe, others cannot reproduce, audit, or retrain the model, so openness of weights should not be conflated with openness of training.
We are building data breach machines and nobody cares
The article discusses the increasing prevalence of data breaches and the lack of public concern, arguing that we are creating 'data breach machines' through the extensive collection and storage of personal data by companies and organizations. It highlights the need for more robust security measures and greater accountability to protect individuals' privacy and data.