Microsoft BitNet: 100B Param 1-Bit model for local CPUs
BitNet is an open-source inference framework from Microsoft for 1-bit (ternary) large language models. With optimized CPU kernels, it can run models up to the 100B-parameter class locally on commodity hardware, trading a small amount of accuracy for large reductions in memory footprint and energy use.
PeppyOS: A simpler alternative to ROS 2 (now with containers support)
PeppyOS presents itself as a simpler alternative to ROS 2 for building robot software, and has recently added support for running nodes in containers.
Building a TB-303 from Scratch
The article provides a comprehensive guide on how to recreate the iconic Roland TB-303 bassline synthesizer from scratch, covering the technical details and principles behind its unique sound.
Zig – Type Resolution Redesign and Language Changes
The Zig project has announced a redesign of how the compiler performs type resolution, along with a set of accompanying language changes intended to simplify the language and make its semantics more predictable.
Create value for others and don’t worry about the returns
The author argues for focusing on creating value for others rather than on what you get back, suggesting that the returns tend to follow on their own when the value is real.
Cloudflare crawl endpoint
Cloudflare announces a new crawl endpoint, giving crawlers a dedicated interface for fetching site content and giving site operators more visibility into and control over crawler traffic.
U+237C ⍼ Is Azimuth
TADA: Fast, Reliable Speech Generation Through Text-Acoustic Synchronization
The article discusses the open-source release of TADA, a speech generation system that synchronizes text and acoustic representations to produce speech quickly and reliably, targeting low-latency text-to-speech applications.
Tony Hoare has died
The article pays tribute to Tony Hoare, a pioneering computer scientist who made significant contributions to the field of programming languages, algorithms, and the theory of computation. It highlights Hoare's influential work, including the development of Quicksort and Communicating Sequential Processes, and his lasting impact on the computer science community.
Yann LeCun raises $1B to build AI that understands the physical world
https://web.archive.org/web/20260310153721/https://www.wired...
https://www.ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259a... (https://archive.md/5eZWq)
Julia Snail – An Emacs Development Environment for Julia Like Clojure's Cider
Julia Snail is an Emacs development environment for the Julia programming language, providing an integrated REPL and interactive workflow similar to what CIDER offers for Clojure.
Agents that run while I sleep
The article discusses the author's journey in developing autonomous software agents that can run and perform tasks while the user is away, allowing for more efficient and hands-off workflow management.
When the chain becomes the product: Seven years inside a token-funded venture
The author reflects on seven years inside a token-funded venture, describing how the blockchain gradually shifted from being infrastructure behind the product to being the product itself, and what that shift did to the company's incentives.
RISC-V Is Sloooow
The article discusses the performance of RISC-V processors, noting that they can be significantly slower than other architectures, particularly in certain workloads. The author provides insights into the factors that can contribute to this performance difference and suggests areas for further optimization.
SSH Secret Menu
https://xcancel.com/rebane2001/status/2031037389347406054
Writing my own text editor, and daily-driving it
The author describes writing their own text editor and switching to it as a daily driver, covering what it took to make a homegrown editor practical for everyday development work.
Swiss e-voting can't count 2,048 ballots after USB keys fail to decrypt them
The article covers a failure in Switzerland's e-voting system in which 2,048 ballots could not be counted because the USB keys holding the decryption material failed to decrypt them, renewing concerns about the reliability and security of the country's e-voting infrastructure.
AutoKernel: Autoresearch for GPU Kernels
AutoKernel is an open-source project that automates the search for fast GPU kernels, using automated experimentation ("autoresearch") to generate and optimize kernel implementations rather than relying on hand-tuning.
Standardizing source maps
Debian decides not to decide on AI-generated contributions
The article covers Debian's debate over AI-generated contributions, which ended with the project declining, for now, to adopt a formal policy on whether such contributions are acceptable.
Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon
Hi HN, we're Sanchit and Shubham (YC W26). We built MetalRT, a fast inference engine for Apple Silicon. Across LLMs, speech-to-text, and text-to-speech, it beats llama.cpp, Apple's MLX, Ollama, and sherpa-onnx on every modality we tested. Custom Metal shaders, no framework overhead.
Also, we've open-sourced RCLI, the fastest end-to-end voice AI pipeline on Apple Silicon. Mic to spoken response, entirely on-device. No cloud, no API keys.
To get started:
brew tap RunanywhereAI/rcli https://github.com/RunanywhereAI/RCLI.git
brew install rcli
rcli setup # downloads ~1 GB of models
rcli # interactive mode with push-to-talk
Or: curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash
The numbers (M4 Max, 64 GB, reproducible via `rcli bench`):

LLM decode – 1.67x faster than llama.cpp, 1.19x faster than Apple MLX (same model files):
- Qwen3-0.6B: 658 tok/s (vs mlx-lm 552, llama.cpp 295)
- Qwen3-4B: 186 tok/s (vs mlx-lm 170, llama.cpp 87)
- LFM2.5-1.2B: 570 tok/s (vs mlx-lm 509, llama.cpp 372)
- Time-to-first-token: 6.6 ms
STT – 70 seconds of audio transcribed in *101 ms*. That's roughly 693x real-time, and 4.6x faster than mlx-whisper.
TTS – 178 ms synthesis. 2.8x faster than mlx-audio and sherpa-onnx.
We built this because demoing on-device AI is easy but shipping it is brutal. Voice is the hardest test: you're chaining STT, LLM, and TTS sequentially, and if any stage is slow, the user feels it. Most teams fall back to cloud APIs not because local models are bad, but because local inference infrastructure is.
The thing that's hard to solve is latency compounding. In a voice pipeline, you're stacking three models in sequence. If each adds 200ms, you're at 600ms before the user hears a word, and that feels broken. You can't optimize one stage and call it done. Every stage needs to be fast, on one device, with no network round-trip to hide behind.
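A back-of-the-envelope way to see the compounding (a toy model with illustrative numbers, not our measured figures):

```python
# Toy latency-budget model for a sequential three-stage voice pipeline.
# Numbers are illustrative, not measured MetalRT figures.
def time_to_first_word(stage_latencies_ms):
    # STT -> LLM -> TTS run back to back, so per-stage latencies add.
    return sum(stage_latencies_ms)

print(time_to_first_word([200, 200, 200]))  # 600 ms: feels broken
print(time_to_first_word([100, 50, 150]))   # 300 ms: every stage must shrink
```

There is no single stage to optimize your way out of: the budget only improves when all three shrink together.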
We went straight to Metal. Custom GPU compute shaders, all memory pre-allocated at init (zero allocations during inference), and one unified engine for all three modalities instead of stitching separate runtimes together.
MetalRT is the first engine to handle all three modalities natively on Apple Silicon. Full methodology:
LLM benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
Speech benchmarks: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...
How: Most inference engines add layers between you and the GPU: graph schedulers, runtime dispatchers, memory managers. MetalRT skips all of it. Custom Metal compute shaders for quantized matmul, attention, and activation, compiled ahead of time and dispatched directly.
Voice pipeline optimization details: https://www.runanywhere.ai/blog/fastvoice-on-device-voice-ai...
RAG optimizations: https://www.runanywhere.ai/blog/fastvoice-rag-on-device-retr...
RCLI is the open-source voice pipeline (MIT) built on MetalRT: three concurrent threads with lock-free ring buffers, double-buffered TTS, 38 macOS actions by voice, local RAG (~4 ms over 5K+ chunks), 20 hot-swappable models, and a full-screen TUI with per-op latency readouts. Falls back to llama.cpp when MetalRT isn't installed.
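The thread wiring can be pictured as single-producer/single-consumer ring buffers between stages. A minimal sketch of the index discipline (Python for readability; the real buffers are native and lock-free, and this is not RCLI's actual code):

```python
# Minimal single-producer/single-consumer ring buffer, in the spirit of the
# buffers between RCLI's pipeline threads. Illustrative sketch only.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def push(self, item):
        """Producer side: fails fast instead of blocking when full."""
        if self.tail - self.head == self.capacity:
            return False  # full: drop or apply backpressure upstream
        self.buf[self.tail % self.capacity] = item
        self.tail += 1  # publish the slot after the write
        return True

    def pop(self):
        """Consumer side: returns None when empty."""
        if self.head == self.tail:
            return None
        item = self.buf[self.head % self.capacity]
        self.head += 1
        return item
```

Because each index is written by exactly one thread, the SPSC variant needs no lock, only an ordering guarantee on the index publish, which is what keeps audio frames flowing without the stages stalling each other.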
Source: https://github.com/RunanywhereAI/RCLI (MIT)
Demo: https://www.youtube.com/watch?v=eTYwkgNoaKg
What would you build if on-device AI were genuinely as fast as cloud?
Levels of Agentic Engineering
The article proposes a ladder of "agentic engineering" maturity, describing successive levels at which coding agents take on work and how engineering practice changes as autonomy and responsibility shift from the human to the agent.
Roblox is minting teen millionaires
https://archive.ph/6V4RI
Universal vaccine against respiratory infections and allergens
Researchers report a universal vaccine candidate designed to protect against a broad range of respiratory infections as well as allergens, potentially offering longer-lasting and broader protection than pathogen-specific shots.
Mesh over Bluetooth LE, TCP, or Reticulum
Columba is an open-source, privacy-focused messaging project that forms a mesh network over Bluetooth LE, TCP, or Reticulum, with decentralized infrastructure and end-to-end encryption as core design principles.
Surpassing vLLM with a Generated Inference Stack
The author describes building a largely generated LLM inference stack that surpasses vLLM's performance when serving Qwen3, covering the generated GPU kernels and serving-layer design and how the stack closes the gap with hand-tuned systems.
Pike: To Exit or Not to Exit
The article explores the 'should we stop here or gamble on the next exit' problem, which involves making decisions under uncertainty. It discusses an algorithm called Pike that can help navigate this type of decision-making dilemma, providing a framework for weighing the risks and rewards of continuing versus stopping.
Meta acquires Moltbook
https://web.archive.org/web/20260310154640/https://www.axios..., https://archive.ph/igqsh
https://www.reuters.com/business/meta-acquires-ai-agent-soci...
https://techcrunch.com/2026/03/10/meta-acquired-moltbook-the...
Launch HN: Didit (YC W26) – Stripe for Identity Verification
Hi HN, I’m Alberto. I co-founded Didit (https://didit.me) with my identical twin brother Alejandro. We are building a unified identity layer—a single integration that handles KYC, AML, biometrics, authentication, and fraud prevention globally. Here’s a demo: https://www.youtube.com/watch?v=eTdcg7JCc4M&t=7s.
Being identical twins, we've spent our whole lives dealing with identity confusion, so it's a bit ironic that we ended up building a company to solve it for the internet.
Growing up in Barcelona, we spent years working on products where identity issues were a massive pain. We eventually realized that for most engineering teams, "global identity" is a fiction—in reality it is a fragmented mess. You end up stitching together one provider for US driver's licenses, another for NFC chip extraction in Europe, a third for AML screening, a fourth for government database validation in Brazil, a fifth for liveness detection on low-end Android devices, and yet another for biometric authentication and age estimation. Orchestrating these into a cohesive flow while adapting to localized regulations like GDPR or CCPA is a nightmare that makes no sense for most teams to be working on.
When we looked at the existing "enterprise" solutions, we were baffled. Most require a three-week sales cycle just to see a single page of documentation. Pricing is hidden behind "Contact Us" buttons, and the products themselves are often bloated legacy systems with high latency and abysmal accuracy.
We also noticed a recurring pattern: these tools are frequently optimized only for the latest iOS hardware, performing poorly on the mid-range or older Android devices that make up a huge percentage of the market. This results in a "leaky" funnel where legitimate users drop off due to technical friction and fraud goes undetected because data points are spread across disparate systems. Also, these systems are expensive, often requiring massive annual commits that price out early-stage startups.
We wanted to build a system that is accessible to everyone—a tool that works like Stripe for identity, where you can get a sandbox key in thirty seconds and start running real verifications with world-class UX and transparent pricing.
To solve this, we took the "delusional" path of full vertical integration. Rather than just wrapping existing APIs, we built our own ID verification and biometric AI models—from classification and fraud detection to OCR models for almost every language. This vertical integration is fundamental to how we handle user data. Because we own the entire stack, we control the flow of sensitive information from end-to-end. Your users' data doesn't get bounced around through a chain of third-party black boxes or regional middle-men. This allows us to provide a level of security and privacy that is impossible when you are just an orchestration layer for other people's APIs.
We believe that identity verification is one of the most critical problems on the internet, and must be solved correctly and ethically. Many people are rightfully skeptical, especially given recent news about projects that have turned identity into a tool for mass data collection or surveillance. We don’t do anything of the sort, but we also don’t want to be coerced in the future, so we facilitate data minimization on the customer side. Instead of a business asking for a full ID scan, we allow them to simply verify a specific attribute—like "is this person over 18?"—without ever seeing the document itself. Our goal is to move the industry away from data hoarding and toward zero knowledge, or at least minimal knowledge, verification.
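The attribute-level check amounts to verifying a signed claim instead of sharing the document. A minimal sketch of the idea (hypothetical names and token format, not Didit's actual API):

```python
import hashlib
import hmac
import json

# Illustrative sketch of attribute-level verification: the business sees
# only a signed yes/no claim ("over_18": true), never the ID document.
# Hypothetical token format; not Didit's actual API.
SECRET = b"issuer-signing-key"  # held by the identity provider

def issue_claim(attribute, value):
    """Identity provider signs a single attribute after verifying the ID."""
    payload = json.dumps({attribute: value}, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_claim(payload, sig):
    """Relying party checks the signature without seeing the document."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

payload, sig = issue_claim("over_18", True)
print(verify_claim(payload, sig))
```

In practice you'd use an asymmetric signature, where the issuer signs and anyone verifies with the public key, so the relying party never holds the signing secret; HMAC is used here only to keep the sketch short.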
The result of our all-in-one approach is a platform that increases onboarding rates while lowering identity costs. We’ve focused on building a high-confidence automated loop that reduces the need for manual review by up to 90%, catching sophisticated deepfakes and spoofing attempts that standard vision models miss. Our SDK is optimized for low bandwidth connections, ensuring it works on spotty 3G networks where legacy providers usually fail.
We are fully live, and you can jump into the dashboard at https://business.didit.me to see the workflow orchestration immediately. Our pricing is transparent and success-based; we don’t believe in hiding costs behind a sales call.
We’re here all day to answer any question—whether it’s about how we handle NFC verification, our approach to deepfake detection, the general ethics behind biometric data retention, or how we think about the future of identity. We’d love your brutal HN feedback on our APIs, platform, and integration flow!