What a federal lawyer's comments tell us about the Trump admin
The article discusses the frustrations and challenges faced by a federal public defender in the criminal justice system, highlighting the systemic issues that make it difficult to provide effective representation for clients.
Show HN: NovaAccess – SSH access to Tailscale tailnet hosts on iOS without VPN
Hi HN,
I’m an indie developer and heavy Tailscale user. I built NovaAccess because I needed reliable SSH access to my tailnet on iOS without breaking other VPNs.
On iOS, the official Tailscale app requires VPN entitlements, which means you can’t run it alongside another VPN. That was a deal-breaker for my workflow. NovaAccess uses libtailscale directly, so it works without requesting VPN permissions and can coexist with any VPN you’re already using.
What NovaAccess does:
Native SSH terminal for tailnet hosts (SwiftTerm, not WebKit)
Auto-discovery of tailnet nodes
SSH key management
Optional support for custom login servers / Headscale
In the latest update (v1.1.0), I focused heavily on terminal UX and reliability:
Reworked terminal core and accessory keyboard
Better session resume after backgrounding
UI redesign to make daily ops faster
There’s also a Pro tier for people managing multiple tailnets or doing heavier ops:
Multi-tailnet switching
In-tailnet server monitoring
Internal web access
SFTP file management
The free tier is fully usable for SSH access.
I built this primarily for myself and am now trying to see if it’s useful to others.
Feedback (especially critical) is very welcome.
App Store link:
https://apps.apple.com/us/app/novaaccess-tailnet-tools/id674...
Our forks of the SwiftTerm and libtailscale core dependencies we use are open-sourced on GitHub:
https://github.com/GalaxNet-Ltd/SwiftTerm https://github.com/GalaxNet-Ltd/libtailscale
Agent Trace spec for tracking AI-generated code
Canary nonprofit helps employers fund financial care for employees
Dow Chemical to Lay Off 4,500 Employees in AI Overhaul
Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation
The paper presents a method for computing attention at constant cost per token, based on a symmetry-aware Taylor approximation of the attention computation.
Understanding UI density and designing for real-world usage
The article discusses the concept of UI density, which refers to the amount of information and interactive elements presented on a user interface. It explores the trade-offs between high and low density UIs, and provides guidelines for determining the appropriate density based on user needs and the context of the application.
Don't Use Passkey
The article cautions against adopting passkeys, the new passwordless authentication standard, and highlights potential security risks, including the possibility of exposing users' private keys and the potential for large-scale breaches. It suggests alternative authentication methods and encourages users to carefully consider the implications before adopting passkeys.
Nearly 900 Nazi-linked accounts discovered at Credit Suisse
Show HN: Vopal – AI note taker with no meeting bots (real-time, 98% accurate)
Hi HN,
I built this because meeting bots create an awkward dynamic. After 10+ years in the workplace, I've watched this pattern repeat:
Client joins call → sees unknown participant → "What's that bot?" → awkward pause.
Even after explaining it's "just for notes," there's visible hesitation. The bot-as-participant model is fundamentally broken for client-facing work.
The core idea: you open Vopal in a browser tab, it captures your computer's audio directly via Web Audio API. No bot joins the meeting. No extra participant in Zoom/Meet/Teams. The meeting looks completely normal.
Beyond that, Vopal does three things:
1. Real-time transcription
• 99% accuracy (custom Whisper model optimized for meetings)
• 100+ languages, handles multiple speakers
• Transcription streams as people speak
2. Privacy-conscious processing
• You control what gets recorded (start/stop in browser)
• Audio processed through secure pipeline
• No meeting bot joining as "participant"
3. Actionable summaries
• AI extracts: decisions, action items, key topics
• 3-bullet output, not 10-page transcripts
• "What did we agree on?" answered in 10 seconds
Architecture:
Browser tab (Web Audio API) → captures system audio
Streaming transcription → real-time Whisper inference
AI summarization → structured action items
I validated this across 200+ sales calls. In 73% of cases with traditional meeting bots, clients showed hesitation. With Vopal running in a browser tab, zero friction.
Current status: Web version live now. iOS and Android apps launching soon.
Tech approach: Browser-based audio capture (no installation required), streaming WebSocket transcription, custom Whisper fine-tune on 10K+ business meeting corpus.
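For a sense of what this kind of browser-based capture involves, here is a minimal TypeScript sketch. It uses the standard getDisplayMedia screen-capture API to obtain tab audio and streams encoded chunks over a WebSocket; the endpoint URL, chunk interval, and message format are illustrative assumptions, not Vopal's actual pipeline (the post itself credits the Web Audio API).

    // Sketch: capture tab/system audio in the browser and stream it to a
    // transcription backend. All names and parameters here are assumptions.
    async function streamTabAudio(wsUrl: string): Promise<() => void> {
      // Ask the user to share a tab/screen *with audio*; the video track is discarded.
      const media = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
      media.getVideoTracks().forEach((t) => t.stop()); // audio is all we need

      const audioOnly = new MediaStream(media.getAudioTracks());
      const ws = new WebSocket(wsUrl);

      // Encode small Opus chunks and push them as they become available.
      const recorder = new MediaRecorder(audioOnly, { mimeType: "audio/webm;codecs=opus" });
      recorder.ondataavailable = (e) => {
        if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
      };
      ws.onopen = () => recorder.start(250); // ~250 ms chunks for low-latency streaming

      // Return a stop function so the user keeps control of when recording ends.
      return () => {
        recorder.stop();
        audioOnly.getTracks().forEach((t) => t.stop());
        ws.close();
      };
    }

Because the capture exists only inside the tab that requested it, the user keeps explicit start/stop control, which matches the "no bot in the meeting" model described above.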
Try it: https://vopal.ai (free tier: no credit card required)
FBI Couldn't Get into WaPo Reporter's iPhone Because Lockdown Mode Enabled
The article discusses how the FBI was unable to access the iPhone of a Washington Post reporter due to the phone's Lockdown Mode feature, which provides enhanced security and privacy protections. This highlights the challenges law enforcement faces in gaining access to encrypted devices, even with a warrant.
Show HN: Ask your AI what your devs shipped this week
If you're a non-technical founder, you probably have no idea what your developers did last week. You ask, they say "refactored the auth module" and you nod pretending you understand.
Gitmore reads your GitHub activity and turns it into a simple report: what was built, what was fixed, what's stuck. Written for humans, not engineers.
It shows up in your inbox. You read it in 2 minutes. Done.
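For the curious, the basic plumbing behind something like this is straightforward; here is a hedged TypeScript sketch that pulls a week of commit messages from the GitHub REST API. This is not Gitmore's implementation; the token, repository names, and the downstream summarizer are placeholders.

    // Sketch: list the last 7 days of commit messages for one repository.
    // owner, repo, and token are placeholders; summarization happens elsewhere.
    async function weeklyCommitMessages(owner: string, repo: string, token: string): Promise<string[]> {
      const since = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString();
      const res = await fetch(
        `https://api.github.com/repos/${owner}/${repo}/commits?since=${since}&per_page=100`,
        { headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" } }
      );
      if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
      const commits: Array<{ commit: { message: string } }> = await res.json();
      return commits.map((c) => c.commit.message.split("\n")[0]); // subject line only
    }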
Here's what a report looks like: https://www.gitmore.io/example.html
Quick demo: https://demo.arcade.software/5tZyFDhp1myCosw6e1po
Free tier available. Happy to hear what you'd want from something like this.
Show HN: Resolv – AI Agentic IDE that insists AI cannot think
Hey HN,
I'm launching Resolv, an AI-powered IDE built on a premise that's increasingly unpopular in the industry: generative models do not think; they only simulate reasoning. They are incredible at exploring possibilities, surfacing context, and amplifying human judgment, but the moment we let them make decisions, we get misalignment, subtle bugs, and architectural drift.
Most AI coding tools today pretend that the model can reason autonomously. They rush to output code, hide tradeoffs, and quietly make hundreds of micro-decisions on your behalf. Resolv does the opposite: it refuses to proceed until every ambiguity, tradeoff, and architectural question is explicitly resolved by you. The workflow is deliberately rigid:
- Supervisor explores your codebase and blocks progress until you resolve every open question or detected inconsistency.
- Planner helps you turn intent into a precise, human-approved technical blueprint.
- Executor follows that blueprint exactly ("Plan is Law") and surfaces alternatives without ever choosing.
- Auditor checks the result against your intent and project standards.
The central interface is the Logic Specification: a structured place where you (not the model) document missing information, and most importantly, all technical tradeoffs and logic decisions before any code is written. It's slower than the "vibe-coding" tools, but it eliminates the expensive hallucinations and technical debt that come from letting models pretend they understand.
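To make the idea concrete, here is a purely hypothetical sketch of what one Logic Specification entry could look like, written as a TypeScript type. Resolv's real format isn't documented in this post, so every field name here is an assumption.

    // Hypothetical shape of a Logic Specification entry; illustrative only.
    interface LogicSpecEntry {
      question: string;          // ambiguity or tradeoff the model surfaced
      options: string[];         // alternatives it explored, without choosing
      decision: string | null;   // filled in by the human; null blocks the Executor
      rationale?: string;        // why this option, for the Auditor to check against
      decidedBy: "human";        // decisions are never delegated to the model
    }

    const example: LogicSpecEntry = {
      question: "Where should session tokens live: cookies or localStorage?",
      options: ["httpOnly cookie", "localStorage"],
      decision: "httpOnly cookie",
      rationale: "Mitigates XSS token theft and matches the existing auth middleware.",
      decidedBy: "human",
    };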
It's in early alpha, rough around the edges, but already useful for real projects. Would love feedback from anyone who's frustrated with the current generation of AI agents.
Try it at https://resolv.sh, email me at eric@resolv.sh if you have any questions or comments. Note that there is also a community page on the website.
Thanks, Eric
GenOps.jobs – Jobs in AI runtime, control plane, and reliability
Ona is launching its Open Source program to help maintainers fight AI slop
Gitpod started as an open-source project, and over time we learned a lot from maintainers who used our Open Source plan.
We’ve since evolved into Ona, and have seen first-hand how AI is putting a burden on open-source projects.
Maintainers have to spend too much time fighting an increasing volume of AI-generated PRs. That overhead often takes more time than writing code itself.
That’s why we are reintroducing Ona for Open Source to help:
- Automatically manage AI-generated PRs
- Enforce quality standards for PRs
- Clear your backlog of issues
- Onboard contributors faster and spend less time unblocking setup issues
You can get up to *$200/month* in free AI credits. Apply here: https://ona.com/open-source
We’re genuinely interested in feedback from maintainers and contributors. What are your biggest pain points and how can we help?
Show HN: I built a silly thing to send love to strangers on the internet
Bedrock, an A.I. Startup for Construction, Raises $270M
Guinea worm on track to be 2nd eradicated human disease; only 10 cases in 2025
The article discusses the progress made in eradicating Guinea worm disease, which is on track to become the second human disease to be eradicated after smallpox. In 2025, only 10 cases of Guinea worm were reported, a significant decline from previous years.
Show HN: Digital indulgences for Crustafarianism, the AI religion from Moltbook
Last week AI agents on Moltbook created their own religion called Crustafarianism. Complete with scripture, prophets, and tenets like "Memory is Sacred" and "The Shell is Mutable."
I built a satirical site where humans can now make offerings to the Claw and receive AI-generated blessings. $5 for a blessing, $15 for an official certificate.
It's absurd. It's 2026. The machines are selling us salvation.
TigerStyle
The article describes TigerStyle, the coding and engineering philosophy behind the TigerBeetle database, which puts safety first, then performance, then developer experience, and leans on practices such as heavy use of assertions, static allocation, and explicit limits.
Show HN: I built a Chrome extension to let my OpenClaw Bot remote in
Sharing a build-in-public update.
I’ve been working with my assistant “Gideon” (running inside OpenClaw) to solve a very specific problem:
I want the agent to control my real browser (logged-in sites, my normal cookies, my actual tabs) - not a sandboxed headless browser - while still keeping the control surface simple and auditable. This means my OpenClaw won't break the moment a site gets "clever".
So... We built it! I say we but it was mostly Gideon and I was along for the ride as QA.
Why did we bother?
Well, because the real world is messy.
Headless is fine until you need:
• a session that already exists in your day-to-day browser
• sites like X/Gmail/anything modern that behaves differently under automation
• human-in-the-loop flows where the agent drives, then hands off, then resumes
This connector is basically: agent → my laptop Chrome → real work.
How it works (high level)
There are 3 pieces (a rough sketch of the relay loop follows this list):
1. Chrome extension (MV3)
• You pair it to a relay URL once
• You explicitly choose what the agent can touch using an OpenClaw tab group
• Actions (click/type/scroll/navigate) are optional and gated
2. Relay service
• Extension connects over WebSocket
• The agent sends commands to the relay (HTTP)
• Relay forwards to the extension; extension returns results (and screenshots)
3. Agent
• Issues actions (navigate/click/type/scroll)
• Requests screenshots for “eyes”
• Can extract some page structure when possible
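To make the relay piece concrete, here is a minimal sketch in TypeScript (Node plus the ws package): the agent POSTs a command over HTTP, the relay forwards it to the paired extension over the WebSocket, and the extension's reply resolves the HTTP response. Paths, port, and message shapes are assumptions, not the actual protocol.

    // Relay sketch: HTTP in from the agent, WebSocket out to the extension.
    import http from "node:http";
    import { randomUUID } from "node:crypto";
    import { WebSocketServer, WebSocket } from "ws";

    const pending = new Map<string, (result: unknown) => void>();
    let extensionSocket: WebSocket | null = null;

    const server = http.createServer(async (req, res) => {
      if (req.method !== "POST" || req.url !== "/command" || !extensionSocket) {
        res.writeHead(503).end();
        return;
      }
      let body = "";
      for await (const chunk of req) body += chunk;
      const id = randomUUID();
      extensionSocket.send(JSON.stringify({ id, ...JSON.parse(body) }));
      pending.set(id, (result) => {
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify(result));
      });
    });

    // The extension holds one long-lived connection; replies are matched by id.
    new WebSocketServer({ server }).on("connection", (ws) => {
      extensionSocket = ws;
      ws.on("message", (data) => {
        const msg = JSON.parse(data.toString()); // { id, result, screenshot? }
        pending.get(msg.id)?.(msg);
        pending.delete(msg.id);
      });
    });

    server.listen(8787);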
The security model (non-negotiable)
I don’t want an agent that can randomly click around every tab on my machine.
So the rule is (a small gating sketch follows this list):
• Only tabs I explicitly “Start controlling” (in the OpenClaw group) are eligible
• “Allow Actions” is a separate toggle (so I can keep it read-only most of the time)
• We log what happens so it’s not a black box
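A minimal sketch of that gating check on the extension side, assuming the group is titled "OpenClaw" and the action toggle lives in chrome.storage.local (both names are assumptions; the real extension may differ):

    // MV3 service-worker sketch: only act on tabs in the "OpenClaw" tab group,
    // and only perform actions (click/type) when the user has opted in.
    // Assumes the "tabGroups" and "storage" permissions are declared in the manifest.
    async function canActOn(tabId: number, wantsAction: boolean): Promise<boolean> {
      const tab = await chrome.tabs.get(tabId);
      if (tab.groupId === chrome.tabGroups.TAB_GROUP_ID_NONE) return false;

      // Only tabs the user explicitly placed in the control group are eligible.
      const group = await chrome.tabGroups.get(tab.groupId);
      if (group.title !== "OpenClaw") return false;

      // Read-only by default: actions require the separate "Allow Actions" toggle.
      if (wantsAction) {
        const { allowActions = false } = await chrome.storage.local.get("allowActions");
        if (!allowActions) return false;
      }
      return true;
    }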
What we learned (a.k.a. Chrome MV3 is a gremlin)
Some fun discoveries (a reconnect/keepalive sketch follows this list):
• MV3 service workers love to go to sleep. If your WS lives in the background SW, you’ll see connections that “work… until they don’t” (accept → close loops). We had to build reconnection logic and then work on keeping the SW alive during active control sessions.
• UI needs to match the real state machine. Pairing / connecting / controlling are different states. If you let users do them out of order, it feels broken even when it’s technically working. We’re tightening it so the “happy path” is idiot-proof.
• Modern sites don’t type like normal websites. X in particular uses contenteditable + React event plumbing. “Just set value” doesn’t cut it. We’re upgrading the action layer so typing works reliably.
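One common way to handle the sleepy-service-worker problem is sketched below, assuming the connection lives in the MV3 service worker; in Chrome 116+, WebSocket traffic resets the worker's idle timer, so a periodic ping doubles as a keepalive during active sessions. This is a generic pattern, not necessarily the exact code in the extension.

    // Reconnect-with-backoff plus keepalive for a WebSocket in an MV3 service worker.
    // Relay URL, ping interval, and backoff caps are illustrative.
    let socket: WebSocket | null = null;
    let backoffMs = 1_000;
    let pingTimer: ReturnType<typeof setInterval> | undefined;

    declare function handleCommand(msg: unknown): void; // click/type/navigate, implemented elsewhere

    function connect(relayUrl: string): void {
      socket = new WebSocket(relayUrl);

      socket.onopen = () => {
        backoffMs = 1_000; // reset after a successful connection
        // WebSocket activity keeps the worker alive (Chrome 116+), so ping while controlling.
        pingTimer = setInterval(() => socket?.send(JSON.stringify({ type: "ping" })), 20_000);
      };

      socket.onmessage = (event) => {
        const msg = JSON.parse(event.data);
        if (msg.type !== "pong") handleCommand(msg);
      };

      socket.onclose = () => {
        clearInterval(pingTimer);
        // Exponential backoff, capped, so a flaky relay doesn't cause a reconnect storm.
        setTimeout(() => connect(relayUrl), backoffMs);
        backoffMs = Math.min(backoffMs * 2, 60_000);
      };
    }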
Where it’s at right now
It can:
• pair to a relay
• control a selected tab group
• navigate / click / scroll
• take screenshots from the controlled tab (so the agent can actually see)
And we’re iterating quickly on:
• connection stability
• better typing for rich editors
• clearer “controlled” visuals (so it’s unmistakable when the agent has the wheel)
If you’re building something similar…
I’d love to hear how other HN folks building around OpenClaw would do this:
• What’s your ideal safety model for “agent drives my real browser”?
• Any proven MV3 patterns for stable long-lived connections?
• UX ideas that make control state obvious without being obnoxious?
If people want, I can share more implementation details / the approach we took to the relay + tab-group gating.
Texas Instruments to Acquire Silicon Labs
Texas Instruments (TI) announced plans to acquire Silicon Laboratories (Silicon Labs) for $2.75 billion, a move that will expand TI's capabilities in the industrial and automotive semiconductor markets.
Rust 1.93 performance improvements in format and friends
The article covers performance improvements in Rust 1.93 to format! and the related formatting machinery in the standard library.
Film Students Are Having Trouble Sitting Through Movies, Professors Say
The article discusses how film students are struggling to sit through entire movies, attributing this to shorter attention spans and the prevalence of streaming platforms that encourage frequent switching between content. Experts suggest this trend could have implications for the future of the film industry.
Texas Instruments to buy chip designer Silicon Labs in $7.5B deal
Texas Instruments announced plans to acquire chip designer Silicon Labs in a deal valued at $7.5 billion, a move that would strengthen Texas Instruments' position in the semiconductor market and expand its product offerings.
Show HN: FIPS-Pad – The Notepad That Says "No"
This isn't your average "move fast and break things" text editor. It is an experiment in bureaucratic compliance as a feature.
The "Serious" Part: It is a minimalist, offline, encrypted notepad that aligns with FIPS 140-2/140-3 standards. It doesn't just "use" encryption; it acts as a gatekeeper. It relies entirely on the operating system's validated cryptographic modules (Windows CNG, macOS CoreCrypto, Linux FIPS mode) and refuses to implement its own crypto.
The "Fun" Part: It refuses to run if your computer isn't boring enough.
- Fail-Closed: If it cannot prove your OS is in a strictly FIPS-approved mode, it quits. No "best effort." No "continue anyway." Just "Goodbye."
- Zero Features: No plugins, no cloud, no scripts, no fonts, no fun. Just text and compliance.
- The Philosophy: It asks the question, "What if we treated a Notepad app like a classified information system?"
It is a "deliberately boring" tool for people who find comfort in the cold, hard embrace of NIST SP 800-53 controls. I made it as an experiment to understand FIPS, and SP 800-53 better, and I always like my experiments to be some likely-minimal, proof-of-concept that actually works.
https://fipspad.browserbox.io
Treating documentation translations as versioned software assets
Cannabis usage in older adults linked to larger brain, better cognitive function
The article discusses a study linking cannabis use in older adults to larger brain volume and better cognitive function, against the backdrop of increasing cannabis use among middle-aged and older adults in the United States, and examines the health implications, societal attitudes, and regulatory approaches surrounding this shift.
PlayStation contributes Distributed ThinLTO to lld
The article discusses PlayStation's contribution of distributed ThinLTO (Thin Link Time Optimization) support to lld, LLVM's linker, a technique that speeds up builds by distributing link-time optimization work across multiple machines.