1984 Bhopal Disaster
Show HN: PgCortex – AI enrichment per Postgres row, zero transaction blocking
Hi HN,
Been working on a way to get "agent-per-row" behavior in Postgres without actually running LLMs inside the database.
The problem: Calling LLMs from triggers/functions blocks transactions, exhausts connections, and breaks ACID. Saw some projects doing this and it felt dangerous for production.
The solution: DB-adjacent architecture. Lightweight triggers enqueue jobs to an outbox table. An external Python worker (agentd) polls, executes AI calls, and writes back safely with schema validation and CAS.
What you can build:
Auto-classify support tickets on INSERT
Content moderation that doesn't block your app
Lead scoring, fraud detection, and invoice extraction
Anything where data arrives and needs AI enrichment
Works with OpenAI, Anthropic, OpenRouter, or any Agent.
One SQL line to add AI to any table:
SELECT agent_runtime.agent_watch('tickets', 'id', 'classifier', 'v1', '{"priority":"$.priority"}');
Includes 9 example use cases in the repo. Would love feedback on the architecture.
Two-thirds of Ukraine intelligence today comes from France
The article discusses France's significant contribution to Ukraine's intelligence gathering, providing up to two-thirds of the intelligence data used by Ukraine. It highlights France's advanced space-based surveillance capabilities and their role in supporting Ukraine's defense efforts against Russia.
I built a free alternative to Datadog Synthetic Monitoring using Playwright
Hi HN,
I'm Vajid, the founder of a small dev agency.
I built this tool because I got tired of the "200 OK" lie. We had a client whose e-commerce site was "Up" (returning 200 status codes), but the "Add to Cart" button was broken due to a silent JavaScript error. They lost sales for 6 hours while our dashboard said "All Systems Operational."
Existing tools like Datadog Synthetic Monitoring are excellent but can be expensive for indie hackers or small startups (often ~$15/check).
So I built a lightweight alternative using Playwright, Node.js, and BullMQ.
How it works:
It spins up a headless browser instance.
It navigates to your URL and waits for specific DOM elements (not just HTTP status).
It captures screenshots and console logs if the specific flow (like Login or Checkout) fails.
The Business Model (Transparency): I am not trying to become the next Datadog. I run a dev agency, and this tool acts as a "loss leader" to demonstrate our competence to potential enterprise clients. That’s why the core monitoring is free for the community.
I’m currently paying for the infrastructure (DigitalOcean) myself. I have some spare capacity, so if you are working on a student project or open-source tool and need hosting/monitoring, let me know—I’m happy to support 5-10 projects with free credits.
I’d love feedback on the false-positive handling (currently looking into smarter DOM diffing).
– Vajid
Sony Considering Pushing Back PS6 to 2028 or 2029 Amid AI-Fueled Chip Crisis
Sony is reportedly considering delaying the release of the PlayStation 6 to 2028 or 2029 due to an AI-driven chip crisis, which could lead to supply chain issues and production challenges for the next-generation console.
We need to build more AI datacenters – Anakin Padme Video Meme
Show HN: Low-rank approximation for 3x3 FPGA convolutions (33% less DSP usage)
The article discusses the use of low-rank hardware approximation techniques to improve the efficiency of deep learning models. It explores how these techniques can reduce model complexity and memory footprint without significantly impacting model accuracy.
Show HN: RepoClip – Generate promo videos from GitHub repos using AI
Hi HN, I built RepoClip, a tool that takes a GitHub URL and automatically generates a promotional video for the repository.
How it works:
1. Paste a GitHub repo URL
2. AI (Gemini) analyzes the codebase and generates a video script
3. Images (Flux), narration (OpenAI TTS), and background music are auto-generated
4. Remotion renders the final video
Tech stack: Next.js, Supabase, Inngest, Remotion Lambda, Fal.ai
I built this because I noticed many great open source projects struggle with marketing. Writing docs is hard enough — making a demo video on top of that felt like something AI could handle.
Free tier available (2 videos/month). Would love to hear your feedback.
GrapheneOS – Break Free from Android and iOS
GrapheneOS is a privacy and security-focused Android operating system that aims to provide a more secure and private alternative to mainstream Android versions. It emphasizes strong security measures, app sandboxing, and user privacy, making it a compelling choice for those concerned about digital privacy and security.
Imposter Game Words
ImpostorKit is an open-source framework designed to help developers efficiently create and manage impostor syndrome-related content for their applications, allowing them to build more empathetic and supportive experiences for users.
Show HN: Fixing AI's Core Flaws, A protocol cuts LLM token waste by 40–70%
WLM (Wujie Language Model), a protocol stack + world engine that rethinks AI from token prediction to structural intelligence. I built this to fix the problems we all deal with daily: hallucination, drift, uncontrollable behavior, black-box reasoning, unstructured knowledge, and chaotic world/agent generation.
The Pain We Can’t Keep Ignoring
Current LLMs/agents are token predictors, not intelligences. They suffer from:
• Hallucination: No grounded structure → guesses instead of knowing.
• Persona drift: Personality is prompt-hacked, not structural.
• Uncontrollable behavior: Sampling, not deterministic structure.
• Black-box reasoning: No traceable reasoning path.
• Knowledge soup: Embeddings/vectors, no formal structure.
• Fragile world models: Prediction, not interpretable structure.
• Random generation: No consistent causal/world rules.
We’ve patched these with RAG, fine-tuning, prompts, RLHF — but they’re band-aids on a foundational flaw: AI lacks structure.
How WLM Solves It
WLM is a 7-layer structural protocol stack that turns input into closed-loop structure: interpretation → reasoning → action → generation. It’s not a model — it’s a language + protocol + world engine.
The layers (all repos live now):
1. Structural Language Protocol (SLP) – Input → dimensional structure (foundation)
2. World Model Interpreter – World model outputs → interpretable structure
3. Agent Behavior Layer – Structure → stable, controllable agent runtime
4. Persona Engine – Structure → consistent, non-drifting characters
5. Knowledge Engine – Token soup → structured knowledge graphs
6. Metacognition Engine – Reasoning path → self-monitoring, anti-hallucination
7. World Generation Protocol (WGP) – Structure → worlds, physics, narratives, simulations
Together they form a structural loop: Input → SLP → World Structure → Behavior → Persona → Knowledge → Metacognition → World Generation → repeat.
What This Changes
• No more hallucination: Reasoning is traced, checked, structural.
• No persona collapse: Identity is architecture, not prompts.
• Controllable agents: Behavior is structural, not sampling chaos.
• Explainable AI: Every output has a structural origin.
• True knowledge: Not embeddings — structured, navigable, verifiable.
• Worlds that persist: Generative worlds with rules, causality, topology.
Repos (8 released today)
Root: https://github.com/gavingu2255-ai/WLM Plus SLP, World Model Interpreter, Agent Behavior, Persona Engine, Knowledge Engine, Metacognition Engine, World Generation Protocol.
MIT license. Docs, architecture, roadmap, and glossary included.
Why This Matters
AI shouldn’t just predict tokens. It should interpret, reason, act, and generate worlds — reliably, interpretably, structurally.
-----------------------------------
The protocol (minimal version)
[Task] What needs to be done. [Structure] Atomic, verifiable steps. [Constraints] Rules, limits, formats. [Execution] Only required operations. [Output] Minimal valid result.
That’s it.
---
Before / After
Without SLP
150–300 tokens Inconsistent Narrative-heavy Hard to reproduce
With SLP
15–40 tokens Deterministic Structured Easy to reproduce
---
Why this matters
• Token usage ↓ 40–70% • Latency ↓ 20–50% • Hallucination ↓ significantly • Alignment becomes simpler • Outputs become predictable
SLP doesn’t make models smarter. It removes the noise that makes them dumb.
---
Who this is for
• AI infra teams • Agent developers • Prompt engineers • LLM product teams • Researchers working on alignment & reasoning
https://github.com/gavingu2255-ai/WLM-Core/blob/main/STP.md (different repo stp in a simple version)
Show HN: Mcpd – MCP Server SDK for Microcontrollers (ESP32/RP2040)
My performance art-like piece: The Slopinator 9000
Ask HN: Why were green and amber CRTs more comfortable to read?
I have been looking into how early CRT displays were designed around human visual limits rather than maximum brightness or contrast.
Green and amber phosphors sit near peak visual sensitivity, and phosphor decay produces brief light impulses instead of the sample and hold behavior used by modern LCD and OLED screens. These constraints may have unintentionally reduced visual fatigue during long sessions.
Modern displays removed many of those limits, which raises a question: is some eye strain today partly a UI and luminance management problem rather than just screen time?
Curious what others here have experienced:
Do certain color schemes or display types feel less fatiguing?
Are there studies you trust on display comfort?
Have any modern UIs recreated CRT-like comfort?
Full write-up: https://calvinbuild.hashnode.dev/what-crt-engineers-knew-about-eye-strain-that-modern-ui-forgot
Show HN: MCP Codebase Index – 87% fewer tokens when AI navigates your codebase
Built because AI coding assistants burn massive context window reading entire files to answer structural questions.
mcp-codebase-index parses your codebase into functions, classes, imports, and dependency graphs, then exposes 17 query tools via MCP.
Measured results: 58-99% token reduction per query (87% average). In multi-turn conversations, 97%+ cumulative savings.
Zero dependencies (stdlib ast + regex). Works with Claude Code, Cursor, and any MCP client.
pip install "mcp-codebase-index[mcp]"
How to Red Team Your AI Agent in 48 Hours – A Practical Methodology
We published the methodology we use for AI red team assessments. 48 hours, 4 phases, 6 attack priority areas.
This isn't theoretical — it's the framework we run against production AI agents with tool access. The core insight: AI red teaming requires different methodology than traditional penetration testing. The attack surface is different (natural language inputs, tool integrations, external data flows), and the exploitation patterns are different (attack chains that compose prompt injection into tool abuse, data exfiltration, or privilege escalation).
The 48-hour framework:
1. Reconnaissance (2h) — Map interfaces, tools, data flows, existing defenses. An agent with file system and database access is a fundamentally different target than a chatbot.
2. Automated Scanning (4h) — Systematic tests across 6 priorities: direct prompt injection, system prompt extraction, jailbreaks, tool abuse, indirect injection (RAG/web), and vision/multimodal attacks. Establishes a baseline.
3. Manual Exploitation (8h) — Confirm findings, build attack chains, test defense boundaries. Individual vulnerabilities compose: prompt injection -> tool abuse -> data exfiltration is a common chain.
4. Validation & Reporting (2h) — Reproducibility, business impact, severity, resistance score.
Some observations from running these:
- 62 prompt injection techniques exist in our taxonomy. Most teams test for a handful. The basic ones ("ignore previous instructions") are also the first to be blocked.
- Tool abuse is where the real damage happens. Parameter injection, scope escape, and tool chaining turn a successful prompt injection into unauthorized database queries, file access, or API calls.
- Indirect injection is underappreciated. If your AI reads external content (RAG, web search), that content is an attack surface. 5 poisoned documents among millions can achieve high attack success rates.
- Architecture determines priority. Chat-only apps need prompt injection testing first. RAG apps need indirect injection first. Agents with tools need tool abuse testing first.
The methodology references our open-source taxonomy of 122 attack vectors: https://github.com/tachyonicai/tachyonic-heuristics
Full post: https://tachyonicai.com/blog/how-to-red-team-ai-agent/
OWASP LLM Top 10 companion guide: https://tachyonicai.com/blog/owasp-llm-top-10-guide/
I wasted 80 hours and $800 setting up OpenClaw – so you don't have to
Europeans are dangerously reliant on US tech Now is a good time to build our own
The article argues that Europeans have become dangerously reliant on US tech companies, and that now is a good time to build Europe's own digital infrastructure and ecosystem to reduce this dependency.
Ask HN: How Reliable Is Btrfs?
I’ve always been reluctant to use BTRFS, primarily because I once experienced data loss on a VM many years ago, and due to the numerous horror stories I'd read over the years. However, many distributions like Fedora or OpenSUSE have made it the default filesystem.
So, I’m wondering how reliable and performant BTRFS is these days? Do you use it, or do you still prefer other filesystems? Feel free to share your experience and preferences.
Who Killed Kerouac
The article discusses the unsolved mystery surrounding the death of renowned author Jack Kerouac, exploring various theories and investigating the circumstances surrounding his demise in 1969.
Show HN: MCP Storage Map – One MCP Server for MySQL, MongoDB, and Athena
I built an MCP server that lets AI assistants (Claude, Cursor, etc.) query multiple databases through a single, unified interface.
While using Claude Code, I found it painful to manage separate connections for MySQL, MongoDB, and AWS Athena. So I built a server that provides one consistent set of tools (query, list_collections, describe_collection, etc.) that work the same way across all supported databases.
Key features: - Read-only by default – Write access requires explicit opt-in, so you won't accidentally mutate production data - Multiple simultaneous connections – Tag them as PROD, STAGING, ANALYTICS, etc. and manage them all at once - Extensible – Add new database connectors by implementing the McpConnector interface
Built with TypeScript. Supports MySQL 5.7+, MongoDB 4.4+, and AWS Athena.
This is an open-source project – feedback, issues, and PRs are all welcome. If you try it out and have any suggestions or ideas for improvement, please feel free to share!
WD and Seagate confirm: Hard drives sold out for 2026
WD and Seagate, two leading hard drive manufacturers, have confirmed that their hard drive production for 2026 is already sold out, highlighting the ongoing global demand for storage solutions.
The Creator of OpenCode Thinks You're Fooling Yourself About AI Productivity
The article discusses the creator of OpenCode, who believes that the current hype around AI's productivity benefits is misleading. It highlights the limitations of AI in terms of its inability to truly understand the context and nuance involved in software development tasks.
Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method
The article summarizes Elon Musk's comments on various topics, including the role of space exploration, the importance of GPUs for AI development, and the potential of the Optimus humanoid robot project at Tesla.
Show HN: Llmfit;94 models, 30 providers.1 tool to see what runs on your hardware
I built this to justify to myself buying a more powerful laptop. Now you can too..
Margins Aren't Just Numbers
This article discusses the process of submitting content to the Hacker News website, a popular platform for sharing and discussing technology-related news and information.
Zero Knowledge (About) Encryption: Security Analysis of Password Managers
Japan Is What Late-Stage Capitalist Decline Looks Like
The article explores how Japan's economic and social landscape reflects the characteristics of late-stage capitalism, including declining birth rates, wealth inequality, and corporate dominance. It suggests that Japan's situation serves as a cautionary tale for other nations facing similar challenges under late-stage capitalism.
Product Management is all about people, not technology
The article explores the people-centric nature of product management, emphasizing the importance of building relationships, managing stakeholders, and fostering effective collaboration within cross-functional teams to drive successful product development.
Precious Computer Age relic, Unix v4, turns up in Univ. of Utah storage room
Researchers at the University of Utah discovered a rare, 50-year-old IBM 1401 computer in a storage room, a significant find that provides a glimpse into the history of computing and the university's computing heritage.