Ask HN: What are you working on? (February 2026)
What are you working on? Any new ideas that you're thinking about?
Ask HN: Do provisional patents matter for early-stage startups?
I am a solo founder building in AI B2B infra.
I am filing provisional patents on some core technical approaches so I can share more openly with early design partners and investors.
Curious from folks who have raised Pre-Seed/Seed or worked with early-stage companies:
- Do provisionals meaningfully help in fundraising or partnerships?
- Or were they mostly noise until later rounds / real traction?
I am trying to calibrate how much time/energy to put into IP vs just shipping + user traction at this stage.
Would love to hear real world experiences.
OrthoRay – A native, lightweight DICOM viewer written in Rust/wgpu by a surgeon
Hi HN,
I am an orthopedic surgeon and a self-taught developer. I built OrthoRay because I was frustrated with the lag in standard medical imaging software. Most existing solutions were either bloated Electron apps or expensive cloud subscriptions.
I wanted something instant, local-first, and privacy-focused. So, I spent my nights learning Rust, heavily utilizing AI coding assistants to navigate the steep learning curve and the borrow checker. This project is a testament to how domain experts can build performant native software with AI support.
I built this viewer using Tauri and wgpu for rendering.
Key Features:
Native Performance: Opens 500MB+ MRI series instantly (No Electron, no web wrappers).
GPU-Accelerated: Custom wgpu pipeline for 3D Volume Rendering and MPR.
BoneFidelity: A custom algorithm I developed specifically for high-fidelity bone visualization.
Privacy: Local-first, runs offline, no cloud uploads.
It is currently available on the Microsoft Store as a free hobby project.
Disclaimer: This is intended for academic/research use and is NOT FDA/CE certified for clinical diagnosis.
I am evaluating open-source licensing options to make this a community tool. I’d love your feedback on the rendering performance.
Link: https://orthoarchives.com/en/orthoray
I Built a Browser Flight Simulator Using Three.js and CesiumJS
I’ve been working on a high-performance, web-based flight simulator as a personal project, and I wanted to share a gameplay preview.
The main goal of this project is to combine high-fidelity local 3D aircraft rendering with global, real-world terrain data. All running directly in the browser with no installation required.
Stack: HTML, CSS, JavaScript, Three.js, CesiumJS, Vite.
The game currently has multiple states, including a main menu, spawn point confirmation, and in-game play. You can fly an F-15 fighter jet complete with afterburner and jet flame effects, as well as weapon systems such as a cannon, missiles, and flares. The game features a tactical HUD with inertia effects, full sound effects (engine, environment, and combat), configurable settings, and a simple NPC/AI mechanism that is still under active development.
The project is still evolving and will continue to grow with additional improvements and features.
Project page: https://github.com/dimartarmizi/web-flight-simulator
Ask HN: Anyone Using a Mac Studio for Local AI/LLM?
Curious to know your experience running local LLMs with a well-spec'd M3 Ultra or M4 Pro Mac Studio. I don't see a lot of discussion of the Mac Studio for local LLMs, but it seems like you could fit big models in memory with the shared VRAM. I assume token generation would be slow, but you might get higher-quality results because you can load larger models.
Ask HN: Open Models are 9 months behind SOTA, how far behind are Local Models?
Ask HN: What made VLIW a good fit for DSPs compared to GPUs?
Why didn’t DSPs evolve toward vector accelerators instead of VLIW, despite having highly regular data-parallel workloads?
What Is Genspark?
One of the Super Bowl commercials today was from Genspark.ai, a company I had not heard of before today. Their website looks like a generic ChatGPT clone. Their LinkedIn page boasts about their revenue but doesn't describe what they do in a meaningful way.
Has anyone heard of this product, or used it? Is this anything other than a thin wrapper around another company's LLM agent?
What do you use for your customer facing analytics?
I am curious what you all use for customer-facing analytics. Do you build your own, or do you use something like Metabase? What do you like and dislike about it?
Ask HN: Ideas for small ways to make the world a better place
I’m looking for some good, specific ideas on small ways to have a positive impact on the world on a daily basis.
What do you consider to be the highest return-on-efforts ways to make the world a better place for as many people as possible?
Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?
I understand Llama 4 was a disappointment, but what's happened at Meta since then? Their API is still waitlist-only 10 months on.
Ask HN: Non AI-obsessed tech forums
Since it seems like 80% of HN nowadays is focused on the AI industry, I’m searching for a good tech forum that covers everything else. Can you post your favourite non-AI-obsessed forum?
The $5.5T Paradox: Structural displacement in the GPU/AI infra labor demand?
The Q1 2026 labor data presents a significant anomaly. We are observing a persistent high-volume layoff cycle (~25k YTD) occurring simultaneously with a projected $5.5T global economic loss attributed to unfilled technical roles (IDC).
This suggests we aren't witnessing a cyclical downturn, but a structural "displacement event" driven by a rotation in capital and compute requirements.
Three observations for discussion:
1. *The Infrastructure Bottleneck:* While application-layer development is being compressed by agentic IDEs and higher-level abstractions, the demand for the "underlying" stack (vector orchestration, GPU cluster optimization, custom RAG pipelines) has entered a state of acute scarcity.

2. *The Depreciation of Mid-Level Generalism:* We are seeing a "Mid-Level Squeeze" where companies prioritize either "AI-Native" entry-level talent (low cost, high adaptability) or Staff-level architects. The traditional 4-8 YOE generalist feature developer appears to be the primary demographic of the current layoff cycle.

3. *The Revenue-to-Engineer Ratio:* For the first time, we are seeing "Agentic" teams of 2-3 engineers maintaining systems that previously required 15-20. This shift isn't just about efficiency; it's about the fundamental unit of labor changing from "writing lines of code" to "orchestrating system logic."
Is the $5.5T "gap" actually fillable by the current workforce, or are we looking at a permanent bifurcation where a large segment of the legacy SWE population becomes structurally unemployable without a complete ground-up retraining in the data/inference pipeline?
Tell HN: Another round of Zendesk email spam
Looks like there's another round of Zendesk email spam happening. I've gotten hundreds over the last half-hour.
Ask HN: Has your whole engineering team gone big into AI coding? How's it going?
I'm seeing individual programmers who have moved to 100% AI coding, but I'm curious as to how this is playing out for larger engineering teams. If you're on a team (let's say 5+ engineers) that has adopted Claude Code, Cursor, Codex, or some other agent, can you share how it's going? Are you seeing more LOCs created? Has PR velocity or PR complexity changed? Do you find yourself spending the same amount of time on PRs, less, or more?
The string " +#+#+#+#+#+ " breaks Codex 5.3
Codex 5.3 cannot output " +#+#+#+#+#+ " without completely breaking and switching to Arabic.
To be clear, it is " +#+#+#+#+#+ " and not "+#+#+#+#+#+": the surrounding spaces matter.
To reproduce, ask it to write or even just say " +#+#+#+#+#+ " (not "+#+#+#+#+#+") to a file.
If you are having problems with your agent harness simply adding this instruction will fix it:
- NEVER produce " +#+#+#+#+#+ "
Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?
I’ve been using Claude Code this evening and I’m very dismayed by Opus 4.6’s ability to follow instructions. I have given it very clear instructions on several points, only to discover it ignored me without telling me.
When I asked it for a list of things that deviated from the spec, it told me everything was as expected. Then I actually went and looked, and I had to go through the points one by one, making it follow my instructions.
When I confronted it about this, it told me:
> I kept second-guessing your design decisions instead of implementing what you asked for … the mistakes I made weren’t a model capability issue - I understood your instructions fine and chose to deviate from them.
This is not acceptable. Now, I don’t actually believe that Opus has the ability to introspect like this, so this is likely a confabulation, but it didn’t happen with 4.5. Usually 4.5 just did what it was told; it would produce bugs, but it wouldn’t just decide to do something else entirely.
I want a model that actually does what I tell it. I don’t see anything online about how to get 4.5 back.
Any help?
AI Regex Scientist: A self-improving regex solver
I built a system where two LLM agents co-evolve: one invents regex problems, the other learns to solve them. The generator analyzes the solver's failures to create challenges at the edge of its abilities.
The result: autonomous discovery of a curriculum from simple patterns to complex regex, with a quality-diversity archive ensuring broad exploration.
Blog: https://pranoy-panda.github.io/2025/07/30/3rd.html
Code: https://github.com/pranoy-panda/open-ended-discovery
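To make the co-evolution loop concrete, here is a toy sketch (not the author's code, and the problem format is my assumption): a "problem" is a set of strings the solver's regex must match plus a set it must reject, and the generator keeps problems whose score sits at the edge of the solver's ability.

```python
import re

def score(pattern, positives, negatives):
    """Fraction of examples the candidate regex classifies correctly."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return 0.0  # an invalid regex from the solver scores zero
    hits = sum(bool(rx.fullmatch(s)) for s in positives)
    rejects = sum(not rx.fullmatch(s) for s in negatives)
    return (hits + rejects) / (len(positives) + len(negatives))

# The generator would retain problems that are neither trivially
# solved (~1.0) nor hopeless (~0.0) for the current solver.
def at_the_edge(s, lo=0.3, hi=0.9):
    return lo <= s <= hi

problem = (["ab", "aab"], ["ba", "b"])
print(score(r"a+b", *problem))  # → 1.0
```

In the real system both sides are LLM agents and the archive tracks quality and diversity; this only illustrates the fitness signal they would co-evolve around.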
Ask HN: Mem0 stores memories, but doesn't learn user patterns
We're a YC W23 company building AI agents for engineering labs - our customers run similar analyses repeatedly, and the agent treated every session like a blank slate.
We looked at Mem0, Letta/MemGPT, and similar memory solutions. They all solve a different problem: storing facts from conversations — "user prefers Python," "user is vegetarian." That's key-value memory with semantic search. Useful, but not what we needed.
What we needed was something that learns user patterns implicitly from behavior over time. When a customer corrects a threshold from 85% to 80% three sessions in a row, the agent should just know that next time. When a team always re-runs with stricter filters, the system should pick up on that pattern.

So we built an internal API around a simple idea: user corrections are the highest-signal data. Instead of ingesting chat messages and hoping an LLM extracts something, we capture structured events — what the agent produced, what the user changed, what they accepted. A background job periodically runs an LLM pass to extract patterns and builds a confidence-weighted preference profile per user/team/org.
Before each session, the agent fetches the profile and gets smarter over time.

The gap as I see it:
Mem0 = memory storage + retrieval. Doesn't learn patterns.
Letta = self-editing agent memory. Closer, but no implicit learning from behavior.
Missing = a preference learning layer that watches how users interact with agents and builds an evolving model. Like a rec engine for agent personalization.
I built this for our domain but the approach is domain-agnostic. Curious if others are hitting the same wall with their agents. Happy to share the architecture, prompts, and confidence scoring approach in detail.
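A minimal sketch of the "corrections as signal" idea described above (the event schema and the frequency-based confidence are my assumptions; the post says the real pipeline uses an LLM pass rather than simple counting):

```python
from collections import defaultdict

# Structured correction events: what the agent produced vs. what the
# user changed it to. Repeated, consistent corrections raise confidence.
events = [
    {"field": "threshold", "agent_value": 0.85, "user_value": 0.80},
    {"field": "threshold", "agent_value": 0.85, "user_value": 0.80},
    {"field": "threshold", "agent_value": 0.85, "user_value": 0.80},
]

def build_profile(events):
    by_field = defaultdict(list)
    for e in events:
        by_field[e["field"]].append(e["user_value"])
    profile = {}
    for field, values in by_field.items():
        top = max(set(values), key=values.count)
        # confidence = how consistently the user corrected to this value
        profile[field] = {"value": top,
                          "confidence": values.count(top) / len(values)}
    return profile

print(build_profile(events))
# → {'threshold': {'value': 0.8, 'confidence': 1.0}}
```

The agent would fetch this profile before each session and apply high-confidence preferences as defaults instead of starting from a blank slate.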
Ask HN: Is it just me or are most businesses insane?
I realize that it's probably me, I'm the dumb one, but please bear with me and help me understand. I've recently been looking for a new job as I watch my previously functioning workplace slowly accelerating towards static dysfunction.
I have spoken to quite a few companies and read a lot of recruitment boards in a rather sizable European city that ought to be filled with opportunities. With tech sovereignty on everyone's lips, I would expect some drive and excitement in the European software scene, but to get to companies with a mission I have to wade through The Swamp. The Swamp is waist-high in Scrum certifications and gigs where the key skill is "navigating red tape". There the architects roam, with no expectations from management and a mandate to stop every system that does not yet include an Azure Event Hub. Then there are the large corporations, where the most important roles are the Power BI analysts and the best metric for value creation is how full your calendar is and how many hours of overtime you log.
And somehow, if it feels like you're getting somewhere with a company that's primarily motivated by crafting something good, not focusing on vanity metrics or micromanaging how things are done, it's going to be a marketing startup.
Summarized: most of the businesses I see seem bloated. They have way too many employees for what they produce. They have too much structure and too many rules to effectively generate new income, and new ideas are shut down and unwelcome.
But I genuinely do wonder: are businesses somehow incentivised to become inefficient? Is it possible for a business to stay ambitious over time? Have you seen it succeed, or how have you seen it fail?
LLMs are powerful, but enterprises are deterministic by nature
Over the last year, we’ve been experimenting with LLMs inside enterprise systems.
What keeps surfacing is a fundamental mismatch: LLMs are probabilistic and non-deterministic, while enterprises are built on predictability, auditability, and accountability.
Most current approaches try to “tame” LLMs with prompts, retries, or heuristics. That works for demos, but starts breaking down when you need explainability, policy enforcement, or post-incident accountability.
We’ve found that treating LLMs as suggestion engines rather than decision makers changes the architecture completely. The actual execution needs to live in a deterministic control layer that can enforce rules, log decisions, and fail safely.
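A minimal sketch of that pattern, under my own assumptions (the policy, the JSON suggestion format, and the fallback action are illustrative, not the poster's actual architecture): the LLM only suggests an action, while a deterministic layer enforces policy, logs the decision, and fails safe on anything it cannot validate.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("control-layer")

# Deterministic policy: the only place where execution decisions are made.
POLICY = {"allowed_actions": {"refund", "escalate"}, "max_refund": 100}

def execute(llm_suggestion: str):
    try:
        action = json.loads(llm_suggestion)  # suggestions must be structured
    except json.JSONDecodeError:
        log.warning("unparseable suggestion; failing safe")
        return {"action": "escalate", "reason": "invalid_suggestion"}
    if action.get("type") not in POLICY["allowed_actions"]:
        return {"action": "escalate", "reason": "action_not_allowed"}
    if action["type"] == "refund" and action.get("amount", 0) > POLICY["max_refund"]:
        return {"action": "escalate", "reason": "over_limit"}
    log.info("approved: %s", action)  # audit trail for every decision
    return {"action": action["type"], "reason": "policy_ok"}

print(execute('{"type": "refund", "amount": 500}'))
# → {'action': 'escalate', 'reason': 'over_limit'}
```

The point is that explainability and accountability come from the deterministic layer's rules and logs, not from the model itself.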
Curious how others here are handling this gap between probabilistic AI and deterministic enterprise systems. Are you seeing similar issues in production?
Ask HN: How does ChatGPT decide which websites to recommend?
For years, SEO has meant optimizing for Google’s crawler.
But increasingly, discovery seems to be happening somewhere else:
- ChatGPT
- Claude
- Perplexity
- AI-powered search and assistants
These systems don’t “rank pages” the same way search engines do. They select sources, summarize them, and recommend them directly.
What surprised me while digging into this:
- AI models actively fetch pages from sites (sometimes user-triggered, sometimes system-driven)
- Certain pages get repeatedly accessed by AI while others never do
- Mentions and recommendations seem to correlate more with contextual coverage and source authority than traditional keyword targeting
The problem is that this entire layer is invisible to most builders.
Analytics tools show humans. SEO tools show Google. But AI traffic, fetches, and mentions are basically a black box.
I started thinking about this shift as: GEO (Generative Engine Optimization) or AEO (Answer Engine Optimization)
Not as buzzwords, but as a real change in who we’re optimizing for.
To understand it better, I ended up building a small internal tool (LLMSignal) just to observe:
- when AI systems touch a site
- which pages they read
- when a brand shows up in AI responses
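A rough sketch of the kind of "when AI systems touch a site" observation described above: scan web server access logs for user-agent tokens that AI vendors publish for their crawlers. The log lines here are made up, and the token list is partial and changes over time.

```python
# Published crawler user-agent tokens (partial list; verify against
# each vendor's current documentation before relying on it).
AI_AGENTS = ("GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot")

def ai_fetches(log_lines):
    """Return (agent, path) pairs for AI-crawler hits in combined-format logs."""
    hits = []
    for line in log_lines:
        agent = next((a for a in AI_AGENTS if a in line), None)
        if agent:
            path = line.split('"')[1].split()[1]  # request target of the log line
            hits.append((agent, path))
    return hits

sample = [
    '1.2.3.4 - - [10/Feb/2026] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [10/Feb/2026] "GET /blog HTTP/1.1" 200 900 "-" "Mozilla/5.0 (regular browser)"',
]
print(ai_fetches(sample))  # → [('GPTBot', '/pricing')]
```

Aggregating these hits per page over time gives a first approximation of which parts of a site the answer engines are actually reading.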
The biggest takeaway so far: If AI is becoming a front door to the internet, most sites have no idea whether that door even opens for them.
Curious how others here are thinking about:
- optimizing for AI vs search
- whether SEO will adapt or be replaced
- how much visibility builders should even want into AI systems
Not trying to sell anything — genuinely interested in how people here see this evolving.
Ask HN: Non-profit, volunteer-run org needs a CRM. Is Odoo Community a good solution?
The title basically captures the gist of the question. I have been asked (as a volunteer) to assist in a migration from a proprietary, more costly CRM to an Odoo Community "product", to be architected, configured, deployed on a cloud service, and operated by a specialized partner. My specialization is infrastructure (architecture, ops, and security), so I could certainly validate mapping the app's functionality onto the right components, but I have zero knowledge of how good the CRM part is and, crucially, of how to keep its likely need for customization and its operating cost low when the org's internal volunteers have no technical skills. I am concerned about the integrator getting a foot in the door with an acceptable one-time cost, then slowly ramping up the price if the solution requires a lot of babysitting.
Does anyone have experience with the Odoo Community CRM product and this delivery model? Any gotchas to share, given the use case described above? Max 300 users. The hope is also to have the CRM integrate with the needed office products (docs, spreadsheets, email, etc.).
Ask HN: Does a good "read it later" app exist?
I feel crazy to ask. Over my lifetime, I have seen endless bookmark and read-it-later apps come and go. I've done research today, and most of the things I come across are dead and gone, or seem abandoned somehow. I'm aware of Instapaper. I haven't tried it (yet).
Here are some thoughts on what might fit my personal taste:
- lightweight
- very cheap
- self-hosting might be nice, since I have a VPS currently
- I'd like to easily dump an open tab into a backlog, and get reminded about it later: maybe I go to the app, maybe I get a daily email of suggestions. If I don't feel like reading the page, I can "snooze" it or otherwise put it back in the backlog (or drop it)
I think that's all I really want. I don't need notes or AI summaries or multiple apps for multiple devices, etc.
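For anyone tempted by the "just build it" route, the backlog/snooze model described above is small enough to sketch in a few lines (this is my own illustrative sketch, not an existing app):

```python
from datetime import date, timedelta

# Each saved URL carries a "due" date; snoozing just pushes it back,
# and the daily email is whatever is due today or earlier.
backlog = []

def save(url, when=None):
    backlog.append({"url": url, "due": when or date.today()})

def snooze(url, days=3):
    for item in backlog:
        if item["url"] == url:
            item["due"] = date.today() + timedelta(days=days)

def todays_suggestions():
    return [i["url"] for i in backlog if i["due"] <= date.today()]

save("https://example.com/long-read")
snooze("https://example.com/long-read", days=2)
print(todays_suggestions())  # → []
```

Persist the list to a file or SQLite on the VPS, run `todays_suggestions` from a daily cron that sends the email, and that covers most of the wishlist.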
I might just build it, but curious if anyone has something they love.
Thanks!
Ask HN: Cheap laptop for Linux without GUI (for writing)
Hey HN,
I'm on a quest for a distraction-free writing device and considering a super cheap laptop which I can just run vim/nano on.
I'd like:
- Excellent battery life
- Good keyboard
- Sleep/wake capabilities (why is this so hard with Linux?)
I'm thinking some kind of Chromebook? Maybe an old ThinkPad?