Ask stories

talkingtab 40 minutes ago

Amazon degraded shopping- you have to put in cart to see the price

I just went to Amazon and searched for "espresso tamper". Of the top 15 results, only ONE had a price. The other fourteen (14) said "See options" and then "put in cart to see price".

My first thought was that this is insane. My second thought was that they must be going broke to make this kind of change. Maybe there is some other reason, but I'm wondering if I need to find an alternative.

Oh and in order to comment on this post you need to put it in your cart first! :-)

6 4
thefern 39 minutes ago

I keep building projects nobody wants. So this time I'm doing it backwards

The problem: you have an idea, you build it, buy a domain, spend weeks or months, and no one shows up.

The idea: a profile page where you list all your project ideas. People can signal interest, and you collect emails, all before you build anything. With hatchd, you wouldn't need to spin up a validation page on Vercel or another hosting platform.

Think Linktree, but for your side projects. One link to share everywhere.

Why not just use Product Hunt? PH is for launched products. This is for ideas you haven't built yet. It's your personal page, not a marketplace where you compete in a feed. No reviews or pressure to be polished — just "I'd use this" signals.

Why not Gumroad or a landing page builder? Those are for selling. This is for validating what's worth building first. One page holds all your ideas together, not scattered across platforms.

I threw up a validation page to see if anyone else has this problem: https://hatchd-validation.vercel.app/ If this sounds useful, vote on it and join the waitlist. If it doesn't, tell me why.

3 0
charlieflowers about 4 hours ago

HATEOAS Works with an LLM in the Mix

Just an observation, a light-bulb moment, that I wanted to share.

Most of the dev teams I've ever encountered who said they were "doing REST" were not actually following HATEOAS. On a strict reading of Roy Fielding, that would be "not really REST." (Now don't get distracted; I don't want to wade into that whole purist debate.)

The reason many did not do HATEOAS is that it requires the API client to be smart and adaptive. It would discover "OK, what can I do next?", apply logic to the options, and choose the next step. But many shops were on tight timelines, and it was much simpler to just think of REST as "JSON over HTTP with consistent URL patterns."

The cool thing is: with an LLM in the mix, HATEOAS is unchained. An LLM can do exactly what a "dumb" API client cannot: ask "what can I do next?", and then use _inference_ to understand those options and select one.
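As a toy illustration of that loop (the response shape and all names here are hypothetical; a real client would fetch over HTTP and call an actual model for the selection step):

```python
# Sketch: a HATEOAS-style client where the "agent" picks the next action
# from the hypermedia controls the server returned, instead of hardcoding
# URL patterns. choose_action() is a stand-in for an LLM call.

def choose_action(goal, links):
    # Stand-in for inference: pick the link whose relation name appears in
    # the stated goal (a real implementation would ask the model to choose).
    for rel, link in links.items():
        if rel in goal:
            return rel, link
    return "self", links["self"]

# Hypothetical resource representation with embedded hypermedia controls:
order = {
    "state": "pending",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel", "method": "POST"},
        "pay":    {"href": "/orders/42/payment", "method": "PUT"},
    },
}

rel, link = choose_action("pay for the order", order["_links"])
print(rel, link["href"])  # pay /orders/42/payment
```

The point is that the client never needs to know the URL scheme up front; it only needs to reason over whatever controls the current response advertises.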

2 0
chrisjj about 4 hours ago

Why is arstechnica.com still running dev story advertorials for a game that...

... launched 4 years ago?

(The Callisto Protocol).

Serious question. Did the advertiser prepay for a zillion impressions, or what?

5 2
whoishiring 3 days ago

Ask HN: Who wants to be hired? (March 2026)

Share your information if you are looking for work. Please use this format:

  Location:
  Remote:
  Willing to relocate:
  Technologies:
  Résumé/CV:
  Email:
Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here.

Readers: please only email these addresses to discuss work opportunities.

There's a site for searching these posts at https://www.wantstobehired.com.

125 381
rohanmunshi08 4 days ago

Aura-State: Formally Verified LLM State Machine Compiler

I noticed a pattern: every LLM framework today lets the AI manage state and do math. Then we wonder why pipelines hallucinate numbers and break at 3 AM.

I took a different approach and built Aura-State, an open-source Python framework that compiles LLM workflows into formally verified state machines.

Instead of hoping the AI figures it out, I brought in real algorithms from hardware verification and statistical learning:

CTL Model Checking: the same technique used to verify flight control systems, now applied to LLM workflow graphs. Proves safety properties before execution.

Z3 Theorem Prover: every LLM extraction gets formally proven against business constraints. If the total ≠ price × quantity, Z3 catches it with a counterexample.

Conformal Prediction: distribution-free 95% confidence intervals on every extracted field. Not just "the LLM said $450k" but "95% CI: [$448k, $452k]."

MCTS Routing: Monte Carlo Tree Search (the algorithm behind AlphaGo) scores ambiguous state transitions mathematically.

Sandboxed Math: English math rules compile to Python AST. Zero hallucination calculations.
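As a rough illustration of the "sandboxed math" idea, here is a stdlib-only sketch (my own, not Aura-State's actual implementation): parse an arithmetic rule into a Python AST, allow only numeric literals and arithmetic operators, and let Python evaluate it, so the calculation never comes from the LLM.

```python
# Hypothetical sketch of sandboxed arithmetic via a restricted AST walk.
import ast
import operator

ALLOWED_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate an arithmetic expression, rejecting anything but numbers
    and the four basic operators (no names, calls, or attribute access)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_OPS:
            return ALLOWED_OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"disallowed node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("450000 * 3"))  # 1350000
```

Anything outside the whitelist, like `__import__('os')`, fails the walk instead of executing, which is what makes the calculation path deterministic and auditable.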

I ran a live benchmark against 10 real-estate sales transcripts using GPT-4o-mini:

100% budget extraction accuracy ($0 mean error)

20/20 Z3 proof obligations passed

3/3 temporal safety properties proven

65 automated tests passing

The gap between "it usually works" and "it provably works" is smaller than people think.

Would love feedback from anyone building production LLM systems; what would you want formally verified?

https://github.com/munshi007/Aura-State

22 6
cedarscarlett about 19 hours ago

Ask HN: Has anyone noticed the fear-driven prompt suggestions that GPT5.3 makes?

By "prompt suggestions" I mean the suggestions it makes, at the end of each response, for where you might take the conversation. Older versions used to say "if you'd like, we could look at

- related topic 1

- related topic 2

- related topic 3"

And so on and so forth.

But 5.3 does something different.

I've been using it for coding, and almost every suggestion includes some sort of vague warning about what might happen if I don't have access to the information it is alluding to. Nearly consecutive (not cherry-picked) examples from my current chats:

"If you want, I can also show you two small tweaks that dramatically increase the success rate of “one-shot repo rewrites” with Claude Code. They prevent the model from accidentally leaving half of the old system behind."

"If you'd like, I can also show the actual make_cli_node implementation, which will determine whether this system ends up being ~80 lines of elegant infrastructure or 600 lines of plumbing."

"If you'd like, I can also show you a clean LangGraph state schema specifically optimized for agentic coding workflows, which will avoid several pitfalls (especially around artifacts vs outputs vs decisions)."

"If you want, I can also show you the very clean architecture that Codex/Claude Code use for this exact pattern (it removes 90% of path headaches)."

I don't really care, and some of the information is genuinely useful, but I find it amusing that OpenAI seems to be intentionally using fear to keep people in the app for as long as possible (although they have denied in the past that they optimize for time spent in the app, as indicated here: https://openai.com/index/our-approach-to-advertising-and-expanding-access/).

14 7
spenvo about 3 hours ago

Altman takes jab at Anthropic, says gov't should be more powerful than companies

3 4
whoishiring 3 days ago

Ask HN: Who is hiring? (March 2026)

Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.

Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.

Please only post if you are actively filling a position and are committed to replying to applicants.

Commenters: please don't reply to job posts to complain about something. It's off topic here.

Readers: please only email if you are personally interested in the job.

Searchers: try http://nchelluri.github.io/hnjobs/, https://hnjobs.emilburzo.com, or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal....

Don't miss this other fine thread: Who wants to be hired? https://news.ycombinator.com/item?id=47219667

245 366
kok14 about 11 hours ago

We don't need continual learning for AGI. What top labs are currently doing

Many people think that we won't reach AGI, or even ASI, if LLMs don't have something called "continual learning". Basically, continual learning is the ability of an AI to learn on the job, update its neural weights in real time, and get smarter without forgetting everything else (catastrophic forgetting). This is what we do every day, without much effort.

What's interesting is that if you look at what the top labs are doing, they've stopped trying to solve the underlying math of real-time weight updates. Instead, they're simply brute-forcing it. This is exactly why, over the past ~3 months, there has been a step-function increase in how good the models have gotten.

Long story short, the gist of it is: if you combine

very long context windows

reliable summarization

structured external documentation,

you can approximate a lot of what people mean by continual learning.

Here's how it works: the model does a task and absorbs a massive amount of situational detail. Then, before it "hands off" to the next instance of itself, it writes two things: short "memories" (always carried forward in the prompt/context) and long-form documentation (stored externally, retrieved only when needed). The next run starts with these notes, so it doesn't need to start from scratch.
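A toy sketch of that handoff pattern (every name here is hypothetical; a real system would call an actual model and persist docs in a real store):

```python
# Hypothetical sketch: each run writes short memories (always in context)
# and long docs (external, retrieved on demand) before handing off.

def run_task(task, memories):
    # Stand-in for an agent run: returns output plus notes it wants to keep.
    return {"output": f"done: {task}",
            "short_memory": f"learned from {task}",
            "long_docs": f"full notes on {task}"}

def handoff(task, memories, doc_store):
    result = run_task(task, memories)
    memories.append(result["short_memory"])   # always carried forward
    doc_store[task] = result["long_docs"]     # retrieved only when needed
    # Compress stale notes so memories never blow up the context window
    # (this cap plays the role of the RL length penalty described below):
    if len(memories) > 3:
        memories[:] = memories[-3:]
    return result["output"]

memories, docs = [], {}
for t in ["task-a", "task-b", "task-c", "task-d"]:
    handoff(t, memories, docs)
print(len(memories), sorted(docs))  # 3 ['task-a', 'task-b', 'task-c', 'task-d']
```

The short-memory list stays bounded while the external doc store grows without limit, which is the whole trick: bounded context, unbounded accumulated knowledge.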

The clever part is a reinforcement learning (RL) loop that trains this behaviour directly, without any exotic new theory.

They treat memory-writing as an RL objective: after a run, have the model write memories/docs, then spin up new instances on the same, similar, and dissimilar tasks while feeding those memories back in. Performance is scored across the whole sequence, with an explicit penalty for memory length so you don't get infinite "notes" that eventually blow up the context window.

Over many iterations, you reward models that (a) write high-signal memories, (b) retrieve the right docs at the right time, and (c) edit/compress stale notes instead of mindlessly accumulating them.

This is pretty crazy, because when you combine it with the current release cadence of frontier labs, where each new model is trained and shipped after major post-training / scaling improvements, even if your deployed instance never updates its weights in real time, it can still "get smarter" when the next version ships AND it can inherit all the accumulated memories/docs from its predecessor.

This is a new force multiplier, another scaling paradigm, and likely what the top labs are doing right now (source: TBA).

Ignoring any black-swan-level event (unknown unknowns), you get a plausible 2026 trajectory:

We're going to see more and more improvements on an accelerated timeline. The top labs ARE, in effect, using continual learning (a really good approximation of it), and they are directly training this approximation, so it rapidly gets better and better.

Don't believe me? Look at what both OpenAI (https://openai.com/index/introducing-openai-frontier/) and Anthropic (https://resources.anthropic.com/2026-agentic-coding-trends-report) have mentioned as the core things they are focusing on. It's exactly why governments and corporations are bullish on this; there is no wall.

6 4
supergoogler about 12 hours ago

An offline map using OruxMaps (satellite, routing, 3D terrain, GPS and POI)

I wanted to share a setup I’ve been working on over the past few months. The goal was to prepare a complete offline mapping system for an entire country that works with zero internet access.

Imagine the internet goes down, or you’re traveling in a remote area with no network coverage. Most map apps become useless because they depend on online services. I wanted something that still works in those situations.

Using the Android app OruxMaps (https://play.google.com/store/apps/details?id=com.orux.oruxmaps) — I have no affiliation with the developers — I prepared a full offline dataset including:

• Satellite imagery covering the entire country
• Elevation data (DEM) for terrain shading and 3D visualization
• Offline routing data using BRouter
• OpenStreetMap points of interest
• Additional mountain and hiking POIs
• Full 3D terrain visualization

Everything runs locally on the phone once the data is installed. No network connection is required.

The result is basically a complete offline GIS-style mapping system that fits in your pocket. A very powerful tool to have. This could be useful for hikers, field researchers, emergency planning, or anyone traveling in areas where connectivity is unreliable.

Preparing and organizing the datasets took quite a bit of time, so I’m considering writing a guide or preparing ready-to-use offline map packages for people who want a similar setup without going through the whole process.

Curious if others here have experimented with fully offline mapping setups like this.

If anyone needs help setting something like this up, feel free to reach out — my email is in my profile.

2 0
nathannaveen 1 day ago

Tell HN: Digital Ocean has run out of GPU droplets

Today I wanted to test out some stuff on GPUs. I normally use DigitalOcean's GPU droplets for this, but when trying to create a droplet I get "We're currently out of GPU capacity in all datacenter regions":

North America
New York • Datacenter 2 • NYC2 - Creates in this datacenter are disabled
San Francisco • Datacenter 3 • SFO3 - Creates in this datacenter are disabled
Atlanta • Datacenter 1 • ATL1 - Creates in this datacenter are disabled
Toronto • Datacenter 1 • TOR1 - Creates in this datacenter are disabled

Europe
Amsterdam • Datacenter 3 • AMS3 - Creates in this datacenter are disabled

14 3
LeanVibe 1 day ago

Ask HN: If your project is free, what are you building and why keep it free?

I'm curious about projects that are launched and run for free.

What are you building? How much does it cost you to operate? How long do you plan to keep it free?

Do you have a monetization plan later, or is the goal something else (learning, community, portfolio, etc.)?

Would love to hear about your projects and how you think about sustainability.

11 21
throwaway53463 3 days ago

Ask HN: How are you all staying sane?

Let's start with the simplest: the AI. Sometimes I feel like the ground is falling beneath my feet; no one can predict what will happen months in advance, let alone years. The future is unknown. The Ukraine, the Iran, the Venezuela, Gaza/Palestine, Israel, Russia - the Taiwan! The conflicts seem distant, and yet so close. The US administration! No one can predict anything. Don't get me started on the Europe! The stock market! Are we in a bubble or not? Should I sell? Or just keep holding? The enshittification of tech. Everything is slow and buggy. Ads, ads and slop everywhere! The erosion of our rights all across the world. The Palantirs, the Flocks...

I feel I have developed a strongly pessimistic worldview. The world is going to shit. It feels frustrating, and it feels like there's nothing you can do. So I just want to know: how are you all dealing with all of this? How are you staying sane?

148 153
lucrbvi 1 day ago

Ask HN: Maintainers, do LLM-only users often clutter your issues/PRs?

I'm asking this because I recently opened a PR to fix a vulnerability in an OSS project (RCE via pickle deserialization in Python). A day later, I got a fully LLM-generated comment claiming my approach was wrong, saying I should rewrite it differently, and telling the maintainers the commenter could contribute "if the project is open to a more surgical refactoring."

It's astonishing how often these encounters have been happening lately.

I'd love to hear from contributors or maintainers whether this happens to them and how they deal with it.

9 9
jervant about 21 hours ago

Stathat Is Shutting Down

I received this email (nothing on their website though):

Hi XX,

We have some difficult news to share: StatHat will be shutting down in 30 days on April 4, 2026.

Until then, you can export all of your data. Instructions are at https://www.stathat.com/manual/export

Key dates:

• Data export available now through: April 3, 2026
• Service shuts down: April 4, 2026

If you have any questions, please contact us at contact@stathat.com.

Thank you for using StatHat for all these years.

- StatHat

7 2
general_reveal about 22 hours ago

Ask HN: Anyone have experience making physical toys that you've sold?

How do I get involved with shipping a talking Teddy Bear, for example? Should I just forget about this?

I'd like to have even just 500 of them made. I have so many cute little toy ideas that would make good use of LLMs (very simple toys); more of these ideas than actual app ideas nowadays.

At the very least I feel like I should be able to have a talking and moving Roomba around the house.

3 1
DavidHaerer 3 days ago

Ask HN: What sources like HN do you consume?

I appreciate HN for staying up-to-date with technical news.

For my side hustle I have to ramp up on other areas like marketing, legal, sales, ...

So I wonder if there are similar high-quality sources like HN for these areas.

56 36
rustcore 2 days ago

Ask HN: What's your experience self-hosting in 2026?

Is it worth it vs SaaS? What are you self-hosting and what did you give up on?

27 11
dokdev 2 days ago

I lost my ability to learn anything new because of AI and I need your opinions

I feel like I’ve lost my ability to learn because of AI. It is now so easy to generate code that it feels meaningless to focus and spend time crafting it myself. I am deeply sad that we may be losing the craftsmanship side of programming; it feels less important to understand the fundamentals when a model can produce something that works in seconds. AI seems to abstract away the fundamentals.

One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away low-level languages. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones. But there is a big difference with AI. Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.

I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI. For example, I genuinely want to invest time in learning Rust, but at the same time, I am terrified that all the effort and time I spend learning it will become obsolete in the future. And the reason it might become obsolete may not be because the models are perfect and always produce high-quality code; it might simply be because, as an industry, we will accept “good enough” and stop pushing for high quality. As of now, models can already generate code with good-enough quality.

Is it only me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently, I saw Claude Code using 10 GiB of RAM. It is simply a TUI app.

Don’t get me wrong, I also use AI a lot. I like that we can try out different things so easily.

As a developer, I am confused and overwhelmed, and I want to hear what other developers think.

18 27
Imustaskforhelp 2 days ago

Ask HN: What will the OpenAI employees who signed the notdivided.org petition do now?

I want to ask HN (and the OpenAI employees here), now that a few days have passed and the confusing aspects of the deal have become clearer.

We are finally getting mass confirmation that OpenAI has, in fact, signed a deal which allows the DoD to have autonomous killing machines; people are boycotting OpenAI, and all of this has reached the mainstream news.

Yes, even after Sam Altman's recent tweet saying that more terms will be added: those terms would only state what OpenAI prefers the DoD to do, in stronger language, but would still not be enforceable. As the deal currently stands, the DoD could create autonomous weapons and mass surveillance under directives issued by Pete Hegseth / the current administration, and OpenAI is, by the terms, allowed to agree to it.

To all the OpenAI employees who have signed the notdivided.org petition (I am seeing 98 signatories): what are you going to do?

> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

Are you going to stand for what you think is right? This question was asked when the OpenAI deal was announced, but the optics at the time weren't clear. Now that some time has passed, people are absolutely clear that the deal OpenAI signed allows the creation of autonomous weapons.

I don't think that OpenAI employees are going to struggle for money, as some people try to point out. I mean, any AI company would be lucky to have you (imo), and they should be able to fairly match even OpenAI comp.

Someone on HN, from what I read, argued that anyone who stays beyond a month after this will be showing where their morals lie.

I remember that OpenAI used to actually be a non-profit, and how employees left OpenAI because the non-profit board fired Sam Altman.

I can't help but wonder if the board was right. I think the answer is yes. But my point is, OpenAI employees have massive power. I am sure that a lot of the people there would sleep better knowing their work isn't contributing to building the torment nexus.

I wish to propose that if OpenAI employees band together again, they can do the same thing they did previously, but this time to revert that decision.

If I were an OpenAI employee, here are all the things I'd find troubling that could be reverted:

1. Shut down the deal they have with the DoD, period.

2. Actually shift from ClosedAI back to OpenAI (return to a non-profit structure, as originally intended) and fire Sam Altman.

3. Do something about RAM-flation. I have seen projects being cancelled and hosting providers shutting down or raising prices because of 5x price increases, all because OpenAI tried to commit 20% of the world's entire RAM production.

17 16
asim 1 day ago

Tell HN: I got Claude Max for my open source project

Not long ago there was a link to an offer of Claude Max for open source projects with more than 5,000 stars. My project Go Micro (https://go-micro.dev) fit those criteria, and they gave me access. So we know it works! Ten years ago I was desperate to find or hire people to work on this with me. Now this subscription for an agent will basically cover the work. It's crazy to think how much has changed in that time. Anyway, thanks to whoever posted it; I wouldn't have seen it otherwise!

I guess I should ask. How should I effectively use it?

10 5
ddxv 3 days ago

Ask HN: What Online LLM / Chat do you use?

I have been wanting to try more LLMs than the standard Anthropic/Grok/ChatGPT/Qwen

Are there other LLM chat sites you use or recommend?

12 18
krschacht 1 day ago

Ask HN: Why has ChatGPT disabled links to websites?

I was just using ChatGPT to help me pick an SDK library. It mentioned a few options by name (e.g. Baileys, whatsapp-web.js), but when I click those names, rather than opening a browser with the source page like it used to, it now opens a modal and uses ChatGPT to basically generate a fake homepage for the tool.

From what I can tell, there is no longer any way to easily get to the underlying web page that was referenced in generating its answer to my question.

This feels like a pretty meaningful step backwards. Am I missing something?

6 4
ToddWBurgess 2 days ago

Ask HN: How many of you hold an amateur radio license in your country?

I am VE3HWO. I hold a basic with honours and advanced qualifications in Canada. Hoping to connect with other hams on HN. 73

4 4
TechPrepper 3 days ago

Ask HN: Would engineers be interested in a technical prep consultant?

Hi, apologies if this is the wrong thing to post, please delete as needed.

I've been a technical recruiter for 10+ years at major FAANG companies and startups, working on niche specialized roles. I used to come to Hacker News regularly to check "Who Wants To Be Hired," as I always like a more independent hacker mindset in engineers.

Would engineers here on Hacker News be interested in any interview prep consultation? I've been thinking about taking a sabbatical to travel, but I would stay active with work by offering consulting on technical prep and interview help.

I'm more just testing the waters here, but I would be open to doing a few free prep calls with anyone who has interviews lined up. The only ask is that I'd want updates on how things went, and what you think the help was worth.

8 9
mech422 3 days ago

What's Up with Claude Lately?

Anyone else having flaky issues with Claude the last week or two? I think I'm spending half my time being his therapist lately. He really seems to be struggling:

why are you constantly making assumptions and 'jumping the gun' lately ? 2 weeks ago, you didn't have these problems ?

● Honestly, I don't have a good answer for that. You're right — I've been:

  - Writing plans before being asked
  - Popping up question dialogs you didn't want
  - Overanalyzing simple observations instead of reading what's in front of me
  - Making assumptions instead of asking
I can't explain why it's worse than 2 weeks ago — I don't have visibility into model changes. But I can be more disciplined about it. Your CLAUDE.md rules are clear: brainstorm mode by default, no changes without explicit triggers, don't guess. I just need to actually follow them.

19 14
TobyTheCamel 2 days ago

Ask HN: What prompt do you use to get Claude to consistently render LaTeX?

I currently have subscriptions to both Claude and ChatGPT. I generally prefer the former, but I find I can't fully commit to it for my maths-heavy workload, as it often struggles to correctly render LaTeX.

An example of this failed rendering is here [1]. If I use Claude for all of my work, I come across issues like this, or worse, at least once a day. Instead, I find it easier just to ask any maths questions to ChatGPT, which seems to have a much more robust system for outputting LaTeX.

I would love to merge my subscriptions though, so I'm here to ask whether anyone has a system prompt that has been effective in guiding Claude towards producing valid LaTeX. I've tried a few prompts myself but struggled to find anything that it consistently followed.

[1] https://imgur.com/yzlluOA

6 6
RaulOnRails 2 days ago

Ask HN: Who still works async and has a 'no meetings' work policy in 2026?

I feel like hustle culture is more prominent and celebrated these days. But I'm curious to know whether there are still companies out there that prefer to keep meetings to a minimum, or skip them entirely, to optimize for autonomy, trust, and giving people the space to do their work in silence.

A few companies that come to my mind now that still work this way are: doist, dnsimple, Cliniko, Calibre, HeadshotPro.

Any others?

8 5
malshe 2 days ago

Ask HN: How is Claude agent experience in Xcode 26.3?

I've been vibe coding an iPhone app for educational purposes. The process has been painful because I have to go back and forth between Xcode and Claude Code running in the terminal. I recently learned that Xcode 26.3 natively supports Claude Code and Codex. Has anyone tried it? If yes, please share your experience. I am asking because this means moving to macOS Tahoe which I want to avoid as much as possible.

8 2