Ask HN: Who is using local LLMs in a production environment here?

Haeuserschlucht Friday, January 02, 2026

I'm asking because it seems that nobody really does. Yes, there are some projects here and there, but ultimately everybody just jumps over to cloud LLMs. Everything is cloud. People pay for GPU usage somewhere in the middle of nowhere. But nobody really uses local LLMs long term. They say, "Well, it's so great. Local LLMs work on small devices; they even work on your mobile phone."

I have to say there's one exception for me, and that's Whisper. I actually do use Whisper a lot. But I just don't use local LLMs. They're just really, really bad compared to cloud LLMs.

And I don't know why, because to me a speech-to-text model seems much more challenging to build than a model that just generates text.

But it seems they really can't close the gap and get these models running well on consumer computers. So I, too, go back to cloud LLMs, all privacy concerns aside.
