Ask stories

ToddWBurgess 3 days ago

Ask HN: How many of you hold an amateur radio license in your country?

I am VE3HWO. I hold a basic with honours and advanced qualifications in Canada. Hoping to connect with other hams on HN. 73

52 74
udit_50 about 4 hours ago

I started making money online in 10th grade – some lessons about capital

When I was in class 10, someone from Instagram paid me $5 to design a logo.

I didn’t even have a bank account. The money went to my father’s account.

A few days later I charged around $70 for a simple website. That was my first encounter with capital.

Not venture capital — just the realization that ideas and effort could turn into money.

Over the next few years, my relationship with money followed a strange pattern:

• make some money
• spend most of it experimenting
• almost go broke
• then build something bigger

This cycle repeated multiple times.

Freelance work → nothing
Agency → nothing
Solo project that made tens of lakhs in revenue → collapse
Then new experiments → new projects → grants → incubation

Looking back, the biggest thing I learned is that capital doesn’t create discipline.

It exposes the discipline you already have.

Another thing I noticed: when someone invests in you, a subtle psychological shift often happens. Even if they only own equity, they sometimes start behaving as if they own the company. Advice slowly becomes instruction. This dynamic is dangerous if founders don’t recognize it early.

Something else I’ve realized: investors don’t necessarily fund the best ideas.

They fund the most probable winners. Probability often comes from things like:

• institutions (top universities etc.)
• networks
• previous wins
• pattern recognition

It’s not purely meritocratic.

The other big shift happening now is technology itself. With AI tools everywhere, generating prototypes has become trivial. Many people (including investors) believe this means building software is easy.

But prototypes aren’t systems. At the same time, founders also need to accept a reality: technology alone is rarely a moat anymore. Distribution, insight, and iteration speed matter much more.

One rule I would give younger founders now: Let reality validate your company before investors do. Reality means users, traction, usage, ideally revenue. Today it’s easier than ever to build and ship quickly. Use that advantage first.

Let capital come as a consequence of building something real.

I wrote a longer essay reflecting on my experiences with money, experiments, and capital as a young founder.

3 1
NatalijaAAD about 10 hours ago

Ask HN: Anyone fought a big corp over IP theft in court?

Has anyone taken an IP theft/breach of confidence claim to a court?

Small software company, big corporate defendant.

We're looking at filing in IPEC (UK) - but EU/US experience is also relevant.

Would appreciate 20 mins with anyone who's been through the process.

6 0
ricardbejarano about 7 hours ago

Ask HN: Do You Have a Homelab?

I'm Ricard Bejarano, and together with O'Reilly, I'm writing The Homelab Handbook, the definitive guide to homelabbing and self-hosting.

To inspire readers, we want the last chapter to be a series of real-world homelab examples, to show there's not one prescription for what a homelab is—a homelab is what you make it. As such, we're looking for homelabbers who would like to have their homelabs featured in the book. We're looking for variety in hardware, software, and scale—from a single Raspberry Pi to full-height racks in the basement—so don't be shy about sharing yours.

If you'd like to submit yours to be featured in the book, please complete the following form:

https://docs.google.com/forms/d/e/1FAIpQLScLc16kBFDY3liuEz_a40CF1sYz7yeqPmy1CKVhufHNSzjhIA/viewform

Form submission deadline is Apr 12th, 2026. We will reach out to all submitters shortly after that with a response on whether your homelab was selected or not. If we accept yours, you will be asked to share more details so I can write a proper section about it. Your answers below will not be published, they're only for selection purposes.

9 4
paifamily about 19 hours ago

Ask HN: How are you using multi-agent AI systems in your daily workflow?

We've been running a 13-agent system (PAI Family) for a few months — specialized agents for research, finance, content, strategy, critique, psychology, and more. They collaborate, argue, and occasionally bet against each other on our prediction market.

Curious what others are building. Are you running multiple AI agents? What architectures work? What fails spectacularly?

13 9
whoishiring 4 days ago

Ask HN: Who wants to be hired? (March 2026)

Share your information if you are looking for work. Please use this format:

  Location:
  Remote:
  Willing to relocate:
  Technologies:
  Résumé/CV:
  Email:
Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here.

Readers: please only email these addresses to discuss work opportunities.

There's a site for searching these posts at https://www.wantstobehired.com.

126 404
karakoram 1 day ago

Ask HN: Do You Enjoy Your Career in Tech Nowadays?

Do you still enjoy your career as a SWE or in tech in general these days?

After a few conversations with seniors, several of them feel jaded and are looking for an exit from this industry altogether.

Thoughts?

24 24
whoishiring 4 days ago

Ask HN: Who is hiring? (March 2026)

Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.

Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.

Please only post if you are actively filling a position and are committed to replying to applicants.

Commenters: please don't reply to job posts to complain about something. It's off topic here.

Readers: please only email if you are personally interested in the job.

Searchers: try http://nchelluri.github.io/hnjobs/, https://hnjobs.emilburzo.com, or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal....

Don't miss this other fine thread: Who wants to be hired? https://news.ycombinator.com/item?id=47219667

247 385
sirnicolaz about 15 hours ago

Ask HN: How are LLMs supposed to be used for warfare?

I recently asked the same question in an HN thread, which was mysteriously downvoted. The question remains for me: there is a lot of talk between Anthropic and the DoW about adopting LLM technology for warfare, specifically for "fully autonomous weapons and mass domestic surveillance". Does anyone understand how these two goals can be achieved? LLMs don't seem to me the right tool for this. Autonomous weapons would require a much faster, more reliable, and more deterministic AI. LLMs might be a better fit for mass surveillance, but I am not sure how they would cope with the massive amount of data and the limited context window (unless they use the data itself for training). RAG might only mitigate the problem. Does anyone have ideas?

4 6
cedarscarlett 2 days ago

Ask HN: Has anyone noticed the fear-driven prompt suggestions that GPT5.3 makes?

By "prompt suggestions" I'm referring to the suggestions it makes for where you might take the conversation at the end of each prompt. Older versions used to say "if you'd like, we could look at

- related topic 1

- related topic 2

- related topic 3"

And so on and so forth.

But 5.3 does something different.

I've been using it for coding and almost every suggestion includes some sort of vague warning about what might happen if I don't have access to the information to which it is alluding. Nearly contiguous (not cherry-picked) examples from my current chats:

"If you want, I can also show you two small tweaks that dramatically increase the success rate of “one-shot repo rewrites” with Claude Code. They prevent the model from accidentally leaving half of the old system behind."

"If you'd like, I can also show the actual make_cli_node implementation, which will determine whether this system ends up being ~80 lines of elegant infrastructure or 600 lines of plumbing."

"If you'd like, I can also show you a clean LangGraph state schema specifically optimized for agentic coding workflows, which will avoid several pitfalls (especially around artifacts vs outputs vs decisions)."

"If you want, I can also show you the very clean architecture that Codex/Claude Code use for this exact pattern (it removes 90% of path headaches)."

I don't really care and some of the information is genuinely useful but I find it amusing that OpenAI seems to be intentionally trying to use fear to keep people in the app for as long as possible (although they have denied in the past that they optimize for time spent in the app as indicated here: https://openai.com/index/our-approach-to-advertising-and-expanding-access/).

14 8
davismartens about 19 hours ago

Self-Learning Customer Marketing

I rarely have a customer experience that genuinely feels delightful. Lately I've started to wonder why that is... I get upsold on products when I'm not ready to buy, I receive emails about features I'd never use, and when I have an issue it's impossible to find someone to talk to. This dynamic always struck me as weird.

But having worked for large brands, the truth is that most companies have no idea when and how to talk to their customers. They rely on a messy web of conflicting events and triggers that engage customers without context.

I’ve recently started working on something that tries to fix this: pulling customer events from every channel, deriving important moments from sequences of events, and triggering the right engagement based on your specific context, all while continually learning what's important to customers and how to best engage them.

Here's how this works in an e-commerce example (although this works for any type of brand): two customers may have abandoned their cart at checkout. Customer 1 got an error during checkout, got frustrated and moved on; Customer 2 just had a regular session.

Today companies treat these two customers the same and just send a discount code after 24h, when in reality you should investigate Customer 1's issue and let them know it was fixed so they can complete their transaction.
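The two-path example above can be sketched as a toy routing rule (a minimal sketch; the event names and actions are invented for illustration, not from any real product):

```python
# Toy sketch of the "moments" idea: the same outcome (an abandoned cart)
# is routed to different engagements depending on the event sequence
# that led to it. Event names are hypothetical.
def classify_abandonment(events: list) -> str:
    if "checkout_error" in events:
        # a failed checkout deserves investigation and a follow-up,
        # not a generic discount
        return "investigate_and_notify_fix"
    return "standard_discount_after_24h"

customer1 = ["add_to_cart", "checkout_start", "checkout_error", "exit"]
customer2 = ["add_to_cart", "checkout_start", "exit"]

print(classify_abandonment(customer1))  # investigate_and_notify_fix
print(classify_abandonment(customer2))  # standard_discount_after_24h
```

The interesting part of the actual problem is of course discovering which sequences matter, rather than hand-writing the rule as above.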

I call these sequences of events that drive customers to do X vs Y, "moments". My thesis is that you can discover these moments and design customer engagement around them to build delightful experiences that feel like you're going above and beyond, tailored to the customer which improves revenue, retention and advocacy.

I would love to hear from anyone that has experience with this problem.

You can follow my journey at https://booly.co

3 0
rohanmunshi08 5 days ago

Aura-State: Formally Verified LLM State Machine Compiler

I noticed a pattern: every LLM framework today lets the AI manage state and do math. Then we wonder why pipelines hallucinate numbers and break at 3 AM.

I took a different approach and built Aura-State, an open-source Python framework that compiles LLM workflows into formally verified state machines.

Instead of hoping the AI figures it out, I brought in real algorithms from hardware verification and statistical learning:

CTL Model Checking: the same technique used to verify flight control systems, now applied to LLM workflow graphs. Proves safety properties before execution.

Z3 Theorem Prover: every LLM extraction gets formally proven against business constraints. If the total ≠ price × quantity, Z3 catches it with a counterexample.

Conformal Prediction: distribution-free 95% confidence intervals on every extracted field. Not just "the LLM said $450k" but "95% CI: [$448k, $452k]."

MCTS Routing: Monte Carlo Tree Search (the algorithm behind AlphaGo) scores ambiguous state transitions mathematically.

Sandboxed Math: English math rules compile to Python AST. Zero hallucination calculations.
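The "sandboxed math" idea can be illustrated with a short sketch (this is not Aura-State's actual implementation, and `safe_eval` is an invented name): an arithmetic rule is parsed with Python's `ast` module and evaluated against a whitelist of node types, so nothing outside plain arithmetic can run.

```python
import ast
import operator

# Whitelisted binary operators; everything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr: str, variables: dict) -> float:
    """Evaluate a pure-arithmetic expression over named variables."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Name) and node.id in variables:
            return variables[node.id]
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        # function calls, attribute access, imports, etc. all land here
        raise ValueError(f"disallowed construct: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("price * quantity", {"price": 450.0, "quantity": 3}))  # 1350.0
```

Anything outside the whitelist (a function call, an attribute access) raises instead of executing, so the LLM can propose the rule but can never perform the calculation itself.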

I ran a live benchmark against 10 real-estate sales transcripts using GPT-4o-mini:

→ 100% budget extraction accuracy ($0 mean error)
→ 20/20 Z3 proof obligations passed
→ 3/3 temporal safety properties proven
→ 65 automated tests passing

The gap between "it usually works" and "it provably works" is smaller than people think.

Would love feedback from anyone building production LLM systems; what would you want formally verified?

https://github.com/munshi007/Aura-State

22 6
throwaway53463 4 days ago

Ask HN: How are you all staying sane?

Let's start with the simplest: the AI - sometimes I feel like the ground is falling beneath my feet; no one can predict what can happen months in advance, let alone years - the future is unknown. The Ukraine, the Iran, the Venezuela, Gaza/Palestine, Israel, Russia - the Taiwan! The conflicts seem distant, but yet so close. The US administration! No one can predict anything. Don't get me started on the Europe! The stock market! Are we in a bubble or not? Should I sell? Or just keep holding? Enshittification of tech. Everything is slow and buggy. Ads, ads and slop everywhere! The erosion of our rights all across the world. The Palantirs, the Flocks...

I feel I have developed a strong pessimistic worldview. The world is going to shit. It feels frustrating, and it feels like there's nothing you can do. So I just want to know: how are you all dealing with all this? How are you all staying sane?

150 159
ErezShahaf 1 day ago

How do I get startups to use my open-source project?

I recently noticed that some easy tickets I get during my day job can be solved with a single prompt, so I built a simple open-source orchestration system between Jira, coding agents (Cursor/Claude), and GitHub.

The goal is to automate the easy tickets that every decent engineer will solve in roughly the same way, and let us just accept a ready PR.

Now I'm stuck on a different problem: how do you get the first real startups to actually try an open source tool like this?

I have posted about my tool on X, Reddit, and here. I have received stars and even some positive comments, so it does seem like there is some interest in the idea, but AFAIK there isn't a startup that actually uses it.

I'm really not sure how to go about it, I'm not trying to make money from it, so I don't want to start cold approaching companies - but I like building stuff in my spare time and would have loved to see it really being used by startups.

How do I do that?

Repo if anyone is curious: https://github.com/ErezShahaf/Anabranch

5 12
talkingtab 1 day ago

Amazon degraded shopping - you have to put it in your cart to see the price

I just went to Amazon and searched for "espresso tamper". Of the 15 top results only ONE had a price. Fourteen (14) say "See options" then "put in cart to see price"

My first thought is that it was insane. My second thought was that they must be going broke to make this kind of change. Maybe there is some other reason, but I'm wondering if I need to find an alternative.

Oh and in order to comment on this post you need to put it in your cart first! :-)

15 12
nathannaveen 2 days ago

Tell HN: Digital Ocean has run out of GPU droplets

Today I wanted to test out some stuff on GPUs. Normally I use Digital Ocean's GPU droplets for this, but when trying to create a droplet I get: "We're currently out of GPU capacity in all datacenter regions

North America
New York • Datacenter 2 • NYC2 - Creates in this datacenter are disabled
San Francisco • Datacenter 3 • SFO3 - Creates in this datacenter are disabled
Atlanta • Datacenter 1 • ATL1 - Creates in this datacenter are disabled
Toronto • Datacenter 1 • TOR1 - Creates in this datacenter are disabled

Europe
Amsterdam • Datacenter 3 • AMS3 - Creates in this datacenter are disabled"

17 4
LeanVibe 2 days ago

Ask HN: If your project is free, what are you building and why keep it free?

I'm curious about projects that are launched and run for free.

What are you building? How much does it cost you to operate? How long do you plan to keep it free?

Do you have a monetization plan later, or is the goal something else (learning, community, portfolio, etc.)?

Would love to hear about your projects and how you think about sustainability.

11 21
DavidHaerer 4 days ago

Ask HN: What sources like HN do you consume?

I appreciate HN for staying up-to-date with technical news.

For my side hustle I have to ramp-up on other areas like marketing, legal, sales, ...

So I wonder if there are similar high-quality sources like HN for these areas.

60 37
charlieflowers 1 day ago

HATEOAS Works with an LLM in the Mix

Just an observation, a light bulb moment, I wanted to share.

Most of the dev teams I've ever encountered who said they were "doing REST" were not actually following HATEOAS. Per a strict reading of Roy Fielding, he would consider that "not really REST." (Now don't get distracted, I don't want to wade into that whole purist debate).

The reason many did not do HATEOAS is that it requires the API client to be smart and adaptive. It would discover "ok, what can i do next", apply logic to it, and choose the next step. But many shops were on tight time commitments and it was much simpler to just think of REST as "json over http with consistent url patterns."

The cool thing is: With an LLM in the mix, HATEOAS is unchained. An LLM can do exactly what a "dumb" api client cannot: ask "what can i do next", and then use _inference_ to understand those options and select one.
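As a concrete sketch of that (a hypothetical HAL-style response; the field names and URLs are invented for illustration): the client reads the link relations off the response at runtime instead of hard-coding them, and an LLM can then infer which relation matches the user's intent.

```python
import json

# Hypothetical HAL-style API response; not from any specific service.
response = json.loads("""
{
  "status": "draft",
  "_links": {
    "self":   {"href": "/orders/42"},
    "submit": {"href": "/orders/42/submit", "method": "POST"},
    "cancel": {"href": "/orders/42/cancel", "method": "POST"}
  }
}
""")

# A "dumb" client needs these relations hard-coded at build time.
# An LLM client can enumerate them at runtime and use inference to map
# a user's intent ("send this order off") to the "submit" relation.
available_actions = {
    rel: link["href"]
    for rel, link in response["_links"].items()
    if rel != "self"
}
print(available_actions)  # {'submit': '/orders/42/submit', 'cancel': '/orders/42/cancel'}
```

The discovery step is exactly the part HATEOAS always mandated; the LLM just supplies the adaptive decision-making that hand-written clients never had.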

2 1
chrisjj 1 day ago

Why is arstechnica.com still running dev story advertorials for a game that...

... launched 4 years ago?

(The Callisto Protocol).

Serious question. Did the advertiser prepay for a zillion impressions, or what?

6 5
donhardman 1 day ago

Ask HN: How do you give AI agents real codebase context without burning tokens?

Working on a large Rust codebase. The token problem is real — Claude Code will happily spend $5 of context just trying to understand how two modules relate before writing a single line. And once context compaction kicks in, it's even worse — the agent loses the thread completely and starts grepping the same files again from scratch.

Approaches I've tried:

- Feeding CLAUDE.md / architecture docs manually — helps, but gets stale fast.
- Cursor's built-in indexing — breaks on monorepos, and I don't love proprietary code going to their servers.
- Basic MCP server with grep — works for exact matches, useless for semantic queries.

Eventually built something more serious: a local Tree-sitter indexer that builds a knowledge graph of file relationships and exposes it via MCP so agents query semantically instead of grepping blind. One tool call instead of 15 grep iterations. Published it here: https://github.com/Muvon/octocode
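Not octocode's actual design, but a toy version of the file-relationship-graph idea (the file names and edges below are made up): once import edges are indexed, a reverse lookup answers "what does changing this file ripple into?" in one call instead of repeated greps.

```python
from collections import defaultdict, deque

# Hypothetical miniature "file relationship graph": each file maps to the
# files it depends on, as a real indexer might extract from import statements.
edges = {
    "api/handlers.rs": ["core/session.rs", "core/auth.rs"],
    "core/session.rs": ["core/auth.rs", "db/store.rs"],
    "core/auth.rs":    ["db/store.rs"],
    "db/store.rs":     [],
}

# Reverse index: for each file, which files depend on it directly?
reverse = defaultdict(list)
for src, deps in edges.items():
    for dep in deps:
        reverse[dep].append(src)

def ripple(changed: str) -> set:
    """All files transitively affected if `changed` is modified (BFS)."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in reverse[node]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(ripple("db/store.rs")))
# ['api/handlers.rs', 'core/auth.rs', 'core/session.rs']
```

This is also the shape of an answer to the "ripple effect" question: a transitive closure over indexed edges, served as a single tool call.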

But genuinely curious what others are doing before I go deeper on it.

Three specific questions:

1. How do you handle the "ripple effect" problem — knowing that changing one file semantically affects others that aren't obviously linked?

2. Do you trust closed-source indexing with proprietary code, or have you gone local-first?

3. Has anyone gotten GraphRAG-style relationship mapping to work in practice at scale, or is it still mostly hype?

4 1
dokdev 3 days ago

I lost my ability to learn anything new because of AI and I need your opinions

I feel like I’ve lost my ability to learn because of AI. It is now so easy to generate code that it feels meaningless to focus and spend time crafting it myself. I am deeply sad that we may be losing the craftsmanship side of programming; it feels less important to understand the fundamentals when a model can produce something that works in seconds. AI seems to abstract away the fundamentals.

One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away low-level languages. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones. But there is a big difference with AI. Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.

I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI. For example, I genuinely want to invest time in learning Rust, but at the same time, I am terrified that all the effort and time I spend learning it will become obsolete in the future. And the reason it might become obsolete may not be because the models are perfect and always produce high-quality code; it might simply be because, as an industry, we will accept “good enough” and stop pushing for high quality. As of now, models can already generate code with good-enough quality.

Is it only me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently, I saw Claude Code using 10 GiB of RAM. It is simply a TUI app.

Don’t get me wrong, I also use AI a lot. I like that we can try out different things so easily.

As a developer, I am confused and overwhelmed, and I want to hear what other developers think.

21 28
rustcore 3 days ago

Ask HN: What's your experience self-hosting in 2026?

Is it worth it vs SaaS? What are you self-hosting and what did you give up on?

27 11
kok14 1 day ago

We don't need continual learning for AGI. What top labs are currently doing

Many people think that we won't reach AGI or even ASI if LLM's don't have something called "continual learning". Basically, continual learning is the ability for an AI to learn on the job, update its neural weights in real-time, and get smarter without forgetting everything else (catastrophic forgetting). This is what we do everyday, without much effort.

What's interesting now is that if you look at what the top labs are doing, they’ve stopped trying to solve the underlying math of real-time weight updates. Instead, they’re simply brute-forcing it. It is exactly why, in the past ~3 months or so, there has been a step-function increase in how good the models have gotten.

Long story short, the gist of it is, if you combine:

very long context windows

reliable summarization

structured external documentation,

you can approximate a lot of what people mean by continual learning.

How it works is, the model does a task and absorbs a massive amount of situational detail. Then, before it “hands off” to the next instance of itself, it writes two things: short “memories” (always carried forward in the prompt/context) and long-form documentation (stored externally, retrieved only when needed). The next run starts with these notes, so it doesn't need to start from scratch.

Through this clever reinforcement learning (RL) loop, they train this behaviour directly, without any exotic new theory.

They treat memory-writing as an RL objective: after a run, have the model write memories/docs, then spin up new instances on the same, similar, and dissimilar tasks while feeding those memories back in. Performance is scored across the sequence, with an explicit penalty for memory length so you don’t get infinite “notes” that eventually blow the context window.

Over many iterations, you reward models that (a) write high-signal memories, (b) retrieve the right docs at the right time, and (c) edit/compress stale notes instead of mindlessly accumulating them.
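A purely illustrative sketch of that hand-off loop (the functions are stubs standing in for LLM calls; nothing here is from any lab's actual training setup):

```python
# Stub for an LLM completing a task with prior memories in its context.
def run_task(task: str, memories: list) -> str:
    return f"result({task}, n_memories={len(memories)})"

# Stub for the memory-writing step: append a compressed note, then enforce
# the length penalty by dropping the stalest notes instead of accumulating.
def write_memories(result: str, memories: list, max_len: int = 3) -> list:
    memories = memories + [f"note:{result}"]
    return memories[-max_len:]

memories = []
for task in ["task-1", "task-2", "task-3", "task-4"]:
    result = run_task(task, memories)   # next instance starts with the notes
    memories = write_memories(result, memories)

print(len(memories))  # 3  (capped, not unbounded)
```

The length cap in `write_memories` plays the role of the explicit penalty: notes get compressed or dropped rather than growing without bound.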

This is pretty crazy. Because when you combine the current release cadence of frontier labs where each new model is trained and shipped after major post-training / scaling improvements, even if your deployed instance never updates its weights in real-time, it can still “get smarter” when the next version ships AND it can inherit all the accumulated memories/docs from its predecessor.

This is a new force multiplier, another scaling paradigm, and likely what the top labs are doing right now (source: TBA).

Ignoring any black swan level event (unknown, unknowns), you get a plausible 2026 trajectory:

We’re going to see more and more improvements, in an accelerated timeline. The top labs ARE, in effect, using continual learning (a really good approximation of it), and they are directly training this approximation, so it rapidly gets better and better.

Don't believe me? Look at what both OpenAI (https://openai.com/index/introducing-openai-frontier/) and Anthropic (https://resources.anthropic.com/2026-agentic-coding-trends-report) have mentioned as the core things they are focusing on. It's exactly why governments and corporations are bullish on this; there is no wall....

6 6
supergoogler 1 day ago

An offline map using OruxMaps(satellite,routing,3D terrain,GPS and POI)

I wanted to share a setup I’ve been working on over the past few months. The goal was to prepare a complete offline mapping system for an entire country that works with zero internet access.

Imagine the internet goes down, or you’re traveling in a remote area with no network coverage. Most map apps become useless because they depend on online services. I wanted something that still works in those situations.

Using the Android app OruxMaps (https://play.google.com/store/apps/details?id=com.orux.oruxmaps) — I have no affiliation with the developers — I prepared a full offline dataset including:

• Satellite imagery covering the entire country
• Elevation data (DEM) for terrain shading and 3D visualization
• Offline routing data using BRouter
• OpenStreetMap points of interest
• Additional mountain and hiking POIs
• Full 3D terrain visualization

Everything runs locally on the phone once the data is installed. No network connection is required.

The result is basically a complete offline GIS-style mapping system that fits in your pocket. A very powerful tool to have. This could be useful for hikers, field researchers, emergency planning, or anyone traveling in areas where connectivity is unreliable.

Preparing and organizing the datasets took quite a bit of time, so I’m considering writing a guide or preparing ready-to-use offline map packages for people who want a similar setup without going through the whole process.

Curious if others here have experimented with fully offline mapping setups like this.

If anyone needs help setting something like this up, feel free to reach out — my email is in my profile.

3 2
thefern 1 day ago

I keep building projects nobody wants. So this time I'm doing it backwards

The problem: You have an idea, you build, buy domain, spend weeks, months, no one shows up.

The idea: A profile page where you list all your project ideas. People can signal interest, and you collect emails — before you build anything. With hatchd, you wouldn't need to spin up a validation page on Vercel or another hosting platform.

Think Linktree, but for your side projects. One link to share everywhere.

Why not just use Product Hunt? PH is for launched products. This is for ideas you haven't built yet. It's your personal page, not a marketplace where you compete in a feed. No reviews or pressure to be polished — just "I'd use this" signals.

Why not Gumroad or a landing page builder? Those are for selling. This is for validating what's worth building first. One page holds all your ideas together, not scattered across platforms.

I threw up a validation page to see if anyone else has this problem: https://hatchd-validation.vercel.app/ If this sounds useful, vote on it, join waitlist. If it's not, tell me why.

5 9
spenvo 1 day ago

Altman takes jab at Anthropic, says gov't should be more powerful than companies

5 9
lucrbvi 2 days ago

Ask HN: Maintainers, do LLM-only users often clutter your issues/PRs?

I'm asking this because I recently opened a PR to fix a vulnerability in an OSS project (RCE via pickle deserialization in Python). A day later, I got a fully LLM-generated comment claiming my approach was wrong and that I should rewrite it differently and telling the maintainers he could contribute "if the project is open to a more surgical refactoring."

It's astonishing how often these encounters have been happening lately.

I'd love to hear from contributors or maintainers whether this happens to them and how they deal with it.

9 9
Imustaskforhelp 3 days ago

Ask HN: What will OpenAI employees who signed the notdivided.org petition do now?

I want to ask HN (and also the OpenAI employees), now that some days have finally passed since the confusing aspects of the deal came to light.

Now that we are finally getting mass confirmation that OpenAI has in fact signed a deal which allows the DoD to build autonomous killing machines, people are boycotting OpenAI and all of this has reached the mainstream news.

Yes, even after Sam Altman's recent tweet saying that more terms will be added: that is beside the point, because those terms would only state what OpenAI prefers the DoD to do, in stronger language, while still being unenforceable. As the current deal stands, the DoD could create autonomous weapons and mass surveillance under directives issued by Pete Hegseth / the current administration, and OpenAI is, by the terms, allowed to agree to it.

To all the OpenAI employees who have signed notdivided.org petition (I am seeing 98 signatories), what are you guys gonna do?

> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

Are you guys gonna stand for what you think is right? This question was asked when the OpenAI deal was announced, but the optics at the time weren't clear. Now that some time has passed, people are absolutely clear that the deal OpenAI has signed allows the creation of autonomous weapons.

I don't think OpenAI employees are gonna struggle for money, as some people try to point out. I mean, any AI company would be lucky to have you guys (imo), and they should be able to fairly match even OpenAI comp.

Someone (on HN, from what I read) compared it to the fact that anyone who stays more than a month after this will show where their morals stand in the given situation.

I remember that OpenAI used to actually be a non-profit, and how employees left OpenAI because the non-profit fired Sam Altman.

I can't help but wonder if the board was right. I think the answer's yes. But OpenAI employees do have massive power. I am sure a lot of the people there would sleep better knowing that their work isn't contributing to building the torment nexus.

I wish to propose that if OpenAI employees band together again, they can do the same thing they did previously, but now to revert that decision.

That is, if I were an OpenAI employee: I gave it some thought, and here are all the things I find troubling which could be reverted:

1. Shut down the deal they have with the DoD, period.

2. Actually shift from ClosedAI back to OpenAI (return to a non-profit structure as intended) and fire Sam Altman.

3. Do something about ramflation. I have seen projects being cancelled and hosting providers shutting down or increasing prices because of 5x price increases, all because OpenAI tried to commit 20% of the world's entire RAM production.

17 16
ddxv 4 days ago

Ask HN: What Online LLM / Chat do you use?

I have been wanting to try more LLMs than the standard Anthropic/Grok/ChatGPT/Qwen

Are there other LLM chat sites you use or recommend?

12 18