Ask HN: Junior getting lost
Hello to those who still read forums.
I recently graduated from college and started working as a junior dev (trying to absorb as much knowledge from senior colleagues as I can), and it seems the real world is a rather different story from college practice.
In college we were taught about design patterns and all these layered responsibilities: domain, application, infrastructure, UI. The domain should never depend on the infrastructure or application layer, and so on. But one project I was put on has a domain layer that depends on infrastructure, and in another the application layer references infrastructure directly, and I've been told this is the correct implementation... doh..
I think I was pretty good at paying attention in lectures, but now I'm doubting whether it was worth learning that stuff at all, lol, since it's so contested out there. I'm in no position to question a senior dev, of course, but what do you think: is it really normal for all the so-called college "best practices" to go straight to the trash bin, or am I just misunderstanding the real-world context?
Ask HN: Is archive.is currently broken for WSJ links?
For the past couple of days, any link I submit stays on the "Loading" spinner and never makes it into the queue, and HN submissions for new articles aren't getting any archive links posted.
Ask HN: How far has "vibe coding" come?
I’m trying to understand where “vibe coding” realistically stands today.
The project I’m currently working on is getting close to 60k lines of code, with fairly complex business logic. From what I’ve heard, at this scale only a few tools (like Claude’s desktop app) are genuinely helpful, so I haven’t experimented much with other AI coding services.
At the same time, I keep seeing posts about people building 20k lines of code and launching a SaaS in a single 40-hour weekend. That’s made me question whether I’m being overly cautious, or just operating under outdated assumptions.
I already rely on AI quite a bit, and one clear benefit is that I now understand parts of the codebase that I previously wrote without fully grasping. Still, at my current pace, it feels like I’ll need several more months of development, followed by several more months of testing, before this can become a real production service. And that testing doesn’t feel optional.
Meanwhile, products that are described as being “vibe coded” don’t seem to be getting particularly negative evaluations.
So I’m wondering how people here think about this now. Is “you don’t really understand the code, so it’ll hurt you later” still a meaningful criticism? Or are we reaching a point where the default approach to building software itself needs to change?
I’d especially appreciate perspectives from people working on larger or more complex systems.
Ask HN: Books to learn 6502 ASM and the Apple II
I want to learn assembly to make games on the Apple II. What are the classic books for learning 6502 assembly and the Apple II itself (memory, screen management)? And is it absolutely necessary to learn BASIC before assembly?
Ask HN: Who do you follow via RSS feed?
Hello there!
I just set up TinyTinyRSS (https://tt-rss.org/) at home and I'm looking for interesting things to read, as well as people/websites publishing interesting stuff.
This is, among other things, to reduce my daily (doom)scrolling and avoid the recommendation algorithms of social media.
So: who or what do you follow via RSS feed, and why?
Ask HN: DDD was a great debugger – what would a modern equivalent look like?
I’ve always thought that DDD was a surprisingly good debugger for its time.
It made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.
At the same time, it’s clearly a product of a different era:
– single-process
– mostly synchronous code
– no real notion of concurrency or async
– dated UI and interaction model
Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.
Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.
I’m curious how others think about this:
– what ideas from DDD (or similar old tools) are still valuable?
– what would a “modern DDD” need to handle today’s software?
– do you think interactive debugging is still the right abstraction at all?
I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.
Designing programming languages beyond AI comprehension
What characteristics should a programming language have in order to make automated analysis, replication, and learning by artificial intelligence systems difficult? Any ideas?
Frigate NVR Critical RCE Vulnerability Severity
Ask HN: What's the Point Anymore?
I love technology. But I'm no longer optimistic about the future. It seems like AI is not going away, and instead of building reliable software, managers push people to use AI more, as long as they ship products. Everything else is being destroyed by AI: art, music, books, personal websites. Why read a blog post when a Google AI summary can just give you the gist? Why read a book when you can just get an AI summary of it? Why pay artists for music when you can just generate an endless amount of AI music?
And even day-to-day chores are being automated away with tools like AI assistants. The only things you're left to do are eat and take a sh*t throughout the day. How should people make money? No idea, since in the "prosperous future" everything is replaced by AI.
So my question HN: What's the point anymore? Why keep going and where to?
Ask HN: What recent UX changes make no sense to you?
For me, it's the shift toward thin, auto-hiding scroll bars. I see it on macOS, Linux (Mint), mobile phones, and probably Windows too (though I haven't used Windows in a while).
Is this really a cleaner look? I've always loved visible scroll bars because they act as useful guides for where I am on a page and how much content remains, and they're easy to drag. Now you have to hover over them first.
I'm curious what UX changes have stood out to you lately, for better or worse. Maybe some designers reading this forum will take notes.
Ask HN: Notification Overload
I'm looking for tools or methods to better curate the deluge and cacophony of notifications, emails, texts and phone calls I imagine we are all getting inundated with every day, with increasing entropy and volume.
The amount of "notifications" I get every day is overwhelming, to the point where I often switch my phone to "silent", leave it in another room, or even turn it off for periods of time. The problem is that I then miss important things, and they get buried.
I've spent hours and hours unsubscribing, deleting, uninstalling, toggling settings, but then I find myself reinstalling, resubscribing. It's just a mess, and exhausting to just think about.
The reason I'm writing this is partially to vent. I just realized that my closest friend's birthday was a few weeks ago. I had it in my calendar, but never saw the notification. Yes, I should be more organized, and yes, it's not the end of the world. But dammit, I get so much crap from this bionic appendage, and still I can't use this tool to help me remember important things.
It just seems like it's getting worse every year.
Hopefully this is helpful to others.
P.S. can we please stop with the "would you like all or some cookies" popup on every friggin website?
P.P.S. can websites stop asking for permission to invade my OS?
P.P.P.S. does anyone else ever want to run away and be an off-grid hermit?
How much recurring income do you generate in 2026 and from what?
It’s always interesting to hear about the (side) hustles people are running that, in their opinion, provide recurring revenue, either as a good source of passive income or as their main source of income.
Ask HN: "Vibe Researching" with AI – Anyone Using It for Real?
The concept of "vibe researching" – using AI to rapidly explore, synthesize literature, and generate novel research ideas or frameworks – seems promising. Beyond just literature reviews, it could act as a brainstorming co-pilot.
Has anyone here seriously used AI (e.g., Claude for long-context paper analysis, custom GPTs on arXiv, or specialized agents) to aid in hypothesis generation, research gap identification, or drafting substantive parts of a paper?
What are the biggest pitfalls regarding accuracy, hallucination of citations, or superficial understanding of complex theory? How do you validate the AI's output?
Do you see it as a legitimate accelerator for early-stage research, or more of a productivity tool for mundane tasks? Any success stories linking it to a tangible research outcome?
Looking for honest experiences from academics, industry researchers, or solo discoverers.
The Anti-Pomodoro Technique: Focus on Taking Breaks, Not Watching the Timer
I’ve never been able to maintain enough focus on a timer. The temptation to get distracted is always strong—and since it’s easy to ignore the timer, I often did.
After failing to follow the Pomodoro method, I’d feel irritated, frustrated, and blame myself. Soon enough, the routine would fall apart, and I’d go back to working in my usual way—without boundaries or timers.
Then I had an epiphany: focusing on the timer forces you into a battle with yourself. And since it’s hard to fight your own subconscious micro-reactions and habits, you end up frustrated. Sticking rigidly to a timer is the wrong goal. The real goal should be taking regular breaks—focus will follow naturally.
To test this idea, I created "Black Screen for Windows" — an app that forcibly blacks out my screens for a few minutes at regular intervals. Usually, that’s 3–5 minutes every 20–30 minutes.
This practice of enforced, regular breaks has not only improved my well-being but also dramatically boosted my productivity—all without the frustration. My ability to focus improved, too, with a small hack: I start with a 30-minute interval, then gradually shorten it until I find a span of time in which I can maintain clean, distraction-free focus.
I find this works better for me than the classic timer-based Pomodoro.
What do you think?
Ask HN: Has Show HN become LLM-prompt-centric?
It seems to me that Show HN is filled with low-effort see-what-I-prompted-Claude posts—no innovation, no real creation, just yet another copy of a copy. If you’re going to prompt an LLM, at least come up with something original, not the millionth text editor.
Where can I find startups looking for fractional product leads?
I am looking for the best place to find startup founders who need to hire fractional roles to get them off the ground. I have 15 years of product building experience in the SaaS world, and am looking to connect with other founders.
Ask HN: Where to find cool companies to work for?
I class myself as a product engineer and have worked with React, NextJS, PostgreSQL, PHP, TypeScript, etc.
I'm tired of using LinkedIn to find a new job. I'm looking for smaller companies that offer remote work.
Anyone know some sources to search?
Ask HN: European alternative to Vercel/Cloudflare for hosting
Hi, I’m looking for a hosting/CDN solution that’s similar to Vercel or Cloudflare Pages/Workers, but based in Europe.
Any recommendations or experiences with European providers?
Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases?
Lately I've been experimenting with this template in Claude's default prompt: ``` When I ask a question, give me at least two plausible but contrasting perspectives, even if one seems dominant. Make me aware of the assumptions behind each. ```
I find it annoying because A) it compromises brevity, and B) sometimes the plausible answers are so good that they force me to think.
What have you tried so far?
Tell HN: I cut Claude API costs from $70/month to pennies
The first time I pulled usage costs after running Chatter.Plus - a tool I'm building that aggregates community feedback from Discord/GitHub/forums - for a day, I saw $2.30. Did the math. $70/month. $840/year. For one instance. Felt sick.
I'd done napkin math beforehand, so I knew it was probably a bug, but still. Turns out it was only partially a bug. The rest was me needing to rethink how I built this thing. Spent the next couple days ripping it apart. Making tweaks, testing with live data, checking results, trying again. What I found was I was sending API requests too often and not optimizing what I was sending and receiving.
Here's what moved the needle, roughly big to small (besides that bug, which was costing me a buck a day on its own):
- Dropped Claude Sonnet entirely - tested both models on the same data, Haiku actually performed better at a third of the cost
- Started batching everything - hourly calls were a money fire
- Filter before the AI - "lol" and "thanks" make up a lot of online chatter. I was paying AI to tell me that's not feedback. That said, I still process agreements like "+1" and "me too" (rough sketch of the filter after this list)
- Shorter outputs - "H/M/L" instead of "high/medium/low", 40-char title recommendation
- Strip code snippets before processing - they just reiterate the issue and bloat the call
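To make the "filter before the AI" point concrete, here's a rough sketch of the kind of local pre-filter I mean. It's illustrative only (C++ used purely for the example; the phrases, thresholds, and function names are made up), not the actual Chatter.Plus code:
```
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Illustrative pre-filter: drop low-signal chatter locally so it never reaches
// the paid API, but keep short agreement signals that still count as feedback.
// Phrases and thresholds here are made up for the example.
static const std::vector<std::string> kNoise = {"lol", "thanks", "ty", "nice"};
static const std::vector<std::string> kAgreement = {"+1", "me too", "same here"};

static std::string to_lower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

static bool worth_sending(const std::string& raw) {
    const std::string msg = to_lower(raw);
    // Agreement markers are real signal even though they are short.
    for (const auto& a : kAgreement)
        if (msg.find(a) != std::string::npos) return true;
    // Known noise phrases and very short messages get dropped before any API call.
    for (const auto& n : kNoise)
        if (msg == n) return false;
    return msg.size() >= 8;
}

int main() {
    const std::vector<std::string> batch = {"lol", "+1", "The export button 404s on Firefox", "thanks"};
    for (const auto& m : batch)
        std::cout << (worth_sending(m) ? "send: " : "skip: ") << m << "\n";
}
```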
End of the week: pennies a day. Same quality.
I'm not building a VC-backed app that can run at a loss for years. I'm unemployed, trying to build something that might also pay rent. The math has to work from day one.
The upside: these savings let me 3x my pricing tier limits and add intermittent quality checks. Headroom I wouldn't have had otherwise.
Happy to answer questions.
Ask HN: How much emphasis to put on unit testing and when?
I'm wondering if a shift has occurred. When I started as a junior software engineer, over a decade ago, I learned about unit testing, integration testing, system testing. The whole codebase we worked on was thoroughly unit tested, and had layers of integration tests and system tests as well. I've worked for other employers since and in some cases any kind of automated testing was completely absent. Still, the message I got when reading and keeping up with best practices was: unit test ALL the things!
I've personally found that when the architecture of the system is not yet mature, unit tests can get in the way. Terribly so. Integration tests or system tests that assert behavior seem like the better starting point in that scenario and others, including when there are no tests at all yet.
I recently read a statement about letting go of a strict "unit test everything" mindset and going for integration tests instead. I think it probably depends, as with everything, on the type of system you're working on, the maturity of the system, the engineers' experience with automated testing, etc.
I'd be interested to learn when each type of testing helps you and when it gets in the way (and what it gets in the way of).
I built a C++ runtime with immutable objects and no GIL
I’ve spent the last few months rethinking how a dynamic language runtime should interact with modern hardware. The result is ProtoCore and its first major implementation, ProtoJS.
Most dynamic runtimes (Python, Ruby, and even JS engines) handle concurrency through Global Interpreter Locks (GIL) or complex memory barriers because managing mutable state across threads is notoriously difficult.
With ProtoCore, I took a different path based on three pillars:
Immutability by Default: All core data structures are immutable. Instead of locking, we use structural sharing for memory efficiency. This inherently eliminates data races at the object level.
Hardware-Aware Memory Model: Objects are cache-line aligned (64-byte cells) to prevent false sharing and optimize cache locality.
Tagged Pointers: We use a 56-bit embedded payload for SmallIntegers, meaning zero heap allocation for most numeric operations.
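To make the tagged-pointer idea concrete, here's a rough sketch of how an 8-bit tag plus a 56-bit inline payload can work. This is illustrative only; the actual ProtoCore cell layout lives in the repo and may differ:
```
#include <cassert>
#include <cstdint>

// Illustrative tagged-cell encoding (not ProtoCore's actual layout): the low
// 8 bits hold a type tag and the upper 56 bits hold either a pointer or an
// inline SmallInteger payload, so small numbers never touch the heap.
constexpr uint64_t kTagBits     = 8;
constexpr uint64_t kTagMask     = (1ull << kTagBits) - 1;
constexpr uint64_t kTagSmallInt = 0x1;

// Pack a SmallInteger into the upper 56 bits (value must fit in 56 signed bits).
inline uint64_t make_small_int(int64_t v) {
    return (static_cast<uint64_t>(v) << kTagBits) | kTagSmallInt;
}

inline bool is_small_int(uint64_t cell) {
    return (cell & kTagMask) == kTagSmallInt;
}

// Arithmetic right shift restores the sign-extended value.
inline int64_t small_int_value(uint64_t cell) {
    return static_cast<int64_t>(cell) >> kTagBits;
}

int main() {
    const uint64_t cell = make_small_int(-42);
    assert(is_small_int(cell));
    assert(small_int_value(cell) == -42);
}
```
The point is that small integers round-trip through a single 64-bit cell without ever touching the allocator.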
To prove the architecture, I built ProtoJS. It uses QuickJS for parsing but replaces the entire runtime with ProtoCore primitives. This allows for real worker thread execution ("Deferred") where immutable objects are shared across threads without copying or GIL-related contention.
Current Status:
ProtoCore: 100% test pass rate (50/50 tests) and a comprehensive technical audit completed today.
ProtoJS: Phase 1 complete, demonstrating real parallel execution and sub-1ms GC pauses.
I’m an Electronic Engineer by training (now a university professor), and I wanted to see if applying low-level hardware principles could fix the high-level concurrency "mess."
I’d love to hear your thoughts on the trade-offs of this immutable-first approach in systems programming.
ProtoCore (the engine): https://github.com/numaes/protoCore
ProtoJS (the JS runtime): https://github.com/gamarino/protoJS
How to DeGoogle Myself?
I recently started a non-profit, and after being approved for Google Nonprofits I tried to enroll in Google Workspace.
I created an account but upon login it prompted me for a phone number for "additional security". After entering my cell, I got the message: "This phone number has already been used too many times for verification."
There seems to be no way around this whatsoever, short of getting a new phone number. There's no way to contact a human being for support. Removing my cell # from other accounts (e.g. college, work) seems to have no effect.
A scary thought came to mind: If Google ever decides to kick me out of their system for my main account, I'm toast. I use it for everything.
How can I begin to practically "deGoogle" myself?
Ask HN: What usually happens after a VC asks for a demo?
I had a VC call that went well. They asked for a demo, mentioned looping in an operating partner, and I shared details etc. Since then it’s been quiet (a day or two).
For folks who’ve raised before or worked in VC: Is this typically just internal review time, or does silence after a demo usually signal a pass?
Not looking for validation, just trying to understand how this phase usually plays out.
Thanks.
Ask HN: If Everyone Can "Build" a SaaS, What Becomes Valuable?
The narrative is shifting: from “no-code” to prompts that generate full-stack apps. Reports suggest the future may belong to “Agent Platform Companies” with usage-based pricing, not traditional seat-licensed SaaS.
This leads to a two-part question:
Future of SaaS: If custom, “good enough” software becomes trivial to create for specific needs, does the traditional SaaS model collapse? Will value shift entirely to AI platforms and infrastructure, with most SaaS becoming commodities?
The New “Valuable Thing”: In a democratized creation world (like TikTok for video), what becomes the scarce asset? Is it distribution, vertical-specific data/models, or integration & trust? What would the “App Store” for these AI-generated micro-SaaS look like?
Looking for perspectives from builders, investors, and SaaS users.
Tell HN: JumpCloud 2FA appears to be down
It seems to me that authorization services like JumpCloud should always provide some way to contact support when you are locked out.
Otherwise, how can one obtain support? Or notify them of some issue?
Does anyone happen to know where I can send an email to let them know? JumpCloud's status page shows all normal, of course.
Ask HN: Can an MMO be vibe coded?
Ask HN: Is there a good open-source alternative to Adobe Acrobat?
Ideally, it would not just be a PDF reader but would also have functionality to remove pages, add pages, sign documents, and edit forms.
Ask HN: What are the most significant man-made creations to date?
I have the following, in no particular order:
1. Languages (natural, e.g. English, and formal, e.g. Mathematics, Python, etc.)
2. Music
3. Cuisine
4. Transistors
5. MS Excel
6. Rockets
7. P2P file sharing
8. Encryption
What do you think? I think I'm missing historical inventions, e.g. the Gutenberg press.
Generative AI failed to replace SaaS
Satya Nadella & others preached about GenAI replacing the "Business Logic" or the "Middle Tier" layer of various SaaS services. The idea was that users would interact with a GenAI model (via a chat interface), and then the model would interact directly with a database. This would have obviously mooted the need for almost all SaaS applications.
What's been happening instead is that GenAI has been moving up the "stack," further and further away from the database. No one's talking about replacing SaaS anymore. Instead, GenAI has become a sort of garnish, something that you sprinkle ON TOP of existing SaaS applications without truly replacing any of their pre-existing features.
This shift "up the stack" speaks volumes to the impotence of our current models. They are so incapable and unreliable that they couldn't replace a single part of Excel, for example. Instead, all Microsoft did was "slap GenAI on top", placing the burden on users to "figure out" how to make it useful. We went from "replace it with a chat agent" to "just slap a chat agent on top and hope for the best." In other words, we actually made our SaaS applications MORE complicated instead of consolidating their features and therefore simplifying them.