Ask HN: Share your personal website
Hello HN! I am putting together a community-maintained directory of personal websites at <https://hnpwd.github.io/>. More details about the project can be found in the README at <https://github.com/hnpwd/hnpwd.github.io#readme>.
As you can see, the directory currently has only a handful of entries. I need your help to grow it. If you have a personal website, I would be glad if you shared it here. If your website is hosted on a web space where you have full control over its design and content, and if it has been well received in past HN discussions, I might add it to the directory. Just drop a link in the comments. Please let me know if you do not want your website to be included in the directory.
Also, I intend this to be a community-maintained resource, so if you would like to join the GitHub project as a maintainer, please let me know either here or via the IRC link in the README.
By the way, see also 'Ask HN: Could you share your personal blog here?' - https://news.ycombinator.com/item?id=36575081 - July 2023 - (1014 points, 1940 comments). In this post, the scope is not restricted to blogs though. Any personal website is welcome, whether it is a blog, digital garden, personal wiki or something else entirely.
UPDATE: It is going to take a while to go through all the submissions and add them. If you'd like to help with the process, please send a PR directly to this project: https://github.com/hnpwd/hnpwd.github.io
Ask HN: Claude Opus performance affected by time of day?
I am a big fan of Claude Opus as it has been very good at understanding feature requests and generally staying consistent with my codebase (completely written from scratch using Opus).
I've noticed recently that when I am using Opus at night (Eastern US), it goes down extreme rabbit holes on the same types of requests I put through on a regular basis. It is more likely to undertake refactors that break the code, then iterate on those errors in a sort of spiral. A request that would normally take 3-4 minutes turns into a 10-minute adventure before I revert the changes, call out the mistake, and try again. It will happily admit the mistake, but the pattern seems consistent.
I haven't performed a like-for-like test, which would be interesting, but has anyone else noticed the same?
Ask HN: How are you doing RAG locally?
I am curious how people are doing RAG locally, with minimal dependencies, for internal code or complex documents.
Are you using a vector database, some type of semantic search, a knowledge graph, a hypergraph?
Ask HN: How can we solve the loneliness epidemic?
Countless voiceless people sit alone every day and have no one to talk to, people of all ages, who don't feel that they can join any local groups. So they sit on social media all day when they're not at work or school. How can we solve this?
Ask HN: Browser extension vs. native app for structured form filling?
I’m working on a project called Injectless — a browser extension that allows websites to explicitly declare which data they are allowed to inject into external sites, fully controlled by the user.
Note: This post was translated to English using AI. My native language is Spanish.
The Problem:
Users of SaaS apps (accounting, project management, etc.) often need to repeatedly copy data into external forms (government portals, client systems, etc.). Today this is a tedious, fully manual process.
My Current Solution
A browser extension where:
- Websites expose an injectless.json declaring which fields they can fill and on which domains
- The user explicitly installs the integration (one-click opt-in)
- When visiting an allowed site, the extension offers to “paste” each field
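For illustration only, a hypothetical injectless.json (the schema, field names, and domain here are invented; the real format may differ):

```json
{
  "name": "Acme Accounting",
  "version": 1,
  "allowed_domains": ["portal.tax.example.gov"],
  "fields": [
    { "id": "vat_number",   "label": "VAT number",   "source": "company.vat" },
    { "id": "company_name", "label": "Company name", "source": "company.name" }
  ]
}
```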
The Doubt
A friend suggested that instead of a browser extension, this should be a native app (similar to KeePassXC or Espanso) that:
- Works in any browser without installing multiple extensions
- Pastes sequences of fields using TAB (simpler, more universal)
- Works even outside the browser
- Avoids extension permissions, CSP issues, Shadow DOM, etc.
My Concerns About a Native App
- Mobile: Browser extensions do work on mobile (Safari iOS, Firefox Android). Native apps would face heavy sandboxing restrictions
- UX: The extension popup can show exactly which fields are available for the current page. A native app would be more “blind”
- Context: The extension knows which page you’re on and can automatically validate allowed domains
The Question
What seems more valuable / practical?
A) Browser extension (current approach) — more context, mobile support, clearer UX
B) Native app like Espanso/KeePassXC — more universal, single install, simpler
C) Both — native app as a base + optional extension as a companion for better UX
Has anyone worked on something similar?
What trade-offs might I be missing?
Thanks!
Ask HN: Is it still worth pursuing a software startup?
Considering there is very little moat left in software and big companies can copy your product in no time?
Ask HN: Do you think college is/was worth it?
I'm referring mostly to the educational value you received. Colleges increasingly have the reputation of being run like factories: less concern for the quality of their lecturers, and more for boosting admissions numbers, even if it means dropping standards. To what extent is the knowledge you gained of real value to you? Were you well prepared to enter your field? Do you feel your degree is more than a guarantee to employers that you're not a complete bozo?
I think it was mostly worth it. Some upper-level classes were half-baked, with a few faculty members of dubious competence. But I did learn enough to apply myself to my chosen field.
Tell HN: The way I do simple data management for new prototypes
Hi folks! I have had huge success with this approach on a prototype:
- Store all data as JSON
- On app load: fetch the full JSON to the client
- When the user changes something: change the JSON locally, and every 10 seconds save the whole JSON to the backend as a single file
- Also every 10 seconds, load the updated JSON from the backend to the client.
Yes, I know: parallel-access problems, no schema, no DB, a file as storage. But it makes life so much easier and speeds up development at the start! I have been at this for over 20 years, and I like dumb, stupid solutions applied properly. Duct taping forever!
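A minimal sketch of the loop above, assuming a local file stands in for the backend and ignoring the 10-second timer (all names invented):

```python
# Whole-state JSON persistence: the client keeps all app state in memory,
# loads it once on startup, and periodically writes the entire blob back.
# In a real prototype the save/load would run on a 10-second timer against
# an HTTP endpoint instead of the local filesystem.
import json
import os
import tempfile

BACKEND = os.path.join(tempfile.gettempdir(), "app_state.json")

def load_state():
    try:
        with open(BACKEND) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # fresh start: empty state

def save_state(state):
    # write to a temp file and rename, so a crash mid-save
    # never leaves a half-written (corrupt) backend file
    tmp = BACKEND + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, BACKEND)

state = load_state()
state["todos"] = ["ship prototype"]  # user changed something
save_state(state)                    # whole JSON goes back as one file
```

The atomic-rename trick in `save_state` is the one bit of rigor worth keeping even in a duct-tape setup, since the whole-file write is the single point of failure.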
Ask HN: What did you find out or explore today?
Doesn't matter what domain and how big or small.
Ask HN: Who's using DuckDB in production?
Inspired by the post that's on the front page as I write this [1] I'm interested to hear about who's using DuckDB in production and how.
We have a tool live that uses it, and I'm quite happy with it, so I'm looking for interesting use cases from others. To be honest, though, I'm also reasonably sure I've just identified today that DuckDB is leaking memory quite seriously [2]. I'm curious whether other people have noticed this, or whether it's less relevant to others because many run DuckDB pipelines in ephemeral environments like Lambdas, where a memory leak might not matter as much.
[1] https://news.ycombinator.com/item?id=46645176
[2] https://github.com/duckdb/duckdb/issues/20569
Tell HN: YouTube gave my username switzerland to a half government organization
I had the username @switzerland since 20.03.2006. Swiss Tourism had a different username since 17.10.2006. Recently, Google gave my username away to Swiss Tourism without any notification; their other username was fine for literally 20 years.
Worse, the app kept showing my username for a long time, while youtube.com/@Switzerland already showed that of Swiss Tourism (Schweiz Tourismus), so I was not aware of the change. Hence, I lost it some months ago.
Why am I telling you this? Because you might lose your username without even being aware of it.
Ask HN: How have you or your firm made money with LLMs?
In many currently active threads, members of the community are alluding to major productivity gains with more recent LLM models. I think it would be illuminating for all of us to hear what sorts of problem domains and lines of business these successes have occurred in.
A good example would be: "My team used Claude Code Opus 4.5 to build and ship an iOS fitness app that now has 10k paying users." This shows that the results of your process found paying customers.
A less helpful example would be: "My team is closing tickets faster than ever" or "I finally finished the novel I have been working on and my friends say it's great!" These are less interesting because they do not give us any insight into the market response.
Ask HN: One IP, multiple unrealistic locations worldwide hitting my website
Background: I manage an ecommerce website. Recent bot traffic is up. Most traffic can be traced to one or two IP addresses with hundreds of requests per day. These IP addresses don't have reverse DNS records, and when I map the requests in Cloudflare, one address shows up as requesting from data centers all over the US. What is going on here? Source IP example 173 . 245 . 58 . 0
Chicago, United States (ORD): 340 requests
San Jose, United States (SJC): 330 requests
Los Angeles, United States (LAX): 310 requests
Atlanta, United States (ATL): 310 requests
Dallas-Fort Worth, United States (DFW): 290 requests
Newark, United States (EWR): 280 requests
Washington, United States (IAD): 230 requests
Miami, United States (MIA): 210 requests
Boston, United States (BOS): 140 requests
Singapore, Singapore (SIN): 130 requests
Thanks for any ideas.
Ask HN: What's something you wished you started doing earlier?
Could be in your career, business, or general life.
Something you recently picked up that would have had an even greater impact if only you had started a few years earlier.
I skipped Japan's university exam to write a "computational metaphysics" exam
I am a 21-year-old "Ronin" (3rd-year gap student) from Japan.
Today is the Common Test for University Admissions—a mandatory, once-a-year national exam that serves as the sole gateway to university. Missing it means waiting another full year.
I spent the last 6 years of my life preparing for this single, all-or-nothing event. But this morning, I realized that the only degree I truly need is Resolve.
So, I didn't go.
Instead of taking the test, I traded my admission ticket and years of effort for the power to create Artificial Life. I dedicated this past year entirely to Rust and C++, realizing that it is 100x more exciting to be the one defining society than to be a mere cog turning inside it.
To prove—mostly to myself—that I am not dropping out because I can't do the math, but because I want to solve harder problems, I wrote a fictional entrance exam for a "University of the Universe."
It combines non-perturbative physics, higher category theory, and computational metaphysics to explore the existential dread of being an outlier.
Here is the Abstract and a sample problem.
2026 Entrance Exam: Department of Computational Metaphysics
Abstract This examination probes the candidate's fluency across non-perturbative physics, higher category theory, and computational complexity. It treats the universe not as a physical object, but as a legacy code base running on Planck-scale hardware.
Core themes: local vs. global, perturbative vs. non-perturbative, computable vs. uncomputable, self vs. other.
Problem 5: Privilege Escalation in the Universe Simulator [50 Points]
The universe is a legacy simulation running on a quantum computer with Planck-scale grid $\ell_P$. Memory is holographically allocated on the boundary per the Bekenstein bound. An attacker (physicist) attempts root access via heap overflow.
(a) Buffer Overflow via Black Hole Formation [10 Points]
The Bekenstein bound: $S \leq S_{Bek} = \frac{A}{4\ell_P^2}$
The universe's buffer is hardcoded as `uint64_t` ($2^{64}$ bits).
(i) Using $S_{BH} = \frac{4\pi G M^2}{\hbar c}$, compute minimum mass $M_{overflow}$ (in $M_P$) for out-of-bounds write.
(ii) Show $M_{overflow} \sim 10^{9} M_P \approx 20\,\text{kg}$ (micro black hole scale).
(iii) Conclude: the universe runs without ASLR. Physical constants are stored at predictable addresses. Black holes are heap sprays.
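For what it's worth, here is a worked sketch of (i) and (ii), assuming the buffer saturates at $2^{64}$ bits (converted to nats with a factor of $\ln 2$):

```latex
S_{BH} \;=\; \frac{4\pi G M^2}{\hbar c} \;=\; 4\pi \left(\frac{M}{M_P}\right)^{2},
\qquad M_P \equiv \sqrt{\hbar c / G}.

\text{Overflow: } S_{BH} \ge 2^{64}\ln 2
\;\Rightarrow\;
M_{overflow} \;=\; M_P \, 2^{32} \sqrt{\frac{\ln 2}{4\pi}}
\;\approx\; 1.0 \times 10^{9}\, M_P.

\text{With } M_P \approx 2.18 \times 10^{-8}\ \text{kg}:
\quad M_{overflow} \approx 22\ \text{kg}.
```

Note that $10^{9} M_P$ is of order tens of kilograms; $M_P$ itself is about $22\,\mu\text{g}$.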
You can read the full exam here (Gist): https://gist.github.com/fumi2026/a6d1b9af31e1960448f5333c2a1a1425
(Note: I am currently implementing these first principles into an AI engine running locally on an iPhone X. Demo video coming soon.)
Ask HN: Can companies claim copyright over their LLM-generated codebases?
As tools like Claude Code and Codex become more widely used across industries, will companies be able to claim copyright over their codebases (or products) or impose license restrictions when a significant portion of the code is generated by LLMs?
Ask HN: Iran's 120h internet shutdown, phones back. How to stay resilient?
It has been 120 hours (5 days) since the internet shutdown in Iran began. While international phone calls have started working again, data remains blocked.
I am looking for technical solutions to establish resilient, long-term communication channels that can bypass such shutdowns. What are the most viable options for peer-to-peer messaging, mesh networks, or satellite-based solutions that don't rely on local ISP infrastructure?
Ask HN: What are you working on? (January 2026)
What are you working on? Any new ideas that you're thinking about?
Ask HN: Those who quit tech, moved back home, what do you do?
Especially those who quit tech and went back from western countries to their (non-western) homeland, what do you do now? Are you happier than before? What are the reasons you left?
Ask HN: How do you safely give LLMs SSH/DB access?
I have been using Claude Code for DevOps-style tasks like SSHing into servers, grepping logs, inspecting files, and querying databases.
Overall it's been great. However, I find myself having to review every single command, many of which are repetitive. It still saves me a ton of time, but it's quickly becoming tedious.
I wish I could give the agent more autonomy, like a list of pre-approved commands or actions that it is allowed to run over SSH.
For example:
OK: ls, grep, cat, tail
Not OK: rm, mv, chmod, etc
OK: SELECT queries
Not OK: INSERT, DELETE, DROP, TRUNCATE
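That allow/deny split can be sketched as a thin gate in front of the agent. A hypothetical sketch (the command lists come from above; a real gate needs far more):

```python
# Vet shell commands by their first token and SQL statements by their
# leading keyword before an agent is allowed to run them.
import shlex

SHELL_ALLOW = {"ls", "grep", "cat", "tail"}
SQL_ALLOW = {"SELECT"}

def shell_allowed(command: str) -> bool:
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in SHELL_ALLOW

def sql_allowed(statement: str) -> bool:
    head = statement.lstrip().split(None, 1)
    return bool(head) and head[0].upper() in SQL_ALLOW

# NOTE: a real gate must also reject pipes, `;` chaining, subshells,
# command substitution, and multi-statement SQL (e.g. "SELECT 1; DROP ..."),
# which this deliberately ignores.
```

So `shell_allowed("grep -r error /var/log")` passes while `shell_allowed("rm -rf /")` does not; the hard part is everything token-level matching can't see, which is why many people push the enforcement down to a restricted shell or a read-only DB role instead.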
Has anyone solved this successfully, or at least satisfactorily? What setups have actually worked for you, and where do you draw the line between autonomy and risk?
Ask HN: Has Claude Code changed its usage limits for you?
I hadn't used Claude Code for a couple of weeks, but today when I used it (on Pro Plan) it did a few tasks full of errors and then claimed to hit a rate limit. Normally it will work for at least a feature's amount of work in one day, but in this case it mostly caused problems (with very basic tasks) and then ran out of juice before it could fix them. I know they are suffering from demand-supply problems but I don't recall comms from them saying you're going to get less for your money now?
Ask HN: What are your best purchases under $100?
Curious what items under $100 have made your life better or any meaningful impact.
Revival of this thread from 6 years ago: https://news.ycombinator.com/item?id=23363396. Thought it would be fun to have new answers to this :)
Ask HN: Should Developers Shift from Coding to Architecture in the LLM Era?
If LLMs can generate repetitive code, does our value shift toward system design, trade-offs, and problem framing?
Ask HN: How to make spamming us uncomfortable for LinkedIn and friends?
I got an email from LinkedIn:
> ## colleagues from your company already solved LinkedIn puzzle games
Are you f%%n serious, LinkedIn? This is freaking spam from "LinkedIn games".
The question is not how to stop it with an unsubscribe, but how to make it painful for them to spam us.
Ask HN: Have you ever tried low-code tools for your work?
I'm curious to know if you've ever tried any low-code tools (e.g., OutSystems, Mendix, PowerApps, etc.). If not, why not? What are your top objections to those tools? If so, which ones have you tried and what did you think about them? Any thoughts about them becoming more AI-powered?
Ask HN: AI music covers in 2026?
I asked this back in 2022:
https://news.ycombinator.com/item?id=32723101
What's the latest this year?
I'm not looking for SUNO-generated AI music; that type of AI slop is cheap and easy. I'm looking for amazing voice and instrumentation cloning paired with human creative input.
Tell HN: Execution is cheap, ideas matter again
I had an experience yesterday launching on Show HN that really threw me. The product triggered people's "privacy sense" immediately.
My first reaction was defensive. I took it personally. I thought: Do you really think I’m a scammer? I pour my soul into meticulously crafting products to delight users, not to trick them. Why would I trash all that effort and disrespect my own goals by doing something as stupid as stealing data? It felt insulting that people assumed malice when I was just trying to build something useful.
But after sitting with it, I realized those initial comments—the ones I wanted to dismiss as paranoia—were actually right. Not about me, but about the environment we operate in.
There are enough shady companies, data brokers, and bad actors out there who abuse user trust with impunity. We’ve all seen big corporations bury invasive tracking in their terms of service. As a builder, I don't operate in that world; I’m just focused on making things work. But for users, that betrayal is their baseline reality. They have been trained to expect the worst.
I realized I hadn’t factored that into the launch. I didn’t explicitly state "Your data remains yours" because to me, it was obvious. Why would I want your data? But in an industry that has systematically mined, stolen, and abused user boundaries for a decade, you can’t blame people for checking for the exits. They aren't being "ninnies"; they are being wise.
If I were using a new tool that had access to my workflow, I would want explicit assurance that my IP wasn't being siphoned off. I just forgot to view my own product through the lens of a weary stranger rather than the optimistic builder who wrote the code.
This is especially true now because the landscape has changed. There was an old PG essay about how ideas are cheap and execution is everything. That’s shifting. AI has made execution cheap. That means ideas are prime again.
Because execution is distributed and fast, first-mover advantage, brand, and reputation matter more than ever. Your prompts and your workflow are your IP.
So, privacy isn't just a compliance box; it's a competitive requirement. I don't think we need full-NSA-level paranoia for every tool, but we do need to recognize the environment we are launching into. The "security purists" were right to push back: I didn't think about that aspect enough, and in 2025, trust is the only currency that matters.
Ask HN: Distributed SQL engine for ultra-wide tables
I ran into a practical limitation while working on ML feature engineering and multi-omics data.
At some point, the problem stops being “how many rows” and becomes “how many columns”. Thousands, then tens of thousands, sometimes more.
What I observed in practice:
- Standard SQL databases usually cap out around ~1,000–1,600 columns.
- Columnar formats like Parquet can handle width, but typically require Spark or Python pipelines.
- OLAP engines are fast, but tend to assume relatively narrow schemas.
- Feature stores often work around this by exploding data into joins or multiple tables.
At extreme width, metadata handling, query planning, and even SQL parsing become bottlenecks.
I experimented with a different approach:
- no joins
- no transactions
- columns distributed instead of rows
- SELECT as the primary operation
With this design, it’s possible to run native SQL selects on tables with hundreds of thousands to millions of columns, with predictable (sub-second) latency when accessing a subset of columns.
On a small cluster (2 servers, AMD EPYC, 128 GB RAM each), rough numbers look like:
- creating a 1M-column table: ~6 minutes
- inserting a single column with 1M values: ~2 seconds
- selecting ~60 columns over ~5,000 rows: ~1 second
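As a toy illustration of the column-distributed layout (not the actual engine): if each column lives as its own independent array, a SELECT over a few columns never touches the rest of the table's width.

```python
# Toy column store: each column is an independent array (here a dict entry;
# in a cluster, its own shard), so table width doesn't affect a narrow read.
table = {}  # column name -> list of values

def insert_column(name, values):
    # adding a column costs O(len(values)), independent of table width
    table[name] = values

def select(columns, row_slice):
    # touches only the requested columns, however wide the table is
    return {c: table[c][row_slice] for c in columns}

# a "wide" table: 10,000 columns of 100 rows each
for i in range(10_000):
    insert_column(f"f{i}", list(range(100)))

subset = select(["f0", "f9999"], slice(0, 5))
```

The trade-off is exactly the one listed above: no joins or transactions, and row-oriented operations (full-row inserts, row-level updates) become the expensive path instead of the cheap one.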
I’m curious how others here approach ultra-wide datasets. Have you seen architectures that work cleanly at this width without resorting to heavy ETL or complex joins?
Ask HN: Which system would you trust to run a business you can't afford to lose?
A) A system that summarizes operational signals into health scores, flags issues, and recommends actions
B) A system that preserves raw operational reality over time and requires humans to explicitly recognize state
Why?
The $LANG Programming Language
This afternoon I posted some tips on how to present a new* programming language to HN: https://news.ycombinator.com/item?id=46608577. It occurred to me that HN has a tradition of posts called "The {name} programming language" (part of the long tradition of papers and books with such titles) and it might be fun to track them down. I tried to keep only the interesting ones:
https://news.ycombinator.com/thelang
Similarly, Show HNs of programming languages are at https://news.ycombinator.com/showlang.
These are curated lists so they're frozen in time. Maybe we can figure out how to update them.
A few famous cases:
The Go Programming Language - https://news.ycombinator.com/item?id=934142 - Nov 2009 (219 comments)
The Rust programming language - https://news.ycombinator.com/item?id=1498528 - July 2010 (44 comments)
The Julia Programming Language - https://news.ycombinator.com/item?id=3606380 - Feb 2012 (203 comments)
The Swift Programming Language - https://news.ycombinator.com/item?id=7835099 - June 2014 (926 comments)
But the obscure and esoteric ones are the most fun.
(* where 'new' might mean old, of course - https://news.ycombinator.com/item?id=23459210)