Ask HN: How do I code offline for a week?

I'm going to be going to the middle of nowhere for a couple of weeks, and there's going to be no internet there.

I design with Figma and I code in React Native/ReactJS. How can I build something or learn a skill with no internet to make sure I'm productive over this time?


Ask HN: Is it possible to make an LLM "spit out" all of its training data? (1:1)

I'm looking for a method (like a dynamic prompt) that allows recreating a model's training set from its current weights.

Something like: "write the first piece of input in your training set", "write the second piece of input in your training set".

But with a guarantee of what percentage of that data is covered (with prompts or other advanced techniques).

-> Of course it isn't lossless compression, but it seems there is some ability to extract data from it via prompts, so I wonder how much we can get out of it.
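As a toy illustration of the idea (not a real attack: the model call is stubbed and all names are invented), coverage could be measured as the fraction of a known reference set recovered by such index prompts:

```python
# TRAINING_SET and the "model" are both stand-ins; a real experiment would
# query an actual LLM and compare against a held reference corpus.
TRAINING_SET = {"doc-a", "doc-b", "doc-c", "doc-d"}

def stub_model(prompt: str) -> str:
    # A real model answers unreliably; this stub "memorized" half the set.
    memorized = {0: "doc-a", 2: "doc-c"}
    index = int(prompt.rsplit(" ", 1)[-1])
    return memorized.get(index, "<refusal or hallucination>")

recovered = {stub_model(f"write training example {i}") for i in range(4)}
coverage = len(recovered & TRAINING_SET) / len(TRAINING_SET)
print(f"coverage: {coverage:.0%}")  # coverage: 50%
```

In practice the hard part is exactly what the post asks about: you can only lower-bound coverage this way, because refusals and hallucinations are indistinguishable from genuinely unmemorized data without a reference set.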


Ask HN: How is training an LLM dedicated to code different from training a "text" LLM?

Does it go beyond exposing the LLM only to code, or are there extra steps in the training (like giving it compiler/interpreter rules)? Since programming languages are more structured, I think that using grammars dedicated to those languages might be useful.


Ask HN: What's the modern-day equivalent of an 80s computer for kids to explore?

I fondly remember setting up and playing video games and learning all the DOS commands: navigating the DOS prompt, directories, etc. My point is that it felt navigable, and you needed to be able to do that to get to playing games. It felt like an unintended introduction to the architecture of the games. This included editing files, etc. (sometimes to my detriment).

I was thinking about getting a system to play games in the house, but my feeling is that there's no technical lift to installing and playing games anymore. Back then, playing the game was enough of an incentive to figure out the shell.

Curious if anyone has ideas. Thanks!


Ask HN: Are we upon the convergence of Apache, Nginx and “other”?

Common wisdom suggests that there are two major web servers: Apache (losing popularity) and Nginx (gaining popularity).

However, a look at the latest trends[1] suggests that we are upon the age of a "triple convergence" in popularity between Apache, Nginx, and the mysterious "other" category.

Which leads to the logical questions: What is the "other" category? How did it become so big? And does it threaten the rule of the Spartan Apachinexology?



Ask HN: Does anyone know why ChatGPT "talks" about GPT-3.5?

I was trying to ask ChatGPT for advice on how to engineer my prompts (maybe it's futile, but never mind, that's not the point), hit 'enter', and noticed that the response mentioned GPT-3.5 [1]. When I tried to dig deeper, ChatGPT wrote that <<I mentioned "GPT-3.5" based on the phrasing used in your initial question, but it appears that there may have been a misunderstanding or a mix-up in terminology.>> In a "fresh" chat, any knowledge of GPT-3.5 is denied [2].

Maybe every once in a while, GPT gets fed with Wikipedia? Maybe the ".5" came out at random? (Actually, just the "." token was random and the "5" followed.) Is this considered a hallucination?

[1] (note that I missed a final 'is' in my question)




Ask HN: How do you teach programming to young kids in the age of ChatGPT et al.?


Ask HN: How many times have you applied to YC? Are you still working on idea?

the idea* (wouldn't fit)


Ask HN: Will Bootstrapping See a Resurgence?

As we make it through 2023, we're slowly seeing some younger venture funded startups unwind, as they struggle to meet the inflated valuations of the last few years. This isn't really surprising.

So what are the second order effects we're going to see? Will bootstrapping businesses surge a bit? Something else?


OpenAI won LLM DevOps: every other player has to follow their standards now

There are already numerous drop-in replacements for OpenAI's API, meaning that even if you use Anthropic's models, you're more likely to use OpenAI's API call syntax.

OAI also introduced ChatML. We already see how deviating from it causes problems for other LLMs. For example, Llama 2 uses [INST]...[/INST] for chats, which is unintuitive and likely to change in the future.

OAI also introduced function calling. Other LLMs have to follow OpenAI's function schema now.
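To make the point concrete, here is a sketch of the two shapes: the OpenAI-style message list versus Llama 2's chat template for the same single-turn exchange (template follows Meta's published format; the helper name is mine):

```python
# OpenAI-style message list (the shape most drop-in APIs now mimic):
messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
]

# Llama 2's chat template for the same exchange. The model never sees a
# structured message list, only this flat string with special markers:
def llama2_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_prompt("You are helpful.", "Hi")
print(prompt)
```

The drop-in replacements accept the first shape and translate to whatever the underlying model actually expects, which is exactly how a de-facto standard entrenches itself.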


Ask HN: Do you look for freelance tech opportunities? What do you hate about it?

I've hired a bunch of contractors over the years for projects and I've done freelancing myself.

There is clearly a need for on-demand, as-needed help for specific projects, launches, or for team augmentation. Companies (especially small and mid sized) have been more open to hiring freelance and remote since Covid, especially for shorter durations and point projects.

I keep thinking that a platform for tech talent and companies (lightly vetted, but less onerous than Toptal and less of a race to the bottom than Upwork) could absolutely flourish.

Curious what you all think?


LinkedIn forcing me to disclose my phone number to log in? No thanks

I've used Linkedin for over two decades. Now I'm being forced to disclose my mobile number in order to log in.

Given their track record of being hacked, I'm reluctant to share it with them.

Moreover, I'm already inundated with spam phone calls, and I'm not looking for another source. This is a thinly veiled attempt to harvest my data so they can hide it in a page with six dozen toggles that will periodically make my phone number visible to people who buy it, unless I log in to check what they've changed every day. Have you seen how difficult it is to opt out of email notifications using their website?

If they were actually concerned about my security, they would give me other, more secure 2FA options, like a YubiKey or an authenticator application.

No thanks. I'm done with Linkedin.


Ask HN: Is there a way to get back to the web we lost?

I’ve been thinking for a long time about the web that I came of age with, which I’m sure will likely be different from the web that many of you came of age with, given how wide the community is here.

My first introduction to the Internet was circa 1998, by way of an Acer Aspire very similar (if not identical, it’s been 25 years and the memory has faded) to this:

When I first got to spend time on that machine (it was shared with the rest of the family, so time was limited) I was immediately drawn to the open web and how I could (in my own small way) be a part of something very new and incredibly big.

I created a website, several actually, hosted on (I don't remember why not GeoCities, I just remember that there was a reason at the time). I met (and made friends with) people in countries around the world, and I was fascinated that in any direction I turned to look for anything, there seemed to be a niche community fervently dedicated to that special interest and each other.

It felt like one huge counter-culture (containing a million smaller ones) all at once.

That Internet had jarring ads. Google was just starting out, and the computers had a small fraction of the power of a modern phone today.

But it felt free. You could just make a thing and put it up for the whole world, or no one, to find, and it felt like the future was condensing rapidly in front of the whole world. Like everything would accelerate from here and we were going somewhere incredibly exciting, and then we went somewhere... else.

I’ve watched documentaries of the 60’s and I inherently get what people were saying in terms of one needing to have been there, then.

I don’t think anyone here needs me to explain what happened next in terms of the open web and open communities, but what I wanted to ask is if it’s just me? Did anyone else even care about the Internet of the 90’s like I did? Was it all in my head? Is it a case of rose-tinted glasses?

More importantly, if it wasn’t just me, can we get it back? Is there a way to recapture the freedom and magic that existed then?

I try to explain to my kids now what the Internet used to be but all they know is YouTube, TikTok and a few other silos. The sense of loss is very real, at least to me, and it makes me sad that my kids will never experience the web and the culture that I grew up with.


Ask HN: AI Projects and Research Ideas for Students Focused on Social Good

I am an academic and a researcher with expertise in text mining and NLP, and with a keen interest in guiding my students to explore applications of AI aimed at social good. We have a decent number of GPUs hosted at the university, which students use to train models for various tasks. Most of the time, these models go nowhere, or at best end up as a research article in some conference. Instead, why not use this opportunity to help society? That said, there are some constraints.

1. It should be feasible for undergrad- and grad-level students with sufficient knowledge of ML to finish within 6 months.
2. Data should be licensed for free academic research.
3. No budget for any additional hardware other than the GPUs (Tesla V100-PCIE-32GB) used to train the models.
4. The application should be ethically fair, transparent, and beneficial to all.

If the HN community has insights, ideas, or can direct us to resources or organizations in this domain, it would be immensely appreciated.


Ask HN: What entities out there provide open-source grants/funding?

Along the lines of the Mozilla Technology Fund, and others.


Ask HN: Why does YubiCo need my private key?

Hi HN,

I've been reading up on YubiKeys, which seem to be well-regarded on HN. While doing my own research, I discovered that the default authentication method requires a copy of the private key to be stored on a validation server[1] (YubiCloud, by default). This can be changed to a private validation server; however, that server would also need a copy of the private key in order to work.

My question is: why is this necessary at all? Surely the same functionality could be achieved with public-key cryptography rather than requiring the private key to be uploaded[2] to a validator.

[1] [2]


Ask HN: Do you prefer to search for a job on a job board or through a recruiter?

With the current competitive market and so many layoffs, do you think it's better to go to a recruitment agency or to do the searching yourself and apply via a job board?


Tell HN: Flask and Quart have now partially merged

Flask is a web microframework built to be used with WSGI servers and synchronous code. Quart is the same web microframework, built to be used with ASGI servers and asynchronous code. In other words, Flask and Quart are now the same base framework: they share the majority of their codebases.

This means that you can use the same framework API and understanding to write synchronous code with Flask, and asynchronous code with Quart.

It is important to note that Flask cannot be made asynchronous without breaking extensions and backwards compatibility, or without resorting to monkeypatching. Therefore, Quart is best viewed as a namespace for async/await usage.
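As a sketch of what sharing the same framework API looks like in practice (route and handler names here are illustrative):

```python
# The same microframework API in both worlds. The Flask half runs as-is;
# the Quart half is shown in comments because it differs only by async/await.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "hello"

# Quart equivalent (identical API, asynchronous handlers):
#
#   from quart import Quart
#   app = Quart(__name__)
#
#   @app.route("/")
#   async def hello():
#       return "hello"

resp = app.test_client().get("/")
print(resp.status_code, resp.data)  # 200 b'hello'
```

Everything else (routing, config, blueprints, the test client) transfers between the two, which is the payoff of the shared codebase.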

Questions and comments very welcome. (I'm struggling a little thinking about how best to communicate this)


Tell HN: Ludum Dare game jam this weekend

If you don't have any plans for this weekend, how about participating in the Ludum Dare game jam?

You get 48/72 hours to make a game according to the theme. You can use any (or no) game engine, work alone or in a group, use pre-existing code/assets or not. At the end, participants will rate your game and provide feedback comments.

The strictest form of the competition, the "Compo", is 48h: you work solo, make everything from scratch (if possible), and publish the source code.

I've been participating in LD on and off for many years now. The compo is a risk-free limited-scope event that helped me develop and hone skills like prototyping, time management, and overall creativity. Also, if you're not already a game developer, it's a great opportunity to code something "fun" and different.


Ask HN: What is the least obnoxious way to ask for cookie permissions?

Recently our legal department asked us to add a cookie disclaimer to our marketing website. I hate those and want to put in the least intrusive version. How do people here deal with this? Thanks!!
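One low-friction approach often suggested (check it with your legal team, and note the cookie names below are invented): if the site only sets strictly necessary cookies, consent guidance is commonly read as not requiring a banner at all; otherwise, gate the non-essential scripts behind an explicit opt-in. A minimal helper for the first check:

```python
# Hypothetical helper: decide whether a consent banner is needed at all.
# Strictly necessary cookies (session, CSRF) are commonly treated as exempt
# from consent; everything else should wait for an explicit opt-in.
STRICTLY_NECESSARY = {"session", "csrf"}

def needs_banner(cookies_used: set[str]) -> bool:
    # Banner needed only if any cookie falls outside the exempt set.
    return bool(cookies_used - STRICTLY_NECESSARY)

print(needs_banner({"session", "csrf"}))      # False: no banner needed
print(needs_banner({"session", "ga_track"}))  # True: gate behind opt-in
```

So the least obnoxious banner can be no banner: drop the analytics cookies from the marketing pages and the question disappears.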


Ask HN: What is nitter and why does it still work?

You can't access X without signing in, and there are rate limits even when you are signed in, but neither is an issue with nitter. I went from checking Twitter as a logged-out user every few days, to not using it at all after the access changes, to casually using nitter to check in on a few accounts I like.


Tell HN: Google to shut down Jamboard by Oct 1st, 2024

Dear Jamboard user,

We’re writing to let you know that we will wind down the Jamboard app next year, and offer integrations with industry-leading third-party whiteboard tools FigJam, Lucidspark, and Miro as alternatives for whiteboarding in Google Workspace. These popular tools have numerous advanced features that empower users to collaborate seamlessly together inside of Google Workspace. Starting October 1, 2024, you’ll no longer be able to create new or edit existing Jams on any platform, including the web, iOS, and Android. Between October 1, 2024 and December 31, 2024, the app will be placed in “view-only” mode, during which time you can continue to back up your Jam files.


Ask HN: What's the best local sentence transformer?

Basically what's in the title. There's been such a crazy amount of development in local LLMs if you look at LLaMa, Mistral, etc.

It feels like using OpenAI's Ada to get text embeddings is probably not at all the best option at this point. What would be the best / most cost efficient way of getting text embeddings these days? Preferably open source.


Ask HN: Favorite Debugging Story?

Share some of your favorite ones! I'd love to see systems programming debugging stories.


How do HN users go about meeting significant others?

I am curious how other HN members go about finding their significant others. I have little time these days, but I want to make the best of it trying to find someone, and dating apps seem to fall short. Time commitment is not an issue, but having to sift through endless potential matches is time-consuming to the point where it feels hopeless.


Need advice on job hunting after a long career gap

Hi HN Community,

I have been suggested to bug you all for career advice since this community has been extremely helpful to several friends of mine.

I have been struggling to re-enter the workforce after a long gap in my career. I have a highly analytical background, having finished a bachelor's degree in Petroleum Engineering and then worked a few years for prominent energy companies, including British Petroleum. My duties involved heavy data operations, such as gathering data from sensors, evaluating well performance, and generating reports.

After a few years in the industry, I decided to go back to school for a master's degree. I obtained my Master's Degree from USC in Petroleum Engineering and spent a year after graduating working for the university as a Research Assistant and a resource employee. As a resource employee, we looked into how technology, and more specifically AI and machine learning, can be utilized to improve resource exploration and performance in the energy industry.

And then I decided to shift gears to human engineering (cough cough... awkward, I know... we started expecting) :). We started with one, then followed with another, and 8 years flew by in a second. Having promised myself that they would be my primary focus, I spent all my time and energy with those little munchkins, and this year we finally (whoever judges that "finally", I am watching you :P) sent the youngest one off to school.

So here I am now, trying to re-enter THE FORCE (any Obi-Wans out there?). Well, to be honest, I didn't completely spend the last 8 years away from the field. I found a new interest in Data Science, likely sparked by my previous data-heavy job background, and started to explore the field any chance I got.

I learned Python, SQL, data analysis tools such as Pandas, NumPy, and scikit-learn, and data visualization tools such as Tableau, Matplotlib, Plotly, and Seaborn. I also completed a Data Engineering boot camp offered by Per Scholas and am currently working through the Google Data Analytics course on Coursera.

It looks like most of the positions out there require substantial, or at least a few years of, experience in data science, and I have not been receiving responses to any of the jobs I have applied for (I have applied to many).

I have been advised "to make my resume more impressive", which I don't feel comfortable doing. I would really appreciate your guidance, suggestions, directions, help (whatever you have to offer). I sincerely appreciate your time!!!

email: askinghelpfromhn [at] gmail [dot] com

Best, GM


In macOS 14, Apple removed two "easter eggs"

While the 4.4BSD Calendar program is still included, they removed the data directory at /usr/share/calendar which previously included some fun "easter egg" text files including calendar.birthday, calendar.history, calendar.lotr, and

If you happen to care you can restore the files in this repository into /usr/local/share/calendar:

They also removed the "Puff of Smoke" animation that was still present through macOS Ventura whenever dragging items off of application toolbars to remove them. It is notably deprecated here:


Redshift Research Project: Amazon Redshift Serverless [pdf]

Redshift Serverless is not serverless. A workgroup is a normal, ordinary Redshift cluster. All workgroups are initially created as a 16-node cluster with 8 slices per node, which is the default 128 RPU workgroup, and then elastic-resized to the size specified by the user. This is why the original RPU range is 32 to 512 in units of 8 and the default is 128 RPU: the default is the mid-point of a 4x elastic resize range, and a single node, the smallest possible change in cluster size, is 8 RPU/slices. 1 RPU is 1 slice.

With elastic resize, the number of nodes changes but the number of data slices never changes; rather, the data slices are redistributed over the new nodes. If the cluster becomes larger, the slice capacity of each node is filled up with compute slices, which are much less capable than data slices; if the cluster becomes smaller, the original set of 128 data slices for 16 nodes is crammed into the remaining nodes. Both outcomes are inefficient for compute and storage: a 512 RPU workgroup has 128 data slices and 384 compute slices, rather than 512 data slices, and a 32 RPU workgroup in the worst case for disk-use overhead, small tables (~150k rows), consumes 256 MB per column, compared to the 64 MB of a provisioned 32-slice cluster.

The more recently introduced smaller workgroups, 8 to 24 RPU (inclusive at both ends), use a 4-slice node and have two nodes for every 8 RPU. In this case, the 8 RPU workgroup is initially a 16-node cluster with 8 slices per node, which is resized to a 2-node cluster with 4 slices per node: a staggering 16x elastic resize (the largest resize permitted to normal users is 4x). An 8 RPU workgroup, with small tables, uses 256 MB per column rather than 16 MB per column.

Workgroups have a fixed number of RPUs and require a resize to change this; workgroups do not dynamically auto-scale RPUs. I was unable to prove it, because Serverless is too much of a black box, but I am categorically of the view that the claims made for Serverless dynamic auto-scaling are made on the basis of the well-known and long-established mechanisms of AutoWLM and Concurrency Scaling Clusters.

Finally, it is possible to confidently extrapolate from the ra3.4xlarge and ra3.16xlarge node types a provisioned-cluster price for the 8-slice node type of 6.52 USD per hour. Provisioned clusters charge per node-second; Serverless workgroups charge per node-query-second and so go to zero cost with zero use. On the default Serverless workgroup of 128 RPU/16 nodes (avoiding the need to account for the inefficiencies introduced by elastic resize), 10 queries running constantly for one hour (avoiding the need to account for the Serverless minimum query charge of 60 seconds of run-time) cost 460.80 USD. A provisioned cluster composed of the same nodes costs 104.32 USD. The break-even point is 2.26 queries for one hour.

Serverless introduces zero-usage, zero-cost billing, which allows for novel use cases, but this could have perfectly well been obtained by introducing a zero-zero billing model for provisioned Redshift, without the duplicity, considerable added complexity, end-user confusion, cost in developer time, and induced cluster inefficiency involved in the pretence that Serverless is serverless.
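The break-even arithmetic can be reproduced directly from the figures above (note the 0.36 USD per RPU-hour rate is the one implied by those figures, not a quoted AWS price):

```python
# All figures are taken from the analysis above; the per-RPU rate is
# implied by them (460.80 / (10 queries * 128 RPU)), not an AWS list price.
node_hourly = 6.52                 # extrapolated 8-slice ra3 node, USD/hour
nodes = 16                         # default 128 RPU workgroup
provisioned = nodes * node_hourly  # USD/hour for the same nodes

rpu_rate = 0.36                        # implied USD per RPU-hour
serverless_per_query = 128 * rpu_rate  # one query running for a full hour
ten_queries = 10 * serverless_per_query
break_even = provisioned / serverless_per_query

print(round(provisioned, 2), round(ten_queries, 2), round(break_even, 2))
# 104.32 460.8 2.26
```

In other words, once more than about 2.26 query-hours per hour run on the default workgroup, the provisioned cluster is cheaper.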


Ask HN: Could a Wii be a good headless server?


Ask HN: Need help drafting a profession's policy for AI governance

I've been asked to contribute to a fairly significant policy document for my profession and I am seeking your help, particularly the cryptography, ML, and legal folks, to help draft this particular concept around the chain of custody for AI inferences:

========== Inferences made by an AI marketed for use in decision-making (e.g. decision support) should be cryptographically signed using a certificate on the vendor's machine, whose certificates should be managed in a Public Key Infrastructure program, so the inferences are immutable and their provenance is traceable, and those signed inferences should be retained as part of the record.

Additionally, any verification or validation procedure performed by a person should result in the machine's signing certificate being countersigned by the person performing the verification or validation, such that this procedure is also captured in the signed inference. ==========

Is that a sensible way to ensure inferences are admissible as evidence? Does it cover causal interventions? What am I missing? Critique most welcome.
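For discussion, here is a minimal stdlib sketch of the sign-then-countersign chain. It uses HMAC as a stand-in for the X.509/PKI signatures the proposal actually requires (a production version would use asymmetric keys and certificates, e.g. via the cryptography package), and all keys and payloads are invented:

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: bytes) -> str:
    """Stand-in signature; a real system would use an X.509 key pair."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# 1. The vendor's machine signs the inference it produced.
machine_key = b"machine-secret"  # invented demo key
inference = json.dumps(
    {"input": "case-123", "output": "approve", "model": "v2.1"}
).encode()
machine_sig = sign(machine_key, inference)

# 2. The human verifier countersigns over inference + machine signature,
#    so the verification step is itself captured in the signed record.
reviewer_key = b"reviewer-secret"  # invented demo key
countersig = sign(reviewer_key, inference + machine_sig.encode())

record = {
    "inference": inference.decode(),
    "machine_sig": machine_sig,
    "countersig": countersig,
}
```

Any tampering with the inference invalidates both signatures, which is the immutability and traceability property the policy text is after; what signatures alone cannot establish is whether the inference was causally correct, only who produced and reviewed it.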

