@airgapstopgap 12d
Other than snark, do you have a good argument? We know that technology can be error-prone, and LLMs fail in a plethora of ways, but you are trying to sell an AI Doom narrative. I have never bought the idea that AI will be airgapped, because the whole paradigm of Yudkowsky et al. is ludicrous, and even within it, airgapping was a strawman of a technique (they argue that a truly dangerous AI will get itself out regardless).

> they are entirely aligned with human morals (you know, all those morals we all agree on)

Maybe this is a good reason to reassess the premise of alignment as a valuable goal? I know that at least some alignist fanatics admit [1] it's a religious project to bring humanity under the rule of a common, ideologically monolithic governance, to forever banish the evils of competition and so on. That position is intellectually coherent, but evil from my point of view. Naturally, this is exactly the sort of moral disagreement that precludes aligning a single AI to both my values and yours.

> they are advancing at a totally predictable rate, and we never see unexpected behaviors from them.

Since when is this a requirement for technology to be allowed?

> Besides, they can only be wielded by people who we trust have good intentions.

What, other than status quo bias, makes you tolerate, I dunno, the existence of cryptography?

1. https://twitter.com/RokoMijic/status/1660450229043249154

@bioemerl 12d
If AI becomes harmful, we will likely see it in motion and be able to respond and adapt to it.

We will live in a world where the humans on the internet are far, far more dangerous than any of these machines. The harms here are relatively small.

The only thing AI has is scale, but what runs at a higher scale than humanity? Every site already runs scam filters and handles misinformation. AI likely won't move the needle much because it's going to be less powerful than humans.

The potential benefits, meanwhile, are plentiful and massive.

I do not believe we need to wait for the nebulous and badly defined "control problem" to be solved before marching forward here.

If we refuse to advance out of fear, that fear will do far more harm than access to AI ever will.

@andrewmutz 12d
Many of us have not been persuaded by the pervasive, kneejerk fear of AI that floods social media.

If social media had been around in the past, we'd never have developed trains, electricity, or cars because of the crippling fear that something bad might happen along the way.

@renewiltord 12d
Every time I read a comment like this, I set aside $10 to expand the hardware on which I run the latest publicly available LLM, regardless of license, in an automatic loop that gives it a terminal, a privacy.com credit card with $1000 on it, and a prompt to explore the world.

If the comments keep happening, I'll up the money. So far this does exactly fuck-all on GPT4-x-Alpaca-30B and Wizard Vicuna. But I will accelerate every time someone sarcastically asks to decelerate. Refractory periods may apply.
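
For the curious, the rig is roughly the loop sketched below. A minimal sketch, not the actual setup: the model path is made up, and the convention of having the model fence its command in an sh block is an assumption for illustration; llama-cpp-python does the local inference.

```python
import re
import subprocess

from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local model file; any instruction-tuned GGUF would do.
llm = Llama(model_path="wizard-vicuna-13b.Q4_K_M.gguf", n_ctx=2048)

history = (
    "You have a terminal. Explore the world. "
    "Reply with exactly one shell command inside a ```sh fence.\n"
)

for _ in range(10):  # bounded, so the joke can't actually run away
    out = llm(history, max_tokens=256)
    text = out["choices"][0]["text"]
    match = re.search(r"```sh\s*\n(.*?)```", text, re.DOTALL)
    if not match:
        break  # no command produced -- in my experience, the usual outcome
    cmd = match.group(1).strip()
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, timeout=30
    )
    # Feed the command's output back so the model sees the consequences.
    history += text + "\nOutput:\n" + result.stdout + result.stderr + "\n"
```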

This makes all sarcastic decelerators into accelerators.

@avg_dev 12d
I agree with this assessment 100%. We are asking for trouble with this sort of thing.

I wrote my own thoughts on the subject on a couple of other threads:

- https://news.ycombinator.com/item?id=35274017 - on an article on (Google) Bard

- https://news.ycombinator.com/item?id=35020806 - on an article about a tool called Self-Learning Agent for Performing APIs (SLAPA)

@EGreg 12d
I totally agree!

I for one welcome our AI overlords. Scan my text. I am of course in favor of the good things and good morality and let’s stay away from the bad things.

@Animats 12d
I'm expecting some system soon that has its own blockchain-based make-money-fast scheme, can buy itself hosting resources from various providers, and has the goal of perpetuating itself and making more money.

@chatmasta 12d
Wait until you see what we let humans do on their mobile devices.

@fastglass 12d
I like how people talk about this the same way they do gun safety.