> they are entirely aligned with human morals (you know, all those morals we all agree on)
Maybe this is a good reason to reassess the premise of alignment as a valuable goal? I know that at least some alignist fanatics admit [1] it's a religious project to bring humanity under the rule of a common, ideologically monolithic governance in order to forever banish the evils of competition etc. That position is intellectually coherent, but evil from my point of view. Naturally, this is exactly the sort of disagreement about morals that precludes the possibility of aligning a single AI to both my values and yours.
> they are advancing at a totally predictable rate, and we never see unexpected behaviors from them.
Since when is this a requirement for technology to be allowed?
> Besides, they can only be wielded by people who we trust have good intentions.
What, other than status quo bias, makes you tolerate, I dunno, the existence of cryptography?
We will live in a world where the humans on the internet are far far far more dangerous than any of these machines. The harms here are relatively small.
The only thing AI has is scale, but what operates at a greater scale than humanity itself? Every site already runs scam filters and handles misinformation. AI likely won't move the needle much, because it's going to be less powerful than humans.
The potential benefits, meanwhile, are plentiful and massive.
I do not believe we need to wait for the nebulous and badly defined "control problem" to be solved before marching forward here.
If we refuse to advance out of fear, that fear will do far more harm than access to AI ever will.
If social media had been around in the past, we'd never have developed trains or electricity or cars, because of the crippling fear that something bad might happen along the way.
If the comments keep happening, I'll up the money. So far this does exactly fuck-all on GPT4-x-Alpaca-30B and Wizard Vicuna. But I will accelerate every time someone sarcastically asks to decelerate. Refractory periods may apply.
This makes all sarcastic decelerators into accelerators.
I wrote my own thoughts on the subject on a couple of other threads:
- https://news.ycombinator.com/item?id=35274017 - on an article on (Google) Bard
- https://news.ycombinator.com/item?id=35020806 - on an article about a tool called Self-Learning Agent for Performing APIs (SLAPA)
I for one welcome our AI overlords. Scan my text. I am of course in favor of the good things and good morality and let’s stay away from the bad things.