I kind of like Johan Norberg. He is a smart guy, and while I do not always agree with his techno-optimism and his (related) faith in the ability of the free market to sort everything out for the best, I think he adds a valuable perspective to public debate.
However, like the rest of us, he is not an expert on everything. Knowing when one's knowledge of a topic is insufficient to provide enlightenment, and when it is better to leave the talking to others, can be difficult (trust me on this), and Norberg sometimes fails in this respect. He does so in the recent one-minute-and-43-second episode of his YouTube series Dead Wrong® in which he comments on the futurology of artificial intelligence (AI). Here he is just... dead wrong:
No more than 10 seconds into the video, Norberg incorrectly cites, in a ridiculing voice, Elon Musk as saying that "superintelligent robots [...] will think of us as rivals, and then they will kill us, to take over the planet". But Musk does not make such a claim: all he says is that unless we proceed with suitable caution, there's a risk that something like this may happen.
Norberg's attempt at an immediate refutation - "perhaps super machines will just leave the planet the moment they get conscious [and] might as well leave the human race intact as a large-scale experiment in biological evolution" - is therefore just an attack on a straw man. Even if Norberg's alternative scenario were shown to be possible, that would not suffice to establish that there is no risk of a robot apocalypse.
It gets worse. Norberg says that
- even if we invented super machines, why would they want to take over the world? It just so happens that intelligence in one species, Homo sapiens, is the result of natural selection, which is a competitive process involving rivalry and domination. But a system that is designed to be intelligent wouldn't have any kind of motivation like that.
Norberg cites Steven Pinker here, but Pinker is just as ignorant as Norberg of serious AI futurology. It just so happens that when I encountered Pinker in a panel discussion last year, he made the very same dead wrong argument that Norberg now makes in his video - just minutes after I had explained to him the crucial parts of Omohundro-Bostrom theory needed to see just how wrong the argument is. I am sure Norberg can rise above that level of ineducability, and that now that I am pointing out the existence of serious work on AI motivations he will read at least some of the references given above. Since he seems to be under the influence of Pinker's latest book Enlightenment Now, I strongly recommend that he also read Phil Torres' detailed critique of that book's chapter on existential threats - a critique that demonstrates how jam-packed the chapter is with bad scholarship, silly misunderstandings and outright falsehoods.