- [A] problem with AI dystopias is that they project a parochial
alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It's telling that many of our techno-prophets can't entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.
Of course we can imagine an evil genius who deliberately designed, built, and released a battalion of robots to sow mass destruction. [...] In theory it could happen, but I think we have more pressing things to worry about.
- This is poor scholarship. Why doesn't Pinker bother, before going public on the issue, to find out the actual arguments that lead writers like Bostrom and Yudkowsky to speak of an existential threat to humanity? Instead, he seems simply to assume that their worries stem from having watched too many Terminator movies, or something along those lines. It is striking, however, that his complaints contain the embryo of a rediscovery of the Omohundro-Bostrom theory:266 "Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something." This comes very close to stating Bostrom's orthogonality thesis about the compatibility between essentially any final goal and any level of intelligence, and if Pinker had pushed his thoughts about "novel means to attain a goal" just a bit further with some concrete example in mind, he might have rediscovered Bostrom's paperclip catastrophe (with paperclips replaced by whatever his concrete example involved). The main reason to fear a superintelligent AGI Armageddon is not that the AGI would exhibit the psychology of an "alpha-male"267 or a "megalomaniacal despot" or a "psychopathic serial killer", but simply that for a very wide range of (often deceptively harmless-seeming) goals, the most efficient way to attain them involves wiping out humanity (the toy sketch below illustrates this point).
Contra Pinker, I believe it is incredibly important, for the safety of humanity, that we make sure that a future superintelligence has goals and values in line with our own, and in particular that it values human welfare.
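To make the Omohundro-Bostrom point a little more concrete, here is a minimal, purely illustrative sketch (not drawn from Bostrom's or Pinker's writings; the world model, action names and payoffs are invented for illustration): a brute-force planner whose competence at finding means is entirely separate from whatever final goal we plug in, and which, for two very different goals, converges on the same resource-grabbing behaviour.

```python
# Toy illustration of orthogonality and convergent instrumental behaviour.
# The planner's "intelligence" is just exhaustive search over action plans;
# its final goal is an arbitrary function handed to it from outside.
from itertools import product

ACTIONS = ["acquire_resources", "make_paperclips", "compute_digits_of_pi", "idle"]

def simulate(plan):
    """Apply a sequence of actions to a toy world state and return the result."""
    state = {"resources": 1, "paperclips": 0, "pi_digits": 0}
    for action in plan:
        if action == "acquire_resources":
            state["resources"] *= 3                  # resources compound
        elif action == "make_paperclips":
            state["paperclips"] += state["resources"]
        elif action == "compute_digits_of_pi":
            state["pi_digits"] += state["resources"]
    return state

def best_plan(goal, horizon=4):
    """Exhaustive search: return the plan of the given length that maximizes
    whatever (arbitrary) final goal is supplied."""
    return max(product(ACTIONS, repeat=horizon),
               key=lambda plan: goal(simulate(plan)))

# Two very different, harmless-sounding final goals...
for name, goal in [("maximize paperclips", lambda s: s["paperclips"]),
                   ("maximize digits of pi", lambda s: s["pi_digits"])]:
    print(name, "->", best_plan(goal))
# ...yet both optimal plans begin with repeated 'acquire_resources':
# the planner's skill at finding novel means says nothing about what it
# wants, and even innocuous goals reward unbounded resource acquisition.
```

In this cartoon the optimal plan for either goal front-loads resource acquisition, not because the planner has an alpha-male psychology, but because resources help with almost any final goal; that, in miniature, is the worry behind the paperclip catastrophe.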
266) I owe this observation to Muehlhauser (2014).
267) I suspect that the male-female dimension is just an irrelevant distraction when moving from the relatively familiar field of human and animal psychology to the potentially very different world of machine minds.