Why Artificial Intelligence is Not an Existential Threat is an article by leading science writer Michael Shermer1 in the recent issue 2/2017 of his journal Skeptic (mostly behind paywall). How I wish he had a good case for the claim contained in the title! But alas, the arguments he provides are weak, bordering on pure silliness. Shermer is certainly not the first high-profile figure to react to the theory of AI (artificial intelligence) existential risk, as developed by Eliezer Yudkowsky, Nick Bostrom and others, with an intuitive feeling that it cannot possibly be right, and the (slightly megalomaniacal) sense of being able to refute the theory, single-handedly and with very moderate intellectual effort. Previous such attempts, by Steven Pinker and by John Searle, were exposed as mistaken in my book Here Be Dragons, and the purpose of the present blog post is to do the analogous thing to Shermer's arguments.
The first half of Shermer's article is a not-very-deep-but-reasonably-competent summary of some of the main ideas of why an AI breakthrough might be an existential risk to humanity. He cites the leading thinkers of the field: Eliezer Yudkowsky, Nick Bostrom and Stuart Russell, along with famous endorsements from Elon Musk, Stephen Hawking, Bill Gates and Sam Harris.
The second half, where Shermer sets out to refute the idea of AI as an existential threat to humanity, is where things go off the rails pretty much immediately. Let me point out three bad mistakes in his reasoning. The main one is (1), while (2) and (3) are included mainly as additional illustrations of the sloppiness of Shermer's thinking.
(1) Shermer states that most AI doomsday prophecies are grounded in the false analogy between human nature and computer nature, whose falsehood lies in the fact that humans have emotions, while computers do not. It is highly doubtful whether there is a useful sense of the term emotion for which a claim like that holds generally, and in any case Shermer mangles the reasoning behind Paperclip Armageddon - an example that he discusses earlier in his article. If the superintelligent AI programmed to maximize the production of paperclips decides to wipe out humanity, it does this because it has calculated that wiping out humanity is an efficient step towards paperclip maximization. Whether to ascribe to the AI doing so an emotion like aggression seems like an unimportant (for the present purpose) matter of definition. In any case, there is nothing fundamentally impossible or mysterious in an AI taking such a step. The error in Shermer's claim - that it takes aggression to wipe out humanity, and that an AI cannot experience aggression - is easiest to see if we apply his argument to a simpler device such as a heat-seeking missile. Typically for such a missile, if it finds something warm (such as an enemy vehicle) up ahead slightly to the left, then it will steer slightly to the left. But by Shermer's account, such steering cannot happen, because it requires aggression on the part of the heat-seeking missile, and a heat-seeking missile obviously cannot experience aggression, so we need not worry about heat-seeking missiles (any more than we need to worry about a paperclip maximizer).2
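To see how little machinery such target-pursuing behaviour requires, here is a minimal sketch, in Python, of the kind of decision rule a heat-seeking missile might implement. The sketch is my own toy illustration (the function name and the sensor format are invented for the occasion, and appear neither in Shermer's article nor in Here Be Dragons), but it makes the point: the missile steers toward the warm object because a maximization rule tells it to, and no emotion appears anywhere in the computation.

    # Toy illustration: a "heat-seeking" controller that steers toward
    # whatever direction currently looks warmest. Nothing here encodes
    # aggression or any other emotion; the behaviour falls out of a
    # plain maximization rule.
    def steering_command(heat_readings):
        """heat_readings maps a direction ('left', 'ahead', 'right')
        to the heat intensity measured in that direction."""
        # Steer toward the direction with the strongest heat signature.
        return max(heat_readings, key=heat_readings.get)

    # Something warm slightly to the left: the missile steers left,
    # with no emotional state entering the computation.
    print(steering_command({"left": 0.9, "ahead": 0.7, "right": 0.2}))  # prints "left"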
(2) Citing a famous passage by Pinker, Shermer writes:
As Steven Pinker wrote in his answer to the 2015 Edge Question on what to think about machines that think, "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world." It is equally possible, Pinker suggests, that "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." So the fear that computers will become emotionally evil are unfounded [...].
Even if we accepted Pinker's analysis,3 Shermer's conclusion is utterly unreasonable, based as it is on the following faulty logic: If a dangerous scenario A is discussed, and we can give a scenario B that is "equally possible", then we have shown that A will not happen.
(3) In his eagerness to establish that a dangerous AI breakthrough is unlikely and therefore not worth taking seriously, Shermer holds forth that work on AI safety is underway and will save us if the need should arise, citing the recent paper by Orseau and Armstrong as an example - but he overlooks that such work comes about precisely because AI risk is taken seriously, so its existence can hardly serve as an argument for not taking the risk seriously.
Footnotes
2) See p 125-126 of Here Be Dragons for my attempt to explain almost the same point using an example that interpolates between the complexity of a heat-seeking missile and that of a paperclip maximizer, namely a chess program.
3) We shouldn't. See p 117 of Here Be Dragons for a demonstration of the error in Pinker's reasoning - a demonstration that I (provoked by further such hogwash by Pinker) repeated in a 2016 blogpost.