Monday 23 October 2017

The AI meeting in Brussels last week

I owe my readers a report from the seminar entitled "Should we fear the future? Is it rational to be optimistic about artificial intelligence?" at the European Parliament's STOA (Science and Technology Options Assessment) committee in Brussels last Thursday. In my opinion, all things considered, the event turned out OK, and it was a pleasure to meet and debate with the event's main speaker Steven Pinker as well as with co-panelists Miles Brundage and Thomas Metzinger.1 Here I'd just like to comment on Pinker's arguments for why we should not take seriously or publicly discuss the risk of an existential catastrophe caused by the emergence of superintelligent AGI (artificial general intelligence). His arguments essentially boil down to the following four points, which in my view fail to show what he intends.
    1. The general public already has the nuclear threat and the climate threat to worry about, and bringing up yet another global risk may overwhelm people and cause them to simply give up on the future. There may be something to this speculation, but to evaluate the argument's merit we need to consider separately the two possibilities of
      (a) apocalyptic AI risk being real, and

      (b) apocalyptic AI risk being spurious.

    In case of (b), of course we should not waste time and effort on discussing such risk, but we didn't need the overwhelming-the-public argument to understand that. Consider instead case (a). Here Pinker's recommendation is that we simply ignore a threat that may kill us all. This does not strike me as a good idea. Surviving the nuclear threat and solving the climate crisis would of course be wonderful things, but much of their value is lost if we then walk straight into an AI apocalypse. Keeping quiet about a real risk also seems to fly straight in the face of one of Pinker's most dearly held ideas, namely that of scientific and intellectual openness, and Enlightenment values more generally. The same thing applies to the situation where we are unsure whether (a) or (b) holds - surely the approach best in line with Enlightenment values is then to openly discuss the problem and to try to work out whether the risk is real.

    2. Pinker held forth a bunch of concerns that seemed more or less copy-and-pasted from standard climate denialism discourse. These included the observation that the Millennium bug did not cause global catastrophe, whence (or so the argument goes) a global catastrophe cannot be expected from a superintelligent AGI - analogous to the oft-repeated claim that since the ancient Greeks' fear that the sky would fall down turned out to be unfounded, greenhouse gas emissions cannot accelerate global warming in any dangerous way - as well as speculations about the hidden motives of those who discuss AI risk: they are probably just competing for status and research grants. This is not impressive. See also yesterday's blog post by my friend Björn Bengtsson for more on this; it is to him that I owe the (in retrospect obvious) parallel to climate denialism.

At this point, one may wonder why Pinker doesn't do the consistent thing, given these arguments, and join the climate denialism camp. He would probably respond that unlike AI risk, climate risk is backed up by solid scientific evidence. And indeed the two cases are different - the case for climate risk is considerably more solid - but the problem with Pinker's position is that he hasn't even bothered to find out what the science of AI risk says. This brings me to the next point.
    3. All the apocalyptic AI scenarios involve the AI having bad goals, which led Pinker to ask why in the world anyone would program the machine with bad goals - let's just not do that! This is essentially the idea of the so-called Friendly AI project (see Yudkowsky, 2008, or Bostrom, 2014), but what Pinker does not seem to appreciate is how extremely difficult that project is. He went on to ask why in the world anyone would be so stupid as to program self-preservation at all costs into the machine, which in fact annoyed me slightly, because it came just 20 or so minutes after I had sketched the Omohundro-Bostrom theory of how self-preservation and various other instrumental goals are likely to emerge spontaneously (i.e., without being explicitly put into the machine by human programmers) in any sufficiently intelligent AGI; a toy numerical sketch of this mechanism is given right after this list.

    4. In the debate, Pinker described (as he had done several times before) the superintelligent AGI in apocalyptic scenarios as having a typically male psychology, but pointed out that it can equally well turn out to have more female characteristics (things like compassion and motherhood), in which case everything will be all right. This is just another indication of how utterly unfamiliar he is with the literature on possible superintelligent psychologies. His male-female distinction in the general context of AGIs is just barely more relevant than the question of whether the next exoplanet we discover will turn out to be male or female.
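
Here is a minimal toy sketch, in Python, of the expected-utility reasoning behind the Omohundro-Bostrom argument from point 3 above. The sketch and its numbers are mine and purely illustrative, not something presented at the seminar: whatever final goal the agent has been given, the action that keeps the agent running tends to come out on top, simply because a shut-down agent achieves nothing of its goal.

    # Toy illustration of instrumental convergence towards self-preservation.
    # The goal value and survival probabilities are made up for illustration only.

    def expected_goal_value(p_survive, value_if_goal_achieved):
        """Expected achievement of the final goal; a shut-down agent achieves zero."""
        return p_survive * value_if_goal_achieved

    # Two candidate actions for an agent whose final goal - whatever it happens
    # to be - is worth 100 units to it if achieved:
    actions = {
        "comply with shutdown": expected_goal_value(0.40, 100.0),  # 40% chance of remaining operational
        "resist shutdown":      expected_goal_value(0.95, 100.0),  # 95% chance of remaining operational
    }

    print(actions)                        # {'comply with shutdown': 40.0, 'resist shutdown': 95.0}
    print(max(actions, key=actions.get))  # resist shutdown - regardless of what the goal is

The point is of course not that a real AGI would literally run this calculation, but that self-preservation pays off instrumentally for a very wide range of final goals, so nobody needs to be stupid enough to program it in explicitly.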

In summary, I don't think that Pinker's arguments for why we should not talk about risks associated with an AI breakthrough hold water. On the contrary, I believe there's an extremely important discussion to be had on that topic, and I wish we had had time to delve a bit deeper into it in Brussels. Here is the video from the event.

(Or watch the video via this link, which may in some cases work better.)

Footnotes

1) My omission here of the third co-panelist Peter Bentley is on purpose; I did not enjoy his presence in the debate. In what appeared to be an attempt to compensate for the hollowness of his arguments,2 he resorted to assholery on a level that I rarely encounter in seminars and panel discussions: he expressed as much disdain as he could for opposing views, he interrupted and stole as much microphone time as he could get away with, and he made faces while other panelists were speaking.

2) After spending a disproportionate amount of his allotted time on praising his own credentials, Bentley went on to defend the idea that we can be sure that a breakthrough leading to superintelligent AGI will not happen. For this, he had basically one single argument, namely his and other AI developers' experience that all progress in the area requires hard work, that any new algorithm they invent can only solve one specific problem, and that initial success of the algorithm is always followed by a point of diminishing returns. Hence (he stressed), solving another problem always requires the hard work of inventing and implementing yet another algorithm. This line of argument conveniently overlooks the known fact (exemplified by the software of the human brain) that there do exist algorithms with a more open-ended problem-solving capacity, and is essentially identical to item (B) in Eliezer Yudkowsky's eloquent summary, in his recent essay There is no fire alarm for artificial general intelligence, of the typical arguments held forth for the position that an AGI breakthrough is either impossible or lies far in the future. Quoting from Yudkowsky:
    Why do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:

    (A) The author does not know how to build AGI using present technology. The author does not know where to start.

    (B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.

    (C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.

The inadequacy of these arguments lies in the observation that the same situation can be expected to hold five years prior to an AGI breakthrough, or one year, or... (as explained by Yudkowsky later in the same essay).

At this point a reader or two may be tempted to ask: if (A)-(C) are not considered sufficient evidence against a future superintelligence, then how in the world would one falsify such a claim, and doesn't this cast doubt on whether AI apocalyptic risk studies should count as scientific? I advise those readers to consult my earlier blog post Vulgopopperianism.

12 comments:

  1. Thanks for giving such a nice summary of Steven Pinker's argument in your blog http://haggstrom.blogspot.ca/…/the-ai-meeting-in-brussels-l… , since it makes it clear he is right. The public can be alarmed about at most one or two future apocalypses. If we assume that climate and nuclear dangers are at least ten times more urgent than rogue AI (any smaller ratio makes you a climate denier :-) ), then focusing on the latter is a distraction that increases the dangers from the former. (Especially if it is the same people who are raising the alarm, since it reduces their credibility on the more serious issues.)
    One key difference between climate and nuclear fears, on the one hand, and apocalyptic AI fears, on the other, is the following:
    Climate scientists are among the most vocal and worried about the effects of climate change; similarly for atomic scientists and nuclear dangers. But (on average), the more knowledgeable someone is about AI (e.g. Eric Horvitz or Yann LeCun), the less he/she is worried about robots trying to kill their creators; this is more a fear of people who were raised on too much science fiction.
    The more substantial current dangers from AI include unemployment resulting from AI/automation (leading to nationalism, racism and wars), and biases in automated decision making that reflect or amplify the biases of the creators of the software (e.g. loan approvals, hiring decisions).

    1. Hi Yuval, glad to hear you liked my blog post!

      Nuclear war, global warming and technological unemployment are all huge and pressing problems that humanity needs to address, and I would never dream of suggesting that we divert resources away from those fields and towards superintelligence risk studies (but note that none of these problems make superintelligence risk magically go away). If we are to move resources towards such AI futurology studies, then I'd much rather take them from, say, junk food or junk TV production, or perhaps even (if you forgive me for such an example) from the study of fractional Brownian motion.

      As to your rationale for not taking superintelligence risk seriously, a less tendentious study of what leading AI development experts think can be found via this article coauthored by UC Berkeley computer scientist and AI researcher Stuart Russell. The bottom line here is that AI experts are deeply divided on this issue. I think it is a good idea in general not to hold strong beliefs on issues where experts are deeply divided (unless one's own expertise in the field is extraordinary).

      Let me add that I think the general obsession with what leading AI development researchers think about AI futurology is a bit exaggerated. It is not clear to me that they are the only ones with valuable insights into the topic, or even that they understand AI futurology better than AI futurologists do. The analogous thing for the future of agriculture would be to rely narrowly on what cows and sheep think about such issues, but for some reason I never hear anyone suggest that.

    2. Olle, the AI developers are analogous to farmers, not sheep. The sheep are the computers that you fear so much. The analogy breaks down there since I cannot figure out who fears the sheep.

    3. Very good, I can accept that modified analogy. It still works to support my point, because I believe it is plain obvious to everyone that in order to make sensible projections regarding the future of agriculture, it is a good idea to go beyond merely asking farmers what they believe.

    4. And instead you would like to ask some futurologists who have never set foot on a farm, but have read a lot of sci-fi novels about how food in the future will come in the form of little pills?

    5. Probably, yes. If you wanted to advance bomb-making technology in 1938, would you ask organic chemists with experience in TNT adaptations, or particle physicists who were playing with graphite piles and had never set foot in an ordnance factory?

    6. A very big difference being that the possibility of a runaway nuclear chain reaction was perfectly well understood in 1938, and hence also the possibility of a nuclear bomb; it was just a matter of getting the technicalities right. The runaway AI reaction, on the other hand, remains firmly in the sci-fi realm, and, as Bentley repeatedly insisted during the meeting (much to the dismay of the blog owner, apparently), it has pretty much nothing to do with what actual AI researchers are working on (or thinking about) today.

      Finally: von Neumann of course had the good sense to ask experts on chemistry and ordnance to design the explosive lenses needed...

  2. I'm not convinced you're right about the "key difference." Here's an actual survey of the opinions of AI researchers.

    https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

    The big takeaway from this survey (at least for me) is that AI researchers don't have consistent mental models of what future AI development will look like. For example, asking "how long until full automation of labor is possible" vs "how long until machines can do every job just as well as a human" causes experts to give very different responses to what should be the same question.

  3. Is there any way to download the video? It keeps stopping and starting over from the beginning, whether I watch it here or via the link :(

      1. Never mind, it works much better in Firefox (than Chrome).

  4. "Pinker's arguments for why we [C1] should not take seriously or [C2] publicly discuss the risk for an existential catastrophe caused by the emergence of superintelligent AGI"

    Claims C1 and C2 are distinct. And the "we" is underspecified.

    If "we" means the scientific community - researchers and grant giving scientific bodies - then Pinker's argument 1 fails to support claim C1. Since science can increase super AI risk research without doing a lot of public outreach about the topic.

    Pinker is also question-begging when assuming super AI risk is a smaller problem than the nuclear or climate threats.

    First, the risks can be interconnected: super AI risk could increase the nuclear threat, for example if a super AI creates online agitprop that stokes war, or intrudes into nuclear weapon control systems.

    Second, super AI risk may be greater, since it can pose a global, transgenerational, terminal risk (see Phil Torres, 2017, Morality, Foresight and Human Flourishing). In contrast, the nuclear threat and the climate threat seem less likely to destroy ALL humans FOREVER. Parfit's point from the end of Reasons and Persons applies here (a rough numerical sketch is appended at the end of this comment).

    Pinker's arguments 2, 3 and 4 are worryingly weak - worryingly, because they indicate that Pinker hasn't done enough background reading to even understand the arguments that he now, it seems, is about to launch a book and public lecture tour against.
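
    A back-of-envelope sketch of Parfit's comparison, with numbers that are purely illustrative (mine, not taken from Parfit or Torres): the step from a catastrophe that kills 99% of humanity to one that kills 100% is far larger than the step from no catastrophe to the 99% one, because extinction also forecloses all potential future generations.

      # Illustrative numbers only - the point is the orders of magnitude, not the exact figures.
      current_population = 7.5e9        # roughly the number of people alive today
      potential_future_people = 1e14    # a stand-in for everyone who could otherwise come to exist

      def lives_lost(fraction_killed, extinction):
          """Deaths among those alive today, plus all potential future people if we go extinct."""
          lost = fraction_killed * current_population
          if extinction:
              lost += potential_future_people
          return lost

      print(lives_lost(0.99, extinction=False))  # about 7.4e9
      print(lives_lost(1.00, extinction=True))   # about 1.0e14 - four orders of magnitude worse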
