Wednesday 30 January 2019

Some notes on Pinker's response to Phil Torres

The day before yesterday I published my blog post Steven Pinker misleads systematically about existential risk, whose main purpose was to direct the reader to my friend and collaborator Phil Torres' essay Steven Pinker's fake enlightenment. Pinker has now written a response to Phil's essay, and had it published on Jerry Coyne's blog Why Evolution is True. The response is feeble. Let me expand a little bit on that.

After a highly undignified opening paragraph containing an uncharitable and unfounded speculation about Phil's motives for writing the essay,1 Pinker spends most of his response explaining, regarding all of the quotes that he exhibits in his book Enlightenment Now and that Phil points out are taken out of context and misrepresent the various authors' intentions, that... well, that it doesn't matter that they are misrepresentations, because what he (Pinker) needed was words to illustrate his ideas, and for that purpose it doesn't matter what the original authors meant. He suggests that "Torres misunderstands the nature of quotation". Why, then, doesn't Pinker use his own words (he is, after all, one of the most eloquent science writers of our time)? Why take this cumbersome detour via other authors? If he doesn't actually care what these authors mean, then the only reason I can see for including all these quotes and citations is that Pinker wants to convey to his readers the misleading impression that he is familiar with the existential risk literature and that this literature supports his views.

The most interesting case discussed in Phil's essay and Pinker's response concerns AI researcher Stuart Russell. In Enlightenment Now, Pinker places Russell in the category of "AI experts who are publicly skeptical" that "high-level AI pose[s] the threat of 'an existential catastrophe'." Everyone who has actually read Russell knows that this characterization is plain wrong, and that he in fact takes the risk of an existential catastrophe caused by an AI breakthrough extremely seriously. Phil points this out in his essay, but Pinker stands by his claim. In his response, Pinker quotes Russell as saying that "there are reasons for optimism", as if that quote were a demonstration of Russell's skepticism. The quote is taken from Russell's answer to the 2015 Edge question - an eight-paragraph answer that, if one reads it from beginning to end rather than merely zooming in on the phrase "there are reasons for optimism", makes it abundantly clear that to Russell, existential AI risk is a real concern. What, then, does "there are reasons for optimism" mean? It introduces a list of ideas for things we could do to avert the existential risk that AI poses. Proposing such ideas is not the same thing as denying the risk.

It seems to me that this discussion is driven by two fundamental misunderstandings on Pinker's part. First, he has this straw man image in his head of an existential risk researcher as someone proclaiming "we're doomed", whereas in fact what existential risk researchers say is nearly always more along the lines of "there are risks, and we need to work out ways to avoid them". When Pinker actually notices that Russell says something in line with the latter, it does not fit the straw man, leading him to the erroneous conclusion that Russell is "publicly skeptical" about existential AI risk.

Second, by shielding himself from the AI risk literature, Pinker is able to stick to his intuition that avoiding the type of catastrophe illustrated by Paperclip Armageddon is easy. In his response to Phil, he says that
    if we built a system that was designed only to make paperclips without taking into account that people don’t want to be turned into paperclips, it might wreak havoc, but that’s exactly why no one would ever implement a machine with the single goal of making paperclips,
continuing his light-hearted discourse from our encounter in Brussels in 2017, where he said (as quoted on p. 24 of the proceedings from the meeting) that
    the way to avoid this is: don’t build such stupid systems!
The literature on AI risk suggests that, on the contrary, the project of aligning the AI's goals with ours to an extent that suffices to avoid catastrophe is a difficult task, filled with subtle obstacles and traps. I could direct Pinker to some basic references such as Yudkowsky (2008, 2011), Bostrom (2014) or Häggström (2016), but given his plateau-shaped learning curve on this topic since 2014, I fear that he would either just ignore the references, or see them as sources to mine for misleading quotes.

Footnote

1) Borrowing from the standard climate denialist's discourse about what actually drives climate scientists, Pinker says this:
    Phil Torres is trying to make a career out of warning people about the existential threat that AI poses to humanity. Since [Enlightenment Now] evaluates and dismisses that threat, it poses an existential threat to Phil Torres’s career. Perhaps not surprisingly, Torres is obsessed with trying to discredit the book [...].

1 comment:

  1. I think that precisely this - zooming in on how he refers to Stuart Russell - could be a wise main track in future criticism of Pinker's ideas aimed at the general public.

    Phil's long version is really good, but the Salon article perhaps gives a bit too much room to "bad scholarship" on non-central issues. The Russell angle lets one stick to what is most central while also pointing to poor reading of the literature as the root cause.
