Monday 28 January 2019

Steven Pinker misleads systematically about existential risk

The main purpose of this blog post is to direct the reader to existential risk scholar Phil Torres' important and brand-new essay Steven Pinker's fake enlightenment.1 First, however, some background.

Steven Pinker has written some of the most enlightening and enjoyable popular science that I've come across in the last couple of decades; in particular I love his books How the Mind Works (1997) and The Blank Slate (2002), which offer wonderful insights into human psychology and its evolutionary background. Unfortunately, not everything he does is equally good, and in recent years the examples I've come across of misleading rhetoric and unacceptably bad scholarship on his part have piled up to a disturbing extent. This is especially clear in his engagement (so to speak) with the intertwined fields of existential risk and AI (artificial intelligence) risk. When commenting on these fields, his judgement is badly tainted by his wish to paint a rosy picture of the world.

As an early example, consider Pinker's assertion at the end of Chapter 1 of his 2011 book The Better Angels of Our Nature that we "no longer have to worry about [a long list of barbaric kinds of violence ending with] the prospect of a nuclear world war that would put an end to civilization or to human life itself". This is simply unfounded. There was ample reason during the Cold War to worry about nuclear annihilation, and from about 2014 we have been reminded of those reasons again through Putin's aggressive geopolitical rhetoric and actions and (later) the inauguration of a madman as president of the United States, but the fact of the matter is that the reasons for concern never disappeared - they were just a bit less present in our minds during 1990-2014.

A second example is a comment Pinker wrote at Edge.org in 2014 on how a "problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence". See pp. 116-117 of my 2016 book Here Be Dragons for a longer quote from that comment, along with a discussion of how badly misinformed and confused Pinker is about contemporary AI futurology; the same discussion is reproduced in my 2016 blog post Pinker yttrar sig om AI-risk men vet inte vad han talar om.

Pinker has kept repeating the same misunderstandings ever since 2014. The big shocker for me was to meet Pinker face-to-face in a panel discussion in Brussels in October 2017, and to hear him serve up the same falsehoods and non sequiturs again, and add some new ones, including one that I had preempted just minutes earlier by explaining the relevant parts of Omohundro-Bostrom theory of instrumental vs final AI goals. For more about this encounter, see the blog post I wrote a few days later, and the paper I wrote for the proceedings of the event.

Soon thereafter, in early 2018, Pinker published his much-praised book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Mostly it is an extended argument about how much better the world has become in many respects, economically and otherwise. It also contains a chapter named Existential threats, which is jam-packed with bad scholarship and claims ranging from the misleading to the outright false, all of it pointing in the same direction: existential risk research is silly, and we have no reason to pay attention to such concerns. Later that year, Phil Torres wrote a crushing and amazingly detailed (but slightly dry) rebuttal of that chapter. I've been meaning to blog about it, but other tasks kept getting in the way. Now, however, with Phil's Salon essay available, the time has come. In the essay he presents some of the central themes of the rebuttal in a more polished and reader-friendly form. If there is anyone out there who still thinks (as I used to) that Pinker is an honest and trustworthy scholar, Phil's essay is a must-read.

Footnote

1) It is not without a bit of pride that I can inform my readers that Phil took part in the GoCAS guest researcher program on existential risk to humanity that Anders Sandberg and I organized in September-October 2017, and that we are coauthors of the paper Long-term trajectories of human civilization, which emanated from that program.

3 comments:

  1. Olle, have you commented anywhere on Cirillo & Taleb's critique of Pinker's treatment of the statistics of war casualties?

    1. Nope, but parts of the discussion in Section 8.1 of my Here Be Dragons are similar in spirit to Cirillo and Taleb.

  2. Evidently the AGI-risk discussion is a difficult one. I read the Salon article and part of the comment thread. It seems that, if the first part of the comment thread is representative, Torres has failed with his Salon piece; the readers mostly just take a stand against him as a nitpicking troublemaker and become even more locked into their thinking about AGI along Pinker's lines. See also the blog post with comment thread from Coyne:
    https://whyevolutionistrue.wordpress.com/2019/01/29/a-response-from-steve-pinker-to-salons-hit-piece-on-enlightenment-now/
    (link from the Salon comment thread). One probably has to be very careful about how one acts and expresses oneself in this discussion; it can easily backfire, it seems.
    /tonyf
