Wednesday, October 5, 2016

Christian Munthe on existential risk

I rejoice every time I feel I have contributed (if only in a small way) to getting a colleague to engage constructively in the important discussion of so-called existential risks, that is, things that risk bringing about the downfall of humanity. Likewise, I rejoice every time I see a veritable brawl turn into civilized conversation. I therefore have double reason to rejoice at the exchange on existential risk that Christian Munthe (bioethicist and professor of practical philosophy at the University of Gothenburg) and I began last year and which is still ongoing, albeit at low intensity.

It began with a blog post by Christian in February 2015, in which he heaped unkind ridicule on the emerging research field of existential risk, which he found analogous to the (rightly) heavily criticized and today largely abandoned argument for a Christian way of life by the 17th-century philosopher Blaise Pascal. I was provoked into devoting a large part of Section 10.4 (entitled "I am not advocating Pascal's Wager") of my book Here Be Dragons: Science, Technology and the Future of Humanity to rebutting his argument.1 With formulations such as "as unfamiliar with the existential risk literature as Munthe appears to be" and "deficient reasoning", I matched his harsh and derisive tone. This hardly boded well for further discussion.

But then something unexpected happened, and I do not hesitate to credit Christian for this more than myself. At a seminar in Stockholm (which took place after I had submitted the final manuscript of Here Be Dragons but before the book had appeared) and one in Göteborg, as well as in a joint interview in GU-journalen, the discussion shifted into a mutually attentive and reflective tone, and all the haymakers and wild swings were gone as if blown away. And now Christian has written a paper on the subject, entitled The Black Hole Challenge: Precaution, Existential Risks and the Problem of Knowledge Gaps. I congratulate him on this excellent paper, which in argumentative rigor and objectivity differs from the original blog post as night differs from day (only the other way around).

Footnote

1) Here is the central passage, from pages 242-244 of Here Be Dragons:
    Those who, for one reason or another, are critical of the study of existential risk - as presented in Chapter 8, Section 10.3, and several other parts of this book - tend to rejoice in comparing that study to Pascal's Wager. A recent blog post by Swedish bioethicist Christian Munthe may serve as an example. Munthe says that "what drives the argument [for why we should do something to prevent a particular existential catastrophe] is the (mere) possibility of a massively significant outcome, and the (mere) possibility of a way to prevent that particular outcome, thus doing masses of good"; he goes on to compare this to the (mere) possibility that Pascal's God exists, and he poses the rhetorical question that is also the title of his blog post: "Why aren't existential risk/ultimate harm argument advocates all attending mass?"

    Crucial to Munthe's argument is the clumping together of a wide class of very dissimilar concepts as "the (mere) possibility of a massively significant outcome". Unfortunately, his blog post lacks specific pointers to the literature, but it is clear from the context that what is under attack here are publications like Bostrom and Ćirković (2008) and Bostrom (2013, 2014) - all of them heavily referenced (usually approvingly) in the present book. So for concreteness, let us consider the main existential risk scenario discussed in Bostrom (2014), namely

      (S1) The emergence of a superintelligent AI that has goals and values in poor alignment with our own and that wipes us out as a result.
    This is, if I understand Munthe correctly, a "(mere) possibility of a massively significant outcome". This makes sense if we take it to mean a scenario that we still do not understand well enough to assign it a probability (not even an approximate one) to be plugged into a decision-theoretic analysis à la Section 6.7. But Munthe conflates this with another sense of "(mere) possibility", more on which shortly.

    First, however, consider Munthe's parallel to Pascal's Wager, and note that Pascal didn't have in mind just any old god, but a very specific one, in a scenario that I take the liberty of summarizing as follows:

      (S2) The god Yahweh created man and woman with original sin. He later impregnated a woman with a child that was in some sense also himself. When the child had grown up, he sacrificed it (and thus, in some sense, himself) to save us from sin. But only those of us who worship him and go to church. The rest will be sent to hell.
    Munthe and I appear to be in full agreement that (S2) is a far-fetched and implausible hypothesis, unsupported by either evidence or rational argument. And, as Munthe points out,
      there are innumerable possible versions of the god that lures you with threats and promises of damnation and salvation, and what that particular god may demand in return, often implying a ban on meeting a competing deity's demands, so the wager doesn't seem to tell you to try to start believing in any particular of all these (merely) possible gods.
    In particular, Pascal's Wager succumbs to the following scenario (S3), which appears at least as plausible as (S2) but under which Pascal's choice of going to church becomes catastrophically counterproductive.
      (S3) There exists an omnipotent deity, Baal, who likes atheists and lets them into heaven. The only people whom he sends to hell instead are those who make him jealous by worshiping some other deity.
    Next, for Munthe's argument to work, he needs scenario (S1) to share (S2)'s far-fetchedness, implausibility and lack of rational support. He writes that
      there seems to be an innumerable amount of thus (merely) possible existential risk scenarios, as well as innumerable (merely) possibly workable technologies that might help to prevent or mitigate each of these, and it is unlikely (to say the least) that we have resources to bet substantially on them all, unless we spread them so thin that this action becomes meaningless.
    Is (S1) really just one of these "innumerable" risk scenarios? There are strong arguments that it is not, and that it does not have the properties of being far-fetched and implausible. It may perhaps, at first sight, seem far-fetched and implausible to someone who is as unfamiliar with the existential risk literature as Munthe appears to be. Bostrom (2014) and others offer a well-argued and rational defense of the need to consider (S1) as a serious possibility. If Munthe wishes to argue that (S1) does not merit such attention, he needs to concretely address those arguments (or at least the somewhat sketchy account of them that I offer in Sections 4.5 and 4.6 of the present book), rather than tossing around vague and unwarranted parallels to Pascal's Wager.

    The crucial point is this. There are cases where a phenomenon seems to be possible, although we do not yet understand it well enough to be ready to meaningfully assign it a probability - not even an approximate probability. In such cases, we may speak of a "(mere) possibility". But we must not be misled, along with Munthe, by this expression into thinking that this implies that the phenomenon has extremely low probability.
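
    To spell out the decision-theoretic point in the passage above, here is a minimal sketch (the symbols p, q and R are my own illustrative choices, not taken from the book). Let p and q be whatever positive probabilities one assigns to (S2) and (S3), and let R denote the astronomically large stakes of salvation versus damnation. Then, roughly,

    \[
    \mathrm{EU}(\text{worship}) \approx pR - qR, \qquad \mathrm{EU}(\text{abstain}) \approx qR - pR,
    \]

    so unless we have some reason to believe p > q (or the reverse), the mere possibility of an enormous payoff tells us nothing about which action to prefer. This is exactly the symmetry that (S3) exploits, and the reason a similar move against (S1) would have to engage with the actual arguments for its plausibility rather than with possibility alone.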

4 comments:

  1. Do you have a name for threats that are unlikely to lead to the extinction of humanity, but that would lead to gigadeath/mass death and a long-lasting drop in quality of life for most people? It seems a bit odd to regard such threats as unimportant.

    For example, the risk that climate change wipes out humanity seems fairly small, even if temperatures were to rise by six degrees. It would certainly be rough, though.

    And it may be the combination of problems that breaks us, rather than any single problem.

    Replies
    1. Of course it is enormously important that we be able to avert threats of that kind, which go under the name of global catastrophic risks.

  2. Dropping in to stir things up a bit here. Unfortunately I haven't been able to read the whole article, but here is a question I can't remember whether you have answered: if AI were to develop qualia (consciousness), would it still be just as important that humans "steered" things so that we got to continue our development more or less unimpeded? Or would the presumably greater capacity for consciousness become more important?

    Replies
    1. I can hardly claim to have given any definite and final answer, but the question has been up for discussion here on the blog before.
