Monday, February 20, 2017

Vulgopopperianism

I have sometimes, in informal discussions, toyed with the term vulgopopperianism, although mostly in Swedish: vulgärpopperianism. What I have not done so far is to pinpoint and explain (even to myself) what it means, but now is the time.

Karl Popper (1902-1994) was one of the most influential philosophers of science of the 20th century. His emphasis on falsifiability as a criterion for a theory to qualify as scientific is today a valuable and standard part of how we think about science, and it is no accident that, in the chapter named "What is science?" in my book Here Be Dragons, I spend two entire sections on his falsificationism.

Science aims at figuring out and understanding what the world is like. Typically, a major part of this endeavour involves choosing, tentatively and in the light of available scientific evidence, which to prefer among two or more descriptions (or models, or theories, or hypotheses) of some relevant aspect of the world. Popper put heavy emphasis on certain asymmetries between such descriptions. In a famous example, the task is to choose between the hypothesis
    (S1) all swans are white
and its complement
    (S2) at least one non-white swan exists.
The idea that, unlike (S2), (S1) is falsifiable - such as by the discovery of a black swan - has become so iconic that no fewer than two books by fans of Karl Popper (namely, Nassim Nicholas Taleb and Ulf Persson) that I've read in the past decade feature black swans in their cover illustrations.
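To make the asymmetry explicit in logical notation (a formalization of my own, added here for clarity), the two hypotheses can be written

    $$ \textrm{(S1)}:\ \forall x\,\bigl(\mathrm{swan}(x)\rightarrow\mathrm{white}(x)\bigr), \qquad \textrm{(S2)}:\ \exists x\,\bigl(\mathrm{swan}(x)\wedge\neg\,\mathrm{white}(x)\bigr), $$

so that (S2) is logically equivalent to the negation of (S1). A single observed black swan verifies (S2) and thereby conclusively falsifies (S1), while no finite list of white-swan observations can do the reverse: that is the asymmetry.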

The swan example is highly idealized, and in practice the asymmetry between two competing hypotheses is typically nowhere near as clear-cut. I believe the failure to understand this has been a major source of confusion in scientific debate for many decades. An extreme example is how, a couple of years ago, a Harvard professor of neuroscience named Jason Mitchell drew on this confusion (in conjunction with what we might call p≤0.05-mysticism) to defend a bizarre view of the role of replications in science.

By a vulgopopperian, I mean someone who
    (a) is moderately familiar with Popperian theory of science,

    (b) is fond of the kind of asymmetry that appears in the all-swans-are-white example, and

    (c) rejoices in claiming, whenever he1 encounters two competing hypotheses one of which he for whatever reason prefers, some asymmetry such that the entire (or almost the entire) burden of proof falls on the other hypothesis, and in insisting that until such conclusive proof is presented, we can take for granted that the preferred hypothesis is correct.

Jason Mitchell is a clear case of a vulgopopperian. So are many (perhaps most) of the proponents of climate denialism that I have encountered during the decade or so that I have participated in climate debate. Here I would like to discuss another example in just a little bit more detail.

In May last year, two of my local colleagues in Gothenburg, Shalom Lappin and Devdatt Dubhashi, gave a joint seminar entitled AI dangers: imagined and real2, in which they took me somewhat by surprise by spending large parts of the seminar attacking my book Here Be Dragons and Nick Bostrom's Superintelligence. They did not like our getting carried away by the (in their view) useless distraction of worrying about a future AI apocalypse, in which things go badly for humanity due to the emergence of superintelligent machines whose goals and motivations are not sufficiently in line with human values and the promotion of human welfare. What annoyed them was specifically the idea that the creation of superintelligent machines (machines that, in terms of general intelligence, including cross-domain prediction, planning and optimization, vastly exceed current human capabilities) might happen within the foreseeable future, such as a century or so.

It turned out that we did have some common ground as regards the possibility-in-principle of superintelligent machines. Neither Shalom nor Devdatt, nor I, believe that human intelligence comes out of some mysterious divine spark. No, the impression I got was that we agree that intelligence arises from natural (physical) processes, and also that biological evolution cannot plausibly be thought to have come anywhere near a global intelligence optimum with the emergence of the human brain, so that there do exist arrangements of matter that would produce vastly-higher-than-human intelligence.3 So the question is not whether superintelligence is in principle possible, but rather whether it is attainable by human technology within a century or so. Let's formulate two competing hypotheses about the world:
    (H1) Achieving superintelligence is hard - not attainable (other than possibly by extreme luck) by human technological progress by the year 2100,
    (H2) Achieving superintelligence is relatively easy - within reach of human technological progress, if allowed to continue unhampered, by the year 2100.

There is some vagueness or lack of precision in those statements, but for the present discussion we can ignore that, and postulate that exactly one of the hypotheses is true, and that we wish to figure out which one - either out of curiosity, or for the practical reason that if (H2) is true then this is likely to have vast consequences for the future prospects for humanity and we should try to find ways to make these consequences benign.
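That practical reason can be spelled out in a simple expected-value calculation (a back-of-envelope illustration of my own, with the symbols p and V introduced just for this purpose):

    $$ \mathbb{E}[\mathrm{loss}] \;=\; p \cdot V, $$

where p is the probability that (H2) is true and that things then go badly for humanity, and V is the magnitude of what would be lost. Even a modest p yields an enormous product when V includes the very survival of humanity, which is why settling the (H1)-versus-(H2) question matters for more than curiosity.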

It is not a priori obvious which of hypotheses (H1) and (H2) is more plausible than the other, and as far as burden of proof is concerned, I think the reasonable thing is to treat them symmetrically. A vulgopopperian who for one reason or another (and like Shalom Lappin and Devdatt Dubhashi) dislikes (H2) may try to emphasize the similarity with Popper's swan example: humanly attainable superintelligent AI designs are like non-white swans, and just as the way to refute the all-swans-are-white hypothesis is to exhibit a non-white swan, the way to refute (H1) here is to actually build a superintelligent AI; until then, (H1) can be taken to be correct. This was very much how Shalom and Devdatt argued at their seminar. All evidence in favor of (H2) (this kind of evidence makes up several sections in Bostrom's book, and Section 4.5 in mine) was dismissed as showing just "mere logical possibility" and thus ignored. For example, concerning the relatively concrete proposal by Ray Kurzweil (2006) of basing AI construction on scanning the human brain and copying the computational structures to faster hardware, the mere mention of a potential complication in this research program was enough for Shalom and Devdatt to dismiss the whole proposal and throw it on the "mere logical possibility" heap. This is vulgopopperianism in action.

What Shalom and Devdatt seem not to have thought of is that a vulgopopperian who takes the opposite of their stance by strongly disliking (H1) may invoke a mirror image of their view of burden of proof. That other vulgopopperian (let's call him O.V.) would say that surely, if (H2) is false, then there is a reason for that, some concrete and provably uncircumventable obstacle, such as some information-theoretic bound showing that superintelligent AI cannot be found by any algorithm other than brute-force search for an extremely rare needle in a haystack the size of the Library of Babel. As long as no such obstacle is exhibited, we must (according to O.V.) accept (H2) as the overwhelmingly more plausible hypothesis. Has it not occurred to Shalom and Devdatt that, absent their demonstration of such an obstacle, O.V. might proclaim that their arguments go no further than to demonstrate the "mere logical possibility" of (H1)? I am not defending O.V.'s view, only putting it forward as a view of (H1) and (H2) that is just as arbitrarily and unjustifiably asymmetric as Shalom's and Devdatt's.
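As an aside of my own, just to convey the scale of O.V.'s metaphor: with Borges' specification of the Library (each book consisting of 410 pages of 40 lines with 80 characters, over an alphabet of 25 symbols), the number of distinct books is

    $$ 25^{410 \times 40 \times 80} \;=\; 25^{1{,}312{,}000} \;\approx\; 10^{1{,}834{,}097}, $$

a haystack against which brute-force search is hopeless in the most literal sense.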

Ever since the seminar by Shalom and Devdatt in May last year, I have thought about writing something like the present blog post, but have procrastinated. Last week, however, I was involved in a Facebook discussion with Devdatt and a few others, where Devdatt expressed his vulgopopperian attitude towards the possible creation of a superintelligence so clearly and so unabashedly that it finally triggered me to roll up my sleeves and write this stuff down. The relevant part of the Facebook discussion began with a third party asking Devdatt whether he and Shalom had considered the possibility that (H2) might have a fairly small probability of being true, yet one large enough that, given the values at stake (including, but perhaps not limited to, the very survival of humanity), it deserves consideration. Devdatt's answer was a sarcastic "no":
    Indeed neither do we take into account the non-zero probability of a black hole appearing at CERN and destroying the world...
His attempt at an analogy annoyed me, and I wrote this:
    Surely you must understand the crucial disanalogy between the Large Hadron Collider black hole issue, and the AI catastrophe issue. In the former case, there are strong arguments (see Giddings and Mangano) for why the probability of catastrophe is small. In the latter case, there is no counterpart of the Giddings-Mangano paper and related work. All there is, is people like you having an intuitive hunch that the probability is small, and babbling about "mere logical possibility".
Devdatt struck back:
    Does one have to consider every possible scenario however crazy and compute its probability before one is allowed to say other things are more pressing?
Notice that what Devdatt does here is reject the idea that his arguments for (H1) need to go beyond establishing "mere logical possibility" and give us reason to believe that the probability of (H1) is large (or, equivalently, that the probability of (H2) is small). Yet, a few lines further down in the same discussion, he demands exactly such going beyond mere logical possibility from arguments for (H2):
    You have made it abundantly clear that superintelligence is a logical possibility, but this was preaching to the choir, most of us believe that anyway. But where is your evidence?
The vulgopopperian asymmetry in Devdatt's view of the (H1) vs (H2) problem could hardly be expressed more clearly.


1) Perhaps I should apologize for writing "he" rather than "he or she", but almost all vulgopopperians I have come across are men.

2) A paper by the same authors, with the same title and with related (although not quite identical) content recently appeared in the journal Communications of the ACM.

3) For simplicity, let us take all this for granted, although it is not really crucial for the discussion; a reader who doubts any of these statements may take their negations and include them disjunctively in hypothesis (H1).

Sunday, February 19, 2017

An 80s perspective on this whole computer thing

In some introductory comments to a highly watchable 1984 episode of Dokument utifrån, about home computers, hacking and the future computerization of working life and society, the journalist Herbert Söderström makes an attempt at balancing what he perceives as the film's unbridled techno-optimism:
    Then there is a segment about home computers, which I personally am a little more skeptical about. The program talks about how we will all sit at home ordering cinema tickets and that sort of thing. I doubt it. I myself have had a computer at home for almost four years, and I find no private use for it whatsoever. The fact is that phone numbers are most easily found in phone directories, cookie recipes are most easily found in baking books, and food recipes, appropriately, in cookbooks. The book is an exceptionally good way of storing large amounts of information.

    It is also the case that these so-called home computers - whatever one actually means by them, but if one means the most common kind, namely the ones you run on your own TV at home - well, then I have a good suggestion for anyone thinking of getting one. Sit down, three evenings in a row, forty centimeters from your color TV at home, and watch Rapport, evening after evening, three evenings in a row. If you still want a home computer, then by all means buy one, but I don't think you will have much use for it, except in one respect: if you want to learn to program, that can be done perfectly well on a small home computer too. But doing anything else meaningful with it at home - well, possibly playing games - that I cannot see.

Faced with these words, I cannot help but react like my good friend Morgan Andréason, namely by humbly wondering which phenomena we are just as thoroughly wrong about today.

Thursday, February 16, 2017

Talks in Stockholm and Borås

Let me flag a couple of speaking engagements I will be making in the coming weeks - one in Stockholm, and one in Borås.
  • On Wednesday, February 22 at 1 p.m., I will give a seminar at the Department of Statistics, Stockholm University, entitled Fundamentals of Bayesian reasoning and the choice of prior, with the following abstract:
      Bayesian reasoning goes far beyond just Bayesian statistics, and is influential, e.g., in economics and in computer science. There is a beautiful body of work, going back to 20th century thinkers including Ramsey, de Finetti, von Neumann, Morgenstern and Savage, that is often taken as an indication that rationality implies Bayesian reasoning. In a recent Ph.D. course, we centered on the question of how convincing this conclusion is, and arrived tentatively at an answer along the lines of "well, maybe sort of...". In this talk I will recap some of these discussions, and also address the issue of whether a prior can be chosen in a way that avoids arbitrariness and subjectivity.
  • The week after, on Tuesday, February 28 at 6 p.m., I will appear at Borås Kulturhus (in an event arranged by Folkuniversitetet) with a talk entitled I väntan på de intelligenta robotarna (Waiting for the intelligent robots), with the following abstract:
      We are currently seeing great advances in artificial intelligence. It is reasonably likely (but by no means certain) that further advances in that area will, at some point within the next 100 years, lead to a situation in which we humans are no longer the most intelligent beings on our planet.

      How do we handle that? Will we be able to remain in control?

      One possible approach is to say "let us wait and see, and deal with the situation only once it has actually arisen". Drawing in part on his book Here Be Dragons: Science, Technology and the Future of Humanity (Oxford University Press, 2016), Olle Häggström explains why he considers this a bad idea, and why we should instead begin preparing already now.


Wednesday, February 8, 2017

Future universities

On Chalmers' web pages you can now find a fresh interview with me about where our universities are heading and what they may look like a decade or two from now.1 I would probably not count the interview among my very best appearances, for without much exaggeration it can be said to take the shape of a rather academically conservative lament over how much worse everything has become in the university world, and how much worse it risks becoming, without any particularly clear common thread beyond that.

What nevertheless pleases me most about the interview is perhaps the following. The initiative for it came from the leadership of the department where I am employed. This confirms that they see a positive value in my role as an uninhibited opinion machine - something I had admittedly sensed, but which cannot automatically be taken for granted. That a department leadership takes this view of its most outspoken employees is by no means a given, and my impression is that elsewhere in Swedish academia there are leaderships who regard such outspokenness among their staff less as an asset than as a troublesome nuisance. Fortunately, I do not work at such a department!


1) Brief summary in English: I have been interviewed about my thoughts on trends in the university system and where we may be heading. The interview is available in English translation.

Sunday, February 5, 2017

A brief exchange of views with the Archbishop

Yesterday I had the following brief exchange of views on Twitter with Antje Jackelén, Archbishop of the Church of Sweden.1
    AJ: God could remain an incomprehensible power, had we not been shown a face in which we can recognize ourselves: Jesus is the face of God. #söndagsord

    OH: That doesn't help; to me he is still pretty damn spooky.

    OH: Take, for instance, the problem of theodicy. As far as I can tell, that Jesus makes no difference either way towards untangling that hopeless knot.

    AJ: No, he is no Alexander who fixes the Gordian knot, but the Jesus perspective gives meaning and a hopeful direction to life.

Nothing new emerges here, of course, but I find it interesting every time a leading church representative so unabashedly admits to standing helpless before the problem of theodicy.2 I believe it takes a special talent, one that I myself entirely lack, to find peace and meaning in a belief system that incorporates as conspicuous a self-contradiction as the one obtained by, when faced with all the evil and all the suffering that exist in our world, nevertheless insisting on God's goodness and omnipotence.
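The self-contradiction I have in mind is the classical logical problem of evil, which can be made explicit in a formalization of my own (not part of the Twitter exchange): writing O for "God is omnipotent", G for "God is perfectly good" and E for "evil exists", the bridge premise

    $$ (O \wedge G) \rightarrow \neg E $$

- encoding the idea that a being both able and fully willing to prevent evil would do so - is jointly inconsistent with the three claims O, G and E that the believer simultaneously affirms.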


1) The same Antje Jackelén, that is, with whom I got into conversation in Visby last summer.

2) I have often returned to the problem of theodicy here on the blog, e.g., here, here and here.

* * *

Edit, February 6, 2017: After I had drawn Jackelén's attention to this blog post, she replied with the following tweet:
    AJ: Thank you! Theodicy hardly lends itself to 140 characters. If you are interested, there is more in the chapter on evil in my book "Gud är större".
We shall see. I hope that her book is at least better than the previous one I read by a Swedish archbishop.