Thursday 21 September 2017

More radio features on the existential risk program

The day after P1's Vetandets värld aired its 20-minute feature on our guest researcher program at Chalmers and Göteborgs universitet on existential risks to humanity, two of the country's local radio stations took the opportunity to talk to some of us researchers in the program:
  • Morgon i P4 Västernorrland chose to talk to Karim Jebari. The segment, which runs from 2.37.50 into the program until 2.58.55 and which, thanks to Karim's eminent overview of the field, turned out highly informative, was structured around a proposed top-five list of existential risks.
  • Förmiddag i P4 Göteborg med Stefan Livh instead had a chat with Anders Sandberg and me, starting 1.34.25 into the program and running until 1.58.00. The conversation did not get the same clear structure as P4 Västernorrland's with Karim, but it was cheerful and reasonably informative.

Wednesday 20 September 2017

P1's Vetandets värld on our guest researcher program on existential risk

Today's episode of P1's Vetandets värld is about the interdisciplinary guest researcher program at Chalmers and Göteborgs universitet, titled Existential Risk to Humanity, that I am involved in running. I appear a few times in the program myself, alongside guest researchers Anders Sandberg, Phil Torres, Karin Kuhlemann and Thore Husfeldt. Tune in to P1 at 12:10 today, or listen on the web!

Tuesday 19 September 2017

Michael Shermer fails in his attempt to argue that AI is not an existential threat

Why Artificial Intelligence is Not an Existential Threat is an article by leading science writer Michael Shermer1 in the recent issue 2/2017 of his journal Skeptic (mostly behind a paywall). How I wish he had a good case for the claim contained in the title! But alas, the arguments he provides are weak, bordering on pure silliness. Shermer is certainly not the first high-profile figure to react to the theory of AI (artificial intelligence) existential risk, as developed by Eliezer Yudkowsky, Nick Bostrom and others, with an intuitive feeling that it cannot possibly be right, and with the (slightly megalomaniacal) sense of being able to refute the theory single-handedly and with very moderate intellectual effort. Previous such attempts, by Steven Pinker and by John Searle, were exposed as mistaken in my book Here Be Dragons, and the purpose of the present blog post is to do the analogous thing to Shermer's arguments.

The first half of Shermer's article is a not-very-deep-but-reasonably-competent summary of some of the main ideas of why an AI breakthrough might be an existential risk to humanity. He cites the leading thinkers of the field: Eliezer Yudkowsky, Nick Bostrom and Stuart Russell, along with famous endorsements from Elon Musk, Stephen Hawking, Bill Gates and Sam Harris.

The second half, where Shermer sets out to refute the idea of AI as an existential threat to humanity, is where things go off the rails pretty much immediately. Let me point out three bad mistakes in his reasoning. The main one is (1), while (2) and (3) are included mainly as additional illustrations of the sloppiness of Shermer's thinking.
    (1) Shermer states that
      most AI doomsday prophecies are grounded in the false analogy between human nature and computer nature,
    whose falsehood lies in the fact that humans have emotions, while computers do not. It is highly doubtful whether there is a useful sense of the term emotion for which a claim like that holds generally, and in any case Shermer mangles the reasoning behind Paperclip Armageddon - an example that he discusses earlier in his article. If the superintelligent AI programmed to maximize the production of paperclips decides to wipe out humanity, it does this because it has calculated that wiping out humanity is an efficient step towards paperclip maximization. Whether to ascribe to the AI doing so an emotion like aggression seems like an unimportant (for the present purpose) matter of definition. In any case, there is nothing fundamentally impossible or mysterious in an AI taking such a step. The error in Shermer's claim - that it takes aggression to wipe out humanity, and that an AI cannot experience aggression - is easiest to see if we apply his argument to a simpler device such as a heat-seeking missile. Typically for such a missile, if it finds something warm (such as an enemy vehicle) up ahead slightly to the left, then it will steer slightly to the left. But by Shermer's account, such steering cannot happen, because it requires aggression on the part of the heat-seeking missile, and a heat-seeking missile obviously cannot experience aggression, so we need not worry about heat-seeking missiles (any more than we need to worry about a paperclip maximizer).2 A minimal code sketch of this kind of emotionless goal-directed steering follows after point (3) below.

    (2) Citing a famous passage by Pinker, Shermer writes:

      As Steven Pinker wrote in his answer to the 2015 Edge Question on what to think about machines that think, "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world." It is equally possible, Pinker suggests, that "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." So the fear that computers will become emotionally evil are unfounded [...].
    Even if we accepted Pinker's analysis,3 Shermer's conclusion would be utterly unreasonable, resting as it does on the following faulty logic: if a dangerous scenario A is discussed, and we can exhibit a scenario B that is "equally possible", then we have shown that A will not happen.

    (3) In his eagerness to establish that a dangerous AI breakthrough is unlikely and therefore not worth taking seriously, Shermer holds forth that work on AI safety is underway and will save us if the need should arise, citing the recent paper by Orseau and Armstrong as an example, but he overlooks that such work comes about precisely because AI risk is taken seriously.
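To make the heat-seeking missile point in (1) concrete, here is a minimal sketch in Python - my own illustration with invented names, nothing of the sort appears in Shermer's article - of the kind of steering rule such a device might implement. It simply turns toward the strongest heat reading; nothing resembling an emotion figures anywhere in the computation, yet the behavior is stubbornly goal-directed.

    # Hypothetical sketch of emotionless goal-directed steering.
    # All names are invented for this illustration.

    def steer(heat_left: float, heat_ahead: float, heat_right: float) -> str:
        """Return the direction with the strongest heat signature.

        The rule is purely mechanical: compare three sensor readings
        and turn toward the largest one.
        """
        readings = {"left": heat_left, "straight": heat_ahead, "right": heat_right}
        return max(readings, key=readings.get)

    # A warm target slightly to the left yields a turn slightly to the left.
    print(steer(heat_left=0.9, heat_ahead=0.6, heat_right=0.2))  # prints "left"

The same structure scales up, at least schematically: replace the three sensor readings with a world model, and the maximization over three directions with a search over plans, and one gets the paperclip maximizer's "decision" to remove obstacles such as humanity - still without a trace of emotion.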

Footnotes

1) Michael Shermer founded The Skeptics Society and serves as editor-in-chief of its journal Skeptic. He has enjoyed a strong standing in American skeptic and new atheist circles, but his reputation may well have passed its zenith, perhaps less due to his current streak of writings showing poor judgement (besides the article discussed in the present blog post, there is, e.g., his wildly overblown endorsement of the so-called conceptual penis hoax) than to some highly disturbing stuff about him that surfaced a few years ago.

2) See pp 125-126 of Here Be Dragons for my attempt to explain almost the same point using an example that interpolates between the complexity of a heat-seeking missile and that of a paperclip maximizer, namely a chess program.

3) We shouldn't. See p 117 of Here Be Dragons for a demonstration of the error in Pinker's reasoning - a demonstration that I (provoked by further such hogwash from Pinker) repeated in a 2016 blog post.

Friday 15 September 2017

Thente and the two cultures

For at least five years now, I have been irritated by DN writer Jonas Thente's tendency to look down on knowledge in the fields of mathematics and the natural sciences, so foreign to him; he personifies the state of knowledge and the attitude that C.P. Snow deplored as early as 1959 in his classic The Two Cultures. The following passage in a DN article by Thente a couple of weeks ago did nothing to soften my mood.

It is of course true that there are many people who take no interest in fiction, but it would surprise me greatly if this trait turned out to correlate positively with scientific education or with a career in the natural sciences. If Thente wants to claim something of the sort, he ought to back it up with statistics. Until he does, I will continue to regard this kind of outburst as prejudiced make-believe and physics envy.

Ulf Danielsson replies to Thente in today's DN, emphasizing the desirability of partaking of both of the cultures Snow refers to, and of letting them cross-fertilize in one's head. In this he is on Snow's line, and I can only agree.

Sunday 3 September 2017

On existential risk in today's DN

In DN this Sunday morning there is a big, prominently featured interview with Anders Sandberg and me about the various existential threats to humanity's survival, and about the two-month guest researcher program on the subject that we have just launched at Chalmers and Göteborgs universitet. The article is also available online, though behind a paywall.

Do come to the program's official opening tomorrow, Monday, at 17.30!

(Note the time, 17.30 - not 18.30 as I previously wrote by mistake.)