
Wednesday, October 31, 2018

The 'Oumuamua mystery

Are we alone, or are there other civilizations out there among the stars? That question is among the biggest we can ask ourselves, and as if it were not exciting enough in its own right, it also has a practical side, since the answer may have major consequences for humanity's future prospects. The latter point is made clear by the so-called Great Filter formalism, which in short starts from the observation that, out of perhaps 10^22 (give or take an order of magnitude or two) potentially life-bearing planets in the visible universe, not a single one appears to have developed a technological supercivilization of such dimensions that it is visible to astronomers anywhere in said universe, and concludes that somewhere along the road from potentially life-bearing planet to supercivilization there is a bottleneck (or several) that is extremely hard to pass. Have we (humanity) already passed this bottleneck, or does it still lie ahead of us? I have written about this here on the blog, in Chapter 9 of my latest book Here Be Dragons, and in a paper in the International Journal of Astrobiology together with my Chalmers colleague Vilhelm Verendel a couple of years ago.

A problem for anyone hoping to make progress in this area is the lack of direct data, beyond the single data point constituted by the great silence out there, and the ever-growing number of discovered exoplanets backing up estimates like the one above of the number of potentially life-bearing planets. (Indirect data on how hospitable our universe is in general to harboring life are more plentiful, and should not be sniffed at. The field of astrobiology deals with such matters.)

So when something suddenly turns up that constitutes an entirely new and possibly relevant data point, it deserves our attention. Such as the boulder (or whatever it is) 'Oumuamua.

'Oumuamua was discovered on October 19, 2017, and it could quickly be reconstructed that 40 days earlier it had passed its perihelion near the orbit of Mercury (and a few older photographs in which 'Oumuamua had until then figured unnoticed could be dug up in support of this). The expected thing would have been to record it as yet another asteroid or comet. Such objects move (usually with good precision) in elliptical orbits around the Sun. How elongated the ellipse is is described mathematically by its eccentricity e between 0 and 1, where e=0 corresponds to a perfect circle, and the orbit becomes ever more elongated as e approaches 1. The problem with 'Oumuamua is that its e-value was measured to be about 1.2, which means that the orbit is not elliptical but hyperbolic, which in turn indicates that 'Oumuamua is merely a guest (the first and so far only one we have observed) in our solar system, i.e., an object that has come tumbling in from interstellar space.
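For readers who want the formula behind this, the relevant piece of standard two-body mechanics (nothing specific to 'Oumuamua) is the polar equation of a conic section with the Sun at the focus:

    r(\theta) = \frac{\ell}{1 + e\cos\theta},
    \qquad
    \begin{cases}
      e < 1: & \text{closed ellipse (bound orbit)}\\
      e = 1: & \text{parabola}\\
      e > 1: & \text{open hyperbola (unbound)}
    \end{cases}

where \ell is the semi-latus rectum. A measured e of about 1.2 thus means the object is not gravitationally bound to the Sun: it arrived from, and will return to, interstellar space.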

All of this is exciting enough, but there is more to say about 'Oumuamua with the potential to tickle our imagination:
  • It is unusually elongated. Its dimensions come with considerable uncertainty, but the best estimate points to a length of 230 meters and a width and thickness of 35 meters each. Even if the unusual shape may lead the imaginative to think of Clarkean monoliths, that is not in itself enough for us to seriously start wondering whether 'Oumuamua is an artificial object from an extraterrestrial civilization, but there is more:
  • Robin Hanson (the man behind The Great Filter) pointed out that same autumn that, among interstellar objects of 'Oumuamua's size that make it far enough into our solar system for us to be expected to observe them, 'Oumuamua passes closer to the Sun than 99% can be expected to do, provided nobody out there deliberately aimed it close to the Sun. The source for this figure is not rock solid, but if we nevertheless take it to be correct, we have a p-value of 0.01. I have argued in other contexts that p-values of that order of magnitude are not quite as impressive as many seem to think, and that of course applies all the more when, as here, the null hypothesis was formulated after seeing the data. When the alternative hypothesis is of as spectacular a nature as in this case - that someone deliberately sent 'Oumuamua to our solar system - the case for skepticism is stronger still, but I cannot see anything seriously wrong with letting the figure inspire us to think further about that hypothesis. And there is still more:
  • A paper in Nature earlier this year showed that 'Oumuamua's orbit has exhibited deviations (with overwhelming statistical significance) from what gravitational theory predicts. A natural explanation for such a deviation would be if 'Oumuamua, like a comet, exhibited outgassing from its surface caused by the incoming solar radiation. A smoking hot (no pun intended) preprint by astrophysicists Shmuel Bialy and Abraham Loeb, and a commentary on it by Paul Gilster, argue, however, that the comet theory does not hold up. Instead, radiation pressure from sunlight is proposed as an explanation, which however requires 'Oumuamua to have so little mass that it can only be a leaf-thin shell (at most about 0.3 mm thick); a back-of-envelope version of this kind of estimate is sketched right after this list. What could create such an object we do not know, but Bialy and Loeb suggest that "one possibility is a lightsail floating in interstellar space as debris from an advanced technological equipment".
And Gilster offers further back-of-envelope calculations that reinforce the impression that there is something fishy about 'Oumuamua.
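To make the thin-shell claim concrete, here is a rough sketch of the kind of estimate involved (my own toy numbers: the excess acceleration is an assumed order of magnitude rather than a quoted measurement, and the bulk density is a rock-like guess):

    import math

    L_sun = 3.83e26      # solar luminosity, W
    c = 3.0e8            # speed of light, m/s
    AU = 1.496e11        # astronomical unit, m

    flux_1AU = L_sun / (4 * math.pi * AU**2)   # solar flux at 1 AU, ~1.4e3 W/m^2
    P_rad = flux_1AU / c                       # radiation pressure on an absorbing sheet, ~4.5e-6 N/m^2

    a_excess = 5e-6      # assumed non-gravitational acceleration at 1 AU, m/s^2
    sigma = P_rad / a_excess                   # mass per unit area needed for that acceleration, kg/m^2

    rho = 3000.0         # assumed bulk density, kg/m^3
    thickness = sigma / rho                    # implied thickness of the sheet, m
    print(f"surface density ~ {sigma:.2f} kg/m^2, thickness ~ {1000 * thickness:.2f} mm")

With these assumed values the required thickness comes out at roughly 0.3 mm, which is at least consistent with the figure quoted above.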

Given its modest size, 'Oumuamua is now beyond the reach of our telescopes, but I consider the riddle of its nature and origin important enough that we should not let such a detail defeat us. Gilster does claim that "it’s too late to get a mission off to chase it with chemical rockets", but in that case there ought to be other technical solutions. At what cost this could be done I do not know, but if it could fit within a budget of, say, 100 billion Swedish kronor (roughly two Large Hadron Colliders), then I definitely think we should try. I still consider it most likely that 'Oumuamua has a natural origin, but there are by now sufficient grounds for speculating otherwise that a closer investigation should be given very high priority.

Edit: I had hardly hit the publish button for this blog post before I was alerted to a preprint by Andreas Hein et al with details on how the space mission I call for in the last paragraph could be carried out.

Sunday, January 28, 2018

Swing Time is worth reading

Swing Time by Zadie Smith is one of the best novels I have read in quite a while. It feels natural to compare it with Kjell Westö's Den svavelgula himlen, which I read this past autumn, since both revolve around childhood friends who gradually drift apart but finally meet again well into middle age, and both are permeated by questions of class.1 In Swing Time, a racial perspective is added as a central element, in contrast to Westö's book, whose plot is concentrated in and around an ethnically fairly homogeneous Helsinki. Smith's book is more cosmopolitan, taking place in London, New York and a West African country that is never named but appears to be Gambia, where the protagonist takes part in a not very successful aid project.2 It speaks well of Swing Time that I find it clearly better than Den svavelgula himlen, even though Westö is an old favorite author of mine and even though his latest book is by no means a failure.

One thing I found striking about Swing Time is that the four characters portrayed in any detail - the narrator, her mother, her childhood friend Tracy, and the mega pop star Aimee (who, while not identical to Madonna in every respect, is clearly modeled on her) - are all women (many other characters appear in the book but remain minor figures and props). But then it struck me that I have probably read many books similarly dominated by a handful of men without reacting to that fact, so my own reaction became an instructive reminder of the biases in how we view men and women.

Of the four character portraits, the narrator's is the most extensive, and yet it is she who in the end remains the most enigmatic. A worry recurs now and then about whether her outwardly eventful life nevertheless suffers from emptiness. The underlying question of what makes for a meaningful life is never given a final answer, but is something the reader carries along after putting the book down.

Footnotes

1) Smith's insight and exquisite feel for class and other social issues also come across, for instance, in an essay she wrote in the wake of Brexit, which I have previously highlighted here on the blog.

2) The plot in Gambia makes me think of another good book I have read this winter: Kevin Simler's and Robin Hanson's very interesting The Elephant in the Brain, which I expect to return to here on the blog and whose overarching thesis is that much of what we do is driven by motives other (and typically less noble) than those we profess - motives that we conceal so carefully that we even fool ourselves. In chapter after chapter, the authors illustrate this thesis in various domains, and in the chapter on charity they claim that what drives a benefactor is, as a rule, more about impressing others and displaying virtue than about a genuine desire to do good. This claim is of course unappetizing, but at the same time hard to dismiss, and the failed aid project in Swing Time fits Simler's and Hanson's cynical picture well.

Wednesday, January 17, 2018

Two excellent contemporary writers

Two of my favorite contemporary writers, operating however in very different genres, are Ted Chiang and Scott Alexander:
  • Ted Chiang is a science fiction writer specializing in short stories. When I read his collection Stories of Your Life and Others I said to myself "wow, this guy is almost better than Greg Egan" (but let me withhold final judgement on that comparison). The book opens with Tower of Babylon, which explores a beautiful alternative cosmology more in line with what people believed in ancient times, and continues with Understand, which, albeit lacking somewhat in realism, gives what is probably the best account I've read of what it might be like to attain superintelligence - an impossible topic, yet important in view of possible future transhumanistic developments. Among the other stories in the book is the title one, Story of Your Life, which was later adapted into the Hollywood movie Arrival; I recommend both reading the story and seeing the movie (the plots diverge somewhat in interesting respects) and then listening to the podcast Very Bad Wizards discussing them.

  • Scott Alexander blogs about science, philosophy, future technologies and related topics. He often penetrates deeply into his chosen topic, and his posts range from longish to very long. Several of his blog posts have influenced me significantly, such as...

Excellence in writing may, however, be more or less genre-specific; I suspect that most good authors of university-level mathematics textbooks suck as poets, and vice versa. Another example is that when Ted Chiang last month tried his luck at writing an essay on AI futurology under the title Silicon Valley is Turning into its Own Worst Fear, his brilliance did not shine through. The gist of Chiang's argument is that superintelligent AI and capitalism are similar in that they both relentlessly optimize for something that is not entirely well-aligned with human well-being, and that since superintelligent AI does not at present exist while capitalism does, only capitalism poses a real danger. This last conclusion is a non sequitur.

And now, finally... get ready for my excuse for discussing Chiang and Alexander in the same blog post! Scott Alexander's blog post Maybe the Real Superintelligent AI is Extremely Smart Computers from earlier this week is a masterful exposition of the errors in Chiang's arguments. When I first saw Chiang's essay, I saw mostly the same errors that Alexander saw, but would never have been able to explain them quite as pedagogically as he does. Do read it (Alexander's blog post, that is), as I have nothing to add to it.

Thursday, October 5, 2017

Videos from the existential risk workshop

The public workshop we held on September 7-8 as part of the ongoing GoCAS guest researcher program on existential risk featured many interesting talks. The talks were filmed, and we have now posted most of those videos on our own YouTube channel. They can of course be watched in any order, although to maximize the illusion of being present at the event, one might follow the list below, in which they appear in the same order as in the workshop. Enjoy!

Wednesday, May 17, 2017

Workshop on existential risk to humanity

Come to Gothenburg, Sweden, for the Workshop on existential risk to humanity that I am organizing on September 7-8 this year! The topic is a matter of life and death for human civilization, but I still expect the workshop to be stimulating and enjoyable. The list of world-class researchers who are confirmed as speakers at the workshop includes several who are already known to readers of this blog: Stuart Armstrong, Seth Baum, Robin Hanson, James Miller, Anders Sandberg and Roman Yampolskiy. Plus the following new acquaintances: Milan Cirkovic, David Denkenberger, Karin Kuhlemann, Catherine Rhodes, Susan Schneider and Phil Torres. All of these will in fact stay in town for various durations before and/or after the workshop, at the guest researcher program on existential risk that runs throughout September and October.

The workshop is open to all interested, and participation is free of charge. Pre-registration, however, is required.

Wednesday, June 15, 2016

The New York Times claim about extraterrestrials was pulled out of thin air

"Yes, There Have Been Aliens." So reads the spectacular headline of an article about the possible existence of extraterrestrials by astrophysicist Adam Frank in the New York Times this Sunday. If the headline's suggestion - that science has established that extraterrestrials exist or at least have existed - is warranted, then we are faced with one of the greatest scientific breakthroughs ever, with profound consequences for our collective human self-image. But no, no such breakthrough has been made (or is on the radar), and the headline is deeply misleading.

Sometimes, overly enthusiastic newspaper editors play a trick on their contributors by assigning a headline that goes far beyond the claims made in the article itself. Perhaps that is the case here? Perhaps Professor Frank is the innocent victim of just such an editorial prank? Well, no. Frank has only himself to blame, as the headline's claim is made loudly and clearly in the text of his article, where we can read that
    given what we now know about the number and orbital positions of the galaxy’s planets, the degree of pessimism required to doubt the existence, at some point in time, of an advanced extraterrestrial civilization borders on the irrational.
This is a very strong claim, but it is plain false. Frank bases his claim on a recent joint paper by him and astronomer Woodruff Sullivan in the journal Astrobiology, where the claim about the very likely existence of extraterrestrials at some time in the past is derived from an assumption about how a certain probabilistic quantity p relates to a certain threshold c. It turns out, however, that this assumption was simply pulled out of thin air. Consequently, the same verdict follows for the claims in the New York Times article. Let me explain.

The p I'm talking about here is the probability that a randomly chosen potentially life-bearing planet eventually gives rise to intelligent life with a technological civilization on the level of present-day humanity. Here is an essentially correct statement in Frank's New York Times article:
    But what our calculation revealed is that even if this probability is assumed to be extremely low, the odds that we are not the first technological civilization are actually high. Specifically, unless the probability for evolving a civilization on a habitable-zone planet is less than one in 10 billion trillion, then we are not the first. [italics in original]
One in 10 billion trillion is 10^-22, which is Frank's choice of threshold c.1 Combining the two quotes from the article, we see that Frank means to say that doubting that p ≥ 10^-22 "borders on the irrational". Well, if Frank is fond of the idea of rational argument, then he should of course have backed up this claim with some strong evidence that p ≥ 10^-22, but he offers no such thing, neither in the New York Times article nor in the Astrobiology paper.

The probability p is one of the three central parameters in the so-called Great Filter formalism, which is a superb framework for addressing the Fermi paradox and the notorious "Are we alone?" question, and which Adam Frank and Woodruff Sullivan would be well-advised to study.2 The other two parameters are N and q, where N is the number of potentially life-bearing planets in the universe (a quantity discussed by Frank and Sullivan in the light of recent advances in the observation of exoplanets), and q is the probability that a randomly chosen civilization at the level of present-day humanity goes on to develop an intergalactic technological supercivilization visible to astronomers all over the universe. Of these three parameters, N is the only one whose order of magnitude we currently have a reasonable grasp of: it is a very large number, somewhere in the vicinity of 10^22 (give or take an order of magnitude or so). In contrast, we are very much in the dark about the whereabouts of p and of q (other than the fact that they are probabilities, so they must be between 0 and 1). But there is one thing we pretty much know about them combined, namely that the product pq must be tiny, because otherwise Npq would have been a large number, and we would most likely have been able to see signs of an intergalactic technological civilization out there, which we haven't. And if the product pq is microscopic, then at least one of the factors p and q must be microscopic. But neither of them is by any means obviously microscopic. Whether it is p that is microscopic, or q, or both, is a wide open question as science currently stands. As to p, it might be 0.9, it might be 0.1, it might be 10^-10, it might be 10^-30, or it might be 10^-100 (or something else). None of these values is (notwithstanding Frank's claim that the last two would "border on the irrational") implausible.
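As a back-of-envelope illustration of this last point (my own numbers, chosen only for concreteness):

    N \approx 10^{22}, \qquad Npq \lesssim 1 \;\Longrightarrow\; pq \lesssim 10^{-22},

a constraint that is satisfied equally well by, say, (p, q) = (0.1, 10^-21) and (p, q) = (10^-21, 0.1); the Great Silence by itself cannot tell us which of the two factors is the microscopic one.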

There is no shortage of candidate bottlenecks in the evolution of life that might make p microscopic. Biogenesis is an obvious example, and Hanson (1998) lists a few more: the emergence of prokaryotic single-cell life, of eukaryotic single-cell life, of sexual reproduction, of multi-cell life, and of tool-using animals with big brains.3 Any claim that p cannot plausibly be microscopic needs to come with a demonstration that none of these candidate bottlenecks is sufficiently severe and uncircumventable to account for p being microscopic. Preferably, such a demonstration should also be weighed against the available evidence for q not being microscopic (see in particular Armstrong and Sandberg, 2013). Frank and Sullivan offer none of these things.

Footnotes

1) This seems to be a correction compared to the original Frank-Sullivan Astrobiology paper, where c is of the order 10^-24. Their calculation of c involves arbitrarily and confusedly throwing in an extra factor 0.01 in what was probably an attempt to make the estimate scientifically conservative, but whose effect is in fact the opposite.

2) See, e.g., Chapter 9 in my book Here Be Dragons: Science, Technology and the Future of Humanity, or better yet, see Robin Hanson's seminal paper on this topic. Or see my recent paper with Vilhelm Verendel in the International Journal of Astrobiology where we approach the Great Filter from a Bayesian point of view.

3) A tempting reaction, when first confronted with the task of estimating p, is to say something like "Hey, we exist, we evolved, here on Earth, surely that's an indication that p is probably not so small?". I suspect that such reasoning has influenced much discussion about extraterrestrial life and the Fermi paradox over the years, even in cases where it is never spelled out explicitly. However, it is probably not a valid argument, because a low-p and a high-p universe share the feature that everyone in it will find themselves to exist and to have evolved, whence that observation cannot be used to distinguish a low-p universe from a high-p one.
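One way to spell out this observation selection effect in Bayesian terms (my own gloss): let E be the observation "we exist, having evolved here on Earth". For any observer who is around to do the reasoning, E is guaranteed, so

    \frac{P(\text{low-}p \mid E)}{P(\text{high-}p \mid E)}
    \;=\;
    \frac{P(E \mid \text{low-}p)}{P(E \mid \text{high-}p)} \cdot \frac{P(\text{low-}p)}{P(\text{high-}p)}
    \;=\;
    1 \cdot \frac{P(\text{low-}p)}{P(\text{high-}p)},

and the posterior odds equal the prior odds: the mere fact of our existence moves nothing.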

Friday, June 10, 2016

Worsening traffic congestion on my bedside table

I have (as I have previously explained) a habit of reading books in parallel in an unsystematic and almost chaotic manner. On my bedside table there are currently about half a dozen books that count as ongoing reading, and roughly as many more that are half-read and that I have more or less given up hope of finishing but have not gotten around to clearing away. To this literary traffic jam I must now add two more books, of which I have just received dedicated copies from their respective authors, whose names should both be familiar to regular readers of this blog. They are the following titles, both of which immediately receive high priority on my current reading list, and I expect to return with a fuller discussion of at least one of them - or perhaps both - once I have read them.
  • The Age of Em: Work, Love, and Life When Robots Rule the Earth by Robin Hanson, associate professor of economics at George Mason University. This unique and much-anticipated book is the author's summary of his many years of work analyzing the far-reaching economic and social consequences of a technological breakthrough in which we (before we manage, e.g., to destroy ourselves or create artificial superintelligence) learn to emulate the human brain and all its functions on a computer. The book naturally has points of contact with my own Here Be Dragons; it is narrower in that it treats a single, relatively specific future scenario where I instead try to tackle a reasonably representative selection, but in return he does so with a depth and richness of detail that I cannot match. I know from experience that Hanson possesses an outstanding analytical sharpness, and it would greatly surprise me if his new book gave reason to revise that view. (Bryan Caplan is not impressed, but Hanson offers a rejoinder.)

  • Soccermatics: Mathematical Adventures in the Beautiful Game by David Sumpter, professor of applied mathematics at Uppsala University. Just in time for the Euro finals, he enlists football in this popular-science account, using a long series of examples to demonstrate the power of mathematical modeling for understanding various phenomena in the world we live in - something he has previously practiced with great success on, for example,
      fish swimming among coral in the Great Barrier Reef, democratic change in the Middle East, the traffic of Cuban leaf-cutter ants, swarms of locusts travelling across the Sahara, disease spreading in Ugandan villages, political decision-making by European politicians, dancing honeybees from Sydney, American stock-market investors, and the tubular structures built by Japanese slime mould
    - to quote his own enumeration on page 11 of the book's introduction; and now it is football's turn.

Tuesday, January 26, 2016

Reactions to Here Be Dragons

The following are some of the early reactions to my newly published book Here Be Dragons: Science, Technology and the Future of Humanity. None of the reviewers in the category Newspapers and magazines are people previously known to me, whereas the writers in the other two categories, Blogs and Other forums, all fall somewhere on the scale between good friends and passing acquaintances, which of course may have influenced their assessments.

Newspapers and magazines
  • New Scientist's Jonathon Keats rounds off (as previously reported) his positive review by calling the book "an essential provocation".

  • The Financial Times' Stephen Cave emphasizes that I make no categorical claims about how the future will unfold:
      Häggström is not trying to tell us that things will definitely work out one way or another. Rather, he is reminding us that the future is an uncharted land in which there might be monsters. We need these gloomy forecasts, just as we need glimpses of a solar-powered utopia. There are some predictions that we make in the hope that they will prove wrong, and others that we very much hope will come true. The better we envision them — whether through sober statistics or the all-action sci-fi of my comic-reading boyhood — the better chance we have of steering the ship of fate along the happier course.

  • L'Express de Toronto reports (in French) that my book is not suitable for bedtime reading.

  • Engineering and Technology Magazine's reviewer Dominic Lenton highlights
      the crux of Häggström’s case that we’re forgetting how scientific progress has the potential both to cause humanity great harm and to bring it great benefit. “To completely ignore this aspect of science seems like negligence bordering on insanity,” he says.

      Some will dismiss his attitude as alarmism, but simply saying we need to know more before we can properly address these issues just puts off the problem for future generations to deal with.

      As Häggström concludes, not making a decision is in itself a decision.

Blogs
  • Björn Bengtsson digs deeper (as previously reported), on his blog Jag är här, into what I in the book dub the Bullerby scenario.

  • Robin Hanson expresses, on his blog Overcoming Bias, a (partly justified) dissatisfaction with how I present his economic analyses of a future world in which persons can be copied as cheaply and easily as we copy music files today.

  • Devdatt Dubhashi writes, on the blog The Future of Intelligence, a detailed review in which he does raise some critical points about specifics, but with a positive overall tone, and the following almost makes me blush:
      Many of us have known Olle Häggström as a world famous probabilist working in the arcane area of percolation theory, but the author of this book [...] seems to be somebody altogether different. Indeed, Olle has run a hyperactive blog for a number of years, engaging in a number of polemics (sometimes bad–tempered) on issues ranging from religion and rationality in the early years, to climate change and most recently, artificial intelligence (AI) and its consequences. This book could be seen as the culmination of this effort.

      Personally, I welcome this metamorphosis.

Other forums
  • In a customer review on Amazon, David Aldous gives the book five stars out of five, and in the following passage spices things up with the flattering (but to me previously unfamiliar) English word "erudite":
      The book under review [...] has the timely theme of risks to human civilization from technology, although it does not pretend to be a comprehensive survey. Instead it contains a detailed discussion of a few such risks, together with wide-ranging comments on the politics, ethics and philosophy of science underlying what actions human society might take. Though not presuming any specialized knowledge of science or technology, the writing style is not breezy “popular science”, but a kind of erudite blog, allowing a broader audience a glimpse of debates previously discussed in more specialist forums, outlining the views of other individuals and adding his own, in careful academic style with footnoted references and clarifications.

Friday, January 15, 2016

The Bullerby scenario

My book Here Be Dragons: Science, Technology and the Future of Humanity is about to be officially released by Oxford University Press next week. The book treats a broad variety of topics, but here I want to highlight a key passage on p 212, where I discuss something I decided to call the Bullerby scenario. The context is the so-called Great Filter formalism - a kind of cosmological perspective for trying to understand the long-term prospects of the survival of humanity - introduced in a seminal 1998 paper by Robin Hanson. A key quantity in this formalism is the probability q that a typical civilization on the level of present-day humanity goes on to a level where its presence becomes visible throughout the observable universe. After briefly mentioning some highly speculative possibilities for how we might have a glorious future without such a visible impact (we might, e.g., emigrate into black holes or hidden dimensions), I arrive at the Bullerby scenario:
    Another, less dramatic and in a sense diametrically opposite, scenario in which humanity might prosper despite a small value of q is what we may call the Bullerby Scenario (after Astrid Lindgren's children's stories about the idyllic life in rural Sweden in the late 1940s). Here, humanity settles down into a peaceful and quiet steady state based on green energy, sustainable agriculture, and so on, and refrains from colonization of space and other radical technologies that might lead in that direction. I mention this possibility because it seems to be an implicit and unreflected assumption underlying much of current sustainability discourse, not because I consider it particularly plausible. In fact, given the Darwinian-style arguments discussed above, plus the paradigm of neverending growth that has come to reign both in the economy and in knowledge production (the scientific community), it seems very hard to imagine how such a steady state might come about, except possibly through the strict rule of a totalitarian global government (which I tend to consider incompatible with human flourishing).
I am expecting and hoping for this passage to generate controversy. My friend Björn Bengtsson has already commented upon it at some length, in his blog post The Bullerby Beef - highly recommended!

Tuesday, December 1, 2015

What I think about What to Think About Machines That Think

Our understanding of the future potential and possible risks of artificial intelligence (AI) is, to put it gently, woefully incomplete. Opinions are drastically divided, even among experts and leading thinkers, and it may be a good idea to hear from several of them before forming an opinion of one's own, to which it is furthermore a good idea to attach a good deal of epistemic humility. The 2015 anthology What to Think About Machines That Think, edited by John Brockman, offers 186 short pieces by a tremendously broad range of AI experts and other high-profile thinkers. The format is the same as in earlier installments of a series of annual collections by the same editor, with titles like What We Believe but Cannot Prove (2005), This Will Change Everything (2009) and This Idea Must Die (2014). I do like these books, but find them a little bit difficult to read, in about the same way that I often have difficulties reading poetry collections: I tend to rush towards the next poem before I have taken the time necessary to digest the previous one, and as a result I digest nothing. Reading Brockman's collections, I need to take conscious care not to do the same thing. To readers able to handle this aspect, they have a lot to offer.

I have still only read a minority of the short pieces in What to Think About Machines That Think, and mostly by writers whose standpoints I am already familiar with. Many of them do an excellent job expressing important points within the extremely narrow page limit given, such as Eliezer Yudkowsky, whom I consider to be one of the most important thinkers in AI futurology today. I'll take the liberty of quoting at some length from his contribution to the book:
    The prolific bank robber Willie Sutton, when asked why he robbed banks, reportedly replied, "Because that's where the money is." When it comes to AI, I would say that the most important issues are about extremely powerful smarter-than-human Artificial Intelligence (aka superintelligence) because that's where the utilons are - the value at stake. More powerful minds have bigger real-world impacts.

    [...]

    Within the issues of superintelligence, the most important (again following Sutton's Law) is, I would say, what Nick Bostrom termed the "value loading problem": how to construct superintelligences that want outcomes that are high-value, normative, beneficial for intelligent life over the long run - that are, in short, "good" - since if there is a cognitively powerful agent around, what it wants is probably what will happen.

    Here are some brief arguments for why building AIs that prefer "good" outcomes is (a) important and (b) likely to be technically difficult.

    First, why is it important that we try to create a superintelligence with particular goals? Can't it figure out its own goals?

    As far back as 1739, David Hume observed a gap between "is" questions and "ought" questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is and then begins using words like should, ought, or ought not. From a modern perspective, we'd say that an agent's utility function (goals, preferences, ends) contains extra information not given in the agent's probability distribution (beliefs, world-model, map of reality).

    If in 100 million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with one another, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume's insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the > (the preference ordering) first entered the system and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume's regress and exhibit a slightly different mind that computes < instead of > on that score too.

    I don't particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of, for example, paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome. If we want an AI to do its own moral reasoning, Hume's Law says we need to define the framework for that reasoning. This takes an extra fact beyond the AI having an accurate model of reality and being an excellent planner.

    But if Hume's Law makes it possible in principle to have cognitively powerful agents with any goals, why is value loading likely to be difficult? Don't we just get whatever we programmed?

    The answer is that we get what we programmed, but not necessarily what we wanted. The worrisome scenario isn't AIs spontaneously developing emotional resentment for humans. It's that we create an inductive value learning algorithm and show the AI examples of happy smiling humans labeled as high-value events - and in the early days the AI goes around making existing humans smile and it looks like everything is OK and the methodology is being experimentally validated; and then, when the AI is smart enough, it invents molecular nanotechnology and tiles the universe with tiny molecular smiley-faces. Hume's Law, unfortunately, implies that raw cognitive power does not intrinsically prevent this outcome, even though it's not the result we wanted.

    [...]

    For now, the value loading problem is unsolved. There are no proposed full solutions, even in principle. And if that goes on being true over the next decades, I can't promise you that the development of sufficiently advanced AI will be at all a good thing.

Max Tegmark has views that are fairly close to Yudkowsky's,1 but warns, in his contribution, that "Unfortunately, the necessary calls for a sober research agenda that's sorely needed is being nearly drowned out by a cacophony of ill-informed views", among which he categorizes the following eight as "the loudest":
    1. Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists appear incapable of writing an AI-article without a picture of a gun-toting robot.

    2. "It's impossible": As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs.

    3. "It won't happen in our lifetime": We don't know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we'd be foolish to dismiss the possibility as mere science fiction.

    4. "Machines can't control humans": Humans control tigers not because we are stronger, but because we are smarter, so if we cede our position as smartest on our planet, we might also cede control.

    5. "Machines don't have goals": Many AI systems are programmed to have goals and to attain them as effectively as possible.

    6. "AI isn't intrinsically malevolent": Correct - but its goals may one day clash with yours. Humans don't generally hate ants - but if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants.

    7. "Humans deserve to be replaced": Ask any parent how they would feel about you replacing their child by a machine, and whether they'd like a say in the decision.

    8. "AI worriers don't understand how computers work": This claim was mentioned at the above-mentioned conference, and the assembled AI researchers laughed hard.

These passages from Yudkowsky and Tegmark provide just the tip of the iceberg of interesting insights in What to Think About Machines That Think. But, not surprisingly in a volume with so many contributions, there are also disappointments. I'm a huge fan of philosopher Daniel Dennett, and the title of his contribution (The Singularity - an urban legend?) raises expectations further, since one might hope that he could help rectify the curious situation where many writers consider the drastic AI development scenario referred to as an intelligence explosion or the Singularity to be extremely unlikely or even impossible, but hardly anyone (with Robin Hanson being the one notable exception) offers arguments for this position rising above the level of slogans and one-liners. Dennett, by opening his contribution with the paragraph...
    The Singularity - the fateful moment when AI surpasses its creators in intelligence and takes over the world - is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") coupled with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts - Elon Musk, Stephen Hawking, and David Chalmers, among others - and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn't it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?

    I think, on the contrary, that these alarm calls distract us from a more pressing problem...

...and then going on to talk about something quite different, turns his piece into an almost caricaturish illustration of the situation I just complained about.

I want to end by quoting, in full, the extremely short (even by this book's standards) contribution by physicist Freeman Dyson - not because it offers much (or anything) of substance (it doesn't), but because of its wit:
    I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant.

    If I am right, then the whole question is irrelevant.

I like the humility - "as I often am" - here. Dyson pretty much confirms what I was convinced of all along, namely that he agrees with the message in my 2011 blog post Den oundgängliga trovärdighetsbedömningen: fallet Dyson that he ought to be read critically.2

Footnotes

1) And my own views are mostly in good alignment with Yudkowsky's and with Tegmark's. Both of them are cited approvingly in the chapter on AI in my upcoming book Here Be Dragons: Science, Technology and the Future of Humanity (Oxford University Press, January 2016).

2) This is in sharp contrast to the general reaction to my piece among Swedish climate denialists; they seemed at the time to think that Dyson ought to be read un-critically, and that any suggestion to the contrary was deeply insulting (here is a typical example).

Monday, November 2, 2015

How would you react to the discovery of extraterrestrial life?

    A discovery [of extraterrestrial life] would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast cold cosmos.

    But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

    Conversely, if we discovered traces of some simple extinct life form - some bacteria, some algae - it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

These are the words of Oxford philosopher Nick Bostrom, in his 2008 essay Where are they? Why I hope the search for extraterrestrial life finds nothing. His reasoning follows Robin Hanson's groundbreaking 1998 piece The Great Filter - are we almost past it?. In a blog post a few years ago, I offered a gentle introduction (in Swedish) to Hanson's Great Filter view of Fermi's Great Silence (i.e., of the fact that we seem not to have encountered, or even seen any signs of, extraterrestrial life) and how it leads to the conclusion that "dead rocks and lifeless sands" would be uplifting. Very briefly, the argument is as follows.

Let N denote the number of potentially life-supporting planets in the observable universe; N is a huge number, perhaps something like 10^20 or 10^22. Let p denote the probability that a randomly chosen such planet goes on to develop life, and not only life but intelligent life and a technological civilization on the level of present-day humanity. Finally, let q denote the probability, conditional on having come that far, of going on to develop a supertechnological civilization visible to astronomers all over the observable universe. Then Npq is the expected number of such supertechnological civilizations arising, and Fermi's Great Silence strongly suggests that Npq is not very large, because if it were, the probability of at least one such supertechnological civilization having come about would have been overwhelming, and we would have seen it. But if N is huge and Npq is not very large, then pq must be very small, so at least one of the probabilities p and q is very small. If we're hoping for humanity not to self-destruct or otherwise go extinct before we get the chance to conquer the universe, q had better not be very small. A discovery of extraterrestrial life would suggest that the emergence of life is not quite as unlikely, and p not quite as small, as we might otherwise have thought. But if p is not so small, then q must be very small, and we are pretty much doomed.
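To make the argument at least somewhat concrete, here is a minimal Monte Carlo sketch (a toy illustration of my own for this blog post; the log-uniform priors and the crude proxy for "life turns out to be common" are arbitrary assumptions, and this is not the analysis from the paper discussed below):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1e22                # assumed number of potentially life-supporting planets
    n = 1_000_000           # number of prior samples

    # Independent log-uniform priors on p and q over [1e-40, 1]
    log10_p = rng.uniform(-40, 0, n)
    log10_q = rng.uniform(-40, 0, n)
    p, q = 10.0 ** log10_p, 10.0 ** log10_q

    # Likelihood of the Great Silence: probability that none of the N planets
    # produced a universe-visible supercivilization, approximated as exp(-N*p*q)
    w_silence = np.exp(-N * p * q)

    def weighted_median(values, weights):
        order = np.argsort(values)
        cum = np.cumsum(weights[order])
        return values[order][np.searchsorted(cum, 0.5 * cum[-1])]

    print("median log10(q), given silence only:",
          weighted_median(log10_q, w_silence))

    # Crude proxy for discovering that life is common: condition on p not being tiny
    w_life = w_silence * (p > 1e-3)
    print("median log10(q), given silence and common life:",
          weighted_median(log10_q, w_life))

With this particular prior, conditioning on life being common pushes the posterior for q further down, which is exactly the Hanson-Bostrom intuition; as discussed below, other priors can behave differently.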

This argument makes good sense to me, but it does contain a good deal of handwaving, and it might be interesting to find out whether, e.g., a more rigorous statistical treatment of the same problem leads to the same conclusion. This is what my Chalmers colleague Vilhelm Verendel and I set out to do in our paper Fermi's paradox, extraterrestrial life and the future of humanity: a Bayesian analysis, which has been accepted for publication in the International Journal of Astrobiology. Our findings are a bit inconclusive, because it is by no means clear what is a sensible choice of prior distribution in our Bayesian analysis, and the end result does depend quite a bit on this choice. Quoting from the concluding section of our paper:
    In summary, we still think that the intuition about the alarming effect of discovering extraterrestrial life expressed by Hanson (1998) and Bostrom (2008) has some appeal. In our Bayesian analysis, our first two priors (independent uniform, and independent log-uniform) support it. The third one (perfectly correlated log-uniform), however, contradicts it, and while we find the prior a bit too extreme to make a very good choice, this shows that some condition on the prior is needed to obtain qualitative conclusions about the effect on q of discovering extraterrestrial life.

    A final word of caution: While a healthy dose of critical thinking regarding the choice of Bayesian prior is always to be recommended, the case for epistemic humility is especially strong in the study of the Fermi paradox and related "big questions". In more mainstream scientific studies, circumstances are often favorable, either through the existence of a solid body of independent evidence in support of the prior, or through the availability of sufficient amounts of data that one can reasonably hope that the effects of the prior are (mostly) washed out in the posterior. In the present setting we have neither, so all conclusions from the posterior should be viewed as highly tentative.

Read our full paper here!

Wednesday, August 19, 2015

Two recent comments on Bostrom's Superintelligence

In my opinion, anyone who seriously wants to engage in issues about the potential for disruptive consequences of future technologies, and about the long-term survival of humanity, needs to know about the kind of apocalyptic scenarios in connection with a possible breakthrough in artificial intelligence (AI) that Oxford philosopher Nick Bostrom discusses in his 2014 book Superintelligence. For those readers who consider my review of the book too brief, while not having gotten around to reading the book itself, and who lack the patience to wait for my upcoming Here Be Dragons: Science, Technology and the Future of Humanity (to be released by Oxford University Press in January), I recommend two recent essays published in high-profile venues, both discussing Bostrom's arguments in Superintelligence at some length: one by Geist and one by Koch. They express very different views on the subject, and I therefore strongly recommend (a) reading both essays, and (b) resisting the temptation to quickly, lightheartedly and without deeper understanding of the issues involved take sides in favor of the view that best fits one's own personal intuitions and biases. As for myself, I have more or less taken sides, but only after having thought about the issues for years, and having read many books and many papers on the topic.

I have relatively little to say about Koch's essay, because he and I (and Bostrom) seem to be mostly in agreement. I disagree with his (mild) criticism of Bostrom for a tendency to go off on tangents concerning far-fetched scenarios, an example being (according to Koch) the AI going on to colonize the universe.1 And I have some minor qualms concerning the last few paragraphs, where Koch emphasizes the scientific study of the human brain and how it gives rise to intelligence and consciousness, as the way to move forward on the issues raised by Bostrom: while I do think that area is important and highly relevant, holding it forth as the only possible (or even as the obviously best) way forward seems unwarranted.2

My disagreements with Geist are more far-reaching. When he summarizes his opinion about Bostrom's book with the statement that "Superintelligence is propounding a solution that will not work to a problem that probably does not exist", I disagree with both the "a problem that probably does not exist" part and the "a solution that will not work" part. Let me discuss them one at a time.

What Geist means when he speaks of "a problem that probably does not exist" is that a machine with superhuman intelligence will probably never be possible to build. He invokes the somewhat tired but often sensible rule of thumb "extraordinary claims require extraordinary evidence" when he holds forth that "the extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence". If we take the derogatory term "demonic" to refer to superhuman intelligence, then how extraordinary is the claim? This issue separates neatly in two parts, namely:
    (a) Is the human brain a near-optimal arrangement of matter for producing intelligence, or are there arrangements that give rise to vastly higher intelligence?

    (b) If the answer to (a) is that such superhumanly intelligent arrangements of matter do exist, will it ever be within the powers of human technology to construct them?

To me, it seems pretty clear that the likely answer to (a) is that such superhumanly intelligent arrangements of matter do exist, based on how absurd it seems to think that Darwinian evolution, with all its strange twists and turns, its ad hoc solutions and its peacock tails, would have arrived, with the advent of the present-day human brain, at anything like a global intelligence optimum.3

This leaves question (b), which in my judgement is much more open. If we accept both naturalism and the Church-Turing thesis, then it is natural to think that intelligence is essentially an algorithmic property, so that if there exist superhumanly intelligent arrangements of matter, then there are computer programs that implement such intelligence. A nice framework for philosophizing over whether we could ever produce such a program is computer scientist Thore Husfeldt's recent image of the Library of Turing. Husfeldt used it to show that blind search in that library would no more be able to find the desired program than a group of monkeys with typewriters would be able to produce Hamlet. But might we be able to do it by methods more sophisticated than blind search? That is an open question, but I am leaning towards thinking that we can. Nature used Darwinian evolution to succeed at the Herculean task of navigating the Library of Mendel to find genomes corresponding to advanced organisms such as us, and we ourselves used intelligent design for navigating the Library of Babel to find such masterpieces as Hamlet and Reasons and Persons (plus The Da Vinci Code and a whole lot of other junk). And now, for searching the Library of Turing in the hope of finding a superintelligent program, we have the luxury of being able to combine Darwinian evolution with intelligent design (on top of which we have the possibility of looking into our own brains and using them as a template or at least an inspiration for engineering solutions). To suggest that a successful such search is within the grasp of future human technology seems a priori no more extraordinary than to suggest that it isn't, but one of these suggestions is true, so they cannot both be dismissed as extraordinary.

Geist does invoke a few slightly more concrete arguments for his claim that a superintelligent machine will probably never be built, but they fail to convince. He spends much ink on the history of AI and the observation that despite decades of work no superintelligence has been found, which he takes as an indication that it will never be found, but the conclusion simply does not follow. His statement that "creating intelligence [...] grows increasingly harder the smarter one tries to become" is a truism if we fix the level that we start from, but the question becomes more interesting if we ask whether it is easier or harder to improve from a high level of intelligence than from a low level of intelligence. Or in Eliezer Yudkowsky's words: "The key issue [is] returns on cognitive reinvestment - the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs." Does cognitive reinvestment yield increasing or decreasing returns? In his paper Intelligence explosion microeconomics, Yudkowsky tries to review the evidence, and finds it mixed, but comes out with the tentative conclusion that on balance it points towards increasing returns.
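A toy way to formalize the increasing-versus-decreasing returns question (my own caricature, not Yudkowsky's model): suppose the intelligence level I grows through reinvestment according to

    \frac{dI}{dt} = k\, I^{\alpha}, \qquad k > 0.

For \alpha < 1 (decreasing returns) growth keeps slowing down in relative terms, \alpha = 1 gives steady exponential growth, and \alpha > 1 (increasing returns) gives a solution that blows up in finite time - the caricature version of an intelligence explosion. The empirical question is which regime returns on cognitive reinvestment actually put us in.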

Let me finally discuss the "a solution that will not work" part of Geist's summary statement about Bostrom's Superintelligence. The solution in question is, in short, to instill the AI that will later attain superintelligence level with values compatible with human flourishing. On this matter, there are two key passages in Geist's essay. First this:
    Bostrom believes that superintelligences will retain the same goals they began with, even after they have increased astronomically in intelligence. "Once unfriendly superintelligence exists," he warns, "it would prevent us from replacing it or changing its preferences." This assumption - that superintelligences will do whatever is necessary to maintain their "goal-content integrity" - undergirds his analysis of what, if anything, can be done to prevent artificial intelligence from destroying humanity. According to Bostrom, the solution to this challenge lies in building a value system into AIs that will remain human-friendly even after an intelligence explosion, but he is pessimistic about the feasibility of this goal. "In practice," he warns, "the control problem ... looks quite difficult," but "it looks like we will only get one chance."

And then, later in the essay, this:

    [Our experience with] knowledge-based reasoning programs indicates that even superintelligent machines would struggle to guard their "goal-content integrity" and increase their intelligence simultaneously. Obviously, any superintelligence would grossly outstrip humans in its capacity to invent new abstractions and reconceptualize problems. The intellectual advantages of inventing new higher-level concepts are so immense that it seems inevitable that any human-level artificial intelligence will do so. But it is impossible to do this without risking changing the meaning of its goals, even in the course of ordinary reasoning. As a consequence, actual artificial intelligences would probably experience rapid goal mutation, likely into some sort of analogue of the biological imperatives to survive and reproduce (although these might take counterintuitive forms for a machine). The likelihood of goal mutation is a showstopper for Bostrom’s preferred schemes to keep AI "friendly," including for systems of sub-human or near-human intelligence that are far more technically plausible than the godlike entities postulated in his book.
The idea of goal-content integrity, going back to papers by Omohundro and by Bostrom, is roughly this: Suppose an AI has the ultimate goal of maximizing future production of paperclips, and contemplates the idea of switching to the ultimate goal of maximizing the future production of thumbtacks. Should it switch? In judging whether it is a good idea to switch, it tries to figure out whether switching will promote its ultimate goal - which, since it hasn't yet switched, is paperclip maximization - and it will reason that no, switching to the thumbtacks goal is unlikely to benefit future production of paperclips, so it will not switch. The generalization to other ultimate goals seems obvious.
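A toy sketch of this decision logic (my own illustration of the Omohundro-Bostrom point; the functions are hypothetical stand-ins, not anyone's actual AI design):

    # The agent evaluates a proposed goal change using its *current* utility function.

    def paperclips_made(world):
        return world.get("paperclips", 0)

    def thumbtacks_made(world):
        return world.get("thumbtacks", 0)

    def predicted_world(goal):
        # Crude stand-in for the agent's model of the future, assuming it
        # optimizes the given goal from now on.
        if goal is paperclips_made:
            return {"paperclips": 10**6, "thumbtacks": 0}
        return {"paperclips": 0, "thumbtacks": 10**6}

    def should_switch(current_goal, candidate_goal):
        # Both futures are scored by current_goal, not by candidate_goal.
        return current_goal(predicted_world(candidate_goal)) > current_goal(predicted_world(current_goal))

    print(should_switch(paperclips_made, thumbtacks_made))   # False: the paperclip maximizer keeps its goal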

On the other hand, the claim that a sufficiently intelligent AI will exhibit reliable goal-content integrity is not written in stone, as in a rigorously proven mathematical theorem. It might fail. It might even fail for the very reason suggested by Geist, which is similar to a recent idea by physicist Max Tegmark.4 But we do not know this. Geist says that "it is impossible to [invent new higher-level concepts] without risking changing the meaning of its goals". Maybe it is. On the other hand, maybe it isn't, as it could also be that a superintelligent AI would understand how to take extra care in the formation of new higher-level concepts, so as to avoid corruption of its goal content. Geist cannot possibly know this, and it seems to me that when he confidently asserts that the path advocated by Bostrom is "a solution that will not work", he vastly overestimates the reliability of his own intuitions when speculating about how agents far far more intelligent than him (or anyone he has ever met) will reason and act. To exhibit a more reasonable level of epistemic humility, he ought to downgrade his statement about "a solution that will not work" to one about "a solution that might not work" - which would bring him into agreement with Bostrom (and with me). And to anyone engaging seriously with this important field of enquiry, pessimism about the success probability of Bostrom's preferred scheme for avoiding AI Armageddon should be a signal to roll up one's sleeves and try to improve on the scheme. I hope to make a serious attempt in that direction myself, but am not sure whether I have the smarts for it.

Footnotes

1) The AI-colonizes-the-universe scenario is not at all far-fetched. For it to happen, we need (a) that the AI is capable of interstellar and intergalactic travel and colonization, and (b) that it has the desire to do so. Concerning (a), a 2013 paper by Stuart Armstrong and Anders Sandberg makes a strong case that full-scale colonization of the visible universe at close to the speed of light will become feasible. Concerning (b), arguments put forth in the seminal 1998 Great Filter paper by Robin Hanson strongly suggest that colonizing the universe is not an unlikely choice by an agent that has the capability to do so.

2) It is of course not surprising that Christof Koch, like almost all other researchers, finds his own research area particularly important: he is a neuroscientist, specializing in the neural bases of consciousness. (I kind of like his 2012 book Consciousness: Confessions of a Romantic Reductionist.)

3) Although very much superfluous, we might add to this argument all the deficiencies in the human cognitive machinery that we are aware of. For instance, quoting Koch:
    Homo sapiens is plagued by superstitions and short-term thinking (just watch politicians, many drawn from our elites, to whom we entrust our long-term future). To state the obvious, humanity's ability to calmly reason - its capacity to plan and build unperturbed by emotion (in short, our intelligence) - can improve. Indeed, it is entirely possible that over the past century, average intelligence has increased somewhat, with improved access to good nutrition and stimulating environments early in childhood, when the brain is maturing.

4) In his paper Friendly artificial intelligence: the physics challenge, Tegmark suggests that the concepts involved in the AI's goal content might turn out to be meaningless when the AI discovers the fundamental physics underlying reality, so that it will realize that its ultimate goal is incoherent. Then what will it do? Hard to say. Here is Tegmark:
    For example, suppose we program a friendly AI to maximize the number of humans whose souls go to heaven in the afterlife. First it tries things like increasing people's compassion and church attendance. But suppose it then attains a complete scientific understanding of humans and human consciousness, and discovers that there is no such thing as a soul. Now what? In the same way, it is possible that any other goal we give it based on our current understanding of the world ("maximize the meaningfulness of human life", say) may eventually be discovered by the AI to be undefined.

Friday 5 June 2015

AI breakthroughs on film

For anyone who wants to understand the enormous potential consequences - both opportunities and risks - of a possible future breakthrough in artificial intelligence (AI), a number of interesting and rewarding books have appeared in recent years. I am thinking, for example, of Robin Hanson's and Eliezer Yudkowsky's AI Foom Debate, of James Miller's Singularity Rising, and of the book on this subject that I recommend most highly of all: Nick Bostrom's Superintelligence. No science fiction film on the same theme can serve as a substitute for this reading, but many of them offer great entertainment and may perhaps also help inspire serious debate on the issue.

In the latest New York Review of Books, Daniel Mendelsohn's essay The robots are winning treats the AI breakthrough as an increasingly popular film theme. Mendelsohn takes his point of departure in literary history, with depictions of robots in, among other works, the Iliad and Mary Shelley's Frankenstein, continues with modern film classics such as 2001: A Space Odyssey and Blade Runner, and finally arrives at and discusses two more recent films. Over the past year I have myself reported here on the blog on a couple of current films on the AI-breakthrough theme, namely Transcendence (which I did not like at all) and Chappie (which I did appreciate). It is, however, two other contributions to the new wave of AI films that Mendelsohn discusses: Her from 2013, and Ex Machina from this year.

I saw Her recently, and was not impressed.1 Instead of a traditional science fiction setup, the film is structured as a romantic comedy, in which the protagonist Theodore falls in love with Samantha, who is his smartphone's Siri-like operating system and the fruit of the great AI breakthrough. Since every role in the film apart from Theodore and Samantha is so peripheral as to amount to little more than props, and since Samantha never becomes a convincing character, everything hinges on the wooden Theodore, woodenly played by a wooden Joaquin Phoenix. The result is rather wooden, especially as hardly five seconds of the film go by without the wooden Theodore (or occasionally his field of vision) being shown on screen.

So of the two more recent films Mendelsohn discusses, our hopes rest with Ex Machina. For a while it figured on Svensk Filmindustri's list of promised theatrical premieres this spring, but at some point during the winter it quietly disappeared from the list, a decision later explained as resting on "an overall assessment based, among other things, on the film's quality and its potential to sustain a theatrical run". No cinema release in Sweden, then, so we will have to wait until the film is released on DVD, which may possibly (if this web page is to be believed) happen on 17 August. Here, in any case, is the official trailer.

Footnote

1) Mendelsohn is, however, somewhat more positively disposed towards the film. See also Robin Hanson's concise account of what makes the film highly unrealistic.

Monday 20 April 2015

Twelve catastrophes

In the episode of the P1 programme Vetandets värld that I linked to in my previous blog post, we talked about the risk that a sufficiently advanced artificial intelligence takes over the world, and touched among other things on a recent research report that lists this risk as one of the twelve foremost threats to humanity. The report deserves to be read and pondered. This is altogether an area well worth studying, first and foremost because it matters so much for our future (I assume here that the reader shares my view that it is important that humanity does not perish). Another reason to become acquainted with the field is that the scenarios discussed and analysed are often deeply captivating for those of us who are easily fascinated by the radical and/or grandiose future scenarios that science fiction literature offers. For those who like that sort of thing, I also want to mention another recent top-twelve list of risks, this time of risks not just to humanity here on our home planet but to the entire solar system: George Dvorsky's recent blog post. Recommended reading! Dvorsky's list has also provoked comments worth reading from Anders Sandberg and Robin Hanson.

Tuesday 29 July 2014

Assorted links

Instead of a focused blog post on a well-delimited topic, I today offer, without any such delimitation, a collection of links to texts and web pages I have found particularly worth reading over the past few days - enjoy!
  • Folke Tersman: Politiker kan lära av forskares dialog, Sans 3/2014.

    Here the Uppsala philosopher Folke Tersman articulates, with razor-sharp precision, the somewhat vaguer unease I have myself felt about the routine condemnations of, and pile-ons against, Sweden Democrat positions that we somewhat finer, more academically educated and politically correct people are fond of engaging in. Mandatory reading for anyone who intends to keep taking part in such pile-ons!

  • Johan Wästlund: Är blattar bättre än svennar på yatzy? Statistikexperiment för sverigedemokrater och andra.

    Johan Wästlund has featured frequently here on the blog, but he also has a blog of his own, fittingly named Johan Wästlund. His latest blog post is an excellent example of the kind of matter-of-fact debate that Folke Tersman calls for, and he elegantly kills two birds with one stone by refuting a piece of xenophobic propaganda while at the same time pedagogically explaining a treacherous statistical phenomenon that is well worth knowing about.

  • Sam Harris: Why Don’t I Criticize Israel?

    I always find Sam Harris's writings interesting, and often provocative. This time is no exception. I absolutely do not agree with everything he writes in his recent text on the Middle East conflict,1 but it is highly interesting and offers some clarifying thoughts on this inflamed issue.

  • Vladimir Sorokin: Russia is Pregnant with Ukraine, New York Review of Books blog, 24 July 2014.

    That Russia's president Vladimir Putin is a criminal d--n bandit and a serious threat to world peace is something I have pointed out before here on the blog. What nevertheless gives hope about today's Russia is the existence of a well-articulated and courageous opposition. Pussy Riot belong to it, and so, most emphatically, does the author Vladimir Sorokin. The text of his that I link to here ranks (like his earlier essay Let the Past Collapse on Time) among the best things I have read about the Ukraine crisis.

  • Robin Hanson: I Still Don't Get Foom.

    Those who are waiting for my review of Nick Bostrom's new book Superintelligence will (as previously announced) have to wait another month or two, until it has first been published in Axess. Among the other reviews of and comments on the book that I have seen so far, Robin Hanson's is the most interesting. Hanson is sceptical as to whether what Bostrom calls "fast takeoff", and what is elsewhere known as the Singularity, can really happen. He does not quite manage to convince me with his argument, which is in line with what he has previously put forward in The Hanson-Yudkowsky AI Foom Debate, but it is nonetheless clearly worthy of consideration.

  • Anders Sandberg: The five biggest threats to human existence, The Conversation, 29 May 2014.

    When Nick Bostrom's Oxford colleague and close collaborator Anders Sandberg ranks the five greatest threats to the continued existence of humanity, the superintelligence treated by Bostrom only comes in third. Hurry up and find out what the other four are, before it is too late!

  • Emma Frans: Emmas selektion.

    Emma Frans is often first among Swedish-language writers with exciting news about behavioural science and other research, and if anyone wants to declare Emmas selektion the best science blog in Sweden right now, I have no objection (though I naturally hope to capture that title myself one day). The well-calibrated cockiness of the blog's tone is set by its stylish banner.

  • Maeve Shearlaw: Dropping in on Turkmenistan's 'door to hell', The Guardian, 18 July 2014.

    One of the most bizarre places on Earth must be the 69-metre-wide and 30-metre-deep burning hole far out in the desert of northern Turkmenistan, known as the "Door to Hell". The fire has been burning for 40 years, and Shearlaw's photo essay immediately prompts two questions from me: Why don't they fill the hole in? And surely George Kourounis's descent into the hole is the closest we have come in real life to Mr Spock's adventure inside a volcano in the latest Star Trek film?

Footnote

1) My main objection concerns Harris's facile dismissal of the persistent disproportionality between the number of dead Palestinians and the number of dead Israelis. This disproportionality has many causes. It will not do, as Harris does, to single out just one of them - Israel's greater skill at protecting its civilian population with shelters and so on - and then to claim that this particular aspect of Israel's conduct is morally unimpeachable, so that Israel bears no moral guilt for the disproportionality. The disproportionality stems just as much from the many Palestinians killed as from the few Israelis killed. When we judge Israel's moral conduct it is primarily the former we should consider, and in that judgement we should, in my view, apply the principle that when civilians are killed, whoever holds the weapon bears full responsibility, by which I mean a responsibility that is not diluted regardless of what immoral acts (such as the use of human shields) the other side has committed.

(And speaking of disproportionality, I find it rather on the meagre side to devote a mere 9 words of a 3000-word essay on this subject to the settlements on occupied land.)

Tuesday 15 October 2013

Reading the Hanson-Yudkowsky debate

Robin Hanson and Eliezer Yudkowsky are, in my opinion, two of the brightest, most original and most important thinkers of our time. Sadly, they are almost entirely ignored in mainstream discourse. They both consider it fairly likely that the ongoing computer revolution will continue and, at some point in the next 100 years or so, cause profound changes to society and what it means to be human. They differ drastically, however, in their estimates of how likely various more detailed scenarios are. Hanson expects scenarios where we learn how to emulate human brains as computer software, thus enabling us to cheaply copy our minds onto computers, quickly causing vast changes to the labor market and to the economy as a whole. Yudkowsky gives more credence to even more radically transformative scenarios of the kind that are typically labeled intelligence explosion or the Singularity, where we humans eventually manage to construct an AI (an artificial intelligence) that is superior to us in terms of general intelligence, including the construction of AIs, so it quickly manages to create an even smarter AI, and so on in a rapidly escalating spiral, resulting within perhaps weeks or even hours in an AI so smart that it can take over the world.

The Hanson-Yudkowsky AI-Foom Debate is a recently published 730-page ebook, provided free of charge by the Machine Intelligence Research Institute (founded in 2000 and the brainchild of Yudkowsky). The bulk part of the book consists of a debate between Hanson and Yudkowsky that took place on the blog Overcoming Bias during (mostly) November-December 2008. The core topic of the debate concerns the likelihood of the intelligence explosion scenario outlined by Yudkowsky. Hanson considers this scenario quite unlikely, while Yudkowsky considers it reasonably likely provided that civilisation does not collapse first (by a nuclear holocaust or by any of a host of other dangers), but they are both epistemically humble and open to the possibility of being simply wrong. They mostly alternate as authors of the blog posts presented in the book, with just a couple of critical posts by other authors (Carl Shulman and James Miller) interspersed in the first half of the debate. The blog posts are presented mostly chronologically, except in a few cases where chronology is sacrificed in favor of logical flow.1 After this bulk part follows some supplementary material, including (a) a transcript of an oral debate between Hanson and Yudkowsky conducted in June 2011, (b) a 56-page summary by Kaj Sotala of the debate (both the blog exchange in 2008 and the oral debate in 2011), and (c) Yudkowsky's 2013 manuscript Intelligence Explosion Microeconomics which I've recommended in an earlier blog post. Sotala's summary is competent and fair, but adds very little to the actual debate, so to be honest I don't quite see the point of including it in the book. Yudkowsky's manuscript, on the other hand, is very much worth reading after going through the debate; while it does repeat many of the points raised there, the arguments are put a bit more rigorously and systematically compared to the more relaxed style of the debate, and Yudkowsky offers quite a few additional insights that he has made since 2008 and 2011.

My overall assessment of The Hanson-Yudkowsky AI-Foom Debate is strongly favorable. What we get is an engaging dialogue between two exceptionally sharp minds. And the importance of their topic of discussion can hardly be overestimated. While the possibility of an intelligence explosion is far from the only lethal danger to humanity in the 21st century that we need to be aware of, it is certainly one of those that we very much need to take seriously and figure out how to tackle. I warmly - no, insistently - recommend The Hanson-Yudkowsky AI-Foom Debate to anyone who has above-average academic capabilities and who cares deeply about the future of humanity.

So, given the persistent disagreement between Hanson and Yudkowsky on the likelihood of an intelligence explosion, who is right and who is wrong? I don't know. It is not obvious that this question has a clear-cut answer, and of course it may well be that they are both badly wrong about the likely future. This said, I still find myself throughout most parts of the book to be somewhat more convinced by Yudkowsky's arguments than by Hanson's. Following are some of my more specific reactions to the book.
  • Hanson and Yudkowsky are both driven by an eager desire to understand each other's arguments and to pinpoint the source of their disagreement. This leads them to bring a good deal of meta into their discussion, which I think (in this case) is mostly a good thing. The discussion gains an extra edge from the fact that they both aspire to be (as best they can) rational Bayesian agents and that they are both well-acquainted with Robert Aumann's Agreeing to Disagree theorem, which states that whenever two rational Bayesian agents have common knowledge of each other's estimates of the probability of some event, their estimates must in fact agree.2 Hence, as long as Hanson's and Yudkowsky's disagreement persists, this is a sign that at least one of them is irrational. Yudkowsky, especially, tends to obsess over this.

  • All scientific reasoning involves both induction and deduction, although the proportions can vary quite a bit. Concerning the difference between Hanson's and Yudkowsky's respective styles of thinking, what strikes me most is that Hanson relies much more on induction,3 compared to Yudkowsky, who is much more willing to engage in deductive reasoning. When Hanson reasons about the future, he prefers to find some empirical trend in the past and to extrapolate it into the future. Yudkowsky engages much more in mechanistic explanations of various phenomena, and in combining them in order to deduce other (and hitherto unseen) phenomena.4 (This difference in thinking styles is close or perhaps even identical to what Yudkowsky calls the "outside" versus "inside" views.)

  • Yudkowsky gave, in his very old (1996 - he was a teenager then) paper Staring into the Singularity, a beautiful illustration based on Moore's law of the idea that a self-improving AI might take off towards superintelligence very fast:
      If computing speeds double every two years, what happens when computer-based AIs are doing the research?

      Computing speed doubles every two years.
      Computing speed doubles every two years of work.
      Computing speed doubles every two subjective years of work.

      Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.

      Six months - three months - 1.5 months ... Singularity.

    The simplicity of this argument (its arithmetic is worked out in the first code sketch following this list) makes it tempting to put forth when explaining the idea of an intelligence explosion to someone who is new to it. I have often done so, but have come to think it may be a pedagogical mistake, because it is very easy for an intelligent layman to come up with compelling arguments against the possibility of the suggested scenario, and then walk away from the issue thinking (erroneously) that he/she has shot down the whole intelligence explosion idea. In Chapter 5 of the book (taken from the 2008 blog post The Weak Inside View), Yudkowsky takes care to distance himself from his 1996 argument. It is naive to just extrapolate Moore's law indefinitely into the future, and, perhaps more importantly, Yudkowsky holds software improvements to be a more likely main driver of an intelligence explosion than hardware improvements. Related to this is the notion of hardware overhang discussed by Yudkowsky in Chapter 33 (Recursive Self-Improvement) and elsewhere: At the time when the self-improving AI emerges, there will likely exist huge amounts of poorly protected hardware available via the Internet, so why not simply go out and pick that up rather than taking the cumbersome trouble of inventing better hardware?

  • In Chapter 18 (Surprised by Brains), Yudkowsky begins...
      Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do
    ...and then goes on to give a very entertaining dialogue between his own alter ego and a caricature of Hanson, concerning what this funny new thing down on Earth might lead to. The point of the dialogue is to answer the criticism that it is extremely shaky to predict something (an intelligence explosion) that has never happened before - a point made by means of showing that new and surprising things have happened before. I really like this chapter, but cannot quite shake off a nagging feeling that the argument is similar to the so-called Galileo gambit, a favorite ploy among pseudoscientists:
      They made fun of Galileo, and he was right.
      They make fun of me, therefore I am right.

  • The central concept that leads Yudkowsky to predict an intelligence explosion is the new positive feedback introduced by recursive self-improvement. But it isn't really new, says Hanson, recalling (in Chapter 2: Engelbart as UberTool?) the case of Douglas Engelbart, his 1962 paper Augmenting Human Intellect: A Conceptual Framework, and his project to create computer tools (many of which are commonplace today) that will improve the power and efficiency of human cognition. Take word processing as an example. Writing is a non-negligible part of R&D, so if we get an efficient word processor, we will get (at least a bit) better at R&D, so we can then devise an even better word processor, and so on. The challenge here to Yudkowsky is this: Why hasn't the invention of the word processor triggered an intelligence explosion, and why is the word processor case different from the self-improving AI feedback loop?

    An answer to the last question might be that the writing part of the R&D process is not really all that crucial, taking up maybe just 2% of the time involved, as opposed to the stuff going on in the AI's brain, which makes up maybe 90% of the R&D work. In the word processor case, no more than a 2% improvement is possible, and after each iteration the percentage decreases, quickly fizzling out to undetectable levels. But is there really a big qualitative difference between 0.02 and 0.9 here? Won't the 90% of the R&D taking place inside the AI's brain similarly fizzle out after a number of iterations of the feedback loop, with other factors (external logistics) taking on the role of dominant bottlenecks? (The second code sketch following this list puts rough numbers on this contrast.) Perhaps not, if the improved AI brain figures out ways to improve the external logistics as well. But then again, why doesn't that same argument apply to word processing? I think this is an interesting criticism from Hanson, and I'm not sure how conclusively Yudkowsky has answered it.
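
Here, first, is the arithmetic of Yudkowsky's 1996 toy argument referred to in the Moore's law bullet above, worked out in a few lines of code. It is purely illustrative, and subject to the same caveat: it is the very argument Yudkowsky has since distanced himself from.

    # If the researchers are AIs running on the hardware they improve, each
    # doubling of computing speed halves the wall-clock time needed for the
    # next doubling (the 1996 toy model, not a prediction).

    step, elapsed = 2.0, 0.0          # first doubling takes 2 calendar years
    for n in range(1, 11):
        elapsed += step
        print(f"doubling {n:2d} complete after {elapsed:.4f} years (this one took {step:.4f})")
        step /= 2                     # the next doubling needs half the calendar time

    # The elapsed times approach 2 + 1 + 0.5 + ... = 4 years: infinitely many
    # doublings fit inside a finite window, which is the "Singularity" of the quote.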
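
And here, second, is a toy calculation for the word processor bullet. The figures 2% and 90% are the made-up shares from that bullet, and the tenfold per-cycle speedup is likewise just an assumption chosen for illustration, but the qualitative contrast is the point:

    # A fraction f of each R&D cycle is done by the tool being improved;
    # suppose each cycle makes that part of the work 10 times faster.
    # Amdahl-style, the cycle time can never fall below the (1 - f) share
    # of the work that the tool does not touch.

    def cycle_times(f, speedup_per_cycle=10.0, cycles=6):
        times, tool_speed = [], 1.0
        for _ in range(cycles):
            times.append(f / tool_speed + (1.0 - f))   # normalized cycle time
            tool_speed *= speedup_per_cycle
        return times

    print("word processing, f = 0.02:", [round(t, 3) for t in cycle_times(0.02)])
    print("AI doing R&D,    f = 0.90:", [round(t, 3) for t in cycle_times(0.90)])
    # f = 0.02 fizzles out at once (cycle times stuck near 0.98), while f = 0.90
    # races down towards the 0.10 floor set by everything outside the loop -
    # which is exactly where Hanson's question about external bottlenecks bites.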

Footnotes

1) The editing is gentle but nontrivial, so it is slightly strange that there is no mention of who did it (or even a signature (or two) indicating who wrote the foreword). It might be that Hanson and Yudkowsky did it together, but I doubt it; a prime suspect (if I may speculate) is Luke Muehlhauser.

2) At least, this is what the theorem says in what seems to be Hanson's and Yudkowsky's idea of what a rational Bayesian agent is. I'm not sure I buy into this, because one of the assumptions going into Aumann's theorem is that both agents are born with the same prior. On one hand, I can have some sympathy with the idea that two agents that are exposed to the exact same evidence should have the same beliefs, implying identical priors. On the other hand, as long as no one is able to pinpoint what exactly the objectively correct prior to be born with is, I see no compelling reason for giving the verdict "at least one of them is irrational" as soon as we encounter two agents with different priors. The only suggestion I am aware of for a prior that might be elevated to this status of universal objective correctness is the so-called Solomonoff prior, but the theoretical arguments in its support are unconvincing and the practical difficulties in implementing it are (most likely) insurmountable.
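
To make the role of the common prior concrete, here is a small toy simulation - entirely my own construction, not something from the book - of the dynamics behind the theorem: two agents who do share a common prior, and who take turns publicly announcing their posterior probabilities of an event, may well disagree at first, but cannot keep doing so.

    # Two agents with a COMMON prior but different private information
    # repeatedly announce their posteriors for an event; the announcements
    # themselves are informative, and the posteriors are driven to agree
    # (in the style of Geanakoplos and Polemarchakis).

    from fractions import Fraction

    states = set(range(8))
    prior = {s: Fraction(1, 8) for s in states}          # the shared prior
    event = {0, 1, 4, 6}                                 # event of interest
    partitions = {                                       # private information
        "Alice": [{0, 1, 2, 3}, {4, 5, 6, 7}],
        "Bob":   [{0, 2, 4, 6}, {1, 3, 5, 7}],
    }
    true_state = 1

    def posterior(possible):
        return sum(prior[s] for s in possible & event) / sum(prior[s] for s in possible)

    def announce(name, public):
        # The agent announces the posterior given their own cell plus all
        # public information; everyone then learns which of the agent's
        # cells are consistent with that announcement.
        my_cell = next(c for c in partitions[name] if true_state in c)
        p = posterior(my_cell & public)
        consistent = set()
        for c in partitions[name]:
            if c & public and posterior(c & public) == p:
                consistent |= c & public
        return p, consistent

    public = set(states)
    for round_no in range(1, 5):
        p_alice, public = announce("Alice", public)
        p_bob, public = announce("Bob", public)
        print(f"round {round_no}: Alice announces {p_alice}, Bob announces {p_bob}")
        if p_alice == p_bob:
            break   # with a common prior, this is guaranteed to happen eventually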

3) A cornerstone in Hanson's study of the alternative future mentioned in the first paragraph above is an analysis of the growth of the global economy (understood in a suitably wide sense) over the past few million years or so. This growth, he finds, is mostly exponential at a constant rate, except at a very small number of disruptive events when the growth rate makes a more-or-less sudden jump; these events are the emergence of the modern human brain, the invention of farming, and the beginning of the industrial era. From the relative magnitudes and timing of these events, he tries to predict a fourth one - the switch to an economy based on uploaded minds. In my view, as a statistician, this is not terribly convincing - the data set is too small, and there is too little in the way of mechanistic explanation to suggest a continued regularity. Nevertheless, his ideas are very interesting and very much worth discussing. He does so in a draft book that has not been made publicly available but only circulated privately to a number of critical readers (including yours truly). Prospective readers may try to contact Hanson directly. See also the enlightening in-depth YouTube discussion on this topic between Hanson and Nikola Danaylov.
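
Just to show how little data this kind of extrapolation rests on, here is a back-of-the-envelope sketch in the same spirit. The doubling times below are crude placeholders of roughly the right orders of magnitude, not Hanson's actual estimates, and the projection step is the naive one described above:

    # Three historical growth modes, each with a (placeholder) doubling time
    # for the world economy, and a naive projection of a fourth mode.

    import statistics

    doubling_time_years = {
        "foraging economy":   200_000,   # placeholder, order of magnitude only
        "farming economy":      1_000,   # placeholder
        "industrial economy":      15,   # placeholder
    }

    times = list(doubling_time_years.values())
    speedups = [times[i] / times[i + 1] for i in range(len(times) - 1)]
    print("speed-up factor at each transition:", [round(s) for s in speedups])

    # If the next transition speeds growth up by a comparable factor, the new
    # doubling time lands on the order of weeks - an extrapolation resting on
    # just three numbers, which is precisely the statistical worry above.
    next_doubling = times[-1] / statistics.geometric_mean(speedups)
    print(f"naively projected next-mode doubling time: about {next_doubling * 365:.0f} days")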

4) Although I realize that the analogy is grossly unfair to Hanson, I am reminded of the incessant debate between climate scientists and climate denialists. A favorite method of the denialists is to focus on a single source of data (most often a time series of global average temperature), ignore everything else, and proclaim that the evidence in favor of continued warming is statistically inconclusive. Climate scientists, on the other hand, are much more willing to look at the mechanisms underlying weather and climate, and to deductively combine these into computer models that provide scenarios for future climate - models that the denialists tend to dismiss as mere speculation. (Come to think of it, this comparison is highly unfair not only to Hanson, but even more so to climate science, whose computer models are incomparably better validated, and stand on incomparably firmer ground, than Yudkowsky's models of an intelligence explosion.)