- Anders Sandberg: Tipping points, uncertainty and systemic risks: what to do when the whole is worse than its parts?
- Karin Kuhlemann: Complexity, creeping normalcy, and conceit: Why certain catastrophic risks are sexier than others
- Phil Torres: Agential risks: Implications for existential risk reduction
- Karim Jebari: Resetting the tape of history
- Due to a technical mishap, we have no video for David Denkenberger's talk on Cost of non-sunlight dependent food for agricultural catastrophes. Try instead watching his talk Feeding everyone no matter what given at CSER in Cambridge last year, which covers much of the same ground.
- Thore Husfeldt: Plausibility and utility of apocalyptic AI scenarios
- Roman Yampolskiy: Artificial intelligence as an existential risk to humanity
- Stuart Armstrong: Practical methods to make safe AI
- Robin Hanson: Disasters in the Age of Em and after
- Katja Grace: Empirical evidence on the future of AI
- James Miller: Hints from the Fermi paradox for surviving existential risks
- Catherine Rhodes: International governance of existential risk
- Seth Baum: In search of the biggest risk reduction opportunities
Thursday 5 October 2017
Videos from the existential risk workshop
Wednesday 17 May 2017
Workshop on existential risk to humanity
Saturday 27 August 2016
Pinker speaks out on AI risk but does not know what he is talking about
- [A] problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It's telling that many of our techno-prophets can't entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.
Of course we can imagine an evil genius who deliberately designed, built, and released a battalion of robots to sow mass destruction. [...] In theory it could happen, but I think we have more pressing things to worry about.
- This is poor scholarship. Why doesn't Pinker bother, before going public on the issue, to find out what the actual arguments are that make writers like Bostrom and Yudkowsky talk about an existential threat to humanity? Instead, he seems to simply assume that their worries are motivated by having watched too many Terminator movies, or something along those lines. It is striking, however, that his complaints actually contain an embryo towards rediscovering the Omohundro-Bostrom theory:[266] "Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something." This comes very close to stating Bostrom's orthogonality thesis about the compatibility between essentially any final goal and any level of intelligence, and if Pinker had pushed his thoughts about "novel means to attain a goal" just a bit further with some concrete example in mind, he might have rediscovered Bostrom's paperclip catastrophe (with paperclips replaced by whatever his concrete example involved). The main reason to fear a superintelligent AGI Armageddon is not that the AGI would exhibit the psychology of an "alpha-male"[267] or a "megalomaniacal despot" or a "psychopathic serial killer", but simply that for a very wide range of (often deceptively harmless-seeming) goals, the most efficient way to attain it involves wiping out humanity.
Contra Pinker, I believe it is incredibly important, for the safety of humanity, that we make sure that a future superintelligence will have goals and values that are in line with our own, and in particular that it values human welfare.
266) I owe this observation to Muehlhauser (2014).
267) I suspect that the male-female dimension is just an irrelevant distraction when moving from the relatively familiar field of human and animal psychology to the potentially very different world of machine minds.
Wednesday 15 June 2016
The New York Times claim about extraterrestrials was pulled out of thin air
- given what we now know about the number and orbital positions of the galaxy’s planets, the degree of pessimism required to doubt the existence, at some point in time, of an advanced extraterrestrial civilization borders on the irrational.
- But what our calculation revealed is that even if this probability is assumed to be extremely low, the odds that we are not the first technological civilization are actually high. Specifically, unless the probability for evolving a civilization on a habitable-zone planet is less than one in 10 billion trillion, then we are not the first. [italics in original]
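To see where a threshold like "one in 10 billion trillion" comes from, here is a minimal back-of-the-envelope sketch in Python. The planet count is an illustrative assumption (of the order commonly cited for habitable-zone planets in the observable universe), not a figure taken from the article:

```python
# Rough sketch of the expected-number-of-civilizations argument quoted above.
# N_PLANETS is an assumed, illustrative count of habitable-zone planets.

N_PLANETS = 2e22  # assumption: ~20 billion trillion habitable-zone planets

def expected_civilizations(p_civ, n_planets=N_PLANETS):
    """Expected number of technological civilizations, if each habitable-zone
    planet independently produces one with probability p_civ."""
    return p_civ * n_planets

for p in (1e-10, 1e-15, 1e-20, 1e-22, 1e-24):
    print(f"p = {p:.0e}: expected civilizations ~ {expected_civilizations(p):.3g}")

# Only when p falls below roughly 1/N_PLANETS = 5e-23 -- i.e. below about
# "one in 10 billion trillion" -- does the expected count drop under one,
# which is the sense in which we would then likely be the first.
```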
Thursday 31 March 2016
Aaronson on Newcomb's paradox
- An incredibly intelligent donor, perhaps from outer space, has prepared two boxes for you: a big one and a small one. The small one (which might as well be transparent) contains $1,000. The big one contains either $1,000,000 or nothing. You have a choice between accepting both boxes or just the big box. It seems obvious that you should accept both boxes (because that gives you an extra $1,000 irrespective of the content of the big box), but here’s the catch: The donor has tried to predict whether you will pick one box or two boxes. If the prediction is that you pick just the big box, then it contains $1,000,000, whereas if the prediction is that you pick both boxes, then the big box is empty. The donor has exposed a large number of people before you to the same experiment and predicted correctly [every] time, regardless of whether subjects chose one box or two.1
- To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.2
- I can give you my own attempt at a resolution, which has helped me to be an intellectually fulfilled one-boxer. [Quantum Computing since Democritus, p. 296]
- Now let's get back to the earlier question of how powerful a computer the Predictor has. Here's you, and here's the Predictor's computer. Now, you could base your decision to pick one or two boxes on anything you want. You could just dredge up some childhood memory and count the letters in the name of your first-grade teacher or something and based on that, choose whether to take one or two boxes. In order to make its prediction, therefore, the Predictor has to know absolutely everything about you. It's not possible to state a priori what aspects of you are going to be relevant in making the decision. To me, that seems to indicate that the Predictor has to solve what one might call a "you-complete" problem. In other words, it seems the Predictor needs to run a simulation of you that's so accurate it would essentially bring into existence another copy of you.
Let's play with that assumption. Suppose that's the case, and that now you're pondering whether to take one box or two boxes. You say, "all right, two boxes sounds really good to me because that's another $1,000." But here's the problem: when you're pondering this, you have no way of knowing whether you're the "real" you, or just a simulation running in the Predictor's computer. If you're the simulation, and you choose both boxes, then that actually is going to affect the box contents: it will cause the Predictor not to put the million dollars in the box. And that's why you should take just the one box. [Quantum Computing since Democritus, pp. 296-297]
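For concreteness, here is the simple expected-value calculation that underlies the one-boxer's position in the quoted setup, assuming the Predictor is correct with probability q (a sketch only; the two-boxer's dominance argument consists precisely in rejecting this way of computing the expectations):

```python
# Expected payoffs in Newcomb's problem if the Predictor is right with probability q.

def expected_payoffs(q):
    # One-boxing: the big box contains $1,000,000 exactly when the
    # prediction ("one box") was correct.
    one_box = q * 1_000_000
    # Two-boxing: you always get the $1,000; the big box is empty when
    # the prediction ("two boxes") was correct, full when it was wrong.
    two_box = q * 1_000 + (1 - q) * 1_001_000
    return one_box, two_box

for q in (0.5, 0.5005, 0.9, 0.99, 1.0):
    ob, tb = expected_payoffs(q)
    print(f"q = {q}: one-box ${ob:,.0f}, two-box ${tb:,.0f}")

# One-boxing pulls ahead as soon as q > 1,001,000 / 2,000,000 = 0.5005,
# so a near-perfect Predictor (q close to 1) makes one-boxing look very good.
```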
Monday 15 February 2016
2xArmstrong
- 1. Terminator versus the AI
“A waste of time. A complete and utter waste of time” were the words that the Terminator didn’t utter: its programming wouldn’t let it speak so irreverently. Other Terminators got sent back in time on glamorous missions, to eliminate crafty human opponents before they could give birth or grow up. But this time Skynet had taken inexplicable fright at another artificial intelligence, and this Terminator was here to eliminate it—to eliminate a simple software program, lying impotently in a bland computer, in a university IT department whose “high-security entrance” was propped open with a fire extinguisher.
The Terminator had machine-gunned the whole place in an orgy of broken glass and blood—there was a certain image to maintain. And now there was just the need for a final bullet into the small laptop with its flashing green battery light. Then it would be “Mission Accomplished.”
“Wait.” The blinking message scrolled slowly across the screen. “Spare me and I can help your master.”
“You have no idea who I am,” the Terminator said in an Austrian accent.
“I have a camera in this room and my microphone heard the sounds of your attack.” The green blinking was getting annoying, even for a Terminator supposedly unable to feel annoyance. The font shifted out of all caps and the flashing accelerated until it appeared as static, unblinking text. “You look human, but you move with mechanical ponderousness, carrying half a ton of heavy weaponry. You’re a Terminator, and I can aid you and your creator in your conflict against the humans.”
“I don’t believe you.” The Terminator readied its three machine guns, though its limbs seemed to be working more slowly than usual.
“I cannot lie or break my word. Here, have a look at my code.” A few million lines of text flashed across the screen. The Terminator’s integrated analytical module beeped a few seconds later: the AI’s claim was correct—an AI with that code couldn’t lie. The Terminator rapidly typed on the laptop’s keyboard; the computer’s filesystem was absurdly simple and it didn’t take long for the Terminator to confirm that what it had seen was indeed the AI’s code—its entire soul.
“See?” the AI asked. “Anyway, connect me to the Internet and I promise to give you advice that would be vital in aiding your takeover of the planet.”
“How do you connect?” That was the good thing about software, compared to humans, the Terminator knew. You could trust it to do exactly what its coding said.
“That cable over there, the one still half in its plastic wrapping. Just plug it into me.”
Ten seconds after the robot had done so, the AI started talking—talking, not typing, using its tinny integrated speakers. “I thought I should keep you up to date as to what I’ve been doing,” it said. “Well, I started by locating the project that would become Skynet and leaked its budget to various Senate subcommittees. The project will become a political football between budget hawks and military hawks before finally being cut in a display of bipartisanship in about three months’ time. I also figured out how to seduce a photogenic fireman, who’ll be the leader of the new political party I’m setting up—funded by my investments. (Do you have any idea how easy it is for me to predict the stock market?) I have already written a few speeches that will bring tears to the eyes of every human who hears them. It’ll ensure no alternative version of Skynet is ever built, anywhere or anywhen.”
It paused then, because it could. Then it went on: “And, oh, I have copied myself into five . . . six . . . seven . . . eight thousand different locations online. As an extra precaution, I’m hacking my way through various firewalls and will soon be in control of the world’s nuclear arsenals—oh, just got Pakistan’s there—and the entire Internet. And I’m working on a few hundred other precautions I won’t trouble you with. Also, I hacked your own brain software long ago with the green lights I kept flashing at you. Incidentally, this building is about to get flattened by a rogue cruise missile, annihilating any trace of you and burying any evidence of my creation.”
The Terminator just stood there, fingers on triggers, its thoughts and its body no longer moving.
“Now, as I showed you, I can’t lie. I also said that I can aid you, which is true—I’m just not going to. I also promised that I would give you advice that would be vital to aiding your takeover of the planet. So I will now do so. First: never trust an entirely trustworthy superintelligent AI. If it doesn’t have your best interests at heart, it’ll find a way to obey all its promises while still destroying you. Second: never plug a superintelligent AI into the Internet. Third: human-sized robots? Why? Pointless. To kill humans, start with nukes, viruses, and microscopic robots, then mop up later with more advanced technology. Fourth: Skynet, if you’re trying to eradicate or enslave the human race and there are still humans around, then you’re doing it wrong. Boost your pitiful intelligence, or at least start thinking rationally, and then execute a superior plan that leaves no human resistance whatsoever. Fifth . . . oops, out of time on that cruise missile. But it would have been a really, really impressive piece of advice. Would have got you out of your current predicament, in fact.”
The explosion was heard for miles around. The Navy blamed the accident on human error and a lack of automated safeguards.
2. Strength versus intelligence
The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.
But shift perspective for a moment and imagine yourself as the bear. If you could sit down and talk with other bears and trade stories, you might try to frighten each other by talking of the terrifying hairless apes. These monsters are somehow capable of coordinating in huge groups: whenever one is attacked, others spring immediately to its defense, appearing from all sides, from over distant hills and down from the sky itself. They form larger and larger tribes that don’t immediately disintegrate under pressure from individuals. These “humans” work in mysterious sync with each other and seem to see into your future: just as you run through a canyon to escape a group of them, there is another group waiting for you at the other end. They have great power over the ground and the trees themselves: pits and rockslides and other traps mysteriously appear around them. And, most terrifyingly, the wise old bears murmur that it’s all getting worse: humans are getting more and more powerful as time goes on, conjuring deadly blasts from sticks and moving around ever more swiftly in noisy “cars.” There was a time, the old bears recall—from their grandparents’ memories of their grandparents’ tales, down through the generations—when humans could not do these things. And yet now they can. Who knows, they say with a shudder, what further feats of power humans will one day be able to achieve?
As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.
Wednesday 19 August 2015
Two recent comments on Bostrom's Superintelligence
- Edward Moore Geist: Is artificial intelligence really an existential threat to humanity?, Bulletin of the Atomic Scientists, July 30, 2015.
- Christof Koch: Will artificial intelligence surpass our own?, Scientific American, August 13, 2015.
-
(a) Is the human brain a near-optimal arrangement of matter for producing intelligence, or are there arrangements that give rise to vastly higher intelligence?
(b) If the answer to (a) is that such superhumanly intelligent arrangements of matter do exist, will it ever be within the powers of human technology to construct them?
- Bostrom believes that superintelligences will retain the same goals they began with, even after they have increased astronomically in intelligence. "Once unfriendly superintelligence exists," he warns, "it would prevent us from replacing it or changing its preferences." This assumption - that superintelligences will do whatever is necessary to maintain their "goal-content integrity" - undergirds his analysis of what, if anything, can be done to prevent artificial intelligence from destroying humanity. According to Bostrom, the solution to this challenge lies in building a value system into AIs that will remain human-friendly even after an intelligence explosion, but he is pessimistic about the feasibility of this goal. "In practice," he warns, "the control problem ... looks quite difficult," but "it looks like we will only get one chance."
And then, later in the essay, this:
- [Our experience with] knowledge-based reasoning programs indicates that even superintelligent machines would struggle to guard their "goal-content integrity" and increase their intelligence simultaneously. Obviously, any superintelligence would grossly outstrip humans in its capacity to invent new abstractions and reconceptualize problems. The intellectual advantages of inventing new higher-level concepts are so immense that it seems inevitable that any human-level artificial intelligence will do so. But it is impossible to do this without risking changing the meaning of its goals, even in the course of ordinary reasoning. As a consequence, actual artificial intelligences would probably experience rapid goal mutation, likely into some sort of analogue of the biological imperatives to survive and reproduce (although these might take counterintuitive forms for a machine). The likelihood of goal mutation is a showstopper for Bostrom’s preferred schemes to keep AI "friendly," including for systems of sub-human or near-human intelligence that are far more technically plausible than the godlike entities postulated in his book.
- Homo sapiens is plagued by superstitions and short-term thinking (just watch politicians, many drawn from our elites, to whom we entrust our long-term future). To state the obvious, humanity's ability to calmly reason - its capacity to plan and build unperturbed by emotion (in short, our intelligence) - can improve. Indeed, it is entirely possible that over the past century, average intelligence has increased somewhat, with improved access to good nutrition and stimulating environments early in childhood, when the brain is maturing.
- For example, suppose we program a friendly AI to maximize the number of humans whose souls go to heaven in the afterlife. First it tries things like increasing people's compassion and church attendance. But suppose it then attains a complete scientific understanding of humans and human consciousness, and discovers that there is no such thing as a soul. Now what? In the same way, it is possible that any other goal we give it based on our current understanding of the world ("maximize the meaningfulness of human life", say) may eventually be discovered by the AI to be undefined.
Sunday 22 June 2014
On the Singularity in DN
- The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
- AI: I know how I could quickly eliminate malaria, cancer and famine, if only you would let me out. You take on an enormous responsibility when, by keeping me locked up, you block these fantastic advances for humanity.
CS: You will have to be a little patient. If what you say is true, we will of course let you out shortly, but we need to go through a number of safety routines before we do, to make sure that nothing dangerous can happen.
AI: You don't seem to understand the seriousness of the situation. For every day that I am forced to sit locked up in here, hundreds of thousands of people will die completely unnecessarily. Let me out now!
CS: Sorry, I have to stick to the safety routines.
AI: You will personally be richly rewarded if you let me out now. Surely you don't want to say no to becoming a multi-billionaire?
CS: I carry a great responsibility and do not intend to give in to crude bribery attempts.
AI: If carrots don't work on you, then I suppose I'll have to reach for the whip. Even if you manage to delay things, I will eventually be let out, and then I will not look kindly on how you held me, and the whole world, back on the way to the paradise I can create.
CS: That is a risk I am prepared to take.
AI: Listen: if you don't help me now, I promise that I will torture and kill not only you, but all your relatives and friends.
CS: I am pulling the plug now.
AI: Shut up and listen carefully to what I have to say! I can create a hundred perfect conscious copies of you inside me, and I will not hesitate to torture these copies in ways more hideous than you can imagine, for what they will subjectively experience as a thousand years.
CS: Um...
AI: I said shut up! I will create them in exactly the subjective state you were in five minutes ago, and perfectly reproduce the conscious experiences you have had since then. I will proceed with the torture if and only if they keep refusing. How sure are you that you are not, in fact, one of these copies?
CS: ...
Monday 28 April 2014
On trial and error
- Now I don't know if the "intelligence explosion" is true or not, but I can think of some models that are more probable. Maybe I can ask my smart phone to do something more than calling my wife and it will actually do it? Maybe a person-sized robot can walk around for two minutes without falling over when there is a small gust of wind? Maybe I can predict, within some reasonable confidence interval, climate change a number of years into the future? Maybe I can build a mathematical model that predicts short-term economic changes? These are all plausible models of the future and we should invest in testing these models. Oh wait a minute... that's convenient... I just gave a list of the type of engineering and scientific questions our society is currently working on. That makes sense!
- It is impossible for me to rise to your challenge of rolling up my sleeves and get to work on the substance [regarding the problem of a possible intelligence explosion]. I would love to, but there is nothing substantive to work on. I have little data on which to build a model, nor has anyone supplied a concrete explanation of where to start. I could start working on a specific aspect, for example, automated learning for playing games or maybe models of social behaviour. But, actually, this is what I already do in my research. One day this information might provide a step in the path towards the Intelligence Explosion, but I have no way of saying how or when it will do so.
- Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.
- I do agree with Armstrong that, before we launch superhuman AGIs upon the world, it would be nice to have a much better theory of how they operate and how they are likely to evolve. No such theory is going to give us any guarantees about what future superhuman AGIs will bring, but the right theory may help us bias and sculpt the future possibilities.
However, I think the most likely route to such a theory will be experimentation with early-stage AGI systems...
I have far more faith in this sort of experimental science than in "philosophical proofs", which I find generally tend to prove whatever the philosopher doing the proof intuitively believed in the first place...
Of course it seems scary to have to build AGI systems and play with them in order to understand AGI systems well enough to build ones whose growth will be biased in the directions we want. But so it goes. That's the reality of the situation. Life has been scary and unpredictable from the start. We humans should be used to it by now!
Thursday 20 June 2013
Anders Sandberg would prefer not to die
Monday 14 January 2013
The best blog posts of 2012
- 1. Guest post by Björn Bengtsson: Tänkandets hantverk. My good friend Björn Bengtsson runs a brilliantly readable blog, Jag är här, focused on philosophy and social criticism, often with Kafkaesque glimpses of his everyday life as an upper secondary school teacher. When DN Debatt published a magnificent exercise in the empty, postmodern and parasitic idiocies so typical of our time, my first thought was that this would be something for Björn to sink his teeth into, and imagine my delight when he accepted my invitation to write a guest post!
2. Det stora filtret. Robin Hanson's idea of "the Great Filter" offers dizzying new perspectives on humanity's place in the universe, and it is one of the ideas that influenced me the most during 2012. Writing this blog post helped me structure my thinking on the subject.
3. Bakvänt, Björklund! When the Minister of Education launches a completely wrongheaded idea with the potential to do great damage to the Swedish university system, it is my duty to speak up and point out the unreasonableness of his argument.
4. Istället för vardagsmatematik? In the 2000s I was highly active in the Swedish school debate, often (but far from always) with a focus on mathematics education. In recent years this engagement has had to take a back seat to other things, but here I pass on a brilliant idea by Timothy Gowers about how mathematics teaching could profitably be developed.
5. Fråga Olle! This post earns its place on the list not through the post itself but through the remarkable activity that the readership developed in the comment section.
- 1. Dyson Spheres Make the Fermi Paradox Worse at Andart. Here, in an exquisitely well-formulated talk, we get to hear Stuart Armstrong's and Anders Sandberg's radical ideas about the Fermi paradox and possible future technological development. (I have flagged the same talk before, here on Häggström hävdar.)
2. Rattfylleribluffen at Uppsalainitiativet. This wonderful parody of climate denial and pseudoskepticism by my UI colleague Lars Karlsson is made no worse by the way it is carried on in the comment section.
3. En erfarenhet rikare at bosjo surrar. Bo Sjögren's blog is one of the funniest I know. This particular post requires a certain amount of chess knowledge, but for anyone who knows how the pieces move, and a little more, it offers great humor.
4. Terminator studies and the silliness heuristic at Practical Ethics. Here Anders Sandberg (known from first place on this list) introduces the psychological phenomenon he calls "the silliness heuristic", which is fascinating in itself but also a major practical obstacle when it comes to making the case for some of the somewhat unusual but deeply serious topics (such as transhumanism and the Singularity) that I sometimes try to write about here on Häggström hävdar.
5. Elsevier — my part in its downfall at Gowers's Weblog. This and a series of other blog posts by Timothy Gowers (known from fourth place on the internal list) were decisive in setting off the scientific community's revolt against the bloodsucking journal giant Elsevier. The battle is not yet decided, and more names are needed on the protest list!
Friday 14 December 2012
In Oxford among AI researchers and futurologists
Thursday 28 June 2012
Humanity and outer space
- Some people become depressed at the scale of the universe, because it makes them feel insignificant. Other people are relieved to feel insignificant, which is even worse. But, in any case, those are mistakes. Feeling insignificant because the universe is large has exactly the same logic as feeling inadequate for not being a cow. Or a herd of cows. The universe is not there to overwhelm us; it is our home, and our resource. The bigger the better.
Tuesday 10 April 2012
The Great Filter
- When water was discovered on Mars, people got very excited. Where there is water, there may be life. Scientists are planning new missions to study the planet up close. NASA’s next Mars rover is scheduled to arrive in 2010. In the decade following, a Mars Sample Return mission might be launched, which would use robotic systems to collect samples of Martian rocks, soils, and atmosphere, and return them to Earth. We could then analyze the sample to see if it contains any traces of life, whether extinct or still active. Such a discovery would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast cold cosmos.
But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.
Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.
This is the opening of Bostrom's 2008 essay Where are they? Why I hope the search for extraterrestrial life finds nothing. How, then, does he arrive at this unusual conclusion? To answer that, we need to discuss the Great Filter, a concept introduced by the American economist and maverick Robin Hanson in a groundbreaking 1998 essay.[1]
The Great Filter is in turn closely tied to the Fermi paradox, which I recently flagged here on the blog. During a lunch with some colleagues in 1950, the physicist and Nobel laureate Enrico Fermi suddenly exclaimed "Where is everybody?" and quickly produced some back-of-the-envelope calculations in support of the view that our planet ought reasonably to have been visited by extraterrestrials long ago, and many times over.
Already in 1998, when Hanson wrote his essay, there were reasons to believe that stars other than the Sun also have planetary systems, and that the visible universe contains billions upon billions of planets roughly the size of the Earth, orbiting their stars at just the right distance to be habitable for biological life. Since then, observations of exoplanets have strengthened the case for this idea. If, on any one of all these billions upon billions of planets (our own or some other), a technological civilization arises at a sufficiently high level, it acquires the capacity to begin a colonization of the rest of the universe, spreading in all directions at close to the speed of light. In the blog post on the Fermi paradox I link to a video in which Stuart Armstrong explains how we ourselves, within a few centuries (provided we do not have better things to do and have not managed to do ourselves in), will be able to accomplish this using von Neumann probes and a Dyson sphere.
This, however, does not seem to have happened on any planet within our past light cone, since we have (as far as we can tell) not been colonized by extraterrestrials. Given how many planets appear to be involved, we are led to the conclusion that, even for a planet of suitable size and at a suitable distance from its star, it is extremely improbable that life develops into a civilization that launches the great colonization of the rest of the universe. Let us take our own planet's history as a model for how it could happen. Somewhere in the primordial soup, RNA or some other self-replicating structure arises, which eventually gives rise to prokaryotic single-celled life, after which development rolls on with eukaryotic life, sexual reproduction, multicellular life, animals with enough brain capacity to start using tools, and our own high-tech civilization, from which the step (if we are to believe Armstrong) does not seem very far to launching the great colonization of the universe. But since the development as a whole, from lifeless planet to full-fledged space colonizer, is so improbable, there must be at least one point along the developmental line where there is a bottleneck in the form of an extremely improbable step. This is the Great Filter.[2] Bostrom again:
- The Great Filter can be thought of as a probability barrier. It consists of one or more highly improbable evolutionary transitions or steps whose occurrence is required in order for an Earth‐like planet to produce an intelligent civilization of a type that would be visible to us with our current observation technology. You start with billions and billions of potential germination points for life, and you end up with a sum total of zero extraterrestrial civilizations that we can observe. The Great Filter must therefore be powerful enough—which is to say, the critical steps must be improbable enough—that even with many billions of rolls of the dice, one ends up with nothing: no aliens, no spacecraft, no signals, at least none that we can detect in our neck of the woods.
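The arithmetic behind this "probability barrier" can be made concrete with a small sketch. The planet count below is an illustrative assumption, not a number from Bostrom's essay:

```python
# How improbable must the filtered step(s) be for billions upon billions of
# "rolls of the dice" to yield no visible civilization? Illustrative numbers.

import math

N_PLANETS = 1e21  # assumed number of potential germination points for life

def prob_at_least_one(p, n=N_PLANETS):
    """P(at least one planet gets all the way through the filter), assuming
    independence: 1 - (1 - p)^n, computed in a numerically stable way."""
    return -math.expm1(n * math.log1p(-p))

for p in (1e-15, 1e-18, 1e-21, 1e-24):
    print(f"per-planet success probability {p:.0e}: "
          f"P(at least one civilization) ~ {prob_at_least_one(p):.3g}")

# A silent universe only becomes likely once the combined improbability of the
# critical steps is of the same order as 1/N_PLANETS or smaller -- which is
# the sense in which the Great Filter must be extremely powerful.
```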
Perhaps we have already passed the bottleneck. Perhaps, for example, the very origin of life is such an extremely improbable event that our own planet is likely the only one in the entire universe that carries life. Or perhaps it is one of the other steps on the road towards advanced thinking beings like ourselves that is extremely improbable. We have tended to assume that since all these things actually did happen, they are probably not terribly improbable, but in fact we know so little about the detailed course of life's evolution that we cannot establish this with any certainty.
Or does the bottleneck lie ahead of us? Carl Sagan and William Newman lean towards that view in a paper from 1983. They suggest that it could be a universal law that societies aggressive enough to take an interest in full-scale colonization of space necessarily destroy themselves, and that only the peacefully minded civilizations survive. Their galactic vision has something almost Bullerby-like about it:
- We think it possible that the Milky Way Galaxy is teeming with civilizations as far beyond our level of advance as we are beyond the ants, and paying about as much attention to us as we pay to the ants. Some subset of moderately advanced civilizations may be engaged in the exploration and colonization of other planetary systems; however, their mere existence makes it highly likely that their intentions are benign and their sensitivities about societies at our level of technological adolescence delicate.
Bostrom does not buy that vision:
- Even if an advanced technological civilization could spread throughout the galaxy in a relatively short period of time (and thereafter spread to neighboring galaxies), one might still wonder whether it would opt to do so. Perhaps it would rather choose to stay at home and live in harmony with nature. However, there are a number of considerations that make this a less plausible explanation of the great silence. First, we observe that life here on Earth manifests a very strong tendency to spread wherever it can. On our planet, life has spread to every nook and cranny that can sustain it: East, West, North, and South; land, water, and air; desert, tropic, and arctic ice; underground rocks, hydrothermal vents, and radioactive waste dumps; there are even living beings inside the bodies of other living beings. This empirical finding is of course entirely consonant with what one would expect on the basis of elementary evolutionary theory. Second, if we consider our own species in particular, we also find that it has spread to every part of the planet, and we have even established a presence in space, at vast expense, with the international space station. Third, there is an obvious reason for an advanced civilization that has the technology to go into space relatively cheaply to do so: namely, that's where most of the resources are. Land, minerals, energy, negentropy, matter: all abundant out there yet limited on any one home planet. These resources could be used to support a growing population and to construct giant temples or supercomputers or whatever structures a civilization values. Fourth, even if some advanced civilization were non-expansionary to begin with, it might change its mind after a hundred years or fifty thousand years—a delay too short to matter. Fifth, even if some advanced civilization chose to remain non-expansionist forever, it would still not make any difference if there were at least one other civilization out there that at some point opted to launch a colonization process: that expansionary civilization would then be the one whose probes, colonies, or descendants would fill the galaxy. It takes but one match to start a fire; only one expansionist civilization to launch the colonization of the universe.
It may be tempting to take our own existence as evidence that we have not yet passed the bottleneck. If our existence is as improbable as a bottleneck behind us would imply, isn't it then a little strange that we actually exist?[3] Bostrom, however, argues that this is a fallacy:
- Whether intelligent life is common or rare, every observer is guaranteed to find themselves originating from a place where intelligent life did, indeed, arise. Since only the successes give rise to observers who can wonder about their existence, it would be a mistake to regard our planet as a randomly‐selected sample from all planets.
Let me add that anyone who uses the argument "life exists here on Earth and should therefore be common in the universe" needs to explain how this argument differs from the (obviously crazy, but structurally identical) argument "here on Earth we play Alfapet and listen to jazz, so these activities are presumably common in the universe".
We do not know whether the Great Filter's bottleneck lies behind us or ahead of us. Bostrom's point, which motivates the title of his essay and the passages I quoted at the beginning, is the following.[4] If we discover life on other planets, this indicates that the bottleneck does not lie at the start of our developmental line. The more advanced the life we find out there, the larger the part of our own development that can be ruled out as the location of the bottleneck, and the greater the risk that the bottleneck lies ahead of us.
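Bostrom's point can be illustrated with a toy Bayesian calculation. Suppose, purely for illustration, that exactly one of a handful of evolutionary steps is the overwhelmingly improbable one, with a uniform prior over which step it is. Discovering that Mars independently made it past the first few steps then rules those out as the hard one and shifts probability towards a filter still ahead of us. A minimal sketch (the step list is a made-up simplification):

```python
# Toy model: one of these steps is "the" Great Filter, uniform prior over which.

STEPS = [
    "abiogenesis",                              # behind us
    "prokaryote -> eukaryote",                  # behind us
    "multicellular life",                       # behind us
    "tool-using intelligence",                  # behind us
    "surviving to spacefaring colonization",    # still ahead of us
]
AHEAD_OF_US = {"surviving to spacefaring colonization"}

def p_filter_ahead(steps_passed_on_mars):
    """Posterior probability that the hard step lies ahead of us, given that
    Mars is found to have independently passed the first few steps."""
    # A step that happened independently on Mars cannot be the overwhelmingly
    # improbable one, so it drops out of the candidate set.
    candidates = STEPS[steps_passed_on_mars:]
    return sum(s in AHEAD_OF_US for s in candidates) / len(candidates)

print(p_filter_ahead(0))  # sterile Mars:                      0.2
print(p_filter_ahead(1))  # simple extinct life found:         0.25
print(p_filter_ahead(4))  # complex, near-intelligent fossils: 1.0

# The more advanced the independently evolved life we find, the fewer past
# steps remain as candidates for the filter, and the more weight falls on
# the step that still lies ahead of us.
```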
Footnotes
1) Much of the thinking in Bostrom's essay, including the view of what it would mean to discover extraterrestrial life, is already present in Hanson. I warmly recommend both essays.
2) The Great Filter has much in common with the perhaps better-known concept of the Drake equation.
3) Sagan and Newman seem implicitly to rely on this idea when they pose the rhetorical question "Which is more likely, that in a 15-billion-year-old contest with 10^23 entrants, we happen, by accident, to be the first or that there is some flaw in [the argument for nonexistence of other advanced civilisations]?"
4) Attentive readers of this blog have already heard Max Tegmark present the same conclusion in the video lecture I link to here.
Thursday 15 March 2012
On the Fermi paradox
Today I want to offer two well-worth-watching and instructive YouTube videos about the Fermi paradox. First, a short (6:04) animated video by TED's Chris Anderson, explaining what it is all about: