Monday, 27 January 2014

KVA meeting on 17 March: Emerging technologies and the future of humanity

On Monday, 17 March this year, a meeting will take place at the Royal Swedish Academy of Sciences (KVA) in Stockholm, which I am helping to organise and which may be of great interest to those readers who share my interest in the future of humanity and how it may come to be affected by radical technological developments in various fields. It is a full-day meeting entitled Emerging technologies and the future of humanity. The meeting is open to the public and free of charge, but advance registration is mandatory. This is how we have chosen to formulate its overall theme:
    A number of emerging and future technologies have the potential to transform - for better or for worse - society and the conditions for humanity. These include, e.g., geoengineering, artificial intelligence, nanotechnology, and ways to enhance human capabilities by genetic, pharmaceutical or electronic means. In order to avoid a situation where in effect we run blindfolded at full speed into unknown and dangerous territories, we need to understand what the possible and probable future scenarios are with respect to these technological developments and their potential positive and negative impacts on humanity. The purpose of the meeting is to shed light on these issues and discuss how a more systematic treatment might be possible.
Four talks by as many sharp-witted, visionary and controversial futures researchers (Seth Baum, Anders Sandberg, Karim Jebari and Roman Yampolskiy) will be presented, and the meeting will conclude with a panel discussion in which the speakers are joined by three invited discussants (Ulf Danielsson, Gustaf Arrhenius and Katarina Westerlund), who can be expected to offer critical viewpoints. A detailed schedule is available on KVA's website, as are the following summaries of the talks.
  • Seth Baum, Global Catastrophic Risk Institute, NY, USA: The great downside dilemma for risky emerging technologies

    Many emerging technologies show potential to solve major global challenges, but some of these technologies come with possible catastrophic downsides. In this talk I will discuss the great dilemma that these technologies pose: should society accept the risks associated with using these technologies, or should it instead accept the burdens that come with abstaining from them? As an illustrative example, I will use stratospheric aerosol geoengineering, which could protect humanity from many burdens of global warming but could also fail catastrophically. Other technologies that pose this great downside dilemma include certain forms of biotechnology, nanotechnology, and artificial intelligence, whereas nuclear fusion power shows less downside.

  • Anders Sandberg, Oxford University, UK: The future of humanity

    The key to the current ecological success of Homo sapiens is that our species has been able to deliberately reshape its environment to suit its needs. This in turn hinges on our ability to maintain a culture that grows cumulatively. We are now starting to develop technologies that enable us to reshape ourselves and our abilities - genetic engineering, cognitive enhancement, man-machine symbiosis. This talk will examine some of the implications of a self-enhancing humanity and possible paths it could take.

  • Karim Jebari, Kungliga Tekniska högskolan, Stockholm: Global catastrophic risk and engineering safety

    Engineers have managed risk in complex systems for hundreds of years. Their approach differs radically from what economists refer to as “risk management”. The heuristics developed by engineers are very useful for understanding and developing strategies to reduce global catastrophic risks. I will discuss a potentially disruptive technology, the control of ageing, and argue that the risks of such a program have been underestimated.

  • Roman Yampolskiy, University of Louisville, KY, USA: Artificial general intelligence and the future of humanity

    Many scientists, futurologists and philosophers have predicted that humanity will achieve a technological breakthrough and create artificial general intelligence (AGI) within the next one hundred years. It has been suggested that AGI may be a positive or negative factor in the global catastrophic risk. After summarizing the arguments for why AGI may pose significant risk, I will survey the field’s proposed responses to AGI risk, with particular focus on solutions advocated in my own work.

You are warmly welcome on 17 March!

5 comments:

  1. I intend to come. I can think of no worthier way to celebrate Ireland's national day!

  2. If we could predict the future, would we be morally obligated to try to prevent some events from happening? How would you advise a judge on whether or not there will be an earthquake tomorrow? Or a tsunami? I doubt that legally we can treat something that is hypothetical as if it were a fact. Should Bayesian argumentation be given the same weight as prima facie evidence? Inverse probabilities are not as general as the probabilities associated with a stochastic process. It would be better if they were handled on a case-by-case basis. An association fallacy is not considered a legitimate form of argument.

  3. We all recognize that suspicion is not the same as proof. A vagrant or someone without a visible means of support is not necessarily a criminal. Yet in ancient times such individuals were rounded up and enslaved. We should all be thankful for the Skara stadga which was issued on this day in 1335.

  4. Sounds super interesting! Will it be filmed and made available afterwards?

    1. I can't promise that for certain, so your safest bet is to turn up in person...