Saturday 22 March 2014

Bans on research - an unseemly topic to discuss at a science academy?

It may seem immodest of me to say so as an organizer of the event, but I do think that the symposium Emerging Technologies and the Future of Humanity, held at the Royal Swedish Academy of Sciences (KVA) in Stockholm last Monday, was a success. The meeting saw illuminating and insightful discussion of possible prospects and consequences of a number of radical emerging and future technologies, thereby achieving the ambition stated beforehand:
    A number of emerging and future technologies have the potential to transform - for better or for worse - society and the conditions for humanity. These include, e.g., geoengineering, artificial intelligence, nanotechnology, and ways to enhance human capabilities by genetic, pharmaceutical or electronic means. In order to avoid a situation where in effect we run blindfolded at full speed into unknown and dangerous territories, we need to understand what the possible and probable future scenarios are, with respect to these technological developments and their potential positive and negative impacts on humanity. The purpose of the meeting is to shed light on these issues and discuss how a more systematic treatment might be possible.
Discussions focused not only on possible outcomes, but to some extent also on what we might do to improve the odds of an outcome favorable to humanity. Some of the speakers mentioned the possibility of prohibiting certain lines of research judged to pose particularly severe risks of global catastrophe - not as the only way forward, but as one of several possible strategies worth considering; in particular, Roman Yampolskiy, in his talk based on the paper Responses to Catastrophic AGI Risk: A Survey, coauthored with Kaj Sotala, discussed a very wide range of such strategies for the case of artificial intelligence. Still, the mere mention of the possibility of a ban seemed to be enough to agitate a couple of members of the audience, including science journalist Waldemar Ingdahl, who tweeted about it. In an attempt to elicit a clarification of his point of view, I responded to his tweet, leading to the following discussion:
    WI: Many calls to shut down free research at KVA today

    OH: Are you agitated by that, or just making a slightly amusing observation?

    WI: like others in the audience and some of the speakers at the KVA event, I found calls to shut down free research worrying

    OH: Shutdowns should not be done frivolously, but reflecting on it when the future of humanity is at stake should not be taboo.

    OH: Here's my take on it: http://haggstrom.blogspot.se/2011/10/angaende-akademisk-frihet-en.html

    WI: shutdowns of free research tend to spread to other areas, see the fate of Swedish GMO and biotech research. That's worrying.

    OH: Yes, it's worrying. This worry needs to be weighed against others, e.g., new research leading to extinction of humanity.

Unfortunately, Ingdahl left our exchange at that point. I say "unfortunately" because it is still not clear to me what Ingdahl's position on this issue really is. I see a number of possibilities, including the following, for how he might respond, depending on his position:
    (1) Granted, there may be conflicting ideals to balance there. I just wanted to remind you guys that there are serious downsides to banning research.

    (2) Why do you keep pestering me about this? I've already demonstrated a downside to banning research. Such a demonstration is enough to show that banning research is wrong.

    (3) Freedom of research is more important than the survival of humanity.

    (4) As a liberal (or a libertarian) I consider freedom, including academic freedom, a fundamental thing that I am under no circumstances willing to negotiate away.

    (5) The future will always be uncertain and dangerous, and we will never know enough about what a future technology can bring us to warrant a ban.

If Ingdahl's response is (1), then we are in fact in agreement. Response (2), on the other hand, is a simple (but, it seems to me, not uncommon) mistake: the fact that a course of action A has downsides is not automatically a decisive argument against A, because not doing A might have even worse downsides. Response (3) reflects a value judgement that I strongly disagree with - while I do think freedom of research is an important thing, I think the survival and long-term prospects of humanity are even more important. Response (4) reflects a position that can be considered a special case of (3), and that involves a particular mistake, namely the failure to recognize that an unconditional call for the simultaneous enjoyment of all kinds of freedom collapses due to the observation (made earlier on this blog) that various freedoms are in conflict with each other. In this case, there may, e.g., be a conflict between on one hand freedom of research, and on the other hand the freedom to live on without being slaughtered by a killer robot or a synthetic killer virus.

Response (5) has some merit, as the consequences of future radical technologies are notoriously difficult to predict, but I think its conclusion goes way too far. Given the enormity of what's at stake, I think we have an imperative to try to act with as much foresight as we possibly can, which is precisely why discussions like those at the KVA meeting last Monday (and research projects like those of its main speakers) are so badly needed. And it is certainly not the case that we can never know anything about future technology and its consequences. We do not know at present what the best course of action is, but we should be open-minded in trying to figure out an answer. To blindly declare at this stage that we will never find ourselves in a position that calls for a research ban seems just silly to me.

It may of course very well be that I've totally misunderstood Ingdahl, and that his actual position on this issue resembles none of the five alternatives I've sketched. I hope he will tell us.

41 comments:

  1. Olle, there have already been a number of bans on scientific research and, in our own country, some are in effect - like the ban on any research involving germ-line genetic modification or cloning of humans. There are also similar bans on in situ experiments involving GMOs in general. In addition, ordinary regulation around research makes certain research impossible or at least extremely difficult and/or expensive to undertake, and quite probably some knowledge is blocked from access that way. In fact, criminal law already has such effects. How do you feel about such cases and, having them in mind, is there any good evidence that such restrictions spread to other areas? Isn't it rather that restrictions already applied (and accepted) in other areas are finally being applied to science/research?

  2. I forgot to add: There are at least two pretty recent cases where, in fact, the scientific community has tied its own hands in so-called moratoriums. The first came when hybridization of DNA was discovered; that moratorium lasted, I think, seven years. The next I know of is on so-called xenotransplantation (transplantation of genetically modified animal organs to humans), where the ban was only on the human trial step, due to theoretical risks of awakening "sleeping" so-called endogenous retroviruses, present in the animal DNA, and creating a global pandemic. This moratorium, as I understand, holds to this day.

    Replies
    1. Thanks, Christian, for these highly relevant remarks! I think research bans and research moratoria may be very good ideas in certain situations. I am less happy about current bans on stem-cell research and GMOs, but need to learn more before I can proclaim anything with certainty. What's your view?

    2. The bans on (Embryonic) Stem Cell research in some countries are based on a particular ethical view of embryos (destroying them = murder), and if one doesn't share that idea, as in my case, they look unjustified. If you do hold that destroying embryos en masse to harvest stem cells is akin to genocide, however, it does make sense to have a ban. But, still, the current actual bans (e.g. in Ireland, Poland) are mostly rather strange, since they allow such research as long as the embryonic stem cells were harvested before a certain date. And the US ban is, of course, even stranger, since it applies only to the use of federal funds; the research itself is legal.

      As to GMO, there's no categorical/principled ban on in situ experimentation (in vitro and contained in vivo is allowed), as far as I know. It's about risk assessment in view of possible benefits and the huge uncertainties of live GMO farming. So far, in spite of the Golden Rice (which is a particular exception to the rule), the benefits of GMO products don't convince me as justifying the many uncertainties involved (+ the genetic drift that has actually been taking place). There are interesting developments, though - smart ways of getting around particular risk pathways (as in a recent salmon experiment in the US: http://bit.ly/1gegTjU) - but these, at the same time, usually decrease the possible benefits and serve to boost some other drawbacks, such as increasing people's dependence on the Big Food industry, again affecting the risk-benefit profile adversely.

  3. I think the intuition that science must be free comes from past good experience with it. Yes, it was annoying to have the geocentric world toppled or find that our models of human ancestry were wrong, but the clearer picture and the many useful results outweigh that by a great deal. Attempts to control science have been made by nasty groups, often for reasons that look parochial or selfish from an outside/historical perspective. There is also a link to the recognition that free thinking and speech produce robust, open societies that can fix their problems - it is hard to prevent free inquiry if there is freedom of thought.

    But this is largely inductive. If we are increasingly dealing with things that can blow up badly, then the intuition might be misleading.

    There is however some difference between science and technology: we are concerned with risks (or immorality) from *doing* things rather than *knowing* things. We can know how to do something dangerous without actually doing it, at least with the right safeguards. I never seem to find plutonium in my hardware store.

    Banning research can be about trying to prevent the ability to do something really nasty, which in some situations might be a good thing (in others, knowing how it is done can help us reduce risk; this is why I am on the fence in the argument about gain-of-function H5N1 research and support cryptanalysis research). Or banning research can try to prevent us from acquiring knowledge itself, which seems far more problematic. Nick Bostrom has a nice typology of information hazards ( http://www.nickbostrom.com/information-hazards.pdf ) that shows that there are definitely some kinds of 'bad' true information that we might be better off not knowing... but all evidence shows that information is remarkably hard to contain. In either case, such a decision needs to be based on evidence and reasonably convincing arguments.

    So given the above, it seems that what we really need is a better way of updating our intuitions to (1) new technological and scientific domains that may be riskier than past ones, and (2) big information societies where disclosure of information hazards is easier and independent rediscovery more likely. I suspect most of our existing institutions do not have very good intuitions or ways of reasoning about this outside traditional domains.

    Replies
    1. Thanks, Arenamontanus, those are all good points. That Bostrom paper has been discussed in an earlier blog post of mine.

  4. I think that the journalist hadn't thought too much before he made his comment. You supplied food for thought for him. Perhaps this is not what he wanted to achieve. Rather, a story for his newspaper/news channel might have been his purpose. I don't know. I'm only conjecturing.

    Research should not be banned on "moral" or "religious" grounds (cf. G. W. Bush and stem cell research and other such cases). However, there is a limit to freedom. Freedom of expression is one thing, and we all value it. But freedom of implementation is another. It also depends on what we mean by "research". For instance, it is well known that one can "print" one's own guns using 3D printers (they did that last year in Austin, TX), guns which can fire and kill. Suppose you get a research proposal aiming at wide distribution of this technology. Would you fund it? It depends. If it's for the use of individuals, perhaps not. I'm trivializing here, but it is not inconceivable that there are limits to "free research".

    Eugenics was considered research at some point. Is it banned now? I suppose it is. I think most people would agree that it is not the thing to do anymore... But when it comes to a new technology, banning seems a more sensitive matter, because the technology does not necessarily carry the stigma of something like eugenics (where the stigma was only attached a posteriori; at the time, it was "good" research). The word "banning" rings alarm bells in people's ears, journalists' especially. Easy front-page article: "Banning of research at KVA!" It sells.

  5. Or (6): Twitter is not a very good medium for discussing philosophical questions, and leads to easy distractions.

    Replies
    1. Which is precisely why I invite you to respond at greater length here, where the ban on messages exceeding 140 characters is not in force.

  6. “If the critics are right, the world has little to worry about. But if they win the debate and are wrong, it will be too late for repairs.”

    Replies
    1. Feel free to elaborate. Who are the critics? Critics of what? And what is meant here by "the world"? (In the most common uses of the term, the world is not an entity that worries.)

    2. That statement is nowadays common among alarmists. But the alternative costs of alarmist action are seldom mentioned. Predictions that humans will soon destroy the earth can only be tested by the passage of time.

      Thus, skeptics are put in the difficult position of defending their ground. Since it is logically impossible to prove a negative, one cannot prove that the earth will not end tomorrow. The onus of evidence should be on the one who asserts the positive, but this is rarely the case. The results of past eschatological predictions can be examined. Even so, as far off as many of these projections have been, it seems to make little difference.

      Original statements are revised to explain why things turned out differently, another dismal forecast is made, and the credibility of those making the initial claims remains astonishingly intact.

      When asked "what would you do about problem XYZ?" sometimes the honest, and correct, answer is that we don't have a solution, but intend to remove the institutional barriers that prevent people from solving problems themselves.

  7. The philosopher Michael Polanyi saw knowledge, creativity, and technology as charged with strong personal sentiments and ideas. In his interesting 1958 book Personal Knowledge he concluded that a structure of liberty is essential for the advancement of science - that the freedom to pursue science for its own sake is a prerequisite for the production of knowledge through peer review and the scientific method. There is of course an important statement on freedom of inquiry here, since there is no obligation to partake in or fund a particular line of research. That is a process of negotiation.

  8. The precautionary principle is a blunt instrument, a throwback out of place in an era of "smart solutions" and big data. The precautionary principle encourages evasion of responsibility for the status quo. When people argue to block change, for fear of unknown consequences, they rarely assume responsibility for the consequences of current problems. So the opposite of precaution is not some free-for-all. It is to develop refined and sensible decisions, with consistency and a far broader context. Unfortunately, this has far too seldom been the way vague global risks have been tackled.

  9. Karl Popper characterized the closed society by its claim to certain knowledge and ultimate truth, which leads to the attempted imposition of one version of reality. Uncertainty today fuels a rather intriguing version of Pascal's wager.

  10. We cannot allow an AI gap!

  11. Waldemar Ingdahl 17:23/17:27/17:33/17:42

    So it actually turns out that by formulating your position a bit more concisely, you could have responded within Twitter's 140-character format, by simply saying "My response is (5), extrapolating from the fact that we're still here, i.e., all previous Doomsday preachers were wrong".

    Replies
    1. The "enormity of what is at stake" justification is a weak argument, as it can always be invoked in any particular case. Pascal's wager does not account for all alternative costs. Using the same sense of the wager: what if we happen to worship the wrong God and now she's very, very angry?

    2. While I agree with your negative assessment of Pascal's wager, I do resent the sloppy invocation of it as a parallel in the present context, because it involves the implicit (but unjustified) assumption that, say, a malign intelligence explosion or human extinction by a synthetic killer virus are (and will always remain) extremely unlikely events.

    3. Malign intelligence explosions or synthetic killer viruses are actually highly unlikely events. The main issues in many science-policy discussions concern events of very low probability but with disastrous global consequences. That's awfully similar to Pascal contemplating the slim chance of the existence of God and thus eternal damnation.

    4. That "malign intelligence explosions or synthetic killer viruses are actually highly unlikely events" is an unsubstantiated prejudice. The issue of whether they are likely or unlikely, and how their likelihood might be influenced by our decisions, is pressing and deserves further study.

    5. I wasn't the one arguing to ban research, so please go ahead.

      It is a tricky subject for study, as humanity has never gone extinct before. It could well happen, from natural causes or something of human creation or perhaps something we are unaware of. There is indeed a risk of selection bias in the studies. But studying these risks is similar to other risk studies, with comparisons to alternative costs; otherwise we're back at Pascal's wager.

      Checking up on the Global Catastrophic Risks Survey (Technical Report, 2008, Future of Humanity Institute), I find their estimate of human extinction before the year 2100 at 2 per cent. It is advisable to take such an estimate with a generous pinch of salt. I find the risk tolerable, especially when contemplating both the benefits of genetically engineered biological medicines and the much more certain devastation wrought on humanity by diseases that could be curable.

    6. Two comments here, Waldemar:

      1. Your claim that Global Catastrophic Risks Survey from 2008 estimates the probability of human extinction by 2100 to be 2% is an outright falsehood. The experts' median estimate of that probability is given in the report as 19%.

      2. I am shocked by your nonchalant judgement that a 2% probability of human extinction by 2100 is a "tolerable risk". With a population of 7bn, that amounts to an expected death toll of 0.02*7bn = 140 million (and that counts only those presently alive, ignoring all those future generations that would never come about). Viewing the killing of on average 140 million people as "tolerable" suddenly makes that old bastard Stalin sound like a comparatively nice guy.
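      For anyone who wants to check the expected-value arithmetic, here is a minimal Python sketch (purely illustrative, using the same rough 7bn population figure as above):

      ```python
      # Expected death toll under a 2% probability of human extinction by 2100,
      # counting only those presently alive.
      p_extinction = 0.02  # the 2% figure under discussion
      population = 7e9     # rough current world population

      expected_deaths = p_extinction * population
      print(f"{expected_deaths:,.0f} expected deaths")  # 140,000,000 expected deaths
      ```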

    7. To further illustrate Waldemar's claim that a 2% extinction probability per century is a tolerable risk, observe that this gives an extinction risk of almost 20% per millennium and an expected time to extinction of humanity of 5000 years. The chance that our species survives for another 100,000 years (which is much less than the time humans have already been around) would be about 0.000000002.
      But, hey, maybe that's good; we wouldn't have to worry so much about our nuclear waste.
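      As a sanity check on these figures, here is a minimal Python sketch, assuming a constant and independent 2% extinction risk per century (a strong simplification, of course):

      ```python
      # Constant, independent 2% extinction risk per century.
      p = 0.02

      risk_per_millennium = 1 - (1 - p) ** 10  # ~0.183, i.e. almost 20%
      expected_years = (1 / p) * 100           # mean of a geometric distribution: 50 centuries
      p_survive_100k = (1 - p) ** 1000         # ~1.7e-9, roughly 0.000000002

      print(f"risk per millennium: {risk_per_millennium:.3f}")           # 0.183
      print(f"expected time to extinction: {expected_years:.0f} years")  # 5000 years
      print(f"P(survive 100,000 years): {p_survive_100k:.1e}")           # 1.7e-09
      ```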

    8. Discussing global catastrophic risks will by *its nature* rack up risks involving huge numbers of casualties, unfortunately. That is why the other side of the equation, the benefits, is so important in order to form an honest opinion.

    9. Another consequence, Waldemar, of the huge number of casualties is that discussions of global catastrophic risk are extremely unsuitable for the kind of bullshitting you're doing. It's not a sign of serious intentions that, when you're caught with your pants down, grossly misquoting a figure in order to strengthen your argument, you don't even stop to acknowledge it, but instead just keep babbling about other things.

      Such as (in your comment 16:44 below) your quick dismissal of scientific consensus as a guide in decision making - again bordering on radical skepticism, and an attitude that blends in better among climate denialists than on a serious blog like mine where I'd really prefer not to have to listen to that kind of crap.

  12. Society asks that scientists act responsibly, but banning research can be akin to impeding an investigation. Society also exerts pressure on scientists to publish results in order to get credit for their discoveries, while at the same time publishing too soon can limit the useful lifetime of patents and the ability to profit from one's investment. From a legal point of view, scientists, like other citizens, are subject to bounds on behavior. Subjecting others to reckless endangerment can have serious consequences for a researcher. Research can be limited to secure facilities to reduce risks to society. In effect, one needs a license or permit to do research, which society often grants in order to know how to defend itself and establish appropriate bounds for society as a whole. Another reason that we might want to limit research is to prevent it from becoming too chaotic and to allow time for the review process to do its job. Feigenbaum's constant tells us that there are processes that become chaotic once certain bounds are exceeded.

    Replies
    1. Quite seriously, there should be some more control over what people publish. There is so much pollution in certain fields that a newcomer must struggle to find his/her way through. There are scientists (mathematicians included) who do not understand basic concepts in their fields and yet they publish, and publish too much. In some fields, publications may be crucial indeed. In others, the pollutants (the polluting publications) can only cause disturbance to the experts in the field.

      I don't think that it is Society (in general) which exerts pressure on scientists to publish too much and too fast. To find out, we should ask the "legalistic" question: cui bono? (who benefits?) Does Society (at large) benefit from asking scientists to publish results quickly in order to get credit for their discoveries? No. But some academic administrators, for instance, do. For they can reduce their work: instead of understanding what scientists are doing, they would rather count papers or base their evaluations on metrics (the h-index and other dubious measures). There are others who may benefit, and we should understand who. Certainly, science (and this includes mathematics) does not benefit from publishing too much too fast. Society would benefit more from scientists who, besides publishing, also understand their field.

    2. Takis, I do believe the problem you discuss is real and important, but I also think it is different from (and only tangentially related to) the one discussed at the KVA meeting and in my blog post. You talk about how the publication of junk papers may hinder the advancement of science, while the blog post is concerned with how the advancement of science in certain areas may contribute to global catastrophic risk. Both discussions are important, but in order to avoid unnecessary confusion they are better kept separate.

  13. The basic issue is the notion that technology is optional and has social effects we can fully determine and evaluate in advance. Real research is unpredictable, even at the application stage. Medications often encounter unexpected adverse events during clinical testing, new materials may meet with opposition from more or less informed environmentalists or vested interests on the market, and products may prove difficult to market. Research is a risky investment. History has shown time and again how centrally planned systems have failed to invest right. Other driving forces, from political considerations to bureaucratic fads, affect decisions too much. The task of the researcher becomes to convince a few experts rather than to show good results. When investments in research do not deliver the desired results they are seen as a failure, even when the goal was just planned and orderly development. A freer research model opens up several models of research and different lines, thus creating resilience in research. One of the dangers of current research is that, despite the largest number of researchers in recorded history, some fields have become quite brittle.

  14. I sense an underlying motivation in your reasoning, Waldemar. I fail to see how anyone could argue against a ban if an emerging technology is likely (or even given a 1% chance, according to leading experts) to end human life. Alternative (1) is therefore the correct answer.

    Could it be that the true motivation for your arguments can be found close to alternative (4) in the blog post, and that you are grasping for whatever other arguments you can find? Is the fear of restrictions causing you to disregard the consensus of experts? This would not be entirely unheard of, unfortunately.

    Hans

    Replies
    1. Well, it could be a 1 per cent risk of something terrible happening (that's quite high), but what about the opportunity costs? Let's say that it is a population-wide genetic modification that eliminates cancer. Suddenly the risk is worth contemplating. Discussing existential risks without opportunity costs is simply using the precautionary principle in a much too simple and dangerous fashion.

      I am indeed favoring dynamism, rather than stasism. Constant change, creativity and exploration in the pursuit of progress are better at harvesting the benefits of emerging technologies while also being a better protection against possible disasters.

      Trying to contain progress through the cautious planning of experts often puts all our eggs in one basket, while being unable to draw the opportunities from change.

      The precautionary principle and the propagation of risk aversion have a high cost. They make society brittle, and lacking in resilience, due to its lack of complexity.

      My image of the global disasters is of something rather simple, to which society is unable to react because it lacks the dynamism to respond when things go wrong.

    2. Waldemar. Thank you for clarifying your stance. You are right in that we should weigh the opportunity costs against the risks. We are all in agreement on that point. That is also stated in alternative (1).

      But then you let your belief and trust in freedom and eternal progress outweigh the consensus expert opinion. If the conclusion of the scientific community (which represents the closest thing we have to the sum of human knowledge of the world) is that a path of research has a 1% chance of ending human life, the rational thing to do is to take this advice seriously when forming legislation. You, on the other hand, suggest that we should ignore the sum of human wisdom in favour of your unwarranted trust in an ideology. Trusting an ideology over rational reasoning is dogmatism, and I think you fall into that trap. Just because trusting the free human enterprise "feels right" to you doesn't mean that it will always be the answer to global issues.

      Hans

    3. A consensus is always open to change, even if merely a single researcher has a valid argument. Putting scientific consensus in the place of human wisdom is a dangerous ideological choice that Hans makes.

      The German philosopher Martin Heidegger stressed that humanity is not in charge of technology; technology shapes humanity by forming our worldview. The essence of technology is to enframe the world and to make it quantifiable, rationalised and destructively instrumental. It might seem strange that thoughts from such an anti-humanist philosopher, filled with agrarian nostalgia, as Heidegger have had such an impact on the philosophy of technology, but his technological determinism has remained. It spawns the ideological desire for control that gets in the way of making a rational choice about technology.

      The use of technology is best seen as a process of negotiation, a "marketplace of ideas". In fact, tacit knowledge such as guesses, hunches and personal visions is as decisive as informed, committed actions in determining how a specific technology will be applied. The choice of how technology will work is ultimately ours to make.

    4. I wouldn't call my trust in science and reasoning an ideology. I'm highly aware of its flaws and limitations. I simply think that the more a claim about nature is verified by observation, the more likely it is to remain a decent approximation of nature. That's all. Rejection of that statement is postmodernism, and I'm not sure I understand your stance well enough to place you there. If you are comfortable with that label, then I understand your position and no further discussion is necessary.

      Scientific consensus represents the most reality checked source of knowledge that we have. If you don't believe that represents the pinnacle of knowledge about the world, then what does? Which of the many ideas, philosophies, religions, opinions, "guesses, hunches and personal visions" should we trust? The answer is usually "mine".

      I don't think your enthusiasm over, and dogmatic trust in, endless technological progress has much empirical support. I'd give your view as much consideration as I'd give the cries of a devoted technology pessimist. But if all guesses carry equal weight then that should be fine with you.

      Hans

    5. Consensus in science is important because it tells us about the general state of agreement in a field, not because we should blindly believe the experts on the issue. After all, they could be wrong, or, almost certainly, not have the final word on the matter.

      Studying the arguments behind a consensus builds a responsible opinion, especially when it comes to arguments that you would have difficulty working out yourself (for instance, if you lack access to a powerful piece of research equipment). From this we can understand and critically evaluate the experts' views.

      Scientists do not conform to Thomas Kuhn's concept of "normal science". After all, in an active and vibrant field of research the consensus will be brittle as new facts steadily enter the debate.

      A stated scientific consensus can be a very important tool for debate indeed. But its importance, and scientific vigor, derive from its transparency and how it facilitates debate. Which scientist agrees with whom, on what grounds, and, just as importantly, where do we find disagreement within the consensus, in which areas and for what reasons? Otherwise, you start regarding science as a rigid article of faith.

  15. How do you prove a negative, that the earth will not end tomorrow?

    We do have a string of predictions of disaster, incurring quite real costs. As far off as many of these predictions have been, the original statements are often just revised to prolong their validity or change the topic. The consequence is an increasing risk of the Chicken Little syndrome.

    Replies
    1. Of course there is no way to prove that the Earth will not end tomorrow, but to fall back on such an observation as an excuse for inaction is moronic and borders on radical skepticism. You seem to be in a state of confusion regarding Popperian falsificationism, but perhaps my blog post No nonsense can help clear up some of your confusion.

  16. The human mind, even that of experts, is not that well prepared for the task of considering global catastrophic risks. In this regard I sense two major fallacies.

    The first is short-termism. In the absence of knowledge about the future risks of emerging technologies, precaution draws on the fears and prejudices tied to our present. It risks recycling old fears and prejudices dressed up in new clothes. Issues such as catastrophic climate change, hard-takeoff artificial intelligence, global pandemics, etc., are treated as remarkably similar despite their different natures.

    Responses thus become similar, narrowing our options merely to putting a stop to a practice, a particular substance or a new technology. Bans act on the short term, and provide a quick-fix impression of control over outcomes. We have to be careful not to let a solution wander around in search of a problem.

    The second fallacy is status quo bias. We are not in some happy natural state without the introduction of disruptive technologies like AI or biotechnology. Status quo bias seduces by way of its simplicity and lack of responsibility.

    Change often requires more responsibility to be taken, but on a quite local level rather than the vague global scale often invoked. The "think globally, act locally" of René Dubos has often been discarded in favour of local action on the basis of mismatched global thoughts. Local knowledge might well be a part of breaking down the appraisal of both change and the status quo into manageable parts.

  17. Waldemar further up claimed "I find the [2 percent] risk [of human extinction] tolerable" and AFAICS hasn't backed off that claim.

    That makes me want to stress something Olle only mentioned in passing. The possible existence span of humanity is vast and may, if not cut short by technology-induced extinction, contain a very, very large number of satisfying human (and non-human) lives. The value at stake in terms of such future good lives should be at the forefront in discussions of extinction risks.

    Waldemar wrote that "precaution draws on the fears and prejudices tied to our present".

    That is true, to some extent. But to do argumentative work that claim needs specification. How much is current science and public policy distorted by those factors? What is the evidence for thinking so? Keep in mind that anti-precaution arguments and views may also be biased by other distorting factors.

    Olle: was the KVA meeting recorded? I hope videos will become available online.
