Monday, 20 February 2017

Vulgopopperianism

I have sometimes, in informal discussions, toyed with the term vulgopopperianism, although mostly in Swedish: vulgärpopperianism. What I have not done so far is to pinpoint and explain (even to myself) what it means, but now is the time.

Karl Popper (1902-1994) was one of the most influential philosophers of science of the 20th century. His emphasis on falsifiability as a criterion for a theory to qualify as scientific is today a valuable and standard part of how we think about science, and it is no accident that, in the chapter named "What is science?" in my book Here Be Dragons, I spend two entire sections on his falsificationism.

Science aims at figuring out and understanding what the world is like. Typically, a major part of this endeavour involves choosing, tentatively and in the light of available scientific evidence, which to prefer among two or more descriptions (or models, or theories, or hypotheses) of some relevant aspect of the world. Popper put heavy emphasis on certain asymmetries between such descriptions. In a famous example, the task is to choose between the hypothesis
    (S1) all swans are white
and its complement
    (S2) at least one non-white swan exists.
The idea that, unlike (S2), (S1) is falsifiable - for instance by the discovery of a black swan - has become so iconic that no fewer than two books by fans of Karl Popper (namely, Nassim Nicholas Taleb and Ulf Persson) that I've read in the past decade feature black swans on their cover illustrations.
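To make the asymmetry explicit (a minimal formalization of my own, not anything taken from Popper), the two hypotheses can be written as
    (S1) ∀x (Swan(x) → White(x))
    (S2) ∃x (Swan(x) ∧ ¬White(x)).
A single observed non-white swan falsifies (S1), while no finite collection of white-swan observations can falsify (S2); this is the asymmetry that the falsifiability criterion trades on.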

The swan example is highly idealized, and in practice the asymmetry between two competing hypotheses is typically nowhere near as clear-cut. I believe the failure to understand this has been a major source of confusion in scientific debate for many decades. An extreme example is how, a couple of years ago, a Harvard professor of neuroscience named Jason Mitchell drew on this confusion (in conjunction with what we might call p≤0.05-mysticism) to defend a bizarre view on the role of replications in science.

By a vulgopopperian, I mean someone who
    (a) is moderately familiar with Popperian theory of science,

    (b) is fond of the kind of asymmetry that appears in the all-swans-are-white example, and

    (c) rejoices in claiming, whenever he1 encounters two competing hypotheses, one of which he for whatever reason prefers, an asymmetry that places the entire (or nearly the entire) burden of proof on the other hypothesis, and in insisting that until a conclusive such proof is presented, we can take for granted that the preferred hypothesis is correct.

Jason Mitchell is a clear case of a vulgopopperian. So are many (perhaps most) of the proponents of climate denialism that I have encountered during the decade or so that I have participated in climate debate. Here I would like to discuss another example in just a little bit more detail.

In May last year, two of my local colleagues in Gothenburg, Shalom Lappin and Devdatt Dubhashi, gave a joint seminar entitled AI dangers: imagined and real2, in which they took me somewhat by surprise by spending large parts of the seminar attacking my book Here Be Dragons and Nick Bostrom's Superintelligence. They did not like our getting carried away by what they regard as the useless distraction of worrying about a future AI apocalypse in which things go bad for humanity due to the emergence of superintelligent machines whose goals and motivations are not sufficiently in line with human values and the promotion of human welfare. What annoyed them was specifically the idea that the creation of superintelligent machines (machines that in terms of general intelligence, including cross-domain prediction, planning and optimization, vastly exceed current human capabilities) might happen within the foreseeable future, such as a century or so.

It turned out that we did have some common ground as regards the possibility-in-principle of superintelligent machines. Neither Shalom nor Devdatt, nor I, believes that human intelligence comes out of some mysterious divine spark. No, the impression I got was that we agree that intelligence arises from natural (physical) processes, and also that biological evolution cannot plausibly be thought to have come anywhere near a global intelligence optimum with the emergence of the human brain, so that there do exist arrangements of matter that would produce vastly-higher-than-human intelligence.3 So the question is not whether superintelligence is in principle possible, but rather whether it is attainable by human technology within a century or so. Let's formulate two competing hypotheses about the world:
    (H1) Achieving superintelligence is hard - not attainable (other than possibly by extreme luck) by human technological progress by the year 2100,
and
    (H2) Achieving superintelligence is relatively easy - within reach of human technological progress, if allowed to continue unhampered, by the year 2100.

There is some vagueness or lack of precision in those statements, but for the present discussion we can ignore that, and postulate that exactly one of the hypotheses is true, and that we wish to figure out which one - either out of curiosity, or for the practical reason that if (H2) is true then this is likely to have vast consequences for the future prospects for humanity and we should try to find ways to make these consequences benign.

It is not a priori obvious which of hypotheses (H1) and (H2) is the more plausible, and as far as burden of proof is concerned, I think the reasonable thing is to treat them symmetrically. A vulgopopperian who for one reason or another (and like Shalom Lappin and Devdatt Dubhashi) dislikes (H2) may try to emphasize the similarity with Popper's swan example: humanly attainable superintelligent AI designs are like non-white swans, and just as the way to refute the all-swans-are-white hypothesis is to exhibit a non-white swan, the way to refute (H1) here is to actually build a superintelligent AI; until then, (H1) can be taken to be correct. This was very much how Shalom and Devdatt argued at their seminar. All evidence in favor of (H2) (this kind of evidence makes up several sections in Bostrom's book, and Section 4.5 in mine) was dismissed as showing just "mere logical possibility" and thus ignored. For example, concerning the relatively concrete proposal by Ray Kurzweil (2006) of basing AI construction on scanning the human brain and copying the computational structures to faster hardware, the mere mention of a potential complication in this research program was enough for Shalom and Devdatt to dismiss the whole proposal and throw it on the "mere logical possibility" heap. This is vulgopopperianism in action.
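What symmetric treatment amounts to can be spelled out in Bayesian terms (this is my own illustration, not anything said at the seminar): whatever prior odds between (H1) and (H2) one starts out with, it is the evidence E that should shift the posterior odds, via
    P(H1|E) / P(H2|E) = [P(E|H1) / P(E|H2)] · [P(H1) / P(H2)].
Neither hypothesis enjoys a default status here; what the vulgopopperian in effect does is to set the prior odds to something extreme and then demand that evidence be supplied only for the disfavored hypothesis.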

What Shalom and Devdatt seem not to have thought of is that a vulgopopperian who takes the opposite of their stance by strongly disliking (H1) may invoke a mirror image of their view of burden of proof. That other vulgopopperian (let's call him O.V.) would say that surely, if (H2) is false, then there is a reason for that, some concrete and provably uncircumventable obstacle, such as some information-theoretic bound showing that superintelligent AI cannot be found by any algorithm other than brute-force search for an extremely rare needle in a haystack the size of the Library of Babel. As long as no such obstacle is exhibited, we must (according to O.V.) accept (H2) as the overwhelmingly more plausible hypothesis. Has it not occurred to Shalom and Devdatt that, absent their demonstration of such an obstacle, O.V. might proclaim that their arguments go no further than to demonstrate the "mere logical possibility" of (H1)? I am not defending O.V.'s view, only holding it up as an arbitrarily and unjustifiably asymmetric view of (H1) and (H2) - it is just as arbitrarily and unjustifiably asymmetric as Shalom's and Devdatt's.

Ever since the seminar by Shalom and Devdatt in May last year, I have thought about writing something like the present blog post, but have procrastinated. Last week, however, I was involved in a Facebook discussion with Devdatt and a few others, where Devdatt expressed his vulgopopperian attitude towards the possible creation of a superintelligence so clearly and so unabashedly that it finally triggered me to roll up my sleeves and write this stuff down. The relevant part of the Facebook discussion began with a third party asking Devdatt whether he and Shalom had considered the possibility that (H2) might have a fairly small probability of being true, yet one large enough that, given the values at stake (including, but perhaps not limited to, the very survival of humanity), the issue is worthy of consideration. Devdatt's answer was a sarcastic "no":
    Indeed neither do we take into account the non-zero probability of a black hole appearing at CERN and destroying the world...
His attempt at analogy annoyed me, and I wrote this:
    Surely you must understand the crucial disanalogy between the Large Hadron Collider black hole issue, and the AI catastrophe issue. In the former case, there are strong arguments (see Giddings and Mangano) for why the probability of catastrophe is small. In the latter case, there is no counterpart of the Giddings-Mangano paper and related work. All there is, is people like you having an intuitive hunch that the probability is small, and babbling about "mere logical possibility".
Devdatt struck back:
    Does one have to consider every possible scenario however crazy and compute its probability before one is allowed to say other things are more pressing?
Notice what Devdatt does here: he rejects the idea that his arguments for (H1) need to go beyond establishing "mere logical possibility" and give us reason to believe that the probability of (H1) is large (or, equivalently, that the probability of (H2) is small). Yet, a few lines further down in the same discussion, he demands precisely such going beyond mere logical possibility from arguments for (H2):
    You have made it abundantly clear that superintelligence is a logical possibility, but this was preaching to the choir, most of us believe that anyway. But where is your evidence?
The vulgopopperian asymmetry in Devdatt's view of the (H1) vs (H2) problem could hardly be expressed more clearly.
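For what it is worth, the third party's question that started the exchange is essentially an expected-value observation. As a minimal back-of-envelope sketch (the numbers are purely illustrative and not anyone's actual estimates):
    expected loss from ignoring the issue ≈ p · V,
    where, say, p = 0.01 is the probability that (H2) is true and things go badly,
    and V is the value at stake, up to and including the survival of humanity.
Even a p that everyone agrees is small can leave the product p·V enormous when V is on that scale, which is why "the probability is small" does not by itself settle whether the issue deserves attention.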

Footnotes

1) Perhaps I should apologize for writing "he" rather than "he or she", but almost all vulgopopperians I have come across are men.

2) A paper by the same authors, with the same title and with related (although not quite identical) content recently appeared in the journal Communications of the ACM.

3) For simplicity, let us take all this for granted, although it is not really crucial for the discussion; a reader who doubts any of these statements may take their negations and include them disjunctively in hypothesis (H2).

13 comments:

  1. Did you read this article by David Deutsch? You may have commented on it in your book (which I haven't read), but I have never seen you refer to it directly. I think he does an extremely good job there of summing up the core argument that I think is behind most computer scientists' view on the subject, even though we might not realize it. I didn't quite see it before reading that, anyway.

    1. I have read it before, but will read it again with your recommendation in mind.

    2. A few months after reading it, I reviewed Turing's 1950 article, and noted that “Lady Lovelace's objection” might, generously interpreted, be taken to be essentially the same as Deutsch's argument about creativity. In any case, Turing misses the point when he tries to sidestep the Lovelace quote by substituting “surprise” for “originate”, and mixes in the irrelevant topic of consciousness. That tells me something about the depth of Deutsch's article: it gave me the ability to spot an error by Alan Turing! In his defense, he may have had only that sentence from Lovelace, plus the distracting comment by Hartree, to react to.

  2. Somewhat tangential to the post, but the year 2100 is not only early but very early. For the problem at hand (solving the alignment problem), the year 2400 or so should probably also be considered early (though then maybe not very early)?
    /tonyf

    1. Assuming you are right about the alignment problem (but I don't see why), does this have any bearing on the timing of emergence of superintelligence? Well, maybe, if the entire AI community comes together to agree upon "wait guys, hold it, we haven't yet solved the alignment problem, so let's postpone launching a superintelligence until it is solved". That might happen. And then again, it might not.

  3. I had expected Devdatt to react, swiftly and massively, in this comments section, but that does not seem to be happening. Elsewhere (on Facebook) he has, however, reacted with a long (and I'd say somewhat Trumpian) rant. To illustrate the quality of his arguments, let me give just one snapshot from his rant. He writes:

    "Simplistic arguments about AI systems rewriting themselves or continually self-improving betray a total lack of understanding of where AI currently is."

    Well, forgive me for saying so, but this comment betrays a total lack of understanding of what such escalating-spiral-of-self-improvement arguments are about. They are not about what is happening right now or what current AI state-of-the-art is capable of producing. They are about what might happen upon an AI breakthrough in the (possibly distant) future.

  4. But then the recent development of AI is irrelevant to the superintelligence discussion. The point is that current AI methods lack any mechanism from which it is reasonable to think that superintelligence might develop, and hence they have no bearing on whether such a philosophical breakthrough is imminent or even possible.

    1. See the references given in the blog post.

    2. Apologies, ctail, for my unnecessarily grumpy comment 12:44. It's just that in your comment 10:24, you make three statements, and all three strike me as ungrounded and clearly wrong.

  5. No problem, I don't expect to be able to convince you, and there is no need to reiterate the contents of your sources. I was merely trying to help you understand what Devdatt meant, since it appeared you didn't. It seems to me that your processing of counterarguments consists to a large extent of elaborate theorizing that allows you to bypass them and keep your locked-in position. And you will never be able to convince me, since the fear of artificial superintelligence is so obviously unfounded from my computer science perspective. But if you truly want to understand the computer scientists' position (or at least mine, but it seems to harmonize with that of all computer scientists I know), I urge you to reread the Deutsch article with an open mind. Not that I think it will convince you.

    1. Thanks. Some telegraphic reactions.

      1. I am not (consciously) trying to bypass anything. I honestly want to understand the superintelligence-is-silly position and am trying my best, but have so far not been successful.

      2a. I appreciate the clever ironic passage "...your locked-in position. And you will never be able to convince me...".

      2b. It is you who have a strong opinion about which of hypotheses (H1) and (H2) is true - not me! So which of us has a "locked-in position"? (Rhetorical question.)

      3. In view of the surveys reported, e.g., by Dafoe and Russell, it seems like a pretty bad misnomer to describe your position as "the computer scientists' position".

    2. 4. I reread the piece by Deutsch fairly slowly, but it didn't do magic for me. I read it as reiterating the familiar (and IMHO respectable) statement that true creativity or true ability to think outside the box involves something that we do not (yet) understand or know how to formalize.

  6. I fully recognize that we both have locked-in positions, and sorry for not being updated on computer scientist surveys. It's just that to me the superintelligence fear has strong similarities with “Jesus is our Lord and savior” - a position also supported by many intelligent people. I'll read a little of what pops up now and then, for amusement and annoyance, but as long as no new evidence turns up there is no argument that can bring me to the other side, and I can't be bothered to get myself fully updated on the discussion, even though I realize it weakens my credibility.
