I have sometimes, in informal discussions, toyed with the term vulgopopperianism, although mostly in Swedish: vulgärpopperianism. What I have not done so far is to pinpoint and explain (even to myself) what it means, but now is the time.
Karl Popper (1902-1994) was one of the most influential philosophers of science of the 20th century. His emphasis on
falsifiability as a criterion for a theory to qualify as scientific is today a valuable and standard part of how we think about science, and it is no accident that, in the chapter named "What is science?" in my book
Here Be Dragons, I spend two entire sections on his falsificationism.
Science aims at figuring out and understanding what the world is like. Typically, a major part of this endeavour involves choosing, tentatively and in the light of available scientific evidence, which to prefer among two or more descriptions (or models, or theories, or hypotheses) of some relevant aspect of the world. Popper put heavy emphasis on certain asymmetries between such descriptions. In a famous example, the task is to choose between the hypothesis
(S1) all swans are white
and its complement
(S2) at least one non-white swan exists.
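In first-order notation (my gloss, not a formulation taken from Popper), the asymmetry is that (S1) is a universal statement, refutable by a single counterexample but not verifiable by any finite number of confirming instances, while (S2) is the corresponding existential statement with the opposite property:
(S1): $\forall x \, (\mathrm{swan}(x) \rightarrow \mathrm{white}(x))$
(S2): $\exists x \, (\mathrm{swan}(x) \wedge \neg \mathrm{white}(x))$, i.e., $\neg$(S1).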
The idea that, unlike (S2), (S1) is falsifiable, such as by the discovery of a black swan, has become so iconic that no less than two books by fans of Karl Popper (namely,
Nassim Nicholas Taleb and
Ulf Persson) that I've read in the past decade feature black swans on their cover illustrations.
The swan example is highly idealized, and in practice the asymmetry between two competing hypotheses is typically nowhere near as clear-cut. I believe the failure to understand this has been a major source of confusion in scientific debate for many decades. An extreme example is how, a couple of years ago, a Harvard professor of neuroscience named Jason Mitchell drew on this confusion (in conjunction with what we might call
p≤0.05-mysticism) to defend
a bizarre view on the role of replications in science.
By a vulgopopperian, I mean someone who
(a) is moderately familiar with Popperian theory of science,
(b) is fond of the kind of asymmetry that appears in the all-swans-are-white example, and
(c) rejoices in claiming, whenever he1 encounters two competing hypotheses one of which he for whatever reason prefers, an asymmetry that places the entire (or almost the entire) burden of proof on the competing hypothesis, and in insisting that until such a conclusive proof is presented, we can take for granted that the preferred hypothesis is correct.
Jason Mitchell is a clear case of a vulgopopperian. So are many (perhaps most) of the proponents of climate denialism that I have encountered during the decade or so that I have participated in climate debate. Here I would like to discuss another example in just a little bit more detail.
In May last year, two of my local colleagues in Gothenburg, Shalom Lappin and Devdatt Dubhashi, gave
a joint seminar entitled AI dangers: imagined and real2, in which they took me somewhat by surprise by spending large parts of the seminar attacking my book
Here Be Dragons and Nick Bostrom's
Superintelligence. They did not like our getting carried away by the (in their view) useless distraction of worrying about a future AI apocalypse in which things go bad for humanity due to the emergence of superintelligent machines whose goals and motivations are not sufficiently in line with human values and the promotion of human welfare. What annoyed them was specifically the idea that the creation of superintelligent machines (machines that in terms of general intelligence, including cross-domain prediction, planning and optimization, vastly exceed current human capabilities) might happen within the foreseeable future, such as within a century or so.
It turned out that we did have some common ground as regards the possibility-in-principle of superintelligent machines. Neither Shalom, nor Devdatt, nor I believe that human intelligence comes out of some mysterious divine spark. No, the impression I got was that we agree that intelligence arises from natural (physical) processes, and also that biological evolution cannot plausibly be thought to have come anywhere near a global intelligence optimum by the emergence of the human brain, so that there do exist arrangements of matter that would produce vastly-higher-than-human intelligence.3
So the question then is not whether superintelligence is in principle possible, but rather whether it is attainable by human technology within a century or so. Let's formulate two competing hypotheses about the world:
(H1) Achieving superintelligence is hard - not attainable (other than possibly by extreme luck) by human technological progress by the year 2100,
and
(H2) Achieving superintelligence is relatively easy - within reach of human technological progress, if allowed to continue unhampered, by the year 2100.
There is some vagueness or lack of precision in those statements, but for the present discussion we can ignore that, and postulate that exactly one of the hypotheses is true, and that we wish to figure out which one - either out of curiosity, or for the practical reason that if (H2) is true then this is likely to have vast consequences for the future prospects for humanity and we should try to find ways to make these consequences benign.
It is not a priori obvious which of hypotheses (H1) and (H2) is more plausible than the other, and as far as burden of proof is concerned, I think the reasonable thing is to treat them symmetrically. A vulgopopperian who for one reason or another (and like Shalom Lappin and Devdatt Dubhashi) dislikes (H2) may try to emphasize the similarity with Popper's swan example: humanly attainable superintelligent AI designs are like non-white swans, and just as the way to refute the all-swans-are-white hypothesis is to exhibit a non-white swan, the way to refute (H1) here is to actually build a superintelligent AI; until then, (H1) can be taken to be correct. This was very much how Shalom and Devdatt argued at their seminar. All evidence in favor of (H2) (this kind of evidence makes up several sections in Bostrom's book, and Section 4.5 in mine) was dismissed as showing just
"mere logical possibility" and thus ignored. For example, concerning the relatively concrete proposal by
Ray Kurzweil (2006) of basing AI-construction on scanning the human brain and copying the computational structures to faster hardware, the mere mention of a potential complication in this research program was enough for Shalom and Devdatt to dismiss the whole proposal and throw it on the
"mere logical possibility" heap. This is vulgopopperianism in action.
What Shalom and Devdatt seem not to have thought of is that a vulgopopperian who takes the opposite of their stance by strongly disliking (H1) may invoke a mirror image of their view of burden of proof. That other vulgopopperian (let's call him O.V.) would say that surely, if (H2) is false, then there is a
reason for that, some concrete and provably uncircumventable obstacle, such as some information-theoretic bound showing that superintelligent AI cannot be found by any algorithm other than brute force search for an extremely rare needle in a haystack the size of
the Library of Babel. As long as no such obstacle is exhibited, we must (according to O.V.) accept (H2) as the overwhelmingly more plausible hypothesis. Has it not occurred to Shalom and Devdatt that, absent their demonstration of such an obstacle, O.V. might proclaim that their arguments go no further than to demonstrate the
"mere logical possibility" of (H1)? I am not defending O.V.'s view, only holding it forth as an arbitrarily and unjustifiably asymmetric view of (H1) and (H2) - it is just as arbitrarily and unjustifiably asymmetric as Shalom's and Devdatt's.
Ever since the seminar by Shalom and Devdatt in May last year, I have thought about writing something like the present blog post, but have procrastinated. Last week, however, I was involved in a Facebook discussion with Devdatt and a few others, where Devdatt expressed his vulgopopperian attitude towards the possible creation of a superintelligence so clearly and so unabashedly that it finally triggered me to roll up my sleeves and write this stuff down. The relevant part of the Facebook discussion began with a third party asking Devdatt whether he and Shalom had considered the possibility that (H2) might have a fairly small probability of being true, yet one large enough that, given the values at stake (including, but perhaps not limited to, the very survival of humanity), the issue is worthy of consideration. Devdatt's answer was a sarcastic "no":
Indeed neither do we take into account the non-zero probability of a black hole appearing at CERN and destroying the world...
His attempt at analogy annoyed me, and I wrote this:
Surely you must understand the crucial disanalogy between the Large Hadron Collider black hole issue, and the AI catastrophe issue. In the former case, there are strong arguments (see Giddings and Mangano) for why the probability of catastrophe is small. In the latter case, there is no counterpart of the Giddings-Mangano paper and related work. All there is, is people like you having an intuitive hunch that the probability is small, and babbling about "mere logical possibility".
Devdatt struck back:
Does one have to consider every possible scenario however crazy and compute its probability before one is allowed to say other things are more pressing?
Notice what Devdatt does here: he rejects the idea that his arguments for (H1) need to go beyond establishing
"mere logical possibility" and give us reason to believe that the probability of (H1) is large (or, equivalently, that the probability of (H2) is small). Yet, a few lines down in the same discussion, he demands precisely this going beyond mere logical possibility from arguments for (H2):
You have made it abundantly clear that superintelligence is a logical possibility, but this was preaching to the choir, most of us believe that anyway. But where is your evidence?
The vulgopopperian asymmetry in Devdatt's view of the (H1) vs (H2) problem could hardly be expressed more clearly.
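To return, in symbols, to the third party's question that started the exchange (this is my own gloss, and no particular numbers are attached to it): write $p$ for the probability that (H2) is true, and $L$ for the loss we incur if superintelligence arrives while preparations for it have been neglected. The expected cost of ignoring the issue is then $p \cdot L$, which can be large even when $p$ is fairly small, provided $L$ is large enough. Dismissing the issue therefore calls for an argument that $p$ is negligible, and such an argument would have to go beyond establishing the "mere logical possibility" of (H1).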
Footnotes
1) Perhaps I should apologize for writing "he" rather than "he or she", but almost all vulgopopperians I have come across are men.
2)
A paper by the same authors, with the same title and with related (although not quite identical) content, recently appeared in the journal
Communications of the ACM.
3) For simplicity, let us take all this for granted, although it is not really crucial for the discussion; a reader who doubts any of these statements may take their negations and include them disjunctively in hypothesis (H2).