Friday, October 27, 2017

On Tegmark's Life 3.0

I've just read Max Tegmark's new book Life 3.0: Being Human in the Age of Artificial Intelligence.1 My expectations were very high after his previous book Our Mathematical Universe, and yes, his new book is also good. In Tegmark's terminology, Life 1.0 is most of biological life, whose hardware and software are both evolved; Life 2.0 is us, whose hardware is mostly evolved but who mostly create our own software through culture and learning; and Life 3.0 is future machines that design their own software and hardware. The book is about what a breakthrough in artificial intelligence (AI) might entail for humanity - and for the rest of the universe. To a large extent it covers the same ground as Nick Bostrom's 2014 book Superintelligence, but with more emphasis on cosmological perspectives and on the problem of consciousness. Beyond that, I found less novelty in Tegmark's book compared to Bostrom's than I had expected, but one difference is that while Bostrom's book is scholarly and quite demanding, Tegmark's is more clearly directed at a broader audience, and is in fact a very pleasant and easy read.

There is of course much I could comment on in the book, but to keep this blog post short, let me zoom in on just one detail. The book's Figure 1.2 is a very nice diagram of ways to view the possibility of a future superhuman AI, with the expected goodness of the consequences (utopia vs dystopia) on the x-axis, and the expected time until its arrival on the y-axis. In his discussion of the various positions, Tegmark emphasizes that "virtually nobody" expects superhuman AI to arrive within the next few years. This is what pretty much everyone in the field - myself included - says. But I've been quietly wondering for some time what the actual evidence is for the claim that the required AI breakthrough will not happen in the next few years.2 Almost simultaneously with reading Life 3.0, I read Eliezer Yudkowsky's very recent essay There's No Fire Alarm for Artificial General Intelligence, which draws attention to the fact that all of the empirical evidence usually put forward in favor of the breakthrough not being imminent describes a general situation that can be expected to still hold at a time very close to the breakthrough.3 Hence the purported evidence is not very convincing.

Now, unless I misremember, Tegmark doesn't actually say in his book that he endorses the view that a breakthrough is unlikely to be imminent - he just says that this is the consensus view among AI futurologists. Perhaps this is not an accident? Perhaps he has noticed the lack of evidence for the position, but chooses not to advertise this? I can see good reasons to keep a low profile on this issue. First, when one discusses topics that may come across as weird (AI futurology clearly is such a topic), one may want to somehow signal sobriety - and saying that an AI breakthrough in the next few years is unlikely may serve as such a signal. Second, there is probably no use in creating panic, as solving the so-called Friendly AI problem seems unlikely to be doable in just a few years. Perhaps one can even make the case that these reasons ought to have compelled me not to write this blog post.

Footnotes

1) I read the Swedish translation, which I typically do not do with books written in English, but this time I happened to receive it as a birthday gift from the translators Helena Sjöstrand Svenn and Gösta Svenn. The translation struck me as highly competent.

2) An even more extreme version of this question is to ask what the evidence is that, in a version of Bostrom's (2014) notion of the treacherous turn, the superintelligent AI already exists and is merely biding its time. It was philosopher John Danaher, in his provocative 2015 paper Why AI doomsayers are like sceptical theists and why it matters, who drew attention to this possibility; see my earlier blog post A disturbing parallel between AI futurology and religion.

3) Here is Yudkowsky's own summary of this evidence:
    Why do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:

    (A) The author does not know how to build AGI using present technology. The author does not know where to start.

    (B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.

    (C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.
