My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the fourth in a series of five in which I offer various additional comments on the book (here is an index page for the series).
*
The topic of Bostrom's Superintelligence is dead serious: the author believes the survival and future of humanity is at stake, and he may well be right. He treats the topic with utmost seriousness. Yet, his subtle sense of humor surfaces from time to time, detracting nothing from his serious intent but providing bits of enjoyment for the reader. Here I wish to draw attention to a footnote that I consider a particularly striking example of Bostrom's way of exhibiting a slightly dry humor while meaning every word he writes. What I have in mind is Footnote 10 in the book's Chapter 14, p. 236. The context is a discussion of whether it improves or worsens the odds of a favorable outcome of an AI breakthrough with a fast takeoff (a.k.a. the Singularity) if, prior to that, we have carried out transhumanist cognitive enhancement of humans. As usual, there are pros and cons. Among the pros, Bostrom suggests that improved cognitive skills may make it easier for individual researchers, as well as for society as a whole, to recognize the crucial importance of what he calls the control problem, i.e., the problem of how to turn an intelligence explosion into a controlled detonation with consequences that are in line with human values and favorable to humanity. And here's the footnote:
- Anecdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution, though there could be alternative explanations of this impression. If the field becomes fashionable, it will undoubtedly be flooded with mediocrities and cranks.
Note that Bostrom himself is of course among those "currently seriously interested in the control problem", so the footnote carries an implicit statement about where in the intelligence distribution he places himself. I like that kind of honesty, even though it carries with it a nonnegligible risk of antagonizing others. Eliezer Yudkowsky, in fact, has been known to go far - much further than Bostrom does here - in speaking openly about his own cognitive talents. And he does receive a good deal of shit for that, such as in Alexander Kruel's recent blog post devoted to what he considers to be "Yudkowsky's narcissistic tendencies".
All this makes the footnote multi-layered in a humorous kind of way. I also think the footnote's final sentence, about what happens "if the field becomes fashionable", carries with it a nice touch of humor. Bostrom has a fairly extreme propensity to question premises and conclusions - he is well aware of this - and I do think this last sentence (which points out a downside to what is clearly a main purpose of the book, namely to draw attention to the control problem) is written with a wink to that propensity.
Yudkowskys "Fun theory" or rather "theories of fun" are interesting. They're pretty close to my own thinking, but doesn't seem to have got loads of attention yet. It also doesn't seem like Yudkowski has related "Fun theory" very much to questions about friendly or less friendly AI.
Thanks for the tip! Somehow I've managed to miss this part of Yudkowsky's writings.
As for Yudkowsky's writings, isn't this explanation of Bayes' theorem rather confusing in places? For example, he claims the following about Bayesian priors: "Actually, priors are true or false just like the final answer - they reflect reality and can be judged by comparing them against reality. For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong." This sounds like a frequentist interpretation of probability, but Bayesian reasoning with priors is usually connected with subjectivist interpretations (even if the theorem itself is uncontroversial), and the distinction is never made clear in the article.
Yes. Confusing and wrong.
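To make the point in the above exchange concrete, here is a minimal sketch of the calculation the quoted passage is about. The 1% prior (100 out of 10,000) is taken from the quote; the test's sensitivity and false-positive rate are values I have assumed for illustration, not figures from Yudkowsky's article. The sketch only shows that Bayes' theorem takes the prior as an input; whether that input is a long-run frequency or a subjective degree of belief is a question the theorem itself does not settle.

```python
# Minimal sketch of Bayes' theorem for the breast cancer example.
# The 1% prior comes from the quoted passage (100 out of 10,000 women);
# the sensitivity and false-positive rate are assumed, illustrative values.

def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive test) by Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

prior = 100 / 10_000          # P(cancer): a frequency, or a degree of belief?
sensitivity = 0.8             # assumed: P(positive test | cancer)
false_positive_rate = 0.096   # assumed: P(positive test | no cancer)

print(posterior(prior, sensitivity, false_positive_rate))         # ~0.078

# With the mistaken prior from the quote (920 out of 10,000), the same
# formula gives a very different answer; the theorem is silent about
# where the prior comes from, but a bad prior gives a bad posterior.
print(posterior(920 / 10_000, sensitivity, false_positive_rate))  # ~0.46
```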