My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the fourth in a series of five in which I offer various additional comments on the book (here is an index page for the series).
The topic of Bostrom's Superintelligence is dead serious: the author believes the survival and future of humanity is at stake, and he may well be right. He treats the topic with utmost seriousness. Yet his subtle sense of humor surfaces from time to time, detracting nothing from his serious intent while providing bits of enjoyment for the reader. Here I wish to draw attention to a footnote which I consider a particularly striking example of Bostrom's way of exhibiting a slightly dry humor at the same time as he means every word he writes.

What I have in mind is Footnote 10 in the book's Chapter 14, p. 236. The context is a discussion of whether it improves or worsens the odds of a favorable outcome of an AI breakthrough with a fast takeoff (a.k.a. the Singularity) if, prior to that, we have performed transhumanist cognitive enhancement of humans. As usual, there are pros and cons. Among the pros, Bostrom suggests that improved cognitive skills may make it easier for individual researchers, as well as for society as a whole, to recognize the crucial importance of what he calls the control problem, i.e., the problem of how to turn an intelligence explosion into a controlled detonation with consequences that are in line with human values and favorable to humanity. And here's the footnote:
- Anecdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution, though there could be alternative explanations of this impression. If the field becomes fashionable, it will undoubtedly be flooded with mediocrities and cranks.
I like that kind of honesty, even though it carries with it a nonnegligible risk of antagonizing others. Eliezer Yudkowsky, in fact, has been known for going far - much further than Bostrom does here - in speaking openly about his own cognitive talents. And he does receive a good deal of shit for that, such as in Alexander Kruel's recent blog post devoted to what he considers to be "Yudkowsky's narcissistic tendencies".
All this makes the footnote multi-layered in a humorous kind of way. I also think the footnote's final sentence about what happens "if the field becomes fashionable" carries with it a nice touch of humor. Bostrom has a fairly extreme propensity to question premises and conclusions; he is well aware of this, and I do think this last sentence (which points out a downside to what is clearly a main purpose of the book - namely, to draw attention to the control problem) is written with a wink to that propensity.