- Hanson and Yudkowsky are both driven by an eager desire to understand each other's arguments and to pinpoint the source of their disagreement. This drives them to bring a good deal of meta into their discussion, which I think (in this case) is mostly a good thing. The discussion gains an extra edge from the fact that both aspire to be (as best they can) rational Bayesian agents, and that both are well-acquainted with Robert Aumann's Agreeing to Disagree theorem, which states that whenever two rational Bayesian agents have common knowledge of each other's estimates of the probability of some event, their estimates must in fact agree.2 Hence, as long as Hanson's and Yudkowsky's disagreement persists, this is a sign that at least one of them is irrational. Yudkowsky, especially, tends to obsess over this.
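For reference, Aumann's result can be stated roughly as follows (a standard textbook formulation; the notation is mine, not from the debate):

```latex
% Aumann (1976), "Agreeing to Disagree" -- rough statement.
% Two agents share a common prior $P$ and receive private information
% $\mathcal{I}_1$, $\mathcal{I}_2$. Let their posteriors for an event $A$ be
%   q_1 = P(A \mid \mathcal{I}_1), \qquad q_2 = P(A \mid \mathcal{I}_2).
% Then:
\[
  \text{if } q_1 \text{ and } q_2 \text{ are common knowledge, then } q_1 = q_2.
\]
```

Note that the theorem requires a common prior; much of the practical dispute about "agreeing to disagree" turns on whether that assumption is reasonable for human (or human-aspiring-to-Bayesian) reasoners.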
- All scientific reasoning involves both induction and deduction, although the proportions can vary quite a bit. Concerning the difference between Hanson's and Yudkowsky's respective styles of thinking, what strikes me most is that Hanson relies much more on induction,3 compared to Yudkowsky, who is much more willing to engage in deductive reasoning. When Hanson reasons about the future, he prefers to find some empirical trend in the past and to extrapolate it into the future. Yudkowsky engages much more in mechanistic explanations of various phenomena, combining them in order to deduce other (and hitherto unseen) phenomena.4 (This difference in thinking styles is close, or perhaps even identical, to what Yudkowsky calls the "outside" versus "inside" views.)
- Yudkowsky gave, in his very early paper Staring into the Singularity (written in 1996, when he was a teenager), a beautiful illustration, based on Moore's law, of the idea that a self-improving AI might take off towards superintelligence very fast:
- If computing speeds double every two years,
what happens when computer-based AIs are doing the research?
Computing speed doubles every two years.
Computing speed doubles every two years of work.
Computing speed doubles every two subjective years of work.
Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.
Six months - three months - 1.5 months ... Singularity.
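The arithmetic behind this sketch can be made explicit. Here is a minimal Python rendering of it (assuming, as the quote does, that speed doubles every two *subjective* years, so each doubling halves the wall-clock time until the next):

```python
# Sketch of the accelerating-doublings arithmetic in Yudkowsky's quote.
# Assumption: once AIs reach human equivalence, their speed doubles every
# two subjective years, so each doubling halves the wall-clock interval
# to the next one: 2 years, 1 year, 6 months, 3 months, ...

def years_until_singularity(first_doubling_years=2.0, doublings=50):
    """Total wall-clock years elapsed over the given number of doublings."""
    total = 0.0
    interval = first_doubling_years
    for _ in range(doublings):
        total += interval
        interval /= 2  # twice the speed -> half the wall-clock wait
    return total

# The intervals 2 + 1 + 0.5 + 0.25 + ... form a geometric series,
# so infinitely many doublings fit inside a finite span of 4 years:
print(years_until_singularity())
```

The point of the illustration is precisely that the series converges: under these (admittedly cartoonish) assumptions, an unbounded number of doublings is compressed into four calendar years.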
- In Chapter 18 (Surprised by Brains), Yudkowsky begins...
- Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do.
- They made fun of Galileo, and he was right.
They make fun of me, therefore I am right.
- The central concept that leads Yudkowsky to predict an intelligence explosion is the new positive feedback introduced by recursive self-improvement. But it isn't really new, says Hanson, recalling (in Chapter 2: Engelbart as UberTool?) the case of Douglas Engelbart, his 1962 paper Augmenting Human Intellect: A Conceptual Framework, and his project to create computer tools (many of which are commonplace today) designed to improve the power and efficiency of human cognition. Take word processing as an example. Writing is a non-negligible part of R&D, so if we get an efficient word processor, we will get (at least a bit) better at R&D, so we can then devise an even better word processor, and so on. The challenge here to Yudkowsky is this: Why hasn't the invention of the word processor triggered an intelligence explosion, and why is the word processor case different from the self-improving AI feedback loop?
An answer to the last question might be that the writing part of the R&D process is not really all that crucial, taking up maybe just 2% of the time involved, as opposed to the stuff going on in the AI's brain, which makes up maybe 90% of the R&D work. In the word processor case, no more than 2% improvement is possible, and after each iteration the percentage decreases, quickly fizzling out to undetectable levels. But is there really a big qualitative difference between 0.02 and 0.9 here? Won't the 90% part of the R&D taking place inside the AI's brain similarly fizzle out after a number of iterations of the feedback loop, with other factors (external logistics) taking over the role of dominant bottleneck? Perhaps not, if the improved AI brain figures out ways to improve the external logistics as well. But then again, why doesn't that same argument apply to word processing? I think this is an interesting criticism from Hanson, and I'm not sure how conclusively Yudkowsky has answered it.
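The bottleneck point here has the shape of Amdahl's law, and can be made concrete with a toy model. The 2% and 90% figures are from the argument above; the assumption that each iteration of the feedback loop halves the cost of the improvable fraction is my own illustrative choice, not anything either debater commits to:

```python
# Amdahl's-law-style toy model of Hanson's bottleneck objection.
# Illustrative assumption: each iteration of the feedback loop halves
# the cost of the improvable fraction f of R&D work, while the
# remaining (1 - f) of the work is a fixed bottleneck.

def speedup_after(f, iterations):
    """Overall R&D speedup once the improvable fraction f has had its
    cost halved `iterations` times."""
    improvable_cost = f * 0.5 ** iterations
    return 1.0 / ((1.0 - f) + improvable_cost)

for f in (0.02, 0.90):  # word processing vs. the AI's own cognition
    ceiling = 1.0 / (1.0 - f)  # Amdahl's limit as iterations -> infinity
    print(f"f={f}: after 20 iterations {speedup_after(f, 20):.3f}x,"
          f" ceiling {ceiling:.3f}x")
```

On this toy model the feedback loop fizzles out in both cases; the difference between 2% and 90% is one of degree (a ceiling of about 1.02x versus 10x), which is one way of cashing out the question of whether the difference is qualitative or merely quantitative.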