- Now I don’t know whether the "intelligence explosion" is true or not, but I can think of some models that are more probable. Maybe I can ask my smartphone to do something more than call my wife, and it will actually do it? Maybe a person-sized robot can walk around for two minutes without falling over when there is a small gust of wind? Maybe I can predict, within some reasonable confidence interval, climate change a number of years into the future? Maybe I can build a mathematical model that predicts short-term economic changes? These are all plausible models of the future, and we should invest in testing them. Oh wait a minute... that’s convenient... I just gave a list of the kind of engineering and scientific questions our society is currently working on. That makes sense!
- It is impossible for me to rise to your challenge of rolling up my sleeves and getting to work on the substance [regarding the problem of a possible intelligence explosion]. I would love to, but there is nothing substantive to work on. I have little data on which to build a model, nor has anyone supplied a concrete explanation of where to start. I could start working on a specific aspect, for example, automated learning for playing games, or perhaps models of social behaviour. But, actually, this is what I already do in my research. One day this work might provide a step on the path towards the Intelligence Explosion, but I have no way of saying how or when it will do so.
- Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.
- I do agree with Armstrong that, before we launch superhuman AGIs upon the world, it would be nice to have a much better theory of how they operate and how they are likely to evolve. No such theory is going to give us any guarantees about what future superhuman AGIs will bring, but the right theory may help us bias and sculpt the future possibilities.
However, I think the most likely route to such a theory will be experimentation with early-stage AGI systems...
I have far more faith in this sort of experimental science than in "philosophical proofs", which, I find, generally tend to prove whatever the philosopher doing the proving intuitively believed in the first place...
Of course it seems scary to have to build AGI systems and play with them in order to understand AGI systems well enough to build ones whose growth will be biased in the directions we want. But so it goes. That's the reality of the situation. Life has been scary and unpredictable from the start. We humans should be used to it by now!