Research on artificial intelligence (AI) tends to focus too much on making the AI capable, and too little on making its effects (once it becomes capable) benign to humans and humanity. A non-obvious but important point here is that for an AI to do bad things, it need not have been built with bad intentions. Sometimes the intentions really are bad, as when AI systems are built to fly drones and shoot at people, but an AI built purely with good intentions can also do bad things. For a drastic example, let me quote from my review of Nick Bostrom's important book Superintelligence:
- Talk of the danger of AI may trigger us to think about drones and other military technology, but Bostrom emphasizes that even an AI with seemingly harmless tasks may bring about disaster. He exemplifies with a machine designed to produce paperclips. If such a machine becomes the seed of an intelligence explosion, then, unless we have planned it with extreme caution, it may well result in the entire solar system (including ourselves) being turned into a grotesque heap of paperclips.
An open letter calling for putting higher priority on research to make AI safe and socially beneficial has recently been posted, and signed by over a thousand people, myself included, along with prominent AI experts, philosophers, scientists and entrepreneurs such as Stuart Armstrong, Nick Bostrom, Erik Brynjolfsson, Ben Goertzel, Katja Grace, Stephen Hawking, Luke Muehlhauser, Elon Musk, Peter Norvig, Steve Omohundro, Martin Rees, Stuart Russell, Anders Sandberg, Jaan Tallinn, Max Tegmark, Eliezer Yudkowsky, Roman Yampolskiy and many more. Read the letter here! And if you agree with us, do consider signing it!