I recommend reading the news article on existential risk research featured in Science Magazine this week. Several prominent researchers are interviewed, including physicist Max Tegmark, who should by now be well known to regular readers of this blog. Like philosopher Nick Bostrom, whose work (including his excellent 2014 book Superintelligence) is discussed in the article, he emphasizes a possible future breakthrough in artificial intelligence (AI) as one of the main existential risks to humanity. Tegmark explains, as Bostrom has done before him and as I have done in an earlier blog post, that the trial-and-error method that has served humanity so well throughout history is insufficient in the presence of catastrophic risks of this magnitude:
Scientists have an obligation to be involved, says Tegmark, because the risks are unlike any the world has faced before. Every time new technologies emerged in the past, he points out, humanity waited until their risks were apparent before learning to curtail them. Fire killed people and destroyed cities, so humans invented fire extinguishers and flame retardants. With automobiles came traffic deaths—and then seat belts and airbags. "Humanity's strategy is to learn from mistakes," Tegmark says. "When the end of the world is at stake, that is a terrible strategy."
On the other hand, this line of research is controversial in some circles, so today's media logic dictates that its adversaries are also heard. Recently, cognitive scientist Steven Pinker has become perhaps the most visible such adversary, and he gets to have his say in the Science Magazine article. Unfortunately, he seems to have nothing to offer beyond recycling the catchy one-liners he used when I met him face to face at the EU Parliament in Brussels in October last year - one-liners whose hollowness I later exposed in my blog post The AI meeting in Brussels last week and at greater length in my paper Remarks on artificial intelligence and rational optimism. Pinker's poor performance in these discussions gives the impression (which I will not contradict) that proponents of the position "Let's not worry about apocalyptic AI risk!" do not have good arguments for it. The impression is reinforced by how even leading AI researchers like Yann LeCun, when trying to defend that position, resort to arguments on roughly the same level as those employed by Pinker. To me, that adds to the evidence that apocalyptic AI risk does merit being taken seriously. Readers who agree with me on this and want to learn more can, for instance, start by reading my aforementioned paper, which offers a gentle introduction and suggestions for further reading.