Tuesday, June 20, 2023

A question for Émile Torres

Dear Émile,

Since my rewarding and truly enjoyable experience in 2017 of serving as your host in Gothenburg during the GoCAS program on existential risk to humanity, there has been plenty of water under the bridge, including, unfortunately, a great deal of friction between the two of us.1 But never mind (at least for the time being) all that, because I now have a specific question regarding your view of an issue that is dear to my heart: the importance of avoiding the extinction of Homo sapiens by unaligned AI.

On one hand, you brought up this topic in a Washington Post op-ed as recently as August last year, and seemed to agree with me about the increasingly urgent need to avoid the creation of an unaligned superintelligent AI that kills us all.

On the other hand, there is the recent episode of the podcast Dave Troy Presents with you and Timnit Gebru. Throughout most of the episode, the derogatory term "AI doomer" is used about those of us who take seriously the extinction risk from unaligned AI.2 Given what you wrote in the Washington Post, I would have expected you to protest against this language, as well as against Gebru's extensive monologue (starting about 01:09:40 into the episode and lasting around five minutes) about how extinction risk from AI is nonsense and a distraction from much more pressing and important problems having to do with AIs exhibiting racial bias and the underrepresentation of women speaking at AI conferences. You had plenty of opportunity to add nuance to the discussion by pointing out that extinction risk from AI is actually a real thing, but at no point in the episode was there any hint of disagreement between you and Gebru over this (or anything else, for that matter).

I am puzzled by the contrast between what you say in the Washington Post piece and what you seem to agree with in the Dave Troy Presents episode. Have you changed your mind about AI xrisk since August 2022?3 Do you no longer think there's a serious risk from unaligned AI to the survival of our species? If so, I'd really like to know what new knowledge you have acquired to reach this conclusion, because learning the same thing could lead to a huge change in how I currently prioritize my time and efforts. Or have you cynically chosen to downplay the risk in order to achieve a better social fit with your new allies in the Gebru camp? If this last suggestion sounds uncharitable, then please forgive me, because I'm really struggling to understand your current take on AI risk.

With kind regards,

Olle

Footnotes

1) This friction includes (but is far from limited to) your tendentious way of repeatedly quoting a passage in my 2016 book Here Be Dragons.

2) As I recently emphasized in an interview in the Danish Weekendavisen, I think the term "AI doomer" is terrible, as it brings to mind someone who shouts "just face it, we're all going to die!". The message we "doomers" actually have is very different, namely that we (humanity) are currently on a very dangerous trajectory, where the combination of AI capabilities improving at breakneck speed and AI alignment falling far behind risks leading to an AI apocalypse, but that we can still avoid this fate if we pull ourselves together and make appropriate adjustments to the trajectory.

3) I am aware that you have at various times asserted your blanket disagreement with everything you wrote on xrisk up to 2019(?), but if you similarly disagree with what you wrote less than a year ago in the Washington Post, that gives a whole new time frame to your change of heart.
