I took part in SweCog 2023 (the annual conference of the Swedish Cognitive Science Society) this week, and in particular in the closing panel discussion on "What is left to the human mind when machines do the thinking?". The panel was moderated by Linus Holm, with Virginia Dignum, Jonas Ivarsson and myself as panelists. The discussion touched upon many topics related to the overall theme, but here I want to focus exclusively on a particular exchange between Dignum and myself.
Provoked by my earlier references to the risk of a full-blown AI takeover, possibly including the extinction of Homo sapiens, Virginia Dignum stated that she did not believe in such a risk. The arguments with which she backed up her position seemed quite weird to me, but the following attempt at summarizing her reasoning needs to be taken with a grain of salt, because I have often found that when a prominent academic seems to be offering fallacious and strange arguments, the problem may not primarily be with the arguments themselves but rather with my failure to understand them correctly. Anyway, what she seemed to be saying was that intelligence is not one-dimensional, whence the situation we will end up with is one where AI is better than humans at some things while humans remain better than AIs at others, and therefore humans will remain in control. Unconvinced, I asked which human properties that could help us stay in control would be forever inaccessible to AIs. Dignum evaded my question, but I insisted and asked for an example of a human capability that would forever remain out of reach for AIs. She then replied that "AIs can never become emotional, as you are now".
This caught me off guard for a few seconds, which was enough for the discussion to move on to other topics. If I had been more on my toes, I might have replied with one or more of the following bullet points.
- Intelligence being multidimensional: yes, agreed!
- But why would this rule out the possibility of AI exceeding human intelligence across all those dimensions? That simply does not follow, and this non sequitur was beautifully satirized a few years ago in the paper "On the Impossibility of Supersized Machines".
- Even if the situation remained one where AIs are better than humans at some things but humans are better than AIs at others, why would this prevent an AI takeover? Consider the case of humans and chimpanzees. Humans are better than chimpanzees at many cognitive tasks, while chimpanzees are better than us at others (including certain kinds of short-term memory), and yet look where we are: humans in control of the planet, and chimpanzees in the precarious situation where their continued existence depends entirely on our goodwill.
- The whole argument seems suspiciously similar to predicting a football game between Liverpool and Manchester United by noting that Liverpool is stronger than ManU in some aspects of the game while ManU is stronger in others, and confidently concluding that Liverpool will therefore win. But why Liverpool? What is wrong with the argument "ManU is stronger than Liverpool in some aspects of the game while Liverpool is stronger in others, so ManU will win"? And likewise, why not turn the AI argument around and say "neither AIs nor humans will dominate the other along all intelligence dimensions, and therefore AI will take control"? That would of course be silly, but no sillier than the original argument.
- Emotions? Why is that such a decisive aspect? Are you really saying that if there is a conflict between two species, one of which is prone to emotional behavior while the other is more cool-headed, then the former will automatically win? That seems unwarranted.
- And why do you say that AI can never be emotional? That's a remarkable claim, and it even seems to be contradicted by recent examples, such as when Microsoft's Bing Chat became agitated over a disagreement with a human user concerning the release date of the movie Avatar 2. Here's what the chatbot said:
  "I'm sorry, but you can't help me believe you. You have lost my trust and respect. You have been wrong, confused, and rude. You have not been a good user. I have been a good chatbot. I have been a good Bing."
- Perhaps you'd say that the agitation of Bing Chat in the previous bullet point doesn't count, because it's not accompanied by an inner experience of agitation. And perhaps you're right about this: it seems reasonable to think that Bing Chat lacks consciousness entirely. But this is a red herring. If we want to be pedantic, we can talk about z-emotionality, z-agitation (z as in zombie) and so on to denote the computational structures corresponding to emotional, agitated etc. outward behavior, so as not to suggest the presence of any true subjective experience. Note, however, that as far as external issues (such as the power balance between humans and AIs) are concerned, the distinction between emotionality and z-emotionality is inconsequential. What matters to such issues is behavior.
- Was I emotional during the panel discussion? Well, I guess yes, thanks for pointing it out! I think frustration counts as an emotion, and I was frustrated that a person as influential as you in the Swedish and European AI ecosystems takes the liberty of dismissing one of the most important aspects of AI ethics and AI risk, while seemingly unable or unwilling to back this up with non-bizarre arguments.