Consider the distinction between narrow AI and artificial general intelligence (AGI), the former referring to AI specialized in a limited range of capabilities and tasks, and the latter denoting a hypothetical future AI technology exhibiting the flexibility and the full range of cognitive capabilities that we associate with human intelligence. Timelines for when (if ever) AGI will be built are highly uncertain [MB,GSDZE], but the scientific debate on the issue has become more pressing in recent years, following work on the possibly catastrophic consequences of an AGI breakthrough unless sufficient care is taken to align the machine’s goals with human values [Y08,Bo,Ru,H].
A common argument for long or infinite timelines until AGI is to point to some example where present-day AI performs badly compared to humans, and to take this as evidence of a lack of “common sense” and an indication that constructing AGI is far beyond our current capabilities, so that it cannot plausibly become reality anytime soon [Bks,W,P18a,P18b]. We have all seen YouTube videos of robots tripping over their own feet used for this purpose, and a recent viral case concerned an AI meant to steer a TV camera to track the ball during a football game, which instead tracked the bald head of a linesman [V]. Such utter lack of common sense! An implicit or sometimes explicit conclusion tends to be that concerns about future machines taking control of the world are overblown and can be dismissed.
I believe that the central position assigned to the concept of AGI in the AI futurology and AI safety literature has, via the following implicit narrative, given undue credence to the idea that AI existential risk is exclusively an issue concerning the far future. For AI to become AGI, we need to equip it with all of the myriad competences we associate with common sense, one after the other, and only then will it be ready to enter the spiral of self-improvement that, it has been suggested, may lead to an intelligence explosion, superintelligence, and the ability to take over the world [G,Bo,H].
But common sense is a vague notion: it tends to be simply a label we put on any ability where humans still outperform AI. Since AGI by definition matches human cognitive capabilities across the board, there will, right up until the very moment AGI is created, always remain some ability at which humans do better, and that ability can then be pointed to as a common sense failure of AI. Hence the argument that such failures show that AGI is a long way off is unsound; see [Y17] for a similar observation.
More importantly, however: AGI may not be necessary for the self-improvement spiral to take off. A better narrative is the following. We are already at a stage where AIs are better than humans at some cognitive tasks while humans remain better than AIs at others. For instance, AIs are better at chess [Si,SR], while humans are better at tracking a football [V]. A human pointing to the football example to show that AIs lack common sense has no more force than an AI pointing to chess to show that humans lack common sense. Neither of them fully dominates the other in terms of cognitive capabilities. Still, AI keeps improving, and rather than focusing on full dominance by AI (as the AGI concept encourages us to do), the right question to ask is what range of cognitive capabilities an AI needs to master in order to be able to seize power from humans. Self-improvement (i.e., constructing advanced AI) is probably a key competence here, but what does that entail?
There is probably little reason to be concerned about, say, the AlphaZero breakthrough [Si,SR], as the domain of two-player zero-sum finite board games with full information is unlikely to be the most significant part of the intelligence spectrum. More significant, probably, is the use of language, since much of what humans do to control the world is done through language acts. Language is also an area of AI that has seen dramatic improvements in the last few years with large-scale language models such as GPT-2 and GPT-3 [RWAACBD,Bwn,So,KEWGMI]. With the launch of GPT-3, it was stressed that no new methods were employed compared to GPT-2: the improved performance was solely a consequence of brute-force upscaling. Nobody knows where the ceiling is for such upscaling, and it is therefore legitimate to ask whether the crucial ingredient needed for AI’s ability to control the world to surpass that of humans might lie in the brute-force expansion of such language models.
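To make the brute-force upscaling concrete, the following back-of-the-envelope snippet compares the publicly reported parameter counts of the largest GPT-2 model and of GPT-3; it is merely illustrative arithmetic and makes no claim about the models’ internals.

```python
# Publicly reported parameter counts (approximate) for the largest GPT-2 model
# and for GPT-3, as given in [RWAACBD] and [Bwn] respectively.
gpt2_params = 1.5e9    # ~1.5 billion parameters
gpt3_params = 175e9    # ~175 billion parameters

# The jump from GPT-2 to GPT-3 was roughly a hundredfold increase in model size,
# achieved without new methods, i.e., the brute-force upscaling referred to above.
print(f"Scale-up factor: {gpt3_params / gpt2_params:.0f}x")  # prints 117x
```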
If it seems counterintuitive that an AI would attain the ability to take over the world without first having mastered common sense, consider that this happens all the time for smaller tasks. As an illustrative small-scale example, take the game of Sokoban, where the player moves around in a maze with the task of pushing a collection of boxes to a set of prespecified locations. An AI system for solving Sokoban puzzles was recently devised and exhibited superhuman abilities, but apparently without acquiring the common sense insight that every human player quickly picks up: if you push a box into a corner, it will be stuck there forever [FDS,PS]. No clear argument has been given for why efficient problem solving that lacks common sense would be impossible for a much bigger task, such as taking over the world.
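To see how elementary the insight in question is, here is a minimal Python sketch of the corner rule. It is purely illustrative and has nothing to do with the system in [FDS]; the grid encoding ('#' for walls, '.' for goal squares) and the function name are arbitrary choices of mine.

```python
def is_corner_deadlock(grid, box_row, box_col):
    """Return True if a box at (box_row, box_col) is stuck in a corner
    that is not a goal square, making the puzzle unsolvable."""
    if grid[box_row][box_col] == '.':  # a box already on a goal square is fine
        return False
    wall = lambda r, c: grid[r][c] == '#'
    blocked_vertically = wall(box_row - 1, box_col) or wall(box_row + 1, box_col)
    blocked_horizontally = wall(box_row, box_col - 1) or wall(box_row, box_col + 1)
    # Two adjacent walls at right angles: the box can never be pushed again.
    return blocked_vertically and blocked_horizontally

# Tiny example: a box pushed to row 1, column 1 ends up in the top-left corner.
level = ["#####",
         "#   #",
         "# . #",
         "#####"]
print(is_corner_deadlock(level, 1, 1))  # True: the position is unwinnable
```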
To summarize, I believe that while the notion of AGI has been helpful in the early formation of ideas in the futurology and philosophy of AI, it has also been a source of some confusion, especially via the common sense argument discussed above. The AI existential risk literature might therefore benefit from a less narrow focus on the AGI concept. Such a shift of focus may already be underway, as in the recent authoritative research agenda [CK], where talk of AGI is almost entirely abandoned in favor of other notions, such as prepotent AI, meaning AI systems whose deployment would be at least as transformative to the world as humanity itself and unstoppable by humans. I welcome this shift in discourse.
References
[Bo] Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford.
[Bks] Brooks, R. (2017) The seven deadly sins of AI prediction, MIT Technology Review, October 6.
[Bwn] Brown, T. et al. (2020) Language models are few-shot learners, https://arxiv.org/abs/2005.14165
[CK] Critch, A. and Krueger, D. (2020) AI research considerations for human existential safety (ARCHES), https://arxiv.org/abs/2006.04948
[FDS] Feng, D., Gomes, C. and Selman, B. (2020) A novel automated curriculum strategy to solve hard Sokoban planning instances, 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
[G] Good, I.J. (1966) Speculations concerning the first ultraintelligent machine, Advances in Computers 6, 31–88.
[GSDZE] Grace, K., Salvatier, J., Dafoe, A., Zhang, B. and Evans, O. (2017) When will AI exceed human performance? Evidence from AI experts, https://arxiv.org/abs/1705.08807
[H] Häggström, O. (2021) Tänkande maskiner: Den artificiella intelligensens genombrott, Fri Tanke, Stockholm.
[KEWGMI] Kenton, Z., Everitt, T., Weidinger, L., Gabriel, I., Mikulik, V. and Irving, G. (2021) Alignment of language agents, https://arxiv.org/abs/2103.14659
[MB] Müller, V. and Bostrom, N. (2016) Future progress in artificial intelligence: A survey of expert opinion, in Fundamental Issues of Artificial Intelligence (ed. V. Müller), Springer, Berlin, pp. 554–571.
[PS] Perry, L. and Selman, B. (2021) Bart Selman on the promises and perils of artificial intelligence, Future of Life Institute Podcast, May 21.
[P18a] Pinker, S. (2018) Enlightenment Now: The Case for Reason, Science and Humanism, Viking, New York.
[P18b] Pinker, S. (2018) We’re told to fear robots. But why do we think they’ll turn on us? Popular Science, February 14.
[RWAACBD] Radford, A., Wu, J., Amodei, D., Amodei, D., Clark, J., Brundage, M. and Sutskever, I. (2019) Better language models and their implications, OpenAI, February 14, https://openai.com/blog/better-language-models/
[Ru] Russell, S. (2019) Human Compatible: Artificial Intelligence and the Problem of Control, Viking, New York.
[SR] Sadler, M. and Regan, N. (2019) Game Changer: AlphaZero’s Ground-Breaking Chess Strategies and the Promise of AI, New In Chess, Alkmaar, NL.
[Si] Silver, D. et al. (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play, Science 362, 1140–1144.
[So] Sotala, K. (2020) I keep seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd collect a thread of them, Twitter, July 15.
[V] Vincent, J. (2020) AI camera operator repeatedly confuses bald head for soccer ball during live stream, The Verge, November 3.
[W] Waters, R. (2018) Why we are in danger of overestimating AI, Financial Times, February 5.
[Y08] Yudkowsky, E. (2008) Artificial intelligence as a positive and negative factor in global risk, in Global Catastrophic Risks (ed. N. Bostrom and M. Cirkovic), Oxford University Press, Oxford, pp. 308–345.
[Y17] Yudkowsky, E. (2017) There’s no fire alarm for artificial general intelligence, Machine Intelligence Research Institute, Berkeley, CA.