On April 7 I gave the opening keynote talk, *AI alignment and our momentous imperative to get it right*, at the 2022 GAIA (Gothenburg AI Alliance) conference. When the organizers asked me for a brief summary to use in promoting the conference, I gave them this:
- We are standing at the hinge of history, where actions taken today can lead towards a long and brilliant future for humanity, or to our extinction. Foremost among the rapidly developing technologies that we need to get right is AI. Already in 1951, Alan Turing warned that "once the machine thinking method had started, it would not take long to outstrip our feeble powers", and that "at some stage therefore we should have to expect the machines to take control." If and when that happens, our future hinges on what these machines' goals and incentives are, and in particular whether these are compatible with and give sufficient priority to human flourishing. The still small but rapidly growing research area of AI Alignment aims at solving the momentous task of making sure that the first AIs with power to transform our world have goals that in this sense are aligned with ours.
At least one member of the audience complained afterwards that my talk was too short (the video runs just under 29 minutes) and said he would have liked to hear more. To him and others, I offer the lecture series on AI risk and long-term AI safety that I gave in February this year.