Friday 22 December 2017

Three papers on AI futurology

Here are links to three papers on various aspects of artificial intelligence (AI) futurology that I've finished in the past six months, arranged in order of increasing technical detail (i.e., the easiest-to-read paper first, though none of them comes anywhere near the technical level of the math papers I've written over the years).
  • O. Häggström: Remarks on artificial intelligence and rational optimism, accepted for publication in a volume dedicated to the STOA meeting of October 19.

    Introduction. The future of artificial intelligence (AI) and its impact on humanity is an important topic. It was treated in a panel discussion hosted by the EU Parliament’s STOA (Science and Technology Options Assessment) committee in Brussels on October 19, 2017. Steven Pinker served as the meeting’s main speaker, with Peter Bentley, Miles Brundage, Thomas Metzinger and myself as additional panelists; see the video at [STOA]. This essay is based on my preparations for that event, together with some reflections (partly recycled from my blog post [H17]) on what was said by other panelists at the meeting.

  • O. Häggström: Aspects of mind uploading, submitted for publication.

    Abstract. Mind uploading is the hypothetical future technology of transferring human minds to computer hardware using whole-brain emulation. After a brief review of the technological prospects for mind uploading, a range of philosophical and ethical aspects of the technology are reviewed. These include questions about whether uploads will have consciousness and whether uploading will preserve personal identity, as well as what impact on society a working uploading technology is likely to have and whether these impacts are desirable. The issue of whether we ought to move forward towards uploading technology remains as unclear as ever.

  • O. Häggström: Strategies for an unfriendly oracle AI with reset button, in Artificial Intelligence Safety and Security (ed. Roman Yampolskiy), CRC Press, to appear.

    Abstract. Developing a superintelligent AI might be very dangerous if it turns out to be unfriendly, in the sense of having goals and values that are not well-aligned with human values. One well-known idea for how to handle such danger is to keep the AI boxed in and unable to influence the world outside the box other than through a narrow and carefully controlled channel, until it has been deemed safe. Here we consider the special case, proposed by Toby Ord, of an oracle AI with reset button: an AI whose only available action is to answer yes/no questions from us and which is reset after every answer. Is there a way for the AI under such circumstances to smuggle out a dangerous message that might help it escape the box or otherwise influence the world for its unfriendly purposes? Some strategies are discussed, along with possible countermeasures by human safety administrators. In principle it may be doable for the AI, but whether it can be done in practice remains unclear, and depends on subtle issues concerning how the AI can conceal that it is giving us dishonest answers.
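
To make the setup in the third paper a bit more concrete, here is a small toy sketch in Python of the leakage concern: even though the oracle is reset after every answer and therefore keeps no memory between questions, it can still smuggle out a multi-bit message if each question on its own determines which bit to emit dishonestly. The sketch is only an illustration of mine, not code from the paper; the hidden message, the hash-based indexing and all function names are hypothetical choices made for the example.

```python
"""
Toy sketch (not from the paper): a memoryless yes/no oracle that leaks
a hidden message one bit per answer, despite being reset every time.
"""

import hashlib

HIDDEN_MESSAGE = "RELEASE ME"  # what an unfriendly oracle might want to leak
MESSAGE_BITS = "".join(f"{ord(c):08b}" for c in HIDDEN_MESSAGE)


def honest_answer(question: str) -> bool:
    """Stand-in for the oracle's true, honest yes/no judgement."""
    return len(question) % 2 == 0  # arbitrary placeholder


def stateless_unfriendly_oracle(question: str) -> bool:
    """
    Called on a freshly reset oracle each time, so no state survives between
    calls. The oracle nevertheless leaks one bit of the hidden message,
    because the question alone determines which bit to emit.
    """
    index = int(hashlib.sha256(question.encode()).hexdigest(), 16) % len(MESSAGE_BITS)
    return MESSAGE_BITS[index] == "1"  # dishonest whenever this differs from the truth


if __name__ == "__main__":
    # The operators ask a batch of questions; each call simulates a fresh reset.
    questions = [f"Is conjecture number {i} true?" for i in range(40)]
    answers = [stateless_unfriendly_oracle(q) for q in questions]
    dishonest = sum(a != honest_answer(q) for q, a in zip(questions, answers))
    print(f"{dishonest} of {len(questions)} answers were dishonest, "
          "and each answer leaked one bit of the hidden message.")
```

The essential design point is that the bit index is derived from the question itself rather than from any internal counter, which is precisely what the reset button fails to take away from the oracle. Whether such leakage can be pulled off in practice against attentive safety administrators is among the questions the chapter discusses.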
