Tuesday 1 December 2020

New AI paper with James Miller and Roman Yampolskiy

The following words by Alan Turing, from 1951, are ones I have quoted before:
    My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. [...] Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. [...] It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.
It is but a small step from that ominous final sentence about machines taking over to the conclusion that when that happens, everything hinges on what they are motivated to do. The academic community's reaction to Turing's suggestion was a half-century of almost entirely ignoring it, and only the last couple of decades have seen attempts to seriously address the issues that it gives rise to. An important result of the early theory-building that has come out of this work is the so-called Omohundro-Bostrom framework for instrumental vs final AI goals. I have discussed it, e.g., in my book Here Be Dragons and in a 2019 paper in the journal Foresight.

Now, in collaboration with economist James Miller and computer scientist Roman Yampolskiy, I have written another paper on the same general circle of ideas, this time emphasizing the aspects of Omohundro-Bostrom theory that require careful scrutiny in light of the game-theoretic considerations that arise for an AI in an environment where it needs to interact with other agents. The paper, whose title is An AGI modifying its utility function in violation of the strong orthogonality thesis, appears in the latest issue of the journal Philosophies. The abstract reads as follows:
    An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly competitive environments might converge to having nearly the same utility function, one optimized to favorably influencing other agents through game theory. Nothing in our analysis weakens arguments concerning the risks of AGI.
Read the full paper here!
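
To get some intuition for why an agent whose utility function is (at least partially) observable to others might gain from modifying it, here is a minimal toy sketch in Python. The ultimatum-game setup, the rejection threshold and the payoff numbers are illustrative assumptions of my own and are not taken from the paper; the sketch merely exhibits the commitment effect that underlies this kind of instrumental reasoning.

    # Toy illustration (hypothetical payoffs, not from the paper): an ultimatum
    # game over a pie of 10 units. The proposer offers the responder an integer
    # amount and keeps the rest. A responder whose observable utility is just
    # "money received" accepts any offer, so the proposer offers the minimum.
    # A responder that has modified its observable utility function to strongly
    # dislike offers below a threshold will credibly reject them, and a rational
    # proposer then offers the threshold amount instead.

    PIE = 10

    def responder_utility(offer, threshold):
        """Modified utility: money received, minus a large penalty for accepting
        an offer below the (publicly observable) rejection threshold."""
        penalty = 100 if offer < threshold else 0
        return offer - penalty

    def responder_accepts(offer, threshold):
        # Accept iff accepting is at least as good as rejecting (both get 0);
        # ties are broken in favour of accepting.
        return responder_utility(offer, threshold) >= 0

    def proposer_best_offer(threshold):
        """Proposer maximizes its own share given the responder's observable policy."""
        best_offer, best_keep = 0, 0
        for offer in range(PIE + 1):
            keep = (PIE - offer) if responder_accepts(offer, threshold) else 0
            if keep > best_keep:
                best_offer, best_keep = offer, keep
        return best_offer

    for threshold in (0, 4):
        offer = proposer_best_offer(threshold)
        kind = "unmodified" if threshold == 0 else "modified"
        print(f"{kind} responder (rejection threshold {threshold}): "
              f"proposer offers {offer}, responder's material payoff = {offer}")

Running the sketch, the unmodified responder ends up with the minimal offer, while the responder that has credibly committed to rejecting offers below the threshold receives the threshold amount. The modification pays off precisely because other agents can observe it and best-respond to it, which is the sense in which such self-modification can be instrumentally rational.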

2 comments:

  1. Interesting paper! Nice to see it published under Open Access.

    But wouldn't it be possible for the AGI defending the castle in section 6 to improve its odds by using a (publicly observable) RNG and a strategy in which it remains silent with probability p when observing a strong castle?

    Would it be possible to use evolutionary game theory to study the evolution of preferences/utility functions of an AGI given the expected environment? (Maybe “expected environment” is the key phrase here…)

    (The paper also relates, in an interesting way, to the ongoing discussion regarding people's ability to influence their own utility functions. People in general fulfill both conditions 2 and 3 of the Conditions for Utility Function Self-Modification and reside in a (hyper?) competitive environment.)

  2. Very well. I still don't see how the AGI could be *aware* of: 1) its own goals, 2) the external world, 3) other minds, and 4) the future.
