Friday, December 18, 2020

My conversation with Seán Ó hÉigeartaigh on AI governance and AI risk

Seán Ó hÉigeartaigh, co-director of the Centre for the Study of Existential Risk at the University of Cambridge, is a leading expert on AI governance and AI risk. Earlier this week I had a stimulating discussion with him on these topics. This took place as part of the AI ethics seminar series at Chalmers, but deserves (in my humble opinion) to be seen more widely:

Thursday, December 17, 2020

My appearance on SVT's Utrikesbyrån yesterday

Regarding my appearance on SVT's Utrikesbyrån yesterday (roughly 11:20-12:50 into the video on SVT Play; see also this news clip), I want to stress that my critical words about FHM director-general Johan Carlson's early-March prediction of how hard Sweden might, in the worst case, be hit by the coronavirus are not mere hindsight. Readers of this blog with good memories may recall that I was strongly critical of his reckless assessment already at the time.

The program, which was given the title Pricka rätt 2021, turned out quite OK on the whole, I think, and it is particularly good that they highlighted Philip Tetlock's work on superforecasting. And naturally I consider it a wise move of theirs to bring me in as expert commentator, but...

...they could have gotten considerably more out of that particular choice if they had stuck to the original intention they signaled to me ahead of the interview, namely to discuss, on a more general level, the role of mathematical models in forecasting, rather than quickly zooming in on China's unreliable covid-19 data.

Tuesday, December 1, 2020

New AI paper with James Miller and Roman Yampolskiy

I have quoted the following 1951 words by Alan Turing before:
    My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely. [...] Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. [...] It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.
It is but a small step from that ominous final sentence about machines taking over to the conclusion that when that happens, everything hinges on what they are motivated to do. The academic community's reaction to Turing's suggestion was a half-century of almost entirely ignoring it, and only the last couple of decades have seen attempts to seriously address the issues that it gives rise to. An important result of the early theory-building that has come out of this work is the so-called Omohundro-Bostrom framework for instrumental vs final AI goals. I have discussed it, e.g., in my book Here Be Dragons and in a 2019 paper in the journal Foresight.

Now, in collaboration with economist James Miller and computer scientist Roman Yampolskiy, I have another paper on the same general circle of ideas, this time with emphasis on the aspects of Omohundro-Bostrom theory that require careful scrutiny in light of the game-theoretic considerations that arise for an AI living in an environment where it needs to interact with other agents. The paper, whose title is An AGI modifying its utility function in violation of the strong orthogonality thesis, is published in the latest issue of the journal Philosophies. The abstract reads as follows:
    An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly competitive environments might converge to having nearly the same utility function, one optimized to favorably influencing other agents through game theory. Nothing in our analysis weakens arguments concerning the risks of AGI.
Read the full paper here!
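
To give a feel for the game-theoretic mechanism involved (this is my own toy illustration, not a model taken from the paper), consider an ultimatum game in which the responder's utility function is observable to the proposer. A responder who rewrites its utility function so that it genuinely prefers getting nothing over accepting an unfair split turns its threat to reject into a credible one, and thereby ends up with a larger material payoff. Here is a minimal Python sketch, with all numbers and function names chosen purely for illustration:
    # Toy ultimatum game: the proposer splits a pie of 10 and can inspect the
    # responder's utility function before choosing an offer. All values here
    # are hypothetical and only meant to illustrate the commitment effect.

    PIE = 10

    def responder_accepts(offer, utility):
        # The responder accepts iff accepting is at least as good (by its own
        # utility function) as walking away with nothing.
        return utility(offer) >= 0

    def proposers_best_offer(utility):
        # Knowing the responder's utility function, the proposer picks the
        # smallest offer that will still be accepted (keeping the rest).
        accepted = [x for x in range(PIE + 1) if responder_accepts(x, utility)]
        return min(accepted) if accepted else None

    # Original utility: pure material payoff, so every offer is accepted.
    material = lambda offer: offer

    # Self-modified utility: offers below 4 now feel worse than getting
    # nothing, which makes the threat to reject them credible.
    def resentful(offer):
        return offer if offer >= 4 else -1

    for name, u in [("material", material), ("self-modified", resentful)]:
        print(name, "responder is offered", proposers_best_offer(u))
    # -> material responder is offered 0
    # -> self-modified responder is offered 4
The self-modified responder does better in purely material terms, which is the sense in which such a modification can be instrumentally rational; the paper examines when pressures of this kind come into conflict with the strong orthogonality thesis.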