Friday, April 19, 2024

Future of Humanity Institute 2005-2024

The news that the University of Oxford's Future of Humanity Institute (FHI), after nearly two decades of existence, closed down earlier this week (Tuesday, April 16) made me very sad. The institute was Nick Bostrom's brainchild, and it was truly pioneering in terms of formulating some of the most profound and important questions about how to ensure a flourishing future for mankind, as well as in beginning the work of answering them. Their work has more or less uninterruptedly been at the forefront of my mind for more than a decade, and although I only visited their physical headquarters twice (in 2012 and 2016), it is clear to me that it was a uniquely powerful and creative research environment.

In the first draft of this blog post I used the acronym RIP in the headline, but decided to change that, because I wish that what remains from the institute - the minds and the ideas that it fostered - will not rest in peace, but instead continue to sparkle and help create a splendid future. They can do this at the many subsequent research institutes and think tanks that FHI helped inspire, such as The Centre for the Study of Existential Risk in Cambridge, The Future of Life Institute in Massachusetts, The Global Priorities Institute in Oxford, and The Mimir Center at the Institute for Future Studies in Stockholm. And elsewhere.

My friend Anders Sandberg was a driving force at the institute almost from the start until the very end. His personal memoir of the institute, entitled Future of Humanity Institute 2005-2024: Final Report, offers a summary and many wonderful glimpses from their successful work, including a generous collection of photographs.1 Reading it is a great consolation at this moment. Along with the successes, Anders also tells us briefly about the institute's downfall:
    Starting in 2020, the Faculty [of Philosophy] imposed a freeze on fundraising and hiring. Unfortunately, this led to the eventual loss of lead researchers and especially the promising and diverse cohort of junior researchers, who have gone on to great things in the years since. While building an impressive alumni network and ecosystem of new nonprofits, these departures severely reduced the Institute. In late 2023, the Faculty of Philosophy announced that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down. [p 19]
Later, on pp. 60-61, he offers three short paragraphs about what failings on the FHI's side may have led to such harsh treatment from the Faculty. What he offers is hardly the full story, and I have no specific insight into their organization that can add anything. Still, let me offer a small speculation, mostly based on introspection into my own mind and experience, about the kind of psychological and social mechanisms that may have contributed:

If you are an FHI kind of person (as I am), it will likely seem to you that lowering P(doom) by as little as a ppm is so obviously urgent and important that it appears superfluous and almost perverse to argue for such work using more traditional academic measuring sticks and rituals. That may lead you to ignore (some of) those rituals. If this clash of cultures continues for sufficiently long without careful intervention, relations with the rest of the university are likely to deteriorate and eventually collapse.


1) See also his latest blog post.

Tuesday, April 2, 2024

Interviewed about AI risk in two new episodes of The Evolution Show

Three years ago, shortly after the release of the first edition of my book Tänkande maskiner, I was interviewed about AI risk by Johan Landgren in his YouTube podcast The Evolution Show. The amount of water under the bridge since then has been absolutely stupendous, and the issue of AI risk has become much more urgent, so last month Johan decided it was time to record two more episodes with me:

In his marketing of our discussion he put much emphasis on "7 years" as a timeline until the decisive AI breakthrough that will make or break humanity. I'm not sure I even mentioned that figure explicitly in our conversations, but admittedly it was implicit in some of the imagery I presented. Still, I should emphasize that timelines are extremely uncertain, to the extent that an exact figure like 7 years needs to be taken with a huge grain of salt. It could happen in 2 years, or 5, or 10, or - given either some severe unforeseen technical obstacle or a collective decision to pause the development of frontier AI - even 20 years or more. This uncertainty subtracts nothing, however, from the urgency of mitigating AI existential risk.

Another part of Johan's marketing of our conversation that I'd like to quote is his characterization of it as "kanske det viktigaste jag haft i mitt liv" ("perhaps the most important one in my entire life"). This may or may not be an exaggeration, but I do agree with him that the topics we discussed are worth paying attention to.