Tuesday, April 2, 2024

Interviewed about AI risk in two new episodes of The Evolution Show

Three years ago, shortly after the release of the first edition of my book Tänkande maskiner, I was interviewed about AI risk by Johan Landgren on his YouTube podcast The Evolution Show. The amount of water under the bridge since then has been absolutely stupendous, and the issue of AI risk has become much more urgent, so last month Johan decided it was time to record two more episodes with me:

In his marketing of our discussion he put much emphasis on "7 years" as the timeline until the decisive AI breakthrough that will make or break humanity. I'm not sure I even mentioned that figure explicitly in our conversations, but admittedly it was implicit in some of the imagery I offered. Still, I should emphasize that timelines are extremely uncertain, to the extent that an exact figure like 7 years needs to be taken with a huge grain of salt. It could happen in 2 years, or 5, or 10, or (given either some severe unforeseen technical obstacle or a collective decision to pause the development of frontier AI) even 20 years or more. This uncertainty subtracts nothing, however, from the urgency of mitigating AI existential risk.

Another part of Johan's marketing of our conversation that I'd like to quote is his characterization of it as "kanske det viktigaste jag haft i mitt liv" ("perhaps the most important one in my entire life"). This may or may not be an exaggeration, but I do agree with him that the topics we discussed are worth paying attention to.

1 comment:

  1. That the Future of Humanity Institute has now been shut down does not sound like particularly good news.
