Tuesday, March 28, 2023

OpenAI and the Manhattan project

Here's another short (30-minute) video lecture of mine, following up on the one from December 16 last year and commenting on the breakneck speed at which AI is now advancing, including in particular OpenAI's releases in the last two weeks, first of GPT-4 and then of ChatGPT plugins. Perhaps I will be criticized for the parallel to aspects of the Manhattan project, but so be it, because the comparison is apt.

Saturday, March 25, 2023

Three recent podcast episodes where I talk about GPT and other AI

Over the course of just over a month, I've appeared on three different podcasts for in-depth interviews about the current very rapid development in artificial intelligence (AI) and what it means for our future. There is plenty of overlap in subject matter between the three episodes, but also significant non-overlap. Of possible importance to some of my readers, the first two are in Swedish and the third one is in English.

Let me also note that I am a returning guest on all three of these podcasts. Prior to the episodes listed above, I have appeared twice on Fri Tanke-podden (in December 2017 and May 2021), once on Om filosofers liv och tankar (in June 2021), and twice on Philosophical Disquisitions (in July 2019 and September 2022).

Friday, March 24, 2023

I write about GPT-4 and GPT-5 in Ny Teknik today

In Ny Teknik today, I write about AI risk in connection with OpenAI's new GPT-4 and its possible successor GPT-5. My op-ed was given the headline Vi kan inte ta för givet att vi överlever en otillräckligt säkrad GPT-5 (We cannot take for granted that we will survive an insufficiently safeguarded GPT-5), and it opens as follows:
    What qualities lie behind the dominance over our planet that humanity has built up from almost nothing over the past 100,000 years? It is not a matter of muscle strength or physical endurance, but almost entirely of our intelligence, whose enormous power also points to the decisive stage we find ourselves at right now, as we are about to automate it and transfer it to machines.

    Google's CEO Sundar Pichai was hardly exaggerating when, in a 2018 speech, he declared AI (artificial intelligence) to be "probably the most important thing humanity has ever worked on, more important than both electricity and fire". AI could become the key to solving all the major environmental, natural-resource and societal problems we are wrestling with today, and to laying the foundation for a sustainable and flourishing future.

    But there are also great risks. Some of these are relatively down-to-earth, such as the risk that AI tools contribute to discrimination or become instruments for spreading individually targeted disinformation and spam. These are important to address, but ultimately the foremost AI risk concerns a possible future AI breakthrough that creates what is sometimes called AGI (artificial general intelligence). Already in 1951, the founder of computer science, Alan Turing, wrote in a forward-looking text about a point where machines eventually surpass human thinking ability, and how we can then no longer count on retaining control over them.

    In such a situation, humanity's continued fate hinges on what goals and driving forces the superintelligent machines have. It would take more than half a century after Turing's words of warning before the research field today known as "AI alignment" slowly began to get off the ground. AI alignment is about making sure that the first AGI machines are given values that are in line with our own, and that prioritize human welfare. If we succeed in this, the AGI breakthrough will be the best thing that has ever happened to us, but if not, it will likely be our undoing.

    Anyone who has overdosed on Terminator and similar films can easily come to believe that advanced robotics is necessary for an AI catastrophe. One of the insights that AI alignment pioneer Eliezer Yudkowsky arrived at in the 2000s, however, was that an AI takeover does not presuppose robotics, and that a perhaps more likely path to such a takeover, at least initially, goes via...

With this cliffhanger, I urge you to read the rest of the article at Ny Teknik.

Saturday, March 4, 2023

AI and the future of higher education: a half-day seminar

This past Thursday, March 2, 2023, CHAIR (Chalmers AI Research Centre) organized a half-day seminar entitled Artificiell intelligens och den högre utbildningens framtid (Artificial intelligence and the future of higher education). The well-attended meeting can now be viewed via CHAIR's YouTube channel. It opened with a talk by me entitled AI före och efter 2022 – tekniken och dess samhällskonsekvenser (AI before and after 2022: the technology and its societal consequences), after which Thore Husfeldt spoke on the topic Hur påverkar generativ artificiell intelligens högre utbildning (How does generative artificial intelligence affect higher education):

After the coffee-break mingle, the meeting resumed with Jenny de Fine Licht's talk Möjligheter och utmaningar med AI-teknik inom samhällsvetenskaplig utbildning (Opportunities and challenges of AI technology in social science education), followed by a concluding panel discussion moderated by Elin Götmark, with the trio of speakers joined by Thomas Hillman:

Wednesday, March 1, 2023

More on OpenAI and the race towards the AI precipice

The following misguided passage, which I've taken the liberty of translating into English, comes from the final chapter of an otherwise fairly reasonable (for its time) book that came out in Swedish way back in 2021. After having outlined, in earlier chapters, various catastrophic AI risk scenarios, along with some tentative suggestions for how they might be prevented, the author writes this:
    Some readers might be so overwhelmed by the risk scenarios outlined in this book, that they are tempted to advocate halting AI development altogether [...]. To them, my recommendation would be to drop that idea, not because I am at all certain that the prospects of further AI development outweigh the risks (that still seems highly unclear), but for more pragmatic reasons. Today's incentives [...] for further AI development are so strong that halting it is highly unrealistic (unless it comes to a halt as a result of civilizational collapse). Anyone who still pushes the halting idea will find him- or herself in opposition to an entire world, and it seems to make more sense to accept that AI development will continue, and to look for ways to influence its direction.
The book is Tänkande maskiner, and the author is me. Readers who have been following this blog closely may have noticed, for instance in my lecture on ChatGPT and AI alignment from December 16 last year, that I've begun to have a change of heart on the questions of whether (parts of) AI development can and should be halted, or at least slowed down. I still think that doing so by legislative or other means is a precarious undertaking, not least in view of international race dynamics, but I have also come to think that the risk landscape has become so alarming, and that technical solutions to AI safety problems are likewise so precarious, that we can no longer afford to drop the idea of using AI regulation to buy ourselves time to find those solutions.

For those of you who read Swedish, the recent debate in the Swedish newspaper Göteborgs-Posten on the role of the leading American AI developer OpenAI in the race towards AGI (artificial general intelligence) further underlines this shift in my opinion.

In the middle of this debate, the highly relevant document Planning for AGI and beyond was released by OpenAI on February 24, a document in which their CEO Sam Altman spelled out the company's view of its participation in the race towards creating AGI and of the risks and safety aspects involved. I could have discussed this document in my final rejoinder in the GP debate, but decided against that for reasons of space. But now let me tell you what I think about it. My initial reaction was a sequence of thoughts, roughly captured in the following analogy:
    Imagine ExxonMobil releases a statement on climate change. It’s a great statement! They talk about how preventing climate change is their core value. They say that they’ve talked to all the world’s top environmental activists at length, listened to what they had to say, and plan to follow exactly the path they recommend. So (they promise) in the future, when climate change starts to be a real threat, they’ll do everything environmentalists want, in the most careful and responsible way possible. They even put in firm commitments that people can hold them to.

    An environmentalist, reading this statement, might have thoughts like:

    • Wow, this is so nice, they didn’t have to do this.
    • I feel really heard right now!
    • They clearly did their homework, talked to leading environmentalists, and absorbed a lot of what they had to say. What a nice gesture!
    • And they used all the right phrases and hit all the right beats!
    • The commitments seem well thought out, and make this extra trustworthy.
    • But what’s this part about “in the future, when climate change starts to be a real threat”?
    • Is there really a single, easily-noticed point where climate change “becomes a threat”?
    • If so, are we sure that point is still in the future?
    • Even if it is, shouldn’t we start being careful now?
    • Are they just going to keep doing normal oil company stuff until that point?
    • Do they feel bad about having done normal oil company stuff for decades? They don’t seem to be saying anything about that.
    • What possible world-model leads to not feeling bad about doing normal oil company stuff in the past, not planning to stop doing normal oil company stuff in the present, but also planning to do an amazing job getting everything right at some indefinite point in the future?
    • Are they maybe just lying?
    • Even if they’re trying to be honest, will their bottom line bias them towards waiting for some final apocalyptic proof that “now climate change is a crisis”, of a sort that will never happen, so they don’t have to stop pumping oil?
    This is how I feel about OpenAI’s new statement, Planning for AGI and beyond.
Well, to be honest, this is not actually my own reaction, but that of my favorite AI blogger, Scott Alexander. However, since my initial reaction to OpenAI's document was nearly identical to his, it seemed fine to me to just copy and paste his words. They are taken from his brand new blog post in which, after laying out the above gut reaction, he goes into a well-reasoned, well-informed and careful discussion of which parts of the gut reaction are right and which are perhaps less so.

I know I have an annoying habit of telling readers of this blog about further stuff that they absolutely need to read, but listen, this time I really mean it: in order to understand what is probably by far the most important and consequential action going on in the world right now, you must read OpenAI's statement followed by Scott Alexander's commentary.