The following misguided passage, which I have taken the liberty of translating into English, comes from the final chapter of an otherwise fairly reasonable (for its time) book published in Swedish back in 2021. Having outlined, in earlier chapters, various catastrophic AI risk scenarios, along with some tentative suggestions for how they might be prevented, the author writes this:
- Some readers might be so overwhelmed by the risk scenarios outlined in this book that they are tempted to advocate halting AI development altogether [...]. To them, my recommendation would be to drop that idea, not because I am at all certain that the prospects of further AI development outweigh the risks (that still seems highly unclear), but for more pragmatic reasons. Today's incentives [...] for further AI development are so strong that halting it is highly unrealistic (unless it comes to a halt as a result of civilizational collapse). Anyone who still pushes the halting idea will find themselves in opposition to an entire world, and it seems to make more sense to accept that AI development will continue and to look for ways to influence its direction.
Imagine ExxonMobil releases a statement on climate change. It’s a great statement! They talk about how preventing climate change is their core value. They say that they’ve talked to all the world’s top environmental activists at length, listened to what they had to say, and plan to follow exactly the path they recommend. So (they promise) in the future, when climate change starts to be a real threat, they’ll do everything environmentalists want, in the most careful and responsible way possible. They even put in firm commitments that people can hold them to.

An environmentalist, reading this statement, might have thoughts like:
- Wow, this is so nice, they didn’t have to do this.
- I feel really heard right now!
- They clearly did their homework, talked to leading environmentalists, and absorbed a lot of what they had to say. What a nice gesture!
- And they used all the right phrases and hit all the right beats!
- The commitments seem well thought out, and make this extra trustworthy.
- But what’s this part about “in the future, when climate change starts to be a real threat”?
- Is there really a single, easily-noticed point where climate change “becomes a threat”?
- If so, are we sure that point is still in the future?
- Even if it is, shouldn’t we start being careful now?
- Are they just going to keep doing normal oil company stuff until that point?
- Do they feel bad about having done normal oil company stuff for decades? They don’t seem to be saying anything about that.
- What possible world-model leads to not feeling bad about doing normal oil company stuff in the past, not planning to stop doing normal oil company stuff in the present, but also planning to do an amazing job getting everything right at some indefinite point in the future?
- Are they maybe just lying?
- Even if they’re trying to be honest, will their bottom line bias them towards waiting for some final apocalyptic proof that “now climate change is a crisis”, of a sort that will never happen, so they don’t have to stop pumping oil?
The genuinely dangerous thing is probably if we equip humans with a truly advanced intelligence that is still governed by human desires, which so far have been very poorly analyzed and controlled. It is quite likely humans, rather than super-AIs, who are dangerous.