Here, in PDF, is my latest essay Our AI future and the need to stop the bear.1
In it, I describe humanity's current predicament regarding existential AI risk, and our urgent need to resolve it. This is important stuff. I didn't intend it when I started writing, but towards the end it turns into a pamphlet, a manifesto, a call to action. If you read just one thing by me in 2025, let this essay be it. Here!
Footnote
1) Perhaps you wonder about the bear. I rarely write about bears, so why a bear here? It comes from a metaphor by Connor Leahy, which I quote on p 2 of my essay:
My emphasis on the risks stems from a realization of how urgent the need to mitigate them has become. In a recent podcast, AI researcher Connor Leahy went further in his colorful motivation for this prioritization by stating that “it is not useful to philosophize about the communist utopia if a bear is currently ramming down your door”.
Thank you, Olle, for your well-informed and well-formulated essay on one of the most crucial questions of our time. I hope it makes big ripples.
Thanks, Henrik!
Robocalypse
The climate threat now feels a bit passé, since the AI revolution and the robot revolution will either save us or, far more likely, doom us. Putin and Trump have shown that we should not count on any internationalist "humanism", but rather on AI apocalypse and robocalypse. That is what they have in store for us. The idiots imagine that narrow nationalism can make a positive difference, which of course it cannot. Things will go to hell either way.
There is a fair amount of talk about AI apocalypse but very little about robocalypse, even though in the end it is robots of various kinds that will kill us. The usual idea is that the super-AIs will use one form of robot or another to eliminate us. But we can actually manage that entirely on our own. People around the world keep building ever more advanced killer robots, and if you want to know how to throw together your own killer robot on the cheap, you can probably ask Elon Musk's AI chatbot Grok, since Musk appreciates free speech, at least as long as it is very tightly super-billionaire-controlled. That things go to hell worries him little, since he believes he can move to Mars once he has destroyed the Earth. Peter Thiel hopes it will suffice to flee in a private jet to a remote farm in New Zealand. They both cozy up to Donald Trump. It is unclear where Trump plans to flee when the killer-robot swarms come after him.
https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-takeover-might-happen-in-2-years
The following passage on pp 11-12 of my essay explains why I say as little about robots as I do:
"Somewhat more abstractly, I can still say something about how I’d expect an AI takeover to play out. Robotics is lagging somewhat behind other parts of AI research, including the developments of frontier LLMs, and therefore in case of an AI takeover within the next few years I do not expect it to be carried out using robots. While it may be tempting to think about an AI catastrophe in terms of humanoid robots running around shooting at people with machine guns as in the Terminator franchise, to me that seems quite an unlikely scenario.
At this point the skeptic might object that without access to robotics, how in the world could LLMs threaten human hegemony over the physical world, since such an LLM is restricted to producing text in text windows? One answer here is that the view that LLMs can only produce text is by now outdated, and that leading AI developers have already released versions of frontier models that can be given charge of a computer, such as Anthropic’s Claude 3.5 Sonnet which is marketed as being able to “move a cursor around their computer’s screen, click on relevant locations, and input information via a virtual keyboard, emulating the way people interact with their own computer” (Anthropic, 2024). We are not quite there yet, but in principle this can equip an AI with the same level of agency as a human remote worker, who (as many of us woke up to during the covid pandemic) does have ample ability to influence the physical world from their laptops over considerable distances.
A deeper answer, however, to the skeptic’s objection, is that as long as an advanced AI does not have access to robotics, there is the alternative of using humans to do its bidding out there in the physical world. The key competence for an AI wishing to employ humans in this way is social manipulation, for which the text format is well-suited. We have already seen plenty of examples of AIs exhibiting such competence, one of the most famous anecdotes being that of how GPT-4, when put in a simulated situation where it needed access to a CAPTCHA-protected webpage, lied to a human TaskRabbit worker about being a visually impaired human, thereby successfully getting the worker to solve the CAPTCHA for it (OpenAI, 2023a). See Park et al (2024) for a more systematic discussion of such deception and manipulation cases. What we have seen so far is mostly isolated instances, which can at most hurt individual humans, but seem not yet to be on a level where they can cause systemic societal damage. One thing to be concerned about here is whether there is some critical capability threshold, possibly quite near the level attained by today’s frontier models, where the models are able to carry out social manipulation more broadly, and for goals that are unknown to us, having emerged deep inside the black box that their neural networks constitute. If and when that threshold is crossed, the situation can get truly dangerous."
In the following paragraph I refer to the same LessWrong text as you do.