
Thursday, August 7, 2025

With great power comes great responsibility

As I've argued at length elsewhere (and nowhere at greater length than in my latest book), what is currently going on at the leading AI companies in Silicon Valley and San Francisco is likely to have stupendous influence on our lives and the entire future of humanity. Their ambitions for the intended transformation of society are on a stratospheric and hitherto unmatched level, but so are the risks. With great power comes great responsibility, yet they are proceeding at reckless speed, and they have not asked us for permission to risk our lives. It is hardly an exaggeration to call their behavior a unilateral moral trespass against humanity.

I expand on this matter in my brand-new manuscript Advanced AI and the ethics of risking everything, which also serves as a complement to my latest blog post, on OpenAI's CEO Sam Altman, in which I express my opinion about his behavior in just a few crisp swear words. In the new text I elaborate more extensively and with somewhat more polished language. Here is how it begins:
    Imagine sitting in the back seat of a taxi with your two closest family members. As you approach the bridge across the canal, you notice that a ship is waiting to pass and that the bridge has slowly begun to lift. The driver, visibly annoyed by the delay, turns to you and your loved ones and asks “Shall we try and jump it?”. You quickly calculate that doing so would save five minutes, at the cost of a 1% probability of a crash that kills everyone in the car. Do you give the driver your permission?

    In a Hollywood movie, you would probably say yes. But this is real life and you are not insane, so of course you politely decline the driver’s reckless suggestion.

    Next consider Sam Altman, CEO of OpenAI, facing the decision of whether to release the newly developed GPT-5. (There’s a good chance that when this reaches the reader, release of GPT-5 has already happened, but at the time of writing, in August 2025, it is still a hypothetical future event.)

Read the rest of my manuscript here!

Friday, August 1, 2025

On Sam Altman

Sam Altman is CEO of OpenAI, and in that capacity he easily qualifies for my top ten list of the world's most influential people today. So when a biography of him is published, it does make some sense to read it. But Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future turned out to be a disappointment.1 One of the things I want most from a biography, regardless of whether it is about someone I admire or someone I consider morally corrupt, is a window into the subject's inner world that allows me (at least to some extent) to understand and to empathize with them. The Optimist does not achieve this, because even though the main focus of every chapter of the book is Altman, he remains an opaque and distant character throughout. I am uncertain whether this opacity is a personality trait of Altman's (despite his often powerful and spellbinding stage performances) or a shortcoming of the book. What perhaps speaks for the latter interpretation is that all the supporting characters of the book come across as equally distant.

Overall, I found the book boring. Altman's childhood and adolescence are given only a very cursory treatment, and the description of his adventures with his first startup, Loopt, is equally shallow but filled with Silicon Valley and venture capital jargon. Especially boring is how, about a dozen times, the author starts a kind of one-page mini-biography of some supporting character, beginning with their place of birth and parents' occupation, etc, yet one is never rewarded with any insight for which this background information turns out to be particularly relevant. Somewhat more attuned to my interests are the later chapters of the book, about OpenAI, but for those of us who have been following the AI scene closely in recent years, they offer very little in the way of new revelations.

One aspect of Altman's personality and inner world that strikes me as especially important to understand (but on which the book does not have much to offer) is his view of AI existential risk. Up until "the blip" in November 2023, Altman seemed fairly open about the risk that the technology he was developing might spell doom - ranging from his pre-OpenAI statement that "AI will probably most likely lead to the end of the world, but in the meantime there will be great companies created" to his repeated statements in 2023 about the possibility of "lights out for all of us" and his signing of the CAIS open letter on extinction risk the same year. But after that, he suddenly went very quiet about this aspect of AI. Why is that? Did he come across new evidence suggesting we are fine when it comes to AI safety, or did he just realize it might be bad for business to speak about how one's product might kill everyone? We deserve to know, but remain in the dark about this.

In fact, Altman still leaks the occasional utterance suggesting he remains concerned. On July 22 this year, he tweeted this:
    woke up early on a saturday to have a couple of hours to try using our new model for a little coding project.

    done in 5 minutes. it is very, very good.

    not sure how i feel about it...

To which I replied, on a different social media platform:2
    Sam Altman, you creep, excuse my French but could you shut the f*** up? Or to state this a bit more clearly: if you feel conflicted because your next machine might do damage to the world, the right way is not to be a crybaby and treat your four million Twitter followers and all the rest of us as if we were your private therapist; the right way is to NOT BUILD THAT GODDAMN MACHINE!
That is, in a sense, worse language than I usually employ, but in this case I consider it warranted.

Footnotes

1) Another Altman biography, Karen Hao's Empire of AI, was published this spring, almost simultaneously with Hagey's. So perhaps that one is better? Could be, but Shakeel Hashim, who has read both books, actually likes The Optimist better than Empire of AI, and a lot better than I did.

2) In late 2023 I left Twitter in disgust over how its CEO was using it.

Monday, January 6, 2025

I find Sam Altman's latest words on AI timelines alarming

Estimating timelines for when AI development hits the regime where the feedback loop of recursive self-improvement kicks in, leading towards the predictably transformative1 and extremely dangerous intelligence explosion or Singularity, and on to superintelligence, is inherently very difficult. But we should not make the mistake of inferring from this unpredictability that timelines are long. They could be very short and involve transformative changes already in the 2020s, as is increasingly suggested by AI insiders such as Daniel Kokotajlo, Leopold Aschenbrenner and Dario Amodei. I am not saying these people are necessarily right, but simply taking for granted that they are wrong strikes me as reckless and irrational.

And please read yesterday's blog post by OpenAI's CEO Sam Altman. Parts of it are overly personal and cloying, but we should take seriously his judgement that the aforementioned regime change is about to happen this very year, 2025:
    We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

    We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

    This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.

The time window may well be closing quickly for state actors (in particular, the U.S. government) to intervene in the deadly race towards superintelligence that OpenAI, Anthropic and their closest rivals are engaged in.

Footnote

1) Here, by "predictably transformative", I merely mean that the fact that the technology will radically transform society and our lives is predictable. I do not mean that the details of this transformation can be reliably predicted.

Tuesday, December 10, 2024

Nobel Day! Now with an AI theme!

Today we celebrate Nobel Day! I mark the occasion, and the fact that AI is at the center of both the physics and the chemistry prize, in the magazine Kvartal today. My text carries the heading Nobelpristagaren som ändrade sig om AI (the Nobel laureate who changed his mind about AI) and begins with the following paragraphs:
    "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." That, in its entirety, is the open letter on existential AI risk released in May 2023, signed by more than 500 academics, researchers and leaders in the tech sector.

    My own name appears a fair bit down in the lower half of the list of signatories, but in positions 1 and 3 we find the two signatories who, at the award ceremony in the Stockholm Concert Hall today, December 10, receive their Nobel Prizes from the hand of King Carl XVI Gustaf. They are physics laureate Geoffrey Hinton and chemistry laureate Demis Hassabis, both rewarded for their contributions to the revolutionary AI development we are now in the midst of.

    Of the two, Hinton is the more outspoken on AI risk. In the spring of 2023 he made a complete U-turn in his view of the AI development he himself had contributed so strongly to, and he even stepped down from a lucrative research position at Google in order to be able to speak more freely on the matter. At the press conference in Stockholm on October 8 this year, when the physics prize was announced, he participated by telephone and said he was worried that the ultimate consequence of his and other AI researchers' advances may be the creation of AI systems that are "more intelligent than us, and that ultimately take control".

    Compared to Hinton, Hassabis is usually more restrained in commenting on this subject, but both have thus signed on to the statement that there is a risk of AI technology wiping out Homo sapiens. On a day like this, that position is all the more piquant, given that Alfred Nobel's will states that the prize bearing his name shall be awarded to those who have "conferred the greatest benefit to humankind".

Read the rest of the text here, without a paywall, to learn more about the intrigues that have led to the terrifyingly dangerous situation we now find ourselves in, plus a little about the splashing in the Swedish duck pond. The cast of characters is extensive and includes, besides those mentioned above, Shane Legg, Mustafa Suleyman, John Jumper, Larry Page, Elon Musk, Walter Isaacson, Sam Altman, Dario Amodei, Helen Toner, Carl-Henric Svanberg, Erik Slottner, Yoshua Bengio and Ilya Sutskever.

Saturday, September 21, 2024

Some cheerful notes on the US Senate Hearing on Oversight of AI

Earlier this week, a hearing was held at the US Senate on the topic Oversight of AI: Insiders' Perspectives. Here is the full 2h 13 min video recording of the event, and here is a transcript. I strongly recommend watching or reading the whole thing.

As regards the subject-matter content of the hearing, large parts of it can only be described as deeply troubling, provided one cares about human civilization and the human race not being destroyed in the sort of AI catastrophe that may well be the endpoint of the ongoing and reckless race between leading tech companies to create superintelligent AI.1 Nevertheless, the hearing cheered me up a bit, because I think it is of tremendous importance that the topics discussed reach the ears both of powerful politicians and of the general public. In addition, the following two observations had a really heartening effect on me.

1. My admiration for Senator Richard Blumenthal is on a steady increase. When he chaired an earlier session, in May 2023, on a similar topic, he was apparently unprepared to seriously take in the idea of AI-caused human extinction, and misunderstood it as being a labor market issue. Here is what he then said to OpenAI's CEO Sam Altman:
    You have said - and I'm gonna quote - development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on jobs.
This is understandable. Extinction of humanity is such a far-out concept that it can be hard to take in if you are not used to it. But over the hours and months that followed, Blumenthal did take it in, and in this week's hearing he showed an excellent understanding of the issues at stake. He really does take the issues seriously, and seems to be a force for good concerning the need to involve government in mitigating AI risk. Also, not every 78-year-old top politician in the United States shows such a steep learning curve.

2. Of the four witnesses, two of them - Helen Toner and William Saunders - are situated mainly on what I would call the AI safety side of AI discourse, while the two others - Margaret Mitchell and David Evan Harris - are more towards AI ethics. These are two adjacent areas without any razor-sharp boundary between them, but here is how I contrast them in my recent paper On the troubled relation between AI ethics and AI safety:
    The difference between the fields is mostly one of emphasis. Work in AI safety focuses mainly on what happens once AI attains capabilities sufficiently broad and powerful to rival humanity in terms of who is in control. It also addresses how to avoid a situation where such an AI with goals and incentives misaligned with core human values goes on to take over the world and possibly exterminate us. [...] In contrast, work in AI ethics tends to focus on more down-to-Earth risks and concerns emanating from present-day AI technology. These include, e.g., AI bias and its impact on social justice, misinformation based on deepfakes and related threats to democracy, intellectual property issues, privacy concerns, and the energy consumption and carbon footprint from the training and use of AI systems.
As discussed at some length in my paper, a tension between representatives of these fields has been salient in recent years, often with accusations that people on the other side are wasting time and resources on the wrong problems. This is extremely unproductive, so it was all the more wonderful to see that the witnesses at this Senate hearing showed no such tendencies whatsoever; instead, they were eager to emphasize agreement, such as on the need to regulate AI, the dangers involved in naively hoping that the tech companies will self-regulate, and the importance of whistleblower protection. I would like to think that this is a sign that the two camps are beginning to get along better and to unite in the struggle against the true enemy: the tech company executives who are letting (to quote the words OpenAI's former head of safety Jan Leike used as he left in disgust) "safety culture and processes [take] a backseat to shiny products".

*

A final word of caution: Do not take my cheerful observations above as an excuse to say "phew, I guess we're all right then". We're not. The Senate hearing this week was a step in the right direction, but there's a long, difficult and uncertain road ahead towards getting the necessary governmental grip on AI risk - in the United States and internationally.

Footnotes

1) Here are two passages from statements by the witnesses at the hearing. For me personally there is nothing new in them, but it is very good to hear these points articulated clearly in this setting. First, former2 OpenAI board member Helen Toner:
    This term AGI isn't well-defined, but it's generally used to mean AI systems that are roughly as smart or capable as a human. In public and policy conversations talk of human level AI is often treated as either science fiction or marketing, but many top AI companies, including OpenAI, Google, Anthropic, are building AGI as an entirely serious goal and a goal that many people inside those companies think they might reach in 10 or 20 years, and some believe could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive and at a maximum could lead to literal human extinction. The companies in question often say that it's too early for any regulation because the science of how AI works and how to make it safe is too nascent.

    I'd like to restate that in different words. They're saying we don't have good science of how these systems work or how to tell when they'll be smarter than us or don't have good science for how to make sure they won't cause massive harm. But don't worry, the main factors driving our decisions are profit incentives and unrelenting market pressure to move faster than our competitors. So we promise we're being extra, extra safe.

    Whatever these companies say about it being too early for any regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, and these systems are affecting hundreds of millions of people's lives even in the absence of scientific consensus about how they work or what will be built next.

Second, former OpenAI safety researcher William Saunders:
    When I thought about this [i.e., timelines to AGI], there was at least a 10% chance of something that could be catastrophically dangerous within about three years. And I think a lot of people inside of OpenAI also would talk about similar things. And then I think without knowing the exact details, it's probably going to be longer. I think that I did not feel comfortable continuing to work for an organization that wasn't going to take that seriously and do as much work as possible to deal with that possibility. And I think we should figure out regulation to prepare for that because I think, again, if it's not three years, it's going to be the five years or ten years the stuff is coming down the road, and we need to have some guardrails in place.

2) Toner was pushed off the board as a consequence of Sam Altman's Machiavellian maneuverings during the tumultuous days at OpenAI in November last year.

Thursday, November 2, 2023

An intense week in AI politics

It is still only Thursday, and yet more has happened this week in terms of governmental and intergovernmental AI policy initiatives than we normally see in... I don't even know what time span to reach for here, because political interest in AI issues is so newly awakened that there is no steady state for the word "normally" to refer to. The two big events I have in mind are the following.
  • On Monday: President Biden's executive order on Safe, Secure, and Trustworthy Artificial Intelligence.
  • Yesterday and today: The first global AI Safety Summit, held at Bletchley Park, initiated and hosted by the UK's Prime Minister Rishi Sunak, with participation both by top politicians (led by Kamala Harris and Ursula von der Leyen) and by AI and tech industry figures (Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk, ...).
Already yesterday, on the first day of the Bletchley Park meeting, the participants released their Bletchley Declaration, signed by representatives of the EU, the USA, China, India, the UK and a number of other countries, and containing formulations such as this:
    There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of [frontier] AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.
Biden's executive order contains talk of requirements that...
    companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,
where I would like to think that "national security, national economic security, or national public health and safety" serves as a kind of placeholder for "existential risk to humanity", which does not yet quite fit within the Overton window at this political level.

Although both documents are diluted with talk of AI issues of comparatively secondary importance, and although neither of them has the status of regulation or binding agreement but amounts merely to declarations of intent and grand ambitions, I see the formulations quoted above as evidence of how incredibly far we have managed to move the Overton window for public AI discussion during 2023, and I believe that the two open letters I co-signed this spring (the one organized by FLI in March and the one by CAIS in May) have played a not insignificant part in this. Despite the remarkably rapid shift in the climate of discussion, I still feel a lingering worry that it may not be happening fast enough to avert catastrophe, but in a week like this I cannot help feeling happier and more hopeful than the week before.

I have not yet had time to digest the contents of the documents enough to comment on them in more detail, but as regards Biden's executive order, the always readable and thorough Zvi Mowshowitz has been quick on the scene with two extensive texts that I am broadly inclined to agree with: On the executive order and Reactions to the executive order. If I know him right, we can expect a roughly equally ambitious reaction from him to the Bletchley Declaration within a day or two.

I would also like to mention that, as an engaged observer of the Bletchley Park meeting, I have made my voice heard in a couple of contexts:

Edit November 8, 2023: The previously announced text by Zvi Mowshowitz on the Bletchley Park meeting is now available.

Monday, May 29, 2023

In Kvartal on AI development and the Senate hearings with Sam Altman

Today I write in the magazine Kvartal, under the heading OpenAI:s vd talar med kluven tunga (OpenAI's CEO speaks with a forked tongue). Like so many other texts I have written in this year of grace 2023, it is about the immense risks that today's extremely rapid AI development brings, and this time I take as my starting point the Senate hearings on this issue held on May 16 with, among others, Sam Altman. The following passages from the beginning of my article explain its heading:
    In recent years, and even more so in recent months, AI development has moved at such a furious pace that more and more people are waking up to the realization that the technology may come to transform society in a sweeping way, and that the risks are enormous. The awakening has now also reached Washington DC, where Senate hearings on this development were held the week before last. The main figure in the witness stand was Sam Altman, CEO of the San Francisco-based tech company OpenAI, which took the world by storm this past winter with the chatbot ChatGPT and which currently leads the runaway AI development.

    With feeling and gravity, Altman spoke about the radical changes and great risks ahead of us, and about how responsibly OpenAI is acting to manage these risks while ensuring that the immense potential of AI technology benefits all of humanity in the best possible way. [...]

    [He] had obviously prepared meticulously, but it is worth comparing his carefully measured words in the Senate with how he has expressed himself in more relaxed settings. In a panel discussion in 2015, shortly before the founding of OpenAI, he let slip that "AI will probably most likely lead to the end of the world, but in the meantime there will be great companies created".

    [...]

    Unfortunately, Altman's cynicism from 2015 describes the business OpenAI runs today at least as well as his more well-mannered words in Congress last week do. They try to tame their AI models into avoiding, for example, racist statements or instructions for how to carry out criminal acts, and time and again they fail but still release products that do not live up to this standard. As a drastic illustration of [their] irresponsibility [...] I want to point to...

Read the whole article here!

Wednesday, March 1, 2023

More on OpenAI and the race towards the AI precipice

The following misguided passage, which I've taken the liberty of translating into English, comes from the final chapter of an otherwise fairly reasonable (for its time) book that came out in Swedish way back in 2021. After having outlined, in earlier chapters, various catastrophic AI risk scenarios, along with some tentative suggestions for how they might be prevented, the author writes this:
    Some readers might be so overwhelmed by the risk scenarios outlined in this book, that they are tempted to advocate halting AI development altogether [...]. To them, my recommendation would be to drop that idea, not because I am at all certain that the prospects of further AI development outweigh the risks (that still seems highly unclear), but for more pragmatic reasons. Today's incentives [...] for further AI development are so strong that halting it is highly unrealistic (unless it comes to a halt as a result of civilizational collapse). Anyone who still pushes the halting idea will find him- or herself in opposition to an entire world, and it seems to make more sense to accept that AI development will continue, and to look for ways to influence its direction.
The book is Tänkande maskiner, and the author is me. Readers who have been following this blog closely may have noticed, for instance in my lecture on ChatGPT and AI alignment from December 16 last year, that I've begun to have a change of heart on the question of whether (parts of) AI development can and should be halted or at least slowed down. I still think that doing so by legislative or other means is a precarious undertaking, not least in view of international race dynamics, but I have also come to think that the risk landscape has become so alarming, and technical solutions to AI safety problems so precarious in their own right, that we can no longer afford to drop the idea of using AI regulation to buy ourselves time to find these solutions. For those of you who read Swedish, the recent debate in the Swedish newspaper Göteborgs-Posten on the role of the leading American AI developer OpenAI in the race towards AGI (artificial general intelligence) further underlines this shift in my opinion: In the middle of this debate, the highly relevant document Planning for AGI and beyond was released by OpenAI on February 24 - a document in which their CEO Sam Altman spelled out the company's view of its participation in the race towards creating AGI and the risks and safety aspects involved. I could have discussed this document in my final rejoinder in the GP debate, but decided against that for reasons of space. But now let me tell you what I think about it. My initial reaction was a sequence of thoughts, roughly captured in the following analogy:
    Imagine ExxonMobil releases a statement on climate change. It’s a great statement! They talk about how preventing climate change is their core value. They say that they’ve talked to all the world’s top environmental activists at length, listened to what they had to say, and plan to follow exactly the path they recommend. So (they promise) in the future, when climate change starts to be a real threat, they’ll do everything environmentalists want, in the most careful and responsible way possible. They even put in firm commitments that people can hold them to.

    An environmentalist, reading this statement, might have thoughts like:

    • Wow, this is so nice, they didn’t have to do this.
    • I feel really heard right now!
    • They clearly did their homework, talked to leading environmentalists, and absorbed a lot of what they had to say. What a nice gesture!
    • And they used all the right phrases and hit all the right beats!
    • The commitments seem well thought out, and make this extra trustworthy.
    • But what’s this part about “in the future, when climate change starts to be a real threat”?
    • Is there really a single, easily-noticed point where climate change “becomes a threat”?
    • If so, are we sure that point is still in the future?
    • Even if it is, shouldn’t we start being careful now?
    • Are they just going to keep doing normal oil company stuff until that point?
    • Do they feel bad about having done normal oil company stuff for decades? They don’t seem to be saying anything about that.
    • What possible world-model leads to not feeling bad about doing normal oil company stuff in the past, not planning to stop doing normal oil company stuff in the present, but also planning to do an amazing job getting everything right at some indefinite point in the future?
    • Are they maybe just lying?
    • Even if they’re trying to be honest, will their bottom line bias them towards waiting for some final apocalyptic proof that “now climate change is a crisis”, of a sort that will never happen, so they don’t have to stop pumping oil?
    This is how I feel about OpenAI’s new statement, Planning for AGI and beyond.
Well, to be honest, this is not actually my own reaction, but that of my favorite AI blogger, Scott Alexander. However, since my initial reaction to OpenAI's document was close to identical to his, it seemed fine to me to just copy-and-paste his words. They are taken from his brand-new blog post in which he, after laying out the above gut reaction, goes into a well-reasoned, well-informed and careful discussion of which parts of the gut reaction are right and which parts are perhaps less so. I know I have an annoying habit of telling readers of this blog about further stuff that they absolutely need to read, but listen, this time I really mean it: in order to understand what is probably by far the most important and consequential action going on in the world right now, you must read OpenAI's statement followed by Scott Alexander's commentary.