Sam Altman is CEO of OpenAI, and in that capacity he easily qualifies for my top ten list of the world's most influential people today. So when a biography of him is published, it makes sense to read it. But Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future turned out to be a disappointment.1 One of the things I want most from a biography, regardless of whether it is about someone I admire or someone I consider morally corrupt, is a window into the subject's inner world that allows me (at least to some extent) to understand and empathize with them. The Optimist does not achieve this: even though Altman is the main focus of every chapter, he remains an opaque and distant character throughout. I am uncertain whether this opacity is a personality trait of Altman's (despite his often powerful and spellbinding stage performances) or a shortcoming of the book. What speaks for the latter interpretation is that all the supporting characters come across as equally distant.
Overall, I found the book boring. Altman's childhood and adolescence are given only a very cursory treatment, and the description of his adventures with his first startup, Loopt, is equally shallow but filled with Silicon Valley and venture capital jargon. Especially tedious is how, about a dozen times, the author launches into a one-page mini-biography of some supporting character, beginning with their place of birth, their parents' occupations, and so on, without ever rewarding the reader with an insight to which this background information is particularly relevant. The later chapters, about OpenAI, are somewhat more to my interest, but for those of us who have been following the AI scene closely in recent years, they offer very little in the way of new revelations.
One aspect of Altman's personality and inner world that strikes me as especially important to understand (but on which the book has little to offer) is his view of AI existential risk. Up until "the blip" in November 2023, Altman seemed fairly open about the risk that the technology he was developing might spell doom - ranging from his pre-OpenAI statement that "AI will probably most likely lead to the end of the world, but in the meantime there will be great companies created" to his repeated statements in 2023 about the possibility of "lights out for all of us" and his signing, that same year, of the CAIS open letter on extinction risk. After that, he suddenly became very quiet about this aspect of AI. Why? Did he come across new evidence suggesting we are fine when it comes to AI safety, or did he simply realize it might be bad for business to talk about how one's product might kill everyone? We deserve to know, but we remain in the dark.
In fact, Altman still lets slip the occasional utterance suggesting that he remains concerned. On July 22 this year, he tweeted2 this:
-
woke up early on a saturday to have a couple of hours to try using our new model for a little coding project.
done in 5 minutes. it is very, very good.
not sure how i feel about it...
-
Sam Altman, you creep, excuse my French, but could you shut the f*** up? Or to state this a bit more clearly: if you feel conflicted because your next machine might do damage to the world, the right way is not to be a crybaby and treat your four million Twitter followers and all the rest of us as if we were your private therapist; the right way is to NOT BUILD THAT GODDAMN MACHINE!
Footnotes
1) Another Altman biography, Karen Hao's Empire of AI, was published this spring, almost simultaneously with Hagey's. So perhaps that one is better? Could be, but Shakeel Hashim, who has read both books, actually likes The Optimist better than Empire of AI, and considerably better than I did.
2) In late 2023 I left Twitter in disgust over how its CEO was using it.