Last Thursday, American economist James Miller and I recorded an episode of his podcast Future Strategist on the same topic. The episode can now be heard on Soundcloud, Poddtoppen or whatever your favorite podcast platform happens to be. I tried to give a fair and level-headed account of the Swedish approach to covid-19 and how it differs from those of other countries, but I'll leave it to the listener to judge whether I've mitigated the damage to Sweden's international reputation done by Linde, or whether I've exacerbated the situation even further. Either way, the sound quality was unfortunately not great.
Since reading the excellent book The Coddling of the American Mind by Greg Lukianoff and Jonathan Haidt last year, I have become more acutely aware of how bad the situation for academic freedom in the United States is becoming. We see similar tendencies in Sweden, but the situation across the Atlantic appears worse.
Further incidents since the publication of Lukianoff and Haidt's book underline how bad the situation is. A few days ago I learned (via Slate Star Codex) about the case of Stephen Hsu, a physics professor at Michigan State University, who was also (until yesterday) their VP in charge of research and graduate studies. Since June 10, he has been the target of an angry mob (starting with a Twitter thread from the Michigan State Graduate Employees Union) calling for his removal as VP, based on unsubstantiated and (frankly speaking) ridiculous charges of racism.
The highly distressing news has now reached me that the president of Michigan State University has caved in to the mob and asked Hsu to resign. Hsu complied, albeit under protest. This is terrible. Academic freedom is under assault, and (to borrow Hsu's own words) "Academics and Scientists must not submit to mob rule".
I was recently asked to contribute a 150-word text about my relation to effective altruism (EA), for publication in a pamphlet-of-sorts along with similar texts from other supporters of EA. I submitted the text below. After a couple of rounds of email exchange with the editor about how my text could be made less controversial, we came to a standstill, and the text was rejected. I nevertheless like it, so I am happy to share it here, with URL links added for your convenience:
About a decade ago I began my mid-career shift from being a typical ivory tower mathematics professor towards an increasing focus on AI safety, existential risk and related topics. This move was largely driven by the EA-like idea of wanting to address the world's most pressing issues. Yet the EA movement appeared on my radar only gradually. Initially I was skeptical, as ranking ways to do good in terms of efficiency reminded me of Bjorn Lomborg’s ploy that we shouldn’t fight climate change because fighting malaria is more cost effective. Of course there are many things we ought to do! But when in 2017 I read Will MacAskill’s Doing Good Better, I realized that current EA thinking is less naive than I thought. I have since then become increasingly impressed by all the good theoretical and practical work done in EA, and I now consider myself a warm supporter of both the movement and its core ideas.
How does the pandemic affect the long-distance race between (to borrow Nick Bostrom's metaphor) the galloping stallion which is humanity's technological capability, and the foal on unsteady legs which is humanity's wisdom? I don't know, but it does seem that AI development is the kind of activity that can carry on pretty much unhampered by lockdowns and the like. Last week, OpenAI announced their stunning natural language processor GPT-3, an enormously scaled-up sequel to last year's amazing and much-discussed GPT-2. Yannic Kilcher at ETH has gone over their paper carefully and offers his insights in a very instructive video lecture:
The attentive viewer will notice that Kilcher, while accepting that GPT-3 does a bunch of impressive-looking things, holds that what it does is not so much reasoning as pattern matching and copy-paste. This can be interpreted as Kilcher leaning towards a negative answer to the question in the title of the present blog post. I am personally somewhat more inclined towards a positive answer, as is the prominent tech blogger gwern:
GPT-3 is scary because it’s a tiny model compared to what’s possible, with a simple uniform architecture trained in the dumbest way possible (prediction of next text token) on a single impoverished modality (random Internet text dumps) on tiny data (fits on a laptop), and yet, the first version already manifests crazy runtime meta-learning - and the scaling curves still are not bending! [...] In 2010, who would have predicted these enormous models would just develop all these capabilities spontaneously, aside from a few diehard connectionists written off as willfully-deluded old-school fanatics by the rest of the AI community? [...] GPT-3 is hamstrung by its training & data, but just simply training a big model on a lot of data induces meta-learning without even the slightest bit of meta-learning architecture being built in; and in general, training on more and harder tasks creates ever more human-like performance, generalization, and robustness.
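To make gwern's point about training "in the dumbest way possible" a bit more concrete, here is a minimal sketch (my own illustration, not OpenAI's code) of what next-token prediction amounts to; the toy vocabulary, the tiny stand-in model and the random data are assumptions for illustration only, standing in for GPT-3's actual Transformer stack and its Internet-scale corpus:

```python
# Minimal sketch (not OpenAI's code) of the next-token-prediction objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 256   # toy byte-level vocabulary (assumption for illustration)
EMBED = 64    # tiny embedding size; GPT-3's is several orders of magnitude larger

class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.head = nn.Linear(EMBED, VOCAB)   # stand-in for the Transformer stack

    def forward(self, tokens):                # tokens: (batch, seq_len)
        return self.head(self.embed(tokens))  # logits: (batch, seq_len, VOCAB)

model = ToyLM()
tokens = torch.randint(0, VOCAB, (4, 128))    # stand-in for a chunk of Internet text
logits = model(tokens[:, :-1])                # predict token t+1 from tokens up to t
loss = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()                               # that is the entire training signal
```

Scaled up by many orders of magnitude in model size and data, this single objective is, if gwern is right, enough for the meta-learning behavior to emerge spontaneously.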
The semi-fictional dialogue between an anonymous machine learning researcher (NN) and Scott Alexander, in connection with last year's release of GPT-2, is worth recalling again:
NN: I still think GPT-2 is a brute-force statistical pattern matcher which blends up the internet and gives you back a slightly unappetizing slurry of it when asked.
SA: Yeah, well, your mom is a brute-force statistical pattern matcher which blends up the internet and gives you back a slightly unappetizing slurry of it when asked.