Monday, July 18, 2022

On systemic risk

For the latest issue of ICIAM Dianoia - the newsletter published by the International Council for Industrial and Applied Mathematics - which was released last week, I was invited to offer my reflections on a recent document named Briefing Note on Systemic Risk. The resulting text can be found here, and is reproduced below for the convenience of readers of this blog.

* * *

Brief notes on a Briefing Note

I have been asked to comment on the Briefing Note on Systemic Risk, a 36-page document recently released jointly by the International Science Council, the UN Office for Disaster Risk Reduction, and an interdisciplinary network of decision makers and experts on disaster risk reduction that goes under the acronym RISKKAN. The importance of the document lies not so much in the concrete subject-matter knowledge (of which in fact there is rather little) that an interested reader can take away from it, but more in how it serves as a commitment from the three organizations to take the various challenges associated with systemic risk seriously, and to work on our collective ability to overcome these challenges and to reduce the risks.

So what is systemic risk? A first attempt at a definition might require a system consisting of multiple components, and a risk that cannot be understood in terms of any single such component, but which involves more than one of them (perhaps the entire system) and arises not just from their individual behavior but from their interactions. But more can be said, and an appendix to the Briefing Note lists definitions offered by 22 different organizations and groups of authors, including the OECD, the International Monetary Fund and the World Economic Forum. Recurrent concepts in these definitions include complexity, shocks, cascades, ripple effects, interconnectedness and non-linearity. The practical approach here is probably to give up hope of a clear set of necessary and sufficient conditions for what constitutes a systemic risk, and to accept that the concept has somewhat fuzzy edges.
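
To make the point about interactions concrete, here is a minimal toy simulation of my own (with invented parameters, not something taken from the Briefing Note). It contrasts a system whose components fail independently with one in which a failure can cascade to the other components; the probability of losing a large part of the system turns out to be driven almost entirely by the interactions.

```python
import random

# Toy illustration with invented parameters (not from the Briefing Note):
# the risk of losing a large part of a system can come almost entirely from
# interactions between components, not from their individual failure rates.

def prob_half_fails(n=20, p_initial=0.02, p_spread=0.0, trials=20_000):
    """Monte Carlo estimate of P(at least half of the n components fail)."""
    count = 0
    for _ in range(trials):
        # Independent initial failures.
        failed = {i for i in range(n) if random.random() < p_initial}
        frontier = set(failed)
        # Cascade: each newly failed component may knock out each survivor
        # with probability p_spread.
        while frontier:
            new = {j for j in range(n)
                   if j not in failed and any(random.random() < p_spread
                                              for _ in frontier)}
            failed |= new
            frontier = new
        if len(failed) >= n / 2:
            count += 1
    return count / trials

if __name__ == "__main__":
    print("independent failures:    ", prob_half_fails(p_spread=0.0))
    print("with cascading failures: ", prob_half_fails(p_spread=0.3))
```

With these made-up numbers, the independent case essentially never loses half the system, whereas the cascading case does so in roughly a third of the runs: the risk lives in the interactions rather than in the components.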

A central theme in the Briefing Note is the need for good data. A system with many components will typically also have many parameters, and in order to understand it well enough to grasp its systemic risks we need to estimate those parameters. Without good data that cannot be done. A good example is the situation the world faced in early 2020 with regard to the COVID-19 pandemic. We were very much in the dark about key parameters such as R0 (the basic reproduction number) and the IFR (infection fatality rate), which are properties not merely of the virus itself, but also of the human population that it preys upon, our social contact patterns, our societal infrastructures, and so on – in short, they are system parameters. In order to get a grip on these parameters it would have been instrumental to know the infection’s prevalence in the population and how that quantity developed over time, but the kind of data we had was so blatantly unrepresentative of the population that experts’ guesstimates differed by an order of magnitude or sometimes even more. A key lesson to remember for the next pandemic is the need to start sampling individuals at random from the population and testing them for infection as early as possible.
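
As a simple illustration of what such random sampling buys (a sketch of my own, with made-up numbers, not anything drawn from the Briefing Note), even a modest random sample pins down prevalence to within a fairly narrow confidence interval, in stark contrast to the order-of-magnitude disagreements we actually saw.

```python
import math
import random

def estimate_prevalence(test_results, z=1.96):
    """Point estimate and approximate 95% (Wald) confidence interval for
    prevalence, given one boolean test result per randomly sampled individual."""
    n = len(test_results)
    p_hat = sum(test_results) / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

if __name__ == "__main__":
    # Simulate testing 5000 randomly chosen individuals in a population whose
    # true prevalence is 2% (both numbers invented, purely for illustration).
    true_prevalence = 0.02
    sample = [random.random() < true_prevalence for _ in range(5000)]
    p_hat, lo, hi = estimate_prevalence(sample)
    print(f"Estimated prevalence: {p_hat:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```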

Besides parameter estimation within a model of the system, it is of course also important to realize that the model is necessarily incomplete, and that systemic risk can arise from features not captured by it. At the very least, this requires a well-calibrated level of epistemic humility and an awareness of the imprudence of treating a risk as nonexistent just because we are unable to get a firm handle on it.

Early on in the Briefing Note, it is emphasized that while studies of systemic risk have tended to focus on “global and catastrophic or even existential risks”, the phenomenon appears “at all possible scales – global, national, regional and local”. While this is true, it is also true that it is systemic risk at the larger scales that carries the greatest threat to society and is arguably the most crucial to address. One important cutoff is when the amounts at stake become so large that the risk can no longer be covered by insurance companies, and another is when the very survival of humanity is threatened. As to the latter kind of risk, the recent monograph by philosopher Toby Ord gives the best available overview and includes a chapter on the so-called risk landscape, i.e., how the risks interact in systemic ways.

Besides epidemics, the concrete examples that feature most prominently in the Briefing Note are climate change and financial crises. These are well chosen, both because of the urgency with which they need to be addressed and because they exhibit many features typical of systemic risk. Still, there are other examples whose absence from the report constitutes a rather serious flaw. One is AI risk, which Ord judges (correctly, in my view) to constitute the greatest existential risk of all to humanity in the coming century. A more abstract but nonetheless important one is the risk of human civilization ending up more or less irreversibly in the kind of fixed point – somewhat analogous to mutual defection in the prisoners’ dilemma game but typically far more complex and pernicious – that Scott Alexander calls Moloch and that Eliezer Yudkowsky, more prosaically, refers to as inadequate equilibria.
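
For readers unfamiliar with the prisoners’ dilemma analogy, the following tiny sketch (using the standard textbook payoffs, nothing taken from the Briefing Note or from Ord) spells out what makes mutual defection such a pernicious fixed point: defecting is each player’s best response to the other’s defection, so the outcome is self-reinforcing, even though both players would be better off under mutual cooperation.

```python
# Standard textbook prisoners' dilemma payoffs (my illustration, not from the
# Briefing Note): C = cooperate, D = defect.
PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, opponent_move)])

# Mutual defection is a fixed point: defection is the best response to defection,
# so neither player can escape it unilaterally...
assert best_response("D") == "D"
# ...even though both players would be strictly better off cooperating.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("Mutual defection is self-reinforcing but collectively inferior.")
```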
