Saturday 30 July 2022

What are the chances of that?

Some months ago I was asked by the journal The Mathematical Intelligencer to review a recent popular science introduction to probability: Andrew Elliot's What are the Chances of that? (Oxford University Press, 2021). My review has now been published, and here is how it begins:
    A stranger approaches you in a bar and offers a game. This situation recurs again and again in Andrew Elliot’s book What are the Chances of that? How to Think About Uncertainty, which attempts to explain probability and uncertainty to a broad audience. In one of the instances of the bar scene, the stranger asks how many pennies you have in your wallet. You have five, and the stranger goes on to explain the rules of the game. On each round, three fair dice are rolled, and if a total of 10 comes up you win a penny from the stranger, whereas if the total is 9 he wins a penny from you, while all other sums lead to no transaction. You then move on to the next round, and so on until one of you is out of pennies. Should you agree to play? To analyze the game, we first need to understand what happens in a single round. Of the 6³ = 216 equiprobable outcomes of the three dice, 25 result in a total of 9 while 27 result in a total of 10, so your expected gain from each round is (27-25)/216 ≈ 0.009 pennies. But what does this mean for the game as a whole? More on this later.

    The point of discussing games based on dice, coin tosses, roulette wheels and cards when introducing elementary probability is not that hazard games are a particularly important application of probability, but rather that they form an especially clean laboratory in which to perform calculations: we can quickly agree on the model assumptions on which to build the calculations. In coin tossing, for instance, the obvious approach is to work with a model where each coin toss, independently of all previous ones, comes up heads with probability 1/2 and tails with probability 1/2. This is not to say that the assumptions are literally true in real life (no coin is perfectly symmetric, and no croupier knows how to pick up the coin in a way that entirely erases the memory of earlier tosses), but they are sufficiently close to being true that it makes sense to use them as starting points for probability calculations.

    The downside of such a focus on hazard games is that it can give the misleading impression that mathematical modelling is easy and straightforward – misleading because in the messy real world such modelling is not so easy. This is why most professors who teach a first (or even a second or a third) course on probability, myself included, like to alternate between simple examples from the realm of games and more complicated real-world examples involving plane crashes, life expectancy tables, insurance policies, stock markets, clinical trials, traffic jams and the coincidence of encountering an old high school friend during your holiday in Greece. The modelling of these kinds of real-world phenomena is nearly always a more delicate matter than the subsequent step of doing the actual calculations.

    Elliot, in his book, alternates similarly between games and the real world. I think this is a good choice, and the right way to teach probability and stochastic modelling regardless of...

Click here to read the full review!
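
For readers who want to check the single-round arithmetic in the excerpt above, here is a minimal Python sketch (my own illustration, not taken from the book or from the review) that enumerates all 216 outcomes of three fair dice and reproduces the counts 27 and 25 and the expected gain of roughly 0.009 pennies per round:

```python
# Enumerate all 6^3 = 216 equally likely outcomes of three fair dice and
# count how many give a total of 10 (you win a penny) or 9 (you lose a penny).
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))          # all 216 ordered rolls
wins = sum(1 for roll in outcomes if sum(roll) == 10)    # 27 ways to roll a total of 10
losses = sum(1 for roll in outcomes if sum(roll) == 9)   # 25 ways to roll a total of 9

print(len(outcomes), wins, losses)            # 216 27 25
print((wins - losses) / len(outcomes))        # expected gain per round ≈ 0.009 pennies
```

Whether this tiny per-round edge is enough to make the penny-for-penny game as a whole worth playing is, as the excerpt hints, a separate question.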

Monday 18 July 2022

On systemic risk

For the latest issue of ICIAM Dianoia - the newsletter published by the International Council for Industrial and Applied Mathematics - which was released last week, I was invited to offer my reflections on a recent document named Briefing Note on Systemic Risk. The resulting text can be found here, and is reproduced below for the convenience of readers of this blog.

* * *

Brief notes on a Briefing Note

I have been asked to comment on the Briefing Note on Systemic Risk, a 36-page document recently released jointly by the International Science Council, the UN Office for Disaster Risk Reduction, and an interdisciplinary network of decision makers and experts on disaster risk reduction that goes under the acronym RISKKAN. The importance of the document lies not so much in the concrete subject-matter knowledge (of which in fact there is rather little) that an interested reader can take away from it, but more in how it serves as a commitment from the three organizations to take the various challenges associated with systemic risk seriously, and to work on our collective ability to overcome these challenges and to reduce the risks.

So what is systemic risk? A first attempt at a definition could involve requiring a system consisting of multiple components, and a risk that cannot be understood in terms of a single such component, but which involves more than one of them (perhaps the entire system) and arises not just from their individual behavior but from their interactions. But more can be said, and an appendix to the Briefing Note lists definitions offered by 22 different organizations and groups of authors, including the OECD, the International Monetary Fund and the World Economic Forum. Recurrent concepts in these definitions include complexity, shocks, cascades, ripple effects, interconnectedness and non-linearity. The practical approach here is probably to give up hope of finding a clear set of necessary and sufficient conditions for what constitutes a systemic risk, and to accept that the concept has somewhat fuzzy edges.

A central theme in the Briefing Note is the need for good data. A system with many components will typically also have many parameters, and in order to understand it well enough to grasp its systemic risks we need to estimate its parameters. Without good data that cannot be done. A good example is the situation the world faced in early 2020 as regards the COVID pandemic. We were very much in the dark about key parameters such as R0 (the basic reproduction number) and the IFR (infection fatality rate), which are properties not merely of the virus itself, but also of the human population that it preys upon, our social contact pattern, our societal infrastructures, and so on – in short, they are system parameters. In order to get a grip on these parameters it would have been instrumental to know the infection’s prevalence in the population and how that quantity developed over time, but the kind of data we had was so blatantly unrepresentative of the population that experts’ guesstimates differed by an order of magnitude or sometimes even more. A key lesson to be remembered for the next pandemic is the need to start sampling individuals at random from the population to test for infection as early as possible.
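
To make the point about random sampling concrete, here is a small Python sketch of my own (the numbers are made up for illustration and are not taken from the Briefing Note): drawing individuals uniformly at random from the population and testing them yields a prevalence estimate whose uncertainty can actually be quantified, which is precisely what the unrepresentative data of early 2020 did not allow.

```python
# Illustrative only: estimate prevalence from a uniform random sample of a
# toy population, with a rough 95% confidence interval from the normal
# approximation to the binomial. All numbers are made up.
import math
import random

rng = random.Random(2020)

# Toy population of one million individuals; assume a true prevalence of 1%.
population_size = 1_000_000
true_prevalence = 0.01
population = [1] * int(population_size * true_prevalence) + \
             [0] * int(population_size * (1 - true_prevalence))

sample_size = 5_000
sample = rng.sample(population, sample_size)   # the crucial step: uniform random sampling
p_hat = sum(sample) / sample_size              # estimated prevalence
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"estimated prevalence: {p_hat:.4f} "
      f"(95% CI roughly {max(0.0, p_hat - half_width):.4f} to {p_hat + half_width:.4f})")
```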

Besides parameter estimation within a model of the system, it is of course also important to realize that the model is necessarily incomplete, and that systemic risk can arise from features not captured by it. At the very least, this requires a well-calibrated level of epistemic humility and an awareness of the imprudence of treating a risk as nonexistent just because we are unable to get a firm handle on it.

Early on in the Briefing Note, it is emphasized that while studies of systemic risk have tended to focus on “global and catastrophic or even existential risks”, the phenomenon appears “at all possible scales – global, national, regional and local”. While this is true, it is also true that it is systemic risk at the larger scales that carries the greatest threat to society and arguably is the most crucial to address. An important cutoff is when the amounts at stake become so large that the risk cannot be covered by insurance companies, and another one is when the very survival of humanity is threatened. As to the latter kinds of risk, the recent monograph The Precipice by philosopher Toby Ord gives the best available overview and includes a chapter on the so-called risk landscape, i.e., how the risks interact in systemic ways.

Besides epidemics, the concrete examples that feature most prominently in the Briefing Note are climate change and financial crises. These are well chosen, both because they urgently need to be addressed and because they exhibit many features typical of systemic risk. Still, there are other examples whose absence from the report constitutes a rather serious flaw. One is AI risk, which is judged by Ord (correctly, in my view) to constitute the greatest existential risk of all to humanity in the coming century. A more abstract but nonetheless important one is the risk of human civilization ending up more or less irreversibly in the kind of fixed point – somewhat analogous to mutual defection in the prisoners’ dilemma game but typically much more complex and pernicious – that Scott Alexander calls Moloch and that Eliezer Yudkowsky more prosaically calls inadequate equilibria.
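
The prisoners’ dilemma analogy can be spelled out in a few lines of code. The following Python sketch (my own illustration, using standard textbook payoff numbers rather than anything from the Briefing Note) verifies that mutual defection is the unique Nash equilibrium of the one-shot game, even though both players would be better off under mutual cooperation; it is this kind of bad but self-reinforcing fixed point that the Moloch metaphor points to.

```python
# One-shot prisoners' dilemma with conventional payoffs (illustrative numbers only).
# Key: (row player's move, column player's move) -> (row payoff, column payoff).
PAYOFF = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def is_nash(row_move, col_move):
    """True if neither player can gain by unilaterally switching their move."""
    row_pay, col_pay = PAYOFF[(row_move, col_move)]
    best_row = max(PAYOFF[(m, col_move)][0] for m in ("C", "D"))
    best_col = max(PAYOFF[(row_move, m)][1] for m in ("C", "D"))
    return row_pay == best_row and col_pay == best_col

for profile in PAYOFF:
    print(profile, "Nash equilibrium" if is_nash(*profile) else "not an equilibrium")
# Only ("D", "D") is an equilibrium, even though ("C", "C") pays both players more.
```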