So where did this uncertainty, this risk, come from? Let's go through a couple of different sources, or ways of understanding, that risk.

One way of thinking about it is the conservation of catastrophe. That is, there's a finite, certain, unchanging amount of bad things that can happen, and the only thing we can do is distribute where those bad things might happen. There's a "conservation of risk" similar to the conservation of energy: you cannot reduce it, you cannot increase it, you can simply redistribute it. You cannot eliminate it from the system; you can only place it in certain parts of the system. And the reason I have this illustration of Smokey the Bear here is that this is a very big issue with forest management. Very simply put (and again, any of you who are forest ecologists, my apologies for the simplification), you can choose to put out as many small fires as you can, but if you do that, you run the risk of having one big fire. Or you can let the smaller fires occur, which clears the underbrush and prevents a big fire. Now, again, the policy line is going to be somewhere in the middle. Do you want to avoid that big risk, in which case you allow these small risks? Or do you not want to have any perturbations, any kind of bad event, and so you clamp those down, even though what you may be doing is just waiting for a major catastrophe to happen?

Sources of risk. It's important to distinguish the sources of risk: those inside the system and those outside the system, those you can do something about and those you can't, the specific sources of the risk. Let me just give you this illustration, which I believe is from the New Yorker. Oh, that horrible risk: your driving. "Oh, no! I need to pee!" That's a risk of needing to go to the bathroom as you're driving. That's something that we can control: we can minimize or maximize the number of stops that we make. "I hope no one sneezes on me!" Well, that's going to be much harder to control.
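The fire-management tradeoff above can be sketched in a toy simulation. Everything here is invented for illustration: the ignition probability, the suppression threshold, and the assumption that a fire burns off all accumulated underbrush. The point it makes is the conservation idea: total loss over the long run is similar under both policies, but suppression concentrates it into a few much bigger fires.

```python
import random

def simulate(years, suppress_small, seed=42):
    """Toy model of the fire-suppression tradeoff (all numbers invented).

    Fuel (underbrush) builds up each year. An ignition burns off all
    accumulated fuel, but under a suppression policy we extinguish any
    fire while the fuel load is still below a threshold, so fuel keeps
    accumulating until a fire finally gets away from us.
    """
    rng = random.Random(seed)
    fuel, fires, biggest, total = 0.0, 0, 0.0, 0.0
    for _ in range(years):
        fuel += 1.0                          # underbrush accumulates
        if rng.random() < 0.3:               # an ignition this year
            if suppress_small and fuel < 10:
                continue                     # put it out; fuel keeps building
            fires += 1
            biggest = max(biggest, fuel)     # track the worst single fire
            total += fuel                    # the fire burns all the fuel
            fuel = 0.0
    return fires, biggest, total

for policy in (False, True):
    fires, biggest, total = simulate(500, suppress_small=policy)
    print(f"suppress={policy}: fires={fires}, worst={biggest:.0f}, total={total:.0f}")
```

Letting small fires burn produces many small losses; suppressing them produces fewer but larger fires, with the worst single fire always at or above the suppression threshold.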
Once you're in that system, the airplane, or the bus, or whatever it might be, you have no control over whether someone sneezes on you or not. Or, again, "this room better be clean!" That is, you've moved into a room, but you have no way of guaranteeing that that room is clean or not. And again, this is just a silly illustration of the various sources of risk and your reactions to them. Your particular propensity might be to worry about any of these, or not, along a spectrum.

The exogenous sources of risk are those that come from outside the system. Again, remember that the boundary of the system is often not clear, and it's up to the observer. But whatever the boundary is, these risks are random or exogenous, and they're not of much interest to social science. Let me give you the example of an asteroid hitting the earth. Now, obviously, as a human being, I'm interested in whether this asteroid will hit or not, especially if it's a planet killer that's going to destroy everyone. As social science, it's not very interesting, because it's either going to happen or it's not. I might be interested in how people respond to this risk, to the fear of this risk. I might be interested in seeing how people respond to a limited version of it. What we're really interested in is endogenous risk: not necessarily predictable, but also not random. And of course, the asteroid coming may not truly be random (we could figure out the causality behind it), but as far as we're concerned, that asteroid killing us all is random. What we're interested in is risks that are inside the system. They're not random; they might not be predictable; but they flow from the characteristics of the system. Endogenous sources of uncertainty are a very different thing. They do not need malfeasance, disaster, or God.
They are built into the system. Think of this drawing by Escher: in a sense, the hands are endogenous to the drawing. Where is the difference between the outside that's making the drawing and the inside of the drawing? You cannot tell. Similarly, the most interesting sources of risk to a system or to a network, the ones that might kill it or keep it from functioning, are the ones we're interested in: the characteristics inside the system itself that might produce these various risks. And again, start thinking about globalization, how it is the characteristics of globalization itself that might be producing the risks to globalization.

I also want to distinguish between systemic and emergent risk. Systemic risk is the risk to a system that is posed by the interconnection or network of its constituent parts. It's about how local risk scales up to develop into global risk. How does one bad apple infect all these other apples? This one bad apple poses a systemic risk to the entire bushel of apples. Now think back on the concept of emergence, which says that the characteristics of a system cannot be predicted from the characteristics of its parts. We know, once we spot this bad apple, that if we keep it in the bushel, something bad will happen. But what about emergent risk, that is, a risk that arises from how individual parts are connected to form the whole? It is not reducible to the individual components; simply being in the bushel of apples might be the risk. And this is the one that we really have to be worried about. Most of the risks that we'll be talking about are systemic risks: something bad happens in one place, and it spreads throughout. But what we really have to worry about is emergent risk, where we cannot find an original event, an original rottenness, an original risk.
The risk emerges from the very properties of the system coming together. These are, in a sense, the version of the "unknown unknowns" that we cannot estimate, cannot predict, and might not be able to do anything about.

Risks also come from structural holes. Again, think of systems as series of networks. A network can cease to function in two ways. One, the links between nodes are eliminated, attenuated, or impaired. We've got these individual nodes, these individual agents, companies, countries, et cetera, and all of a sudden you break the connection between them, or you break the connection between just two of them and maintain the others. The other way is that the nodes are intrinsically changed, rendering them incapable of receiving or transmitting the signal or the flow. That is, the attention is not on the links, on what connects, but on the nature of the node: the node itself might be the problem. Either one of these failures can instantly isolate and starve entire sections of the network graph. And whether you say the problem is with the node or with the link is almost semantic; it depends. It's sort of like the chicken and the egg. Let me give you an example: the Straits of Hormuz. We can think of this as a node, a node that links the production of oil throughout the Persian or Arabian Gulf with the rest of the world. Or you can think of the ability of the ships to go through the Straits of Hormuz as the individual links. So either the node disappears (the Straits of Hormuz are closed, and all of a sudden the world cannot rely on these energy sources) or some of the links are disturbed and no longer work: some of the ships will not go through the straits, some of the ships are destroyed.
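The two failure modes can be sketched on a toy graph. The node and link names here are purely illustrative: a few hypothetical Gulf producers reach consumer countries only through a "hormuz" choke-point node.

```python
from collections import deque

# Toy network (illustrative names only): producers reach consumers
# only through the "hormuz" choke-point node.
links = {
    "saudi":  {"hormuz"},
    "uae":    {"hormuz"},
    "kuwait": {"hormuz"},
    "hormuz": {"saudi", "uae", "kuwait", "india", "japan"},
    "india":  {"hormuz", "japan"},
    "japan":  {"hormuz", "india"},
}

def reachable(graph, start):
    """Breadth-first search: every node still reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def remove_node(graph, node):
    """Failure mode 1: the node itself is changed or destroyed."""
    return {n: {m for m in nbrs if m != node}
            for n, nbrs in graph.items() if n != node}

def cut_link(graph, a, b):
    """Failure mode 2: a single link is broken; both nodes survive."""
    g = {n: set(nbrs) for n, nbrs in graph.items()}
    g[a].discard(b)
    g[b].discard(a)
    return g

print(reachable(links, "japan"))                         # whole network
print(reachable(remove_node(links, "hormuz"), "japan"))  # all producers cut off
print(reachable(cut_link(links, "saudi", "hormuz"), "japan"))  # one producer cut off
```

Killing the choke-point node isolates every producer at once; cutting one link isolates only that producer while the rest of the network keeps flowing, which is exactly the node-versus-link distinction in the lecture.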
Again, what you're worried about with this kind of structural hole is that the whole system depends on a particular node, or a particular set of nodes, and on the links between those nodes flowing.

Then you have contagion. A lot of risks are basically contagion risks, and we should certainly know this in light of what's happened with COVID. Here the risk comes from the linking itself, from the actual link: you don't break the link; the link is used, in a sense, to create the problem. When an undesirable contagion spreads throughout a network, systemic reliance on such critical nodes or links can prove beneficial, with these choke points acting as firewalls or circuit breakers that prevent the spread of the contagion. Or they can be super-connectors that accelerate it. So you can sometimes break off that node or that link through quarantine, or you can limit contact, limit a set of links, by saying everybody stay six feet apart, don't go to restaurants, don't go to bars. Or you can isolate nodes, saying "you have to stay there." Or you can make nodes immune: you can change the nature of the nodes so that they are immune to that possible infection. And again, we're going to talk about this when we talk a little bit more about COVID later on in the course.

A particular kind of risk that is important for what we're looking at is "normal accidents." This concept comes from an ex-professor of mine who died recently, Chick Perrow. He arrived at the notion of a normal accident when he was asked to participate in a study of how Three Mile Island happened. And he came up with the notion that a complex system exhibits complex interactions when it has unfamiliar, unplanned, or unexpected sequences; that is, when causal links that you weren't aware of are all of a sudden there. These are not visible or immediately comprehensible.
The design features, such as branching and feedback loops (the information that you get from a feedback loop might actually lead to a worse situation), create opportunities for failure to jump across subsystem boundaries. All right, so what makes accidents normal? They are inevitable. What Chick Perrow came up with is that as you couple the elements of a system more and more tightly, as more parts become dependent on each other, and as you make their interconnections more complex rather than straightforward and linear, as you combine these two, you increase the likelihood of a normal accident. Why? Because you've got many more parts depending on each other, and the ways in which they depend on and interact with each other are complex and might produce, again, an emergent risk. And we need the concept of normal accidents because we often reach for one of these excuses when something goes wrong. We talk about sloppy management; we talk about cultural denial; we talk about the normalization of deviance, where we accept that there's always going to be 10% error, so don't worry about it. Malfeasance: you can just have bad people. You can have operator error, or you can have exogenous events. Chick Perrow says yes, these are all very possible, but there might be situations where, in the absence of all of these, you still have a catastrophe, because these accidents are normal. They are inevitable; they are built into the system; by the very nature of the design of the system, they will happen. In what sense is the accident "normal"? It emerges from the characteristics of the system itself. The system itself produces this possibility; the triggering event is relatively insignificant; the interaction with warnings, proper operator response, and other subsystems is the problem. So it's not the size of the perturbation.
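Before turning to how small perturbations cascade, the coupling point can be made with a back-of-the-envelope calculation. The failure probability here is an arbitrary illustrative number; the point is that even if each individual interdependency almost never fails, the chance that at least one fails somewhere approaches certainty as the number of couplings grows. That is one simple sense in which accidents become "normal."

```python
def chance_of_some_failure(p, k):
    """Probability that at least one of k independent couplings fails,
    when each fails with small probability p."""
    return 1 - (1 - p) ** k

# Each coupling fails only 0.1% of the time, yet:
for k in (10, 100, 1000, 10000):
    print(f"{k:>6} couplings -> {chance_of_some_failure(0.001, k):.3f}")
```

With p = 0.001, a thousand couplings already give roughly a 63% chance that something fails, and ten thousand make some failure a near certainty. (This sketch assumes independent failures; Perrow's argument is that complex interactions make things worse still, because the failures interact.)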
This goes back to what we said about complex systems: one small change, and it can be a very small change, the equivalent of the butterfly flapping its wings, leads to certain kinds of interactions with other parts that can produce a disaster. So, for example, one of the things we often see with normal accidents is that operators get a signal that something is going wrong. What they don't understand is that the signal itself is wrong: it is actually reflecting something else, and they should be worried about some other possibility, but they don't see it. So they respond according to that warning signal, but maybe that's exactly the wrong thing to do. And the failures are not sequential or dependent on each other; it's not that one failure leads to another failure leads to another failure. What happens is that you can have simultaneous failures in two parts that interact in ways that you had not considered before. Remember, go back all the way to where we were talking about disorganized complexity, where the sheer number of interactions gets so high that you simply cannot predict what they might be. Even with supercomputers, you might not be able to model all the possible interactions, and that interaction of two failures, or more failures, perhaps giving the wrong signal, can lead to a normal accident.

Some examples. Perrow's classic example is Three Mile Island. We might learn more from a debate that's been going on between Chick Perrow and others about whether other catastrophes were normal accidents. Chick had very specific criteria and was probably the strictest, but some people have argued that the Challenger explosion in 1986, Chernobyl in 1986, the global financial crisis of 2008-2009, Deepwater Horizon, and Fukushima could all be examples of normal accidents.
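The "sheer number of interactions" point about disorganized complexity can be made concrete with a little combinatorics: the count of distinct pairs or triples of components that could fail together grows explosively with the number of components, which is why the unexpected combinations cannot all be enumerated in advance.

```python
import math

def simultaneous_failure_modes(n_parts, group_size):
    """Number of distinct groups of components that could fail together:
    the binomial coefficient C(n_parts, group_size)."""
    return math.comb(n_parts, group_size)

for n in (10, 100, 1000):
    print(n, "parts:",
          simultaneous_failure_modes(n, 2), "pairs,",
          simultaneous_failure_modes(n, 3), "triples")
```

Ten parts give 45 possible failing pairs; a thousand parts give almost half a million pairs and over 166 million triples, before even counting the different orders and timings in which those failures could interact.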
And again, it's a little bit like the boundaries of a system: we might want to say, "well, this part of the catastrophe wasn't a normal accident," but once this and this happened, then the normal-accident dynamic took over. We might have malfeasance or greed behind the global financial crisis, but as we will see in a later lecture, it was the very system that produced the catastrophic result that we saw in 2008 and 2009. So as not to depress you too much, let's stop there, and you can go have a cup of tea or whatever it might be. In the next lecture, we'll talk about additional sources of risk, more things that you should be worried about.