Well, now that you've had a little bit of a break from these sources of risk, let's keep talking about what the possible sources of risk might be. One possible source is simply that networks get very big very, very fast. And here's just one illustration: a fully connected network, or "complete graph," of one thousand nodes. This is not very many. It's the size of a high school, if you will. With each node connected pairwise to every other node, the network has nearly half a million links. Okay? So there are half a million connections, and a linear increase in the number of nodes, a new kid arriving at the high school, corresponds to a quadratic increase in the number of connections. Okay? These massive modern networks are "Complex Adaptive Systems." They're unpredictable and evolving instantaneously, with links and nodes constantly materialising and disappearing.

So why does that matter? The reason it matters is because of the notion of a "black swan." Okay? And this is a notion from Nassim Taleb. Rare events can have huge impacts on history. The appearance of a black swan, or the collapse of a particular stock, or a nuclear bomb going off in the wrong place at the wrong time. These can have very serious effects. And it may be impossible to scientifically predict such rare events. They are so far out in the tail of the probability distribution that we simply can't pay attention to them, because otherwise we wouldn't be able to get up in the morning. We are blind to the importance of such events. Why? Because, and this is why they're important: they're rare. Yes. Go back to that matrix that I had of the varieties of risk. But they have extreme impact. And they have retrospective predictability. Once they happen, you go, "Oh! Well, of course that black swan was going to appear." "Of course that stock was going to collapse." Or "of course that bank was going to collapse." But beforehand, you simply can't predict them. And the reason for that is the sheer number of interactions, okay?

With any event, there are the "tails." Okay, everybody worries about what's around here: the mean and one or two standard deviations. Okay? This is what we expect to happen. But out here and out here, in the tails, are these outliers, okay, that might destroy the whole system. Now as you increase the number of interactions, as you increase, okay, the number of links or the number of nodes, the probability of these things happening does not change. Okay? It's the same probability. But, you know, as you increase the number of times that you fly, you increase the number of chances for that rare event to occur. Okay? So if you have so many interactions, if you have so many nodes, if you have so many links, the possibility of this very, very rare event, whether it's Black Monday in 1987, for example, the Great Depression, the financial crisis, these kinds of events: you have to take them into account because you're operating the system so many times. Think of it this way: you have a machine that will break down, on average, one time in a thousand uses. If you only use the machine a hundred times, it probably will not happen. If you use the machine 10,000 or 100,000 times, it will happen. Okay? It's the same probability, but you're increasing, in a sense, the number of times that that probability can come into play.
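To put rough numbers on both of those points, the half a million links and the machine example, here is a minimal sketch in Python. It is not from the lecture, and the node counts and the per-use failure probability in it are illustrative assumptions.

```python
# A minimal sketch (not from the lecture) of the two back-of-the-envelope
# numbers above. Node counts and the per-use failure probability are
# illustrative assumptions.

def complete_graph_links(n: int) -> int:
    """Pairwise links in a complete graph of n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

def prob_at_least_one_failure(p: float, uses: int) -> float:
    """Chance of at least one failure in `uses` independent uses,
    each failing with probability p: 1 - (1 - p) ** uses."""
    return 1.0 - (1.0 - p) ** uses

# Linear growth in nodes, quadratic growth in links.
for n in (1_000, 2_000, 10_000):
    print(f"{n:>6,} nodes -> {complete_graph_links(n):>10,} links")
# 1,000 nodes -> 499,500 links: the "nearly half a million"

# Same tiny per-use probability, very different overall risk.
p = 1 / 1_000  # the machine that breaks down one time in a thousand uses
for uses in (100, 10_000, 100_000):
    print(f"{uses:>7,} uses -> P(at least one breakdown) = "
          f"{prob_at_least_one_failure(p, uses):.5f}")
# 100 uses      -> ~0.095 (probably will not happen)
# 10,000+ uses  -> ~1.0   (effectively certain to happen)
```

The per-use probability never changes in this sketch; the only thing that changes is how many times you draw from it, which is exactly the point being made above.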
You also have something called "interactive complexity." What this means is a system in which two or more discrete failures can interact in unexpected ways. This goes back to normal accidents, to a certain extent. You have a system that is so complex that failures in two parts that you never imagined interacting can combine, and an outcome can emerge from that particular set of interactions. And again, going back to the black swan, as we build these more and more complex machines, as we couple them tighter and tighter and tighter, the possibility of this interactive complexity occurring and producing some catastrophe increases.

You also have indeterminacy. Failure in one part (material, human, or organizational) may coincide with the failure of an entirely different part. This is unforeseeable. Again, this goes back to this notion of interactive complexity and the indeterminacy: you might not be able to see where one failure is going to be linked to another, maybe not directly but through a second-order, or a third- or fourth-order, or fifth-order interaction.

Incomprehensibility. A normal accident typically involves interactions that are "not only unexpected, but are incomprehensible." We simply don't understand what is going on. The people involved just don't figure it out quickly enough to see what's going on. The one pigeon says to the other, "What are other words for incomprehensibility?" "Obscurity, unintelligibility, opacity, abstruseness, ambiguity, impenetrability, complexity, intricacy." Again, going back to that notion of the "unknown unknowns," the things that we might not be able to understand until it's far too late, or that we can only understand retroactively and go, "Oh, I shouldn't have chosen X. I should have shifted over here. I shouldn't have started this." We simply cannot comprehend all the possible moves that will result from that one choice.

And this is particularly important with tight coupling. Tight coupling is associated with more interdependency. That is, these parts cannot operate without the others. More coordination: you need to coordinate these kinds of activities. More information flow. Okay? As you couple these elements tighter, as you couple these possibly incomprehensible, indeterminate, and interactive complexities, as you bring the system tighter and tighter and tighter and you make all parts reliant on every other part, then the possibility of some kind of failure, the risk of some failure, increases. A complex system is tightly coupled when it has time-dependent processes. That is, this could only happen once that happens, or once this has happened, that must happen. Rigidly ordered processes (as in sequence A must follow sequence B). Only one path to a successful outcome. Okay? So there's only one way for the machine to work and maybe 99 ways for the machine not to work. And very little slack. You can't just do the job with roughly enough; you have to have just the right amount. Not too much, just the right amount. Okay?
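One conventional way to see why a rigid sequence with only one path to success and no slack raises risk is a simple series-reliability calculation, sketched below. This is not from the lecture: it assumes the steps fail independently, with a made-up 99% reliability per step, and the lecture's own point about interactive complexity is precisely that real failures are not independent, so this is, if anything, an optimistic model.

```python
# A minimal sketch, not from the lecture: a tightly coupled process modeled
# as a rigid sequence with only one path to success and no slack, so every
# step must succeed. Assuming the steps fail independently, system
# reliability is the product of the step reliabilities. The 99% step
# reliability is an illustrative assumption.

from math import prod

def series_reliability(step_reliabilities: list[float]) -> float:
    """Probability the whole rigid sequence succeeds, given independence."""
    return prod(step_reliabilities)

for n_steps in (5, 20, 100):
    r = series_reliability([0.99] * n_steps)
    print(f"{n_steps:>3} tightly coupled steps at 99% each -> "
          f"system works {r:.1%} of the time")
# 5 steps   -> ~95.1%
# 20 steps  -> ~81.8%
# 100 steps -> ~36.6%
```

And that is under the generous assumption that failures stay independent; the whole argument about interactive complexity is that, in tightly coupled systems, they do not.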
Now, again, think of these (incomprehensibility, tight coupling, et cetera) and start imagining the links that make up globalization, for example supply chains or financial links, which we will get back to in a little bit, and you start seeing that these notions of risk have a great deal of applicability to the kind of systems that globalization represents.

And AI, artificial intelligence, may make this worse. Why? Think about all the various qualities that we're talking about: tighter coupling, incomprehensibility, interactive complexity. All these increase as we rely more and more and more on algorithms, as we rely more and more on these black boxes. There's a famous saying that the world rests on a turtle, and you ask, "What does the turtle rest on?" Well, it's turtles all the way down. Well, how about black boxes all the way down? You can never penetrate inside that box to actually know what's going on, because it might be too complex, it might be too hidden. All right? So with a certain kind of algorithm that might determine the behavior of a system, we might not understand all the possible consequences of that algorithm as particular situations change, as the environment changes. We become slaves, in a sense, to this new master, which might make things much more efficient, which might make things much more optimal, but which makes things much more dangerous.

Now this is compounded in the case of globalization because there's no exit. There's no planet B. Okay? We cannot leave. If we mess up the Earth, there is no alternative. There is no planet B. There is no exogenous actor. Again, unless you believe in an active divinity for whom prayer might bring about change. But in the absence of that external actor, okay, there can be nobody in control; there's no policeman to whom you can say, "Okay, set this right." And there's no way of predicting these risks. We have to take into account our huge amount of hubris in designing these Towers of Babel, okay, that are meant to challenge any kind of human limit. Well, let's just take a little bit of time and be aware of that hubris, that we might create something that we really regret and from which there might be no exit or no solution. And putting all those things together, the global, the systemic, and the risk, is the topic of the next lecture.