All right, so let's look at complex systems, and let's look at what I'll call mechanisms. I'm not sure that's the right word, but it's the one we came up with. I want to show you the kinds of things that go on in complex systems, things that we can look for and use as tools for analysis.

First of all, defining the boundary of a system is very, very difficult. That is, at what point does nothing that occurs outside this boundary matter to the system? Endogenous or exogenous. To what extent can you establish 100% endogeneity inside the system, so that nothing outside of it, in a sense, can affect it? Or 95%, or 90%, or 80%? The boundaries are very, very difficult to measure, and the decision is ultimately made by the observer, with some threshold of effect. Yes, we live on planet Earth. Is the planet inside the Solar System? Is that solar system inside a galaxy? Is that galaxy in a universe of many galaxies? Where do we draw the line? Do we draw the line around Earth? Do we include the Moon? Do we include Venus, Mercury, and Mars, or do we take the entire Solar System? Do we look at nearby stars, or the entire galaxy? Again, it's not easy. And we have to accept that sometimes complex systems can be open: what we think is a closed box, whose contents we think we understand, is actually open, and something can come in.

Complexity is also a function of history, to a certain extent. A complex system is one whose evolution is very sensitive to initial conditions, to where you begin. So, again, with globalization: let's say that globalization begins in 1960. Well, where you were in 1960 is going to make a very big difference to where you are in globalization in 2020. You can't divorce that, nor can you divorce it from the various things that happened along the way, including small perturbations in this permanent interaction. Complex systems are systems in process that constantly evolve and unfold over time. We can't say, okay, that's it, game over. No. It's going to keep going. The same way that the system might not have a finite spatial boundary, it might not have a finite temporal boundary. A system might continue working, or we might not understand the temporal scale of a system. The important thing is that history matters: what's happened before. You don't get a tabula rasa. You don't get to just begin all over again. Whatever happened before helps determine how the system is going to evolve and where the system is going to take you.

You can also have systemic memory. By this, I mean the extent to which hysteresis takes place. And hysteresis is basically a measure of how dependent you are on history. How much can you simply bounce back? The most obvious example is a rubber band. One of the properties of a well-manufactured rubber band is that you can stretch it and it should go back, again depending on the precision of your measure, to the exact same shape that it had. Or you can take a badly manufactured rubber band, or one with particular properties, that is very dependent on its history: once you stretch it, it can never go back again. Or a pair of shoes, or a piece of clothing, or anything that you use: to what extent does it have that memory? And here we have a variety of these examples.
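As a quick aside before those examples: hysteresis in the rubber-band sense can be written out in a few lines of Python. This is a minimal, purely illustrative sketch that is not from the lecture; the stretch_history function, the elastic_limit, and the plastic_fraction are made-up assumptions chosen only to show how the same pull today can leave the system in different states depending on its past.

```python
# Toy model of hysteresis: a "rubber band" with an elastic limit.
# Purely illustrative; elastic_limit and plastic_fraction are arbitrary assumptions.

def stretch_history(pulls, elastic_limit=2.0, plastic_fraction=0.5):
    """Return the band's resting length after a sequence of pulls.

    Pulls below elastic_limit leave no trace (the band snaps back);
    pulls beyond it permanently lengthen the band, so the final state
    depends on the whole history, not just the most recent pull.
    """
    rest_length = 1.0
    for pull in pulls:
        overshoot = pull - elastic_limit
        if overshoot > 0:
            rest_length += plastic_fraction * overshoot  # permanent deformation
    return rest_length

# Same final pull (1.5), different histories, different outcomes:
print(stretch_history([1.5, 1.5, 1.5]))   # 1.0 -> no memory of the past
print(stretch_history([3.0, 1.5, 1.5]))   # 1.5 -> the early over-stretch persists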
There's Holling's adaptive cycle, where you go through this very, very clear repetition of exploitation (or growth), conservation, release, and reorganization. Or you can have Odum's pulsing, which is a similar version: you have growth, you have a climax and transition, you have a descent, and you have a low-energy restoration. Or you've got Joseph Tainter's high and low gain: a system can have low gain, but after a while it turns into high gain, and that high gain might turn back into low gain. Can we change history, or are we always, in some ways, prisoners to that history? Can we understand globalization in 2020 without understanding globalization in, let's say, 1500 AD? I would argue no, because the functioning of the system is very dependent on its history.

Complex systems also have features of nonlinearity. Again, let's go back to nonlinearity. In linear systems, the effect is always directly proportional to the cause. I put this much energy in, you get this much energy out. Or I put this much money in, you get this much money out. In nonlinear systems, that is broken. The effect can be way out of proportion with the cause. And the most cited example of this is the butterfly effect: how a butterfly changing the pattern of its wing beats, or the speed with which it's flying, somewhere in the Amazon can, two months, three months, five years, or just one minute later, have an effect that's felt somewhere in Russia. A very, very small cause, that is, the beats of the butterfly's wings, the strength of those beats, can have all sorts of ramifications later on. Complex systems are nonlinear in that you cannot predict the size of the effect from the size of the cause, which means you have to be very, very careful, because something that you think won't make a difference might make a huge difference.

There are tipping points. This is an example of a nonlinear response. Let's just take the most obvious one. Let's say you are trying to get a ball up to the very top of a mountain. You have the slow work of getting the ball up the mountain, and you've got it at the very top. And all you need is a very small shift in your push, in the air, in something to do with the ball, and it could go all the way back down. People have actually designed video games around this, where you get points for how close the ball stays to the very apex of the mountain, but you lose all your points if it rolls down. So a tipping point is where you're just right there, perfectly balanced, but if you go one inch, one millimeter, one pound, one gram, whatever it might be, past that point, all of a sudden the dynamics change and the ball gains a lot of momentum going down. And you can think of hundreds, of thousands, of millions of these.

There are all sorts of possible interactions going on. If you were looking at the implementation of a particular policy, for example, you have to start thinking about all these various interactions: how family responsibilities might interact with spouse or partner influence, which might interact with education, which might interact with psychosocial health, or the amount of disclosure, or the stigma. Again, all of these are constantly interacting. So predicting a policy outcome from something, even a simple policy, might be really, really hard, depending on these various interactions.
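The butterfly effect described above has a standard textbook illustration, the logistic map, which is not mentioned in the lecture but is easy to try yourself: two trajectories that start one part in a billion apart end up on completely different paths within a few dozen steps. The growth parameter r = 4.0 is the usual chaotic choice, and the starting values are arbitrary.

```python
# Logistic map x -> r*x*(1-x): a standard toy example of a nonlinear system
# whose trajectories are extremely sensitive to initial conditions.
# r = 4.0 is the usual chaotic choice; the starting values are arbitrary.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # the "butterfly at rest"
b = logistic_trajectory(0.200000001)   # one part in a billion difference

for t in (0, 10, 25, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}  (gap {abs(a[t] - b[t]):.6f})")
```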
Feedback loops are an example of this. Negative feedback loops: you get a certain signal and you dampen it, you turn it down. Positive feedback loops: you amplify it, you turn it up. And the simplest example that I can give you is a single faucet with two knobs for the hot and the cold water. The feedback works like this: depending on the temperature that you want and the temperature that you are feeling on your hand, you are going to dampen or amplify the cold and the hot water. And notice, by the way, that, as anybody who has ever taken a shower can attest, that's not going to stay fixed. After a while, you might have to adjust it; you might have to amplify or dampen, depending on the temperature that you want. The effects of an element's behavior are fed back: you're constantly getting feedback information, you're constantly being told how you're behaving.

Or think about a feedback loop in terms of, again, sitting here at Princeton University, grading: grading as an incentive for high school students so they can get into somewhere like Princeton. You're giving positive reinforcement to good grades. That makes them go after even better grades, and better grades still, et cetera, which leads to the desired destination of getting into a place like Princeton.

Does this mean that anybody is in charge? No. Not in these systems. Now, this is obviously Michelangelo's Creation of Adam from the Sistine Chapel. If you will, the touch of the divinity never happens. The divinity might have designed Adam in a particular way, and Adam is going to respond in particular ways, but that divinity is not in control. Now, that doesn't mean that you have to be an atheist to believe in systems theory. You can use systems theory inside a broader perspective of believing that there is a God with particular intentions who is intervening, who is acting. Or you can see the world in a purely deist manner, in which maybe some basic rules have been designed: human beings will behave, will respond in this way, et cetera, but the divinity might not be involved in what happens after that. Again, you don't have to accept the absence of a divinity, certainly not. It just depends on the size of the system that you're looking at. And maybe the system, or the situation, or the phenomenon is broad enough, or important enough to you, that you might say, ah, we're no longer talking about a system; we're looking for some external intervention, an exogenous force, if you will. Again, depending on your particular preferences and beliefs.

Now, one thing we have been doing, in some ways for the last 150 years with the rise of capitalism, but particularly over the last 30 or 40 years with the rise of digital technology, is trying to improve system efficiency. Efficiency and expense reduction have been key areas of focus in systems science, in operations research, in business management, in financial engineering. I got an MBA before I got my Ph.D. What's the basic lesson of any business, of any MBA? You want to produce something as cheaply as possible, and you want to get as much money as you can for that particular good or service. And that can involve the suppliers, it can involve the logistics, it can involve the operations, but you want to assure such efficiency that you are getting every single drop, as it were, from that particular stone.
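The faucet example above can be written out as a very small negative feedback loop. This is a rough sketch under made-up assumptions (the supply temperatures, the target, and the gain are arbitrary); the point is only that the controller keeps nudging the hot-water fraction in whatever direction reduces the error, which is the dampening and amplifying described above.

```python
# Minimal negative feedback loop: adjusting a shower's hot-water fraction.
# All numbers (supply temperatures, target, gain) are made up for illustration.

COLD, HOT = 10.0, 60.0     # assumed supply temperatures (deg C)
TARGET = 38.0              # desired shower temperature
GAIN = 0.01                # how far we turn the knob per degree of error

hot_fraction = 0.5         # start with the knobs half and half
for step in range(12):
    temperature = COLD + hot_fraction * (HOT - COLD)    # simple mixing model
    error = TARGET - temperature                         # feedback signal
    hot_fraction += GAIN * error                          # dampen or amplify the hot water
    hot_fraction = min(max(hot_fraction, 0.0), 1.0)       # a knob can't go past its stops
    print(f"step {step:2d}: temperature {temperature:5.1f} C, error {error:+5.1f}")
```

Run it and the error shrinks step by step toward zero, the same way your hand-and-knob loop settles on a comfortable shower.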
Savings in time and cost have been facilitated by computers, by the internet, by telecommunications. Let me just give you an example. In some cities, you can actually track where a subway or a bus is, and you can plan. So you don't have the inefficiency of waiting in a station or at a stop. You're getting information, you're getting feedback; the system is giving you feedback, and you're giving the system feedback by being there or not being there. And it works much more efficiently. You're wasting less time waiting for the train. You're wasting less time waiting for that widget to arrive. You're wasting less time waiting for that phone call.

Now, that has led to increasing profitability, and it has transformed business. It's allowed business to operate at a very specific, very profitable margin through specialization, just-in-time inventory, lean supply chains, overnight borrowing, flextime workforces, outsourcing, and contracting. Globalization, in a sense, is a way, if you will, of optimizing this value, of optimizing this production: let's put the factory here, let's use this kind of transport, et cetera. And that has resulted in very, very profitable enterprises. So, by fine-tuning these dials, if you will, on a minute-by-minute basis, companies have removed a lot of slack, a lot of buffers, a lot of rainy-day reserves. And as human beings, because we now have all this digital information, our behavior, in a sense, has less slack too: we spend less time sitting around waiting for some piece of information, and we can operate. And that all sounds wonderful, just like I talked about in a previous lecture, how globalization can produce this greater amount of stuff, this greater profitability. But, and this is a very big but...

Take the illustration of Mickey Mouse as the sorcerer's apprentice. Marx talks about this, and it comes from a poem by Goethe about a sorcerer's apprentice who learns, let's say, one or two spells. And those one or two spells can actually make the broom do the sweeping or the mopping or whatever it might be. But that sorcerer's apprentice doesn't understand the entire magic. That sorcerer's apprentice doesn't understand the entire system. And he or she might get into a lot of trouble, because they start a process that they can't finish and can't control.

So, while each of these efficiencies enables increased profitability and higher returns, they result in a tightening and a rigidification of network connections. Everything gets a little bit more precise, everything gets more tightly coupled. In this way, an unintended consequence is that this pervasive focus on efficiency comes at the cost of greater risk for the individual company and increased systemic fragility for the network as a whole. Again, these are concepts that we're going to be visiting in future lectures. Just remember Mickey, and how delighted Mickey is as the brooms and the buckets are doing all his work, and what happens at the end, how he cannot control it. He needs the exogenous factor, if you will, of the sorcerer coming in and actually restoring order. So, do we want to depend on that sorcerer showing up and keeping us from all the trouble that we have gotten ourselves into? That's going to be one of the themes that we're going to be talking about much further on in the course.
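One rough, made-up way to see why removing slack can raise fragility: simulate two operations facing the same random supply disruptions, one running just-in-time and one keeping a small buffer. The disruption probability, the buffer size, and the days_halted helper below are arbitrary assumptions for illustration, not a model of any real business.

```python
# Illustrative only: a just-in-time operation versus a buffered one
# under the same random supply disruptions. Probabilities and sizes are
# arbitrary assumptions chosen to make the trade-off visible.

import random

def days_halted(buffer_size, disruption_prob=0.05, days=365, seed=42):
    """Count production days lost when deliveries randomly fail to arrive."""
    rng = random.Random(seed)
    stock, halted = buffer_size, 0
    for _ in range(days):
        if rng.random() > disruption_prob:            # today's delivery arrives
            stock = min(stock + 1, buffer_size + 1)    # top the buffer back up
        if stock > 0:                                  # one unit consumed per day of production
            stock -= 1
        else:
            halted += 1                                # no parts on hand: the line stops
    return halted

print("just-in-time (no buffer):", days_halted(buffer_size=0), "days halted")
print("three-day buffer:        ", days_halted(buffer_size=3), "days halted")
```

With these made-up numbers the just-in-time line halts roughly whenever a delivery fails, while the buffered line almost never does; the buffer costs something to carry, which is exactly the slack the efficiency drive removes.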
So, as a way of talking about that, we're going to next move to the concepts of robustness and resilience, and how these two concepts of robustness and resilience can help us understand, in a sense, the danger that we, like Mickey, may be in as we play around with these systems.