Okay, so we've talked about systems in a very general way. Now I'm going to tell you about a particular kind of system: the complex adaptive system, or CAS, as some people call it. Let me tell you a little bit about what this means. A complex adaptive system is a dynamic network, a system composed of many, many agents. By agents, this can mean people, organizations, firms, or species, depending on the level at which you want to study them. What we're interested in is the dynamic among a whole bunch of these fairly autonomous individuals or agents. We could look at how a set of species behaves in an ecosystem, or at how a bunch of organizations behave in a system. So, it has many, many agents reacting to each other and behaving in a relatively regular way.

And I show this example because what the quarterback needs to do is, in some ways, a perfect illustration of such a system. What agents do is scan their environment. The quarterback is looking at the defensive setup. And they develop schemas representing interpretive and action rules. That is, in your first, second, tenth, fifteenth, twentieth year as a quarterback, you come up with different schemas: I have a strong running game, and if I see the defense formed up in this way, then I know these are my various options. So, you have agents, they do have some autonomy, and they're reacting to the environment and, particularly, to each other.

And these schemas are subject to evolution. You've got an agent, and you've got a condition-action rule: if conditions are such, then I will do such a thing. If the environment seems to be safe, I will bow my head down to eat some grass or drink some water, or whatever it might be. You see how this affects the environment: when I do this, I get this response or that response. The agent picks that up, and then you start making a decision again. Learning is a form of this adaptation, of evolving schemas. When we're learning how to drive, how to read, how to deal with each other, we come up with schemas: in this situation, I should behave this way. For example, one of the hardest things anybody has to learn when learning how to drive is to turn into the skid. That is the kind of schema you can internalize. So, whenever you feel that your car is skidding, you choose that schema, which says you turn into the skid.

Another sports way of looking at this is a bunch of agents in parallel, constantly acting and reacting to what other agents are doing. Each one of these football or soccer players, depending on your preferred language, is watching the others. And they also have schemas, schemas that say: when you see this kind of activity, do this. Now, those schemas aren't 100% perfect, and the outcome isn't totally predictable. But you've learned, in some ways, to adapt to this constantly changing environment. And perhaps you have the goalie all the way back here, shouting out the patterns that he or she sees out on the field, and depending on what those instructions or observations might be, you move around the field in different ways.
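If you want to picture that condition-action idea more concretely, here is a minimal sketch in Python. It is not a model from the lecture; the Schema and Agent classes, the strength weights, and the grazing example are assumptions used purely for illustration. The agent scans its environment, picks the matching rule it trusts most, and reinforces rules that pay off, which is one simple way schemas can evolve.

```python
class Schema:
    """A condition-action rule: IF the condition holds, THEN take the action."""
    def __init__(self, condition, action):
        self.condition = condition  # function: environment -> bool
        self.action = action        # label for what the agent does
        self.strength = 1.0         # how much the agent trusts this rule


class Agent:
    def __init__(self, schemas):
        self.schemas = schemas

    def act(self, environment):
        # Scan the environment and keep only the schemas whose conditions match.
        matching = [s for s in self.schemas if s.condition(environment)]
        if not matching:
            return "do nothing"
        # Prefer the rule that has worked best so far.
        return max(matching, key=lambda s: s.strength).action

    def learn(self, schema, payoff):
        # Schemas evolve: reinforce rules that produced good outcomes.
        schema.strength += payoff


# Illustrative example: a grazing animal deciding whether it is safe to eat.
safe_to_graze = Schema(lambda env: env["predators"] == 0, "graze")
stay_alert = Schema(lambda env: env["predators"] > 0, "watch")

animal = Agent([safe_to_graze, stay_alert])
print(animal.act({"predators": 0}))  # -> "graze"
print(animal.act({"predators": 2}))  # -> "watch"
```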
So, the important thing about complex adaptive systems is that control of the system is highly dispersed; it is decentralized. Even the most autocratic coach is not telling the players where to step every single second. There is no centralized authority. Control of the system is dispersed among the various players or the various parts, and the overall behavior of the system, of the team, if you will, or of the machine, is the result of a huge number of decisions made every moment by many individual agents. This is where it's very different from the kind of linear, machine-like system we might see, let's say, in a factory, which has been preplanned and where every single interaction is already programmed. What you have here is a set of schemas about what you're supposed to do, and the decisions are made individually to produce a result. And the best example of this, although this kind of room no longer really exists, is a stock market. A stock market is not being directed by a single individual saying, yes, let's go up, let's go down. What is actually happening is that millions, perhaps billions, of people are making decisions about individual stocks, and those collective decisions make up the outcome.

And there are three principles we can use for complex adaptive systems. The first follows very much from what we've been talking about: order is emergent, as opposed to predetermined. What does that mean? It means that order arises from these interactions. There is no blueprint that says this is what's going to happen. The order, that is, the organization or pattern of behavior you might be after, is produced every single second, every single moment, in every possible interaction, out of these various decisions. So, you've got an intended strategy, a deliberate strategy: we're going to go down this canyon, and when we get to the canyon, we're all going to go in this particular way. As opposed to an emergent strategy, where each individual soldier or each individual armored vehicle is looking at its environment, notices where the opening is, and goes toward it. What you usually have, for example in military strategy, is some combination of these two. You have some intended strategy: go left, go right, go north, go south, at such a speed and with so much force. But at some point you have to let the squad, the individual soldiers, the platoon, the companies, or the regiments make their own decisions based on their reading of the situation. The optimal approach is some combination of the two.

Okay, second principle: the system's history is irreversible. That is, once a system has done something, once the actions have occurred in the system, you cannot go back. You are stuck, in a sense, with that decision. There is no rewind button, very much as in life. The decisions you have made, the history of your life, the history of the system, are irreversible. You cannot take back the substances you used, the cigarettes you smoked, the food you ate, or the education you got. So yes, the system keeps moving through all these various states, but it can't go backwards. It can't say, oh, let's have a do-over.
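To make dispersed control and the missing rewind button concrete, here is a toy sketch in Python. Nothing in it comes from the lecture; the trader rule, the price update, and every parameter are invented for illustration. The point is only that no single actor sets the price: it emerges from many independent buy-or-sell decisions, and each day's price builds on the previous day's, so the run's history is baked into where it ends up.

```python
import random

random.seed(42)  # fix the seed so the illustration is repeatable


def trader_decision(price, belief):
    """Each trader compares the current price to a private belief about value."""
    return +1 if price < belief else -1  # +1 = buy, -1 = sell


def simulate(n_traders=1000, n_days=50, start_price=100.0):
    # Every trader starts with a slightly different belief about the "right" price.
    beliefs = [random.gauss(100, 10) for _ in range(n_traders)]
    price = start_price
    history = [price]
    for _ in range(n_days):
        # The aggregate of many individual decisions moves the price; no one directs it.
        net_demand = sum(trader_decision(price, b) for b in beliefs)
        price += 0.01 * net_demand  # small move per unit of net demand
        # Beliefs drift a little as traders react to what just happened.
        beliefs = [b + random.gauss(0, 0.5) for b in beliefs]
        history.append(price)  # the path so far is carried along; there is no undo
    return history


prices = simulate()
print(f"start: {prices[0]:.2f}  end: {prices[-1]:.2f}")
```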
No, that history is embedded, in a sense, inside the system. Because of this, the future of the system is often unpredictable. You might take some actions with a car, some combination of the gears, the clutch, and the brake, and you might have some expectations about which way it's going to go. But depending on the surface, the car, the condition of the tires, et cetera, it might go in all sorts of ways. So, because complex adaptive systems are constantly being composed by these individual agents, and because every decision at time T+1 is partly based on the conditions that existed at time T and on what happened before, exact prediction is, by definition, impossible. You can talk about probabilities, you can talk about likelihoods, but you cannot predict with 100% certainty how a system is going to respond one way or the other.

And, again, think about it: if we were watching sporting events, could you get the result without actually running the event? You could take some measures of the various teams, their height, weight, ability, leg strength, health on that particular day, et cetera, program it all into a computer, and pick the winner. No. What we do is work with probabilities, and we bet on those probabilities. But the final outcome of the game, or of an election, speaking of what's been going on in November, no, you cannot predict that. There are just too many factors involved.

Now, in order to better understand this, let's go a little bit deeper into complexity theory. And, again, this is not about complicated theory; this is about complexity theory. And I'll talk a little bit about that.
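Before going deeper, here is one way to picture the point about probabilities versus prediction in code. The scoring model below is a made-up stand-in, not a real sports or forecasting model; the team strengths and the noise level are arbitrary assumptions. You cannot compute the winner of any single game from the inputs, but you can replay the toy model many times and estimate how often each side wins.

```python
import random

random.seed(0)  # repeatable illustration


def play_match(strength_a, strength_b):
    """One simulated match: each side's score is its nominal strength plus noise."""
    score_a = strength_a + random.gauss(0, 10)
    score_b = strength_b + random.gauss(0, 10)
    return "A" if score_a > score_b else "B"


def win_probability(strength_a, strength_b, n_runs=100_000):
    # Replay the toy match many times and count how often A comes out ahead.
    wins_a = sum(play_match(strength_a, strength_b) == "A" for _ in range(n_runs))
    return wins_a / n_runs


# A slightly stronger team wins more often, but any single game stays uncertain.
print(f"P(A wins) ~ {win_probability(78, 72):.2f}")
```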