Hi. Welcome back. I want to conclude our discussion and analysis of Lyapunov functions, and I want to do so in a particular way: by contrasting Lyapunov functions with Markov processes, which we previously studied. Because Markov processes also went to equilibria, but there are some fundamental differences between the equilibria we described with Lyapunov functions and the equilibria we described with Markov processes. Now remember the broader context here: when we look out there in the world at a system, there are lots of things that can happen. The system can go to equilibrium, it can be ordered, it can be random, or it can be complex. Those are the four things we can get, and both Markov processes and Lyapunov functions gave us conditions under which we can say for sure that a system is going to go to equilibrium. For a Lyapunov function it was really simple, right? There's a function F that has a maximum value (or, for a decreasing process, a minimum value). If the process isn't at equilibrium, F goes up by at least some fixed amount k. Since F is bounded above and keeps going up by at least k, the process has to stop. Really simple idea. So it's a way, if we can construct the function, to say that the system is going to equilibrium. Now a Markov process was a very different thing. The system moved among some finite number of states. They could be high values, low values, whatever, and it moved between those states. Our assumptions were that the probabilities of moving between those states stayed fixed over time, and that it was possible to get from any one state to any other; that was assumption three. And then the fourth assumption was that there were no simple cycles, so the system doesn't just go ABC, ABC, ABC. Given those assumptions, we had the Markov Convergence Theorem, which said the system goes to an equilibrium. Remember, it goes to a stochastic equilibrium.
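As a rough sketch of that convergence theorem in code: the transition matrix below is hypothetical, chosen so that its long-run distribution works out to the 1/2, 1/4, 1/4 example used in this lecture.

```python
# Hypothetical 3-state chain over states A, B, C; row i gives the fixed
# probabilities of moving out of state i. Every state is reachable from
# every other, and there is no simple cycle.
P = [[0.6, 0.2, 0.2],
     [0.4, 0.4, 0.2],
     [0.4, 0.2, 0.4]]

def step(dist):
    """One transition: new_j = sum over i of dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

def long_run(dist, n=500):
    """Apply the fixed transition rule many times."""
    for _ in range(n):
        dist = step(dist)
    return [round(x, 6) for x in dist]

# History doesn't matter: start all-in-A or all-in-C, land in the same place.
print(long_run([1, 0, 0]))  # [0.5, 0.25, 0.25]
print(long_run([0, 0, 1]))  # [0.5, 0.25, 0.25]
```

Both starting points converge to the same stochastic distribution over the three states, which is the uniqueness part of the theorem.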
So it goes to some equilibrium: if the states are A, B, and C, it may go to an equilibrium like 1/2, 1/4, 1/4. Half the time it's in A, a quarter of the time it's in B, a quarter of the time it's in C. So it's a stochastic equilibrium. The system keeps churning; the equilibrium is the percentage of time it spends in each of those three states. And moreover, that equilibrium is unique. So if I have a Markov process, history doesn't matter, the initial point doesn't matter, none of those things matter. It's going to go to this stochastic distribution over these three states regardless of where you start. Very different from a Lyapunov function. A process with a Lyapunov function could be highly path dependent, and it could also depend a lot on the initial conditions. So it could depend on where you start and where you go, and there could be many, many equilibria. There's no reason to assume the equilibrium is going to be unique if you put a Lyapunov function on a process. There could be lots of equilibria; it isn't necessary, but there could be. The second thing is, it's not a stochastic equilibrium. It's not half the time in A, a quarter of the time in B, a quarter of the time in C. It's a fixed point: a fixed allocation of places where people shop, fixed time periods when people shop, a fixed set of trades among people. The system stops, whereas in a Markov process the system keeps churning. So this structure we had for Lyapunov functions, a bounded function that keeps going up, is fundamentally different from the structure we had for Markov processes. Both go to equilibrium, but they go to different types of equilibrium. Let's remind ourselves of some of the things we've learned, though, about these processes on which we can place Lyapunov functions. First: if you can construct the Lyapunov function, then the process goes to equilibrium. If you can't, that doesn't mean it doesn't; but if you can, then it does.
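The "bounded above, goes up by at least k" argument can be sketched directly. All the numbers here (F_MAX, K, the random gains) are hypothetical stand-ins:

```python
import random

F_MAX = 100.0   # assumed upper bound on the Lyapunov function F
K = 1.0         # assumed minimum gain per non-equilibrium step

def steps_to_stop(f0):
    """Run an abstract process whose every step raises F by at least K."""
    f, steps = f0, 0
    while f + K <= F_MAX:                       # an improving step still fits
        f = min(F_MAX, f + K + random.random())  # gain at least K, capped at the bound
        steps += 1
    return steps                                # no step left: equilibrium

# Because F is bounded above and rises by at least K per step, the process
# must halt within (F_MAX - f0) / K steps, no matter how the gains fall out.
print(steps_to_stop(0.0) <= (F_MAX - 0.0) / K)  # True
```

That final bound is the whole convergence argument in one line: a bounded function can't keep rising by a fixed amount forever.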
That's the first thing. So when you look at a process, one thing you might do is just sit there and think: huh, can I place a Lyapunov function on this? In the case of my student with the chairs, yes, she could. In the case of the offices, no, she couldn't. So she could say with great conviction, "hey, let's do this trading thing with the chairs," but she could be skeptical of whether the same procedure would work with the offices, because she couldn't think of a Lyapunov function. Second: if you can write down a Lyapunov function, you can figure out how long the process is going to take. You can say: this is going to go to equilibrium, and it's going to go pretty fast. Or you can say: it's going to go to equilibrium, but it could take a long time. So you can bound how long it's going to take to get to equilibrium. That's also good. Third thing, and we just talked about this: that equilibrium need not be unique or efficient. There could be many equilibria, there could be bad equilibria; all you know is that it's going to go to some equilibrium. Proving it's a good equilibrium requires other techniques, a deeper analysis. All the Lyapunov function will tell you is whether it's going to go to an equilibrium, and how fast. And then, last: the reason a system won't go to equilibrium is because there are externalities. And I can be even more specific here: it's externalities pointing in the other direction, pointing opposite. If you're trying to increase happiness and the externalities cause people to become less happy, that's going to keep the system churning. If you're trying to minimize waiting time and the externalities increase waiting time, that's going to keep the system churning. So it's not just externalities, it's externalities that point in the opposite direction. That's what causes a system not to have a Lyapunov function, and that's what makes it possibly undecidable.
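That third lesson, convergence without uniqueness, shows up in even a tiny hypothetical example: a greedy process climbing a happiness landscape with two peaks. Happiness itself is the Lyapunov function (bounded above, and every move raises it), so the process must stop, but where it stops depends on where it starts:

```python
def happiness(x):
    # Hypothetical landscape with two local maxima, at x = 2 and x = 8.
    return -min((x - 2) ** 2, (x - 8) ** 2)

def climb(x):
    """Greedy hill-climbing in unit steps: only take moves that raise happiness."""
    while True:
        best = max((x - 1, x, x + 1), key=happiness)
        if best == x:      # no improving move left: a fixed point
            return x
        x = best

print(climb(0), climb(10))  # 2 8: different starts, different equilibria
```

Both runs halt, as the Lyapunov argument guarantees, but they halt at different fixed points, and nothing in the argument says either one is the good equilibrium.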
So let's think back to the HOTPO case, the Collatz problem. Remember, HOTPO stood for "half or triple plus one": some of the time the value was going down, halving, and other times it was going up, tripling and adding one. So we had a system that goes up and down, and for systems that go up and down, you can't necessarily say they're going to go to equilibrium. It's those externalities pointing in the opposite direction that cause a system that's trying to go down to go up, or a system that's trying to go up to go down. And it's externalities that prevent equilibrium. That's the same lesson we learned, remember, from the Langton lambda in that very simple cellular automaton model. When we have externalities, when one person's action or one cell's action depends on the actions of others, so if my happiness depends on other people's actions, the system is likely to churn. When what I do is unaffected by other people, so my happiness is unaffected by the actions of others, then I'm likely to get a system that equilibrates. That's it. You know [laugh], we're done now with Lyapunov functions. There's a lot of really cool stuff here, and it's a really nice framework for thinking about systems. You set something loose and you think: oh boy, have I just set something loose that's going to go crazy? Or have I set something loose that's just going to lead, very smoothly, to a nice equilibrium? One way to get some insight into that is through Lyapunov functions. Now, what's also nice is that by looking at Lyapunov functions and comparing them to some of our other models, like Markov processes and the Langton model, we begin to see how having multiple models in our heads enables us to understand some of the richness we see out there in the world, and to have deeper understandings of the processes we see. To understand, say, that this process is going to equilibrium because it's a Markov process, and it's a stochastic equilibrium.
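For reference, the HOTPO rule itself is tiny to write down; the hard part, whether it reaches 1 from every starting number, is the open Collatz conjecture, so the loop below is only known to halt for starting values that have been checked:

```python
def hotpo_path(n):
    """Half Or Triple Plus One: halve evens, triple-and-add-one odds."""
    path = [n]
    while n != 1:   # termination for all n is exactly the open question
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

# The trajectory goes up and down, so the value itself is no Lyapunov function.
path = hotpo_path(7)
print(path[:5])   # [7, 22, 11, 34, 17]
print(max(path))  # 52, well above the starting value of 7
```

Starting from 7, the value climbs as high as 52 before falling to 1, which is exactly why no one has found a Lyapunov function for this process.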
And this process, an exchange market, is going to equilibrium because there's a Lyapunov function on it: happiness is going up. So what you get is different processes going to equilibrium for very different reasons, and having different tools for understanding why equilibrium exists is a very useful thing for making sense of the world. Which is one reason why we model. Okay, thanks.