Okay, that's pretty much the end of today's lecture. We talked about nonlinear programs, and we talked about gradient descent: at any point, you look at the directions around you, pick the one that improves the objective the fastest, use the gradient to find that direction, take a step, and keep iterating. If you are not satisfied with this first-order approximation, you can use a second-order approximation instead; that's Newton's method. At each point, you build a second-order approximation of the function, jump to the lowest point of that approximation, and keep iterating. We also talked about how to use [inaudible] to solve real nonlinear problems, so you don't just have the theory; you also have a way to do the implementation and see what to do in practice.

When you combine the two parts, the theory part and the implementation part, maybe that helps you understand this particular screenshot. In this screenshot you can see iterations, and I guess you now know what that means: whether you use gradient descent or Newton's method, they are iterative algorithms. You go from here to there, and then you do another iteration. Now, in this particular example there are constraints, and for a constrained optimization problem, plain Newton's method or gradient descent does not work directly. But in any event, the algorithms modified from Newton's method or gradient descent are pretty much all iterative, and that's why you see iterations. In this particular execution of the algorithm, we ran 12 iterations to get to an optimal solution. The first sketch below walks through both methods on a toy problem so you can see this kind of iteration log.

You may also notice the terms "primal" and "dual" and be a little curious about what they mean. Well, we haven't told you what primal and dual are. If you are really interested, we need to ask you to wait a few weeks and go to course three of this series, where we talk about the theory in more detail; there we tell you what the primal is, what the dual is, and what the theory of duality says. Still, you may take a look at the numbers in the primal column: the last few numbers are pretty much your optimal solution. In this particular problem we are trying to minimize the total risk. Initially we are somewhere with a large primal value, where the risk is too high, so we run many iterations to cut the risk down, getting to lower-risk solutions that are better and better, until we reach an optimal solution.

The interesting thing is that for these complicated constrained nonlinear problems, the way a solver tells you "this is optimal" is by looking at the primal value and the so-called dual value. When the two numbers coincide, when they meet each other, we say: okay, this is optimal. Again, this is not something I can explain fully now; if you are really interested, you need to go to course three, which talks about the theory. The second sketch below mimics this kind of primal/dual log on a tiny problem.

The last thing I want to point out is that within this primal column, the numbers go up and down. For example, this particular number, which we get after one iteration, is actually better than your optimal solution.
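To make the recap concrete, here is a minimal sketch of both methods on a toy problem; the objective, step size, and starting point are all made up for illustration, not the model from the screenshot. Both runs print one line per iteration, which is exactly the kind of log you are seeing.

```python
import numpy as np

# Toy smooth convex objective (made up for illustration):
#   f(x) = log(e^x1 + e^-x1) + x2^2,  minimized at (0, 0) with value log(2).
def f(x):
    return np.log(np.exp(x[0]) + np.exp(-x[0])) + x[1] ** 2

def grad(x):
    return np.array([np.tanh(x[0]), 2 * x[1]])       # first-order information

def hess(x):
    return np.diag([1 - np.tanh(x[0]) ** 2, 2.0])    # second-order information

def gradient_descent(x, step=0.5, iters=12):
    for k in range(iters):
        x = x - step * grad(x)        # move along the steepest-descent direction
        print(f"GD     iter {k + 1:2d}  f = {f(x):.8f}")
    return x

def newton(x, iters=12):
    for k in range(iters):
        # jump to the minimum of the local second-order (quadratic) model
        x = x - np.linalg.solve(hess(x), grad(x))
        print(f"Newton iter {k + 1:2d}  f = {f(x):.8f}")
    return x

x0 = np.array([0.8, 1.5])   # a start close enough for plain Newton to behave
gradient_descent(x0)
newton(x0)
```

You should see gradient descent shave the objective down a little each iteration, while Newton's method reaches log(2) ≈ 0.693 within a few iterations; that's the first-order versus second-order story.

As for the primal and dual columns, here is a second minimal sketch. It uses dual ascent on a tiny equality-constrained problem because everything has a closed form; this is one simple method, not necessarily what the solver in the screenshot runs, and the problem data are assumptions. Watch two things: the primal value can sit below the optimal value (0.1 here) while the iterate is still infeasible, and the run is declared optimal exactly when the primal and dual numbers meet.

```python
import numpy as np

# Made-up problem:  minimize 0.5 * ||x - c||^2  subject to  a @ x == b.
a = np.array([1.0, 2.0])
b = 4.0
c = np.array([3.0, 1.0])

lam = 0.0                                # dual variable (Lagrange multiplier)
for k in range(30):
    x = c - lam * a                      # minimizes the Lagrangian for this lam
    primal = 0.5 * (x - c) @ (x - c)     # primal objective at the current iterate
    dual = primal + lam * (a @ x - b)    # dual objective g(lam), a lower bound
    print(f"iter {k + 1:2d}  primal = {primal:.6f}  dual = {dual:.6f}")
    # the solver's optimality test: gap closed AND constraint satisfied
    if abs(primal - dual) < 1e-6 and abs(a @ x - b) < 1e-6:
        print("primal and dual meet: optimal")
        break
    lam += 0.3 * (a @ x - b)             # gradient ascent step on the dual
```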
This value is actually lower than the value of your optimal solution. Why is that? It is because these advanced algorithms do not just search within the feasible region. For a nonlinear problem you could restrict the search to the feasible region; that's possible, but it may not be the most efficient way of doing things. Researchers found that if you allow the search to move outside the feasible region and then come back, with a careful design, that may help you find solutions even faster. Obviously that's a very complicated, very difficult topic, and there's no way for me to cover it properly here, though the last sketch below shows one simple version of the idea.

I just want to remind you that even after gradient descent and Newton's method, there are still many things to learn. If you are interested, course three on theory will tell you more, and there may be even more advanced courses that help us design better algorithms when we have a nonlinear program, when we have constraints, when we face those challenges. We are almost done with this lecture, and almost done with this course, but there are still many things that you may learn, you may design, and you may try. That's the end for today. Thank you.
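As a parting sketch of this "leave the feasible region, then come back" idea, here is a classical quadratic-penalty method on a made-up problem. Real solvers use far more sophisticated infeasible-path methods (for example, primal-dual interior-point algorithms); the penalty weights and step sizes below are assumptions for illustration only.

```python
import numpy as np

# Made-up problem:  minimize (x1-2)^2 + (x2-2)^2  subject to  x1 + x2 == 1.
# The constrained optimum is (0.5, 0.5) with objective value 4.5.
def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

def violation(x):
    return x[0] + x[1] - 1                # equality-constraint residual

def penalized_grad(x, mu):
    # gradient of the penalized objective f(x) + (mu / 2) * violation(x)^2
    return 2 * (x - 2) + mu * violation(x) * np.ones(2)

x = np.array([2.0, 2.0])                  # the unconstrained minimum: infeasible!
for mu in [1.0, 10.0, 100.0, 1000.0]:     # tighten the penalty stage by stage
    step = 0.9 / (1.0 + mu)               # step size kept stable for this mu
    for _ in range(100):
        x = x - step * penalized_grad(x, mu)
    print(f"mu = {mu:6.0f}  f = {f(x):.4f}  infeasibility = {violation(x):+.5f}")
```

The printout shows that the iterates stay infeasible the whole time and only drift back toward the constraint as the penalty weight mu grows, and the objective values along the way are below the true constrained optimum of 4.5, just like that early primal value in the screenshot.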