If that makes sense to you, now we are able to introduce how to apply the same idea to solve single-variable nonlinear optimization problems, okay? Let's say our objective function f is twice differentiable. What we actually want is to find an x bar such that the first-order derivative is 0, okay? We want that because we know it is something we need: if a function has a minimum point, then at that point the gradient must be 0, okay? So if that's the case, then pretty much we want to look for a point where the derivative is 0, because the lowest point can only happen at such a point. So f'(x) = 0 is the nonlinear equation we want to solve, okay? It's just that on the previous page we called the equation f(x bar) = 0; now it is f'(x bar) = 0, and that's the only difference.

For that, we may still use the Newton's method we introduced on the previous page to solve this nonlinear equation. In each iteration, we again take the linear approximation of f' at xk. The linear approximation of f' now naturally requires the second-order derivative, okay? That's how you get the new equation to be solved. So to approach x bar, what we do is simply apply the same procedure to this particular function: we first take f' and then apply Newton's method to solve for the first-order (stationary) point, and that's the whole idea. We keep iterating until the magnitude of the first-order derivative, |f'(xk)|, is less than some small epsilon, or until the difference between two consecutive points is small enough, okay?

Don't forget, though, that f'(x bar) = 0 does not guarantee a global minimum. Just looking for the stationary point, the first-order point, cannot guarantee a global minimum. At most it gives you a condition that is required for a point to be a local minimum; the point may even be a local maximum, or it may be only a local minimum but not a global one, okay? So this is not guaranteed: the first-order point is not guaranteed to be a global minimum. What we would need, for example, is to show that our function f satisfies some additional conditions, for example convexity. This is something we haven't told you at this moment, and we also don't plan to cover it here. If you are interested, you may take a look at convexity and convex functions by yourself online, or you may wait until we formally introduce it. Here, all you need to know is that the first-order condition is what we search for with Newton's method, but it does not always guarantee a global optimum.

So that was one interpretation: we try to find x bar satisfying the condition that the first-order derivative is 0. Alternatively, you may also say it this way: what we try to do is a quadratic approximation of f at the current point, okay? What does that mean? It means that at any point, for example here, I am going to do a second-order approximation of our original function, okay? The second-order approximation may be obtained by doing the second-order Taylor expansion: f(xk) + f'(xk)(x - xk) + (1/2) f''(xk)(x - xk)^2, where the difference between x and xk is squared in the last term. So this is a very typical Taylor expansion, okay? If we have that, then what do we do? We move from xk to xk+1 by moving to the global minimum of the quadratic approximation (which exists when f''(xk) > 0).
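To make the iteration concrete, here is a minimal sketch of this single-variable Newton procedure. The function name newton_minimize, the example objective, the starting point, and the tolerance are my own illustration and not from the lecture; the sketch assumes you can supply f' and f'' directly.

```python
def newton_minimize(fp, fpp, x0, eps=1e-8, max_iter=100):
    """Newton's method for 1-D minimization: look for x with f'(x) = 0.

    fp  : first derivative f'
    fpp : second derivative f''
    x0  : starting point
    Stops when |f'(xk)| < eps or two consecutive iterates are close enough.
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - fp(x) / fpp(x)   # Newton step: x_{k+1} = x_k - f'(x_k)/f''(x_k)
        if abs(fp(x_new)) < eps or abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x


# Example: f(x) = x^4 - 3x^2 + 2, so f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6.
# Starting near x = 2 converges to the stationary point sqrt(1.5) ~ 1.2247.
# Note this only finds a first-order point; Newton's method by itself gives
# no global guarantee without extra conditions such as convexity.
x_star = newton_minimize(lambda x: 4 * x**3 - 6 * x,
                         lambda x: 12 * x**2 - 6,
                         x0=2.0)
print(x_star)
```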
So we are going to move here, okay? And this is actually equivalent to solving the same equation as before, all right? If you go back and forth between the previous page and this page, you will see that the two ideas, the two interpretations, are actually equivalent, okay? Pretty much, in each iteration we now say that we want to find the x that minimizes the second-order approximation. If we do that, then the approximation has an x term and an x-squared term, and you typically see that this kind of function can be minimized by taking its first-order derivative, okay? The first-order derivative of this quadratic objective must be 0, and that gives you xk+1 = xk - f'(xk)/f''(xk). So this is your updating equation for Newton's method, and this is pretty much the conclusion we need.

One thing to keep in mind is that here, unlike gradient descent, you don't need to choose any kind of step size. No, there is no such thing as a step size in Newton's method, because you always just build the second-order approximation from the second-order derivative and then move to its lowest point, and that's the idea. Later, we'll take a look at how this may be applied to solve some problems.
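For reference, here is that equivalence written out as a short derivation (my own LaTeX transcription of the step described above, using the same xk notation): minimizing the quadratic model and taking one Newton root-finding step on f'(x) = 0 give the same update.

```latex
% Quadratic (second-order Taylor) model of f around x_k:
q(x) = f(x_k) + f'(x_k)\,(x - x_k) + \tfrac{1}{2} f''(x_k)\,(x - x_k)^2

% Set its first-order derivative to zero (assuming f''(x_k) > 0, so q has a minimum):
q'(x) = f'(x_k) + f''(x_k)\,(x - x_k) = 0
\quad\Longrightarrow\quad
x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}
```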