So, the Taylor series is a way of approximating a function around a central point x. Here you see the definition: f(x + dx) is equal to f(x) plus a sum of powers of dx multiplied by the first, second, third derivatives and so on, each divided by the corresponding factorial.

Now, I would like to start with an example: approximating the cosine function defined between -10 and 10. Here you see first the function itself, and then the Taylor series approximation of that function with an increasing number of terms, and we see that the approximation gets better and better the more terms we add. So that's a very powerful method that's actually used almost everywhere in the natural sciences.

But does that help us understand the accuracy of the finite-difference approximation? Yes, it does. Let's start with the original definition. We have f(x + dx) on the left-hand side. We subtract f(x) and divide by dx, and then we are left on the right-hand side with f'(x), the first derivative, plus some additional terms. Now, what happens if we neglect these additional terms? Those terms start at first order in dx, so we describe them with O(dx), read as "terms of order dx". If we neglect them, then, as we have seen before, we have to replace the equal sign by an approximate sign, but now we have a quantitative answer to the question of how accurate we are: we are accurate to first order in dx. That's very important later for quantifying, in general, the accuracy of numerical solutions to partial differential equations, at least for the finite-difference method.

So, we have learned a way of approximating first derivatives. But sometimes, actually quite often, we also have higher derivatives, second or third derivatives, in the equations describing our physical phenomena. So, what about that situation? Let's start with the second derivative. Again, let's go to a simple case. We have a function, shown here, and we now know how to estimate the first derivative at the points x, x + dx, or x - dx. So, if we have already calculated an approximation of the first derivative at those points, can we not simply take the difference of those first derivatives, calculated at two different points, and divide again by the grid increment, that is, the distance between those two points, dx or 2dx, to obtain a second derivative? That's what we're going to do next.

So, let's take the three points that we see here. First, use the two right-hand points to calculate a first derivative, (f(x + dx) - f(x)) / dx, and then calculate another first derivative to the left, (f(x) - f(x - dx)) / dx. Actually, those two derivatives are defined at the points x + dx/2 and x - dx/2, but let's not worry about this for the moment. Knowing this, we can now write down the difference between these first derivatives and divide by dx, because that's the distance between the two points where we calculated them. With a little bit of algebra, we end up with an approximation for the second derivative: the second derivative at point x is approximately (f(x + dx) - 2 f(x) + f(x - dx)) / dx^2. Again, there must be an approximate sign because this is certainly not the exact second derivative.
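To make the cosine example above concrete, here is a minimal Python/NumPy sketch, not from the lecture itself; the expansion point x0 = 0, the grid of 1000 points, and the particular term counts are illustrative choices.

```python
import numpy as np
from math import factorial

x0 = 0.0                        # expansion point (illustrative choice)
x = np.linspace(-10, 10, 1000)  # the interval from the lecture example

def taylor_cos(x, x0, n_terms):
    # Taylor series of cos around x0, keeping n_terms terms.
    # The derivatives of cos cycle through cos, -sin, -cos, sin.
    derivs = [np.cos(x0), -np.sin(x0), -np.cos(x0), np.sin(x0)]
    approx = np.zeros_like(x)
    for k in range(n_terms):
        approx += derivs[k % 4] * (x - x0) ** k / factorial(k)
    return approx

# The maximum error over the interval shrinks as we keep more terms.
for n in (10, 20, 30, 40):
    err = np.max(np.abs(taylor_cos(x, x0, n) - np.cos(x)))
    print(f"{n:2d} terms: max error = {err:.2e}")
```

Far from the expansion point a Taylor series needs many terms, which is why the error over the full interval from -10 to 10 only becomes small once a few dozen terms are kept.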
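The statement that the forward difference is accurate to first order in dx can also be checked numerically. A minimal sketch follows, assuming we test on cos, whose exact derivative -sin we know, at an illustrative point x = 1 with a handful of illustrative dx values: halving dx should roughly halve the error, so the ratio error/dx stays roughly constant.

```python
import numpy as np

def f(x):
    return np.cos(x)    # test function (illustrative choice)

def fprime(x):
    return -np.sin(x)   # its exact first derivative

x = 1.0                 # evaluation point (illustrative choice)

for dx in (0.1, 0.05, 0.025, 0.0125):
    approx = (f(x + dx) - f(x)) / dx   # forward difference
    err = abs(approx - fprime(x))
    print(f"dx = {dx:6.4f}   error = {err:.2e}   error/dx = {err/dx:.3f}")
```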
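The same kind of check works for the three-point second-derivative operator we just derived; again, cos, the point x = 1, and the dx values are only illustrative choices.

```python
import numpy as np

def f(x):
    return np.cos(x)    # test function (illustrative choice)

def fsecond(x):
    return -np.cos(x)   # its exact second derivative

x = 1.0                 # evaluation point (illustrative choice)

for dx in (0.1, 0.05, 0.025):
    # (f(x + dx) - 2 f(x) + f(x - dx)) / dx^2
    approx = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2
    err = abs(approx - fsecond(x))
    print(f"dx = {dx:6.4f}   error = {err:.2e}")
```

The error shrinks quickly as dx decreases, which confirms that the stencil approximates the second derivative; exactly how accurate it is will follow from the Taylor-series argument in the next step.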
But in the next step, we're going to learn a very different, elegant, and fun way of deriving these operators, as we call them (another name for them is finite-difference stencils), using, again, the Taylor series.