So, let's simply write down the Taylor series for these three points here. So, f of x plus dx equals f of x plus all the terms that follow. We also write down the trivial identity f of x equals f of x, you will see later why, and the Taylor series for the point x minus dx: f of x minus dx equals f of x minus dx times the first derivative, and so forth, as you see in the equations here. What we are actually seeking are real-valued coefficients with which we multiply the function values at x plus dx, at x, and at x minus dx, such that the weighted sum gives us approximations of first derivatives, second derivatives, or even higher derivatives. So, let's simply multiply each of those equations by a coefficient: the first one by a, the second one by b, the third one by c, and sum them up. On the left-hand side we now have a weighted sum of the function values, and on the right-hand side we have the equivalent expression in terms of the Taylor series. If we collect the terms multiplying the function value at x, the first derivative, and the second derivative, we are left with the coefficient combinations a plus b plus c, a minus c, and a plus c, as you see here. Now, we can do something that is called comparing coefficients to obtain conditions under which we get approximations of the function value itself, the first derivative, or the second derivative. So, what are those coefficients a, b, c? If we look at the right-hand side of the equation, we can see that if we require a plus b plus c equals zero, the first term vanishes. If a plus c equals zero, the third term vanishes, and we are left with something that contains the first derivative multiplied by dx times a minus c. So, can we reformulate that to find those coefficients? That can be done using matrix-vector notation, as we see here. We simply write those conditions formally: a plus b plus c equals zero, a minus c equals, in this case, one over dx, and a plus c equals zero.
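The equations referred to above (shown on the slide) can be sketched as follows, truncating the expansions after the second-order term:

```latex
\begin{align*}
f(x+\mathrm{d}x) &= f(x) + \mathrm{d}x\,f'(x) + \tfrac{\mathrm{d}x^2}{2}\,f''(x) + \dots \\
f(x)             &= f(x) \\
f(x-\mathrm{d}x) &= f(x) - \mathrm{d}x\,f'(x) + \tfrac{\mathrm{d}x^2}{2}\,f''(x) - \dots
\end{align*}

% Weighted sum with coefficients a, b, c:
\begin{align*}
a\,f(x+\mathrm{d}x) + b\,f(x) + c\,f(x-\mathrm{d}x)
  = (a+b+c)\,f(x) + (a-c)\,\mathrm{d}x\,f'(x)
  + (a+c)\,\tfrac{\mathrm{d}x^2}{2}\,f''(x) + \dots
\end{align*}

% Conditions for the first derivative in matrix-vector form:
\begin{equation*}
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & -1 \\ 1 & 0 & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
=
\begin{pmatrix} 0 \\ 1/\mathrm{d}x \\ 0 \end{pmatrix}
\end{equation*}
```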
This is now written in matrix-vector form, as you see here, where the system matrix A multiplies the vector of unknown coefficients a, b, c, for which we use the letter w, for weights. On the right-hand side is the desired solution vector, in this case zero, one over dx, and zero. This system can simply be solved by matrix inversion to obtain the coefficients a, b, c: a is, in that case, one over two dx, b equals zero, and c equals minus one over two dx. If we put that back into the left-hand side of our equation, we can easily see that we get the definition of the first derivative, f of x plus dx minus f of x minus dx, divided by two dx, and that is the classic central finite-difference approximation. So, let's see if we can use the same approach to calculate the weights, the coefficients, for the second derivative: a plus b plus c has to be zero, a minus c has to be zero, and a plus c has to be two divided by dx squared. Now, we can develop the system matrix for this, solve the matrix inverse problem, and here we go. We obtain a, b, and c, in this case a equals one over dx squared, b equals minus two over dx squared, and c equals one over dx squared, and if we put them back into the left-hand side of the equation, we recover the classic definition of the second-derivative finite-difference approximation. That is a very elegant way of coming up with these operators. Actually, we will see in the next step that this approach helps us to get more accurate operators by using more points than simply these three, for example for the second derivative.
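The two linear systems described above can be solved numerically as a quick check. Here is a minimal sketch with NumPy; the step size dx = 0.1 and the test function sin are arbitrary example choices, not from the lecture:

```python
import numpy as np

dx = 0.1  # example grid spacing (arbitrary choice)

# Rows match the coefficients of f(x), f'(x), and f''(x)
# in the weighted Taylor sum a*f(x+dx) + b*f(x) + c*f(x-dx).
A = np.array([[1.0, 1.0,  1.0],   # a + b + c
              [1.0, 0.0, -1.0],   # a - c
              [1.0, 0.0,  1.0]])  # a + c

# First derivative: right-hand side (0, 1/dx, 0)
w1 = np.linalg.solve(A, np.array([0.0, 1.0 / dx, 0.0]))
# w1 = [1/(2 dx), 0, -1/(2 dx)]  ->  central-difference weights

# Second derivative: right-hand side (0, 0, 2/dx**2)
w2 = np.linalg.solve(A, np.array([0.0, 0.0, 2.0 / dx**2]))
# w2 = [1/dx**2, -2/dx**2, 1/dx**2]

# Apply the weights to f = sin at x = 1.0: the weighted sums
# approximate cos(1.0) and -sin(1.0), respectively.
x = 1.0
fvals = np.sin(np.array([x + dx, x, x - dx]))
print(w1 @ fvals, np.cos(x))   # first-derivative approximation
print(w2 @ fvals, -np.sin(x))  # second-derivative approximation
```

The same recipe extends directly to more than three points: a longer stencil just means a larger system matrix, which is exactly the generalization mentioned at the end of this step.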