So, now we've learned how to calculate a first derivative using the finite difference approximation. Let's see what this looks like in Python code, take a specific function as an example, and see how accurate it is. So, let's go to the Jupyter Notebook. Here we are. This is a notebook where we first calculate a numerical first derivative of a vector containing a certain function, and we compare it with the analytical solution. Then we also look at how the error depends on the space increment, and we will see what exactly we mean by that. A very convenient function, of course, since we will later be dealing with the wave equation, is a sine function. So, we first initialize a space-dependent function sin(kx), where k is the wave number, two Pi divided by the wavelength lambda. Now, how does this look in Python? First, we define a physical domain, which is x in meters. Let's say the maximum x is 10 meters, and we sample this space with 200 points; that's the nx that we see here. That defines our spatial increment, the distance between two grid points, which we always call dx, and which is xmax divided by the number of points minus 1. With the libraries available in Python, we can initialize the vector of space coordinates using the linspace function, as we see here, between 0 and xmax. Then we define a wavelength. Let's start by setting the wavelength to 20 times the grid increment. We will later introduce the concept of the number of points per wavelength; so here, the number of grid increments per wavelength is 20. That defines the wave number, two Pi by lambda, and then we can very simply initialize the function f. Remember, this is a vector: we say f is equal to sine(k times x). Again, x here is a vector and k is a scalar, the wave number, and this returns the function f, which is also a vector.
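The setup described above can be sketched in a few lines of NumPy, roughly as it would appear in the notebook (the exact variable names are assumptions based on the description):

```python
import numpy as np

# Physical domain: x from 0 to xmax metres, sampled with nx points
xmax = 10.0                    # maximum x (m)
nx = 200                       # number of grid points
dx = xmax / (nx - 1)           # grid increment: distance between two grid points
x = np.linspace(0, xmax, nx)   # vector of space coordinates

# Wavelength chosen as 20 grid increments -> 20 points per wavelength
lam = 20 * dx                  # wavelength lambda
k = 2 * np.pi / lam            # wave number k = 2*pi / lambda
f = np.sin(k * x)              # the function f(x) = sin(kx); x is a vector, k a scalar
```

Note that `np.sin` applied to the vector `x` returns a vector of the same length, which is why no loop is needed here.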
Remember, the np. prefix is related to the way we have imported the NumPy library. Now, let's plot that function. You can see it here: this is our sine function defined between 0 and 10, and the amplitude varies between minus 1 and 1. That is of course very well known. Now, let's calculate the derivative of that function numerically using the finite difference approximation, a central difference approximation, as you see here. So, we look at the points to the left and to the right of a central point x, take the difference, and divide by 2dx; that returns an approximation of the first derivative at x. We will compare that with the analytical solution, which of course is very easy to obtain: it is k cosine(kx). In the Python code, it looks as shown here. First, we initialize two vectors of length nx with zeros: the numerical derivative NDER and the analytical derivative ADER. Then we have a loop, and this kind of structure will become very, very important later when we apply this to real partial differential equations. So, we have a loop over space: the index i runs from 1 to nx minus 1, and we write the result into the vector of numerical derivatives at point i, which is the function at i plus 1 minus the function at i minus 1, divided by 2dx. The analytical derivative is simply written into the vector ADER as k times cosine(k times x). We exclude the first and the last elements of these vectors when we calculate the error term, the root-mean-square error, with this line here, in order to avoid problems at the edges, the first and last points, where we do not actually calculate the derivative. So, now we have everything to plot the numerical derivative, which is shown here in blue with the blue crosses, superimposed with the analytical derivative, and we also show the difference.
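A minimal sketch of this derivative comparison, assuming the setup from before and the vector names nder and ader mentioned in the description:

```python
import numpy as np

# Setup as before: 200 points on [0, 10] m, 20 points per wavelength
xmax, nx = 10.0, 200
dx = xmax / (nx - 1)
x = np.linspace(0, xmax, nx)
k = 2 * np.pi / (20 * dx)
f = np.sin(k * x)

# Initialize numerical and analytical derivative vectors with zeros
nder = np.zeros(nx)
ader = np.zeros(nx)

# Central difference over the interior points (edges are excluded)
for i in range(1, nx - 1):
    nder[i] = (f[i + 1] - f[i - 1]) / (2 * dx)

# Analytical derivative of sin(kx) is k cos(kx)
ader = k * np.cos(k * x)

# Root-mean-square error, excluding the first and last points
# where the numerical derivative is not calculated
rms = np.sqrt(np.mean((nder[1:-1] - ader[1:-1]) ** 2))
```

The loop structure here, filling a derivative vector point by point, is the same pattern used later for the partial differential equations.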
So, the first observation is that we seem to be doing a pretty good job calculating the numerical derivative in comparison with the analytical derivative. If we calculate the root-mean-square error across the whole vector, so basically from 0 to 10, we get this value here. The absolute value is not so important on its own; it only becomes relevant if you compare it with another example. You can also see that the difference here is oscillating, and the key question now is: is that accurate enough, for example, for a real simulation, and how can we further investigate the behavior of these finite differences? In order to do that, we introduce the concept of the number of points per wavelength. For a sine function that's very easy, because the wavelength is clearly defined, and it is simply lambda, the wavelength, divided by the grid increment. In our case, remember, it's very easy: we said we want to use 20 points per wavelength, and that's illustrated here again; this is basically just a zoom-in of the previous plot. We can see that one entire wavelength is sampled by, in this case exactly, 20 points. Now an interesting question is: how does the accuracy of the numerical derivative depend on the number of grid points per wavelength? To investigate that, I loop through a number of derivative calculations, incrementally changing the number of grid points per wavelength, calculating the error at the central point of the domain, which is at five meters, and plotting the result. The result is shown here, and this graph contains a very, very important message.
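This convergence experiment can be sketched as follows. The range of points per wavelength (3 to 15) and the error definition (squared relative error in percent, an "error energy") are assumptions based on the description, not the notebook's exact choices:

```python
import numpy as np

# Vary the number of points per wavelength and record the error of
# the central difference at the central grid point (near x = 5 m).
xmax, nx = 10.0, 200
dx = xmax / (nx - 1)
x = np.linspace(0, xmax, nx)
ic = nx // 2                       # index of the central point

errors = []
for nppw in range(3, 16):          # assumed range: 3 .. 15 points per wavelength
    k = 2 * np.pi / (nppw * dx)    # wave number for this sampling
    f = np.sin(k * x)
    nder = (f[ic + 1] - f[ic - 1]) / (2 * dx)   # central difference
    ader = k * np.cos(k * x[ic])                # analytical derivative
    errors.append((nder - ader) ** 2 / ader ** 2 * 100)  # percent "error energy"
```

Plotting `errors` against the number of points per wavelength reproduces the qualitative message of the graph: the error shrinks rapidly as the sampling improves.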
What we see here is the error on the vertical axis; its absolute value is again not so important, though it is actually a percent error, an error energy. The horizontal axis is the number of points per wavelength used to calculate the derivative. As you can see, we start with three points. There the error is very large, and the more points we use to sample the wavelength, the better the estimate of the numerical derivative becomes. This will become very, very important later in the actual simulation tasks, where the question is how many grid points per wavelength should be used to make a simulation accurate. We can't give a universal answer. We can say that above 10 grid points per wavelength, for this one calculation, we seem to be doing a pretty good job and are below 1 percent error. But that alone is not sufficient to decide whether a simulation will be accurate when we also have a time-dependent problem; we will discuss that much later. For the moment, let's conclude. The finite difference approximation provides a pretty good estimate of the first derivative of a function. The accuracy depends on the number of points per wavelength, which is of course an indication of how well we sample the original function, and the more points we use per wavelength, the more accurate the derivative approximation becomes. Now I invite you, and that's the reason why we use these wonderful Jupyter Notebooks, to play around with this little code: for example, change the function, turn the sine function into a Gaussian function, a cosine function, or any other function for which you can easily obtain the analytical derivative to compare, and then see how the first derivative behaves.
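As one way to take up that invitation, here is a hypothetical variation with a Gaussian, f(x) = exp(-a (x - x0)^2), whose analytical derivative is f'(x) = -2a (x - x0) f(x). The width a and centre x0 are assumed values chosen for illustration:

```python
import numpy as np

# Same grid as in the sine example
xmax, nx = 10.0, 200
dx = xmax / (nx - 1)
x = np.linspace(0, xmax, nx)

# Hypothetical Gaussian centred in the domain (assumed parameters)
a, x0 = 1.0, 5.0
f = np.exp(-a * (x - x0) ** 2)

# Central difference, as before
nder = np.zeros(nx)
for i in range(1, nx - 1):
    nder[i] = (f[i + 1] - f[i - 1]) / (2 * dx)

# Analytical derivative of the Gaussian
ader = -2 * a * (x - x0) * f

# RMS error over the interior points
rms = np.sqrt(np.mean((nder[1:-1] - ader[1:-1]) ** 2))
```

Because the Gaussian is smooth and well sampled on this grid, the central difference should again track the analytical derivative closely.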