So, let's apply this powerful technique for calculating derivatives to solving partial differential equations. Again, we make use of the one-dimensional acoustic wave equation, which you can see here. Like we did before, on the left-hand side, we have the second time derivative, and we solve it numerically using a standard three-point second-derivative finite difference operator. On the right-hand side, what remains to be done is to calculate the second space derivative, and for that, we do the following: we Fourier transform our vector containing the pressure values, obtaining P, the spectrum; we multiply it with a second-derivative operator in the wavenumber domain, which is (ik)², that is, minus k squared; and then we inverse transform the result back into the space domain and obtain an exact second derivative. To see how that works in practice, we resort to our Jupyter Notebooks and look at this not only in one dimension but also in two dimensions. So, let's look at how we can implement this in Python code using our Jupyter Notebooks, for the one-dimensional acoustic wave equation as we had before. Actually, as mentioned, all programming environments, whether MATLAB, Python, Maple, or others, usually have libraries for the fast Fourier transform that help you implement these kinds of pseudo-spectral derivative calculations. So, let's see how this works in our Jupyter Notebook. So, here we have, again, the one-dimensional wave equation. On the left-hand side, we always use a finite difference approximation, and on the right-hand side, what we have to do is replace the second space derivative of P by an approximation, and here, we're going to use the Fourier method. For that, we make use of, and this is shown here again in the continuous form, the derivative calculated in the spectral domain. In other words, we multiply the spectrum of the pressure P by (ik)².
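The scheme just described, a three-point finite difference operator in time and an exact Fourier derivative in space, can be sketched as a single extrapolation step. This is a minimal sketch, not the notebook's actual code; the function name time_step and the use of np.fft.fftfreq to build the wavenumber axis are my own choices:

```python
import numpy as np

def time_step(p, p_old, c, dt, dx):
    # One extrapolation step of the 1-D acoustic wave equation:
    #   p_new = 2 p - p_old + dt^2 c^2 d2p/dx2
    # Time: standard three-point second-derivative operator.
    # Space: pseudo-spectral (FFT) second derivative, periodic domain.
    nx = p.size
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)         # wavenumber axis
    d2p = np.real(np.fft.ifft(-k**2 * np.fft.fft(p)))  # (ik)^2 * spectrum
    p_new = 2.0 * p - p_old + (c * dt) ** 2 * d2p
    return p_new, p  # new field, and the field one step back
```

For example, a standing wave sin(x)·cos(ct) on a periodic 2π domain is reproduced to high accuracy, since the only remaining error comes from the time discretization.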
This is a multiplication in the spectral domain, corresponding to a convolution in the space domain. This is written here for continuous space, and now, let's see how it is implemented in a small computer script. That's the subroutine, or the function in Python, that actually returns the second derivative. We first initialize the wavenumber domain. Of course, we have a maximum wavenumber, which is the Nyquist wavenumber pi/dx. So, that's kmax, and that basically means we discretize the positive wavenumbers with nx over two points, half the number of grid points; the other half of the k-vector carries the negative wavenumbers, because the spectrum of a real-valued signal is symmetric. So, the k-vector is initialized as we see here. This requires a bit of care and actually depends on the specific implementation of the fast Fourier transform; one always has to look carefully at how the programming language orders the wavenumbers. So, in this case, it's done as we see here. Then, the calculation of the derivative approximation itself is very elegant. Here, we tap into the FFT library of NumPy. We have our vector f, containing the pressure values. We apply fft to that vector, and that returns ff, the fast Fourier transform of the vector, which is a complex spectrum. That, we multiply with (ik)², as written here. So, if we replace the two here by another number, like one, three, or four, we get the nth derivative in the spectral domain. Then, we do an inverse Fourier transform, applying ifft to ff, which is now the derivative spectrum, and it returns df_num, the exact space derivative at our grid points. This is what we are going to use to solve our partial differential equation numerically. So, we initialize a source as before. I'm not going to go into details here about the setup; you can look at this in more detail yourself.
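A concrete sketch of the function just described might look as follows. The function name is mine, and instead of building the k-vector by hand as the notebook does, I lean on np.fft.fftfreq, which already returns the wavenumbers in the ordering NumPy's FFT expects (positive half first, then the negative half):

```python
import numpy as np

def fourier_derivative_2nd(f, dx):
    # Pseudo-spectral second derivative of a periodic, real-valued vector f.
    nx = f.size
    # Wavenumber vector; the Nyquist wavenumber is pi/dx, and fftfreq
    # returns the positive wavenumbers first, then the negative ones,
    # matching NumPy's FFT ordering.
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ff = np.fft.fft(f)           # complex spectrum of f
    ff = (1j * k) ** 2 * ff      # multiply by (ik)^2 = -k^2
    # Changing the exponent 2 to n gives the nth derivative instead.
    return np.real(np.fft.ifft(ff))  # back to the space domain
```

For band-limited periodic functions this is exact to within roundoff; for instance, the second derivative of sin(x) comes back as -sin(x).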
Now, the nice thing about this Notebook is that we're actually comparing our Fourier implementation with two classic finite difference approximations. One is the three-point finite difference operator, a three-point approximation to the second derivative, as given here. The other, as you may remember, is the substantially more accurate five-point operator, given here. So, let's see what happens if we run our problem. We initialize a source at the left side of our physical domain, and here, you see the waves propagating. At the top, you see the three-point operator; in the middle, in red, the five-point operator; and at the bottom, the Fourier operator. Everything else is left exactly the same: the space increment and the time increment are identical, but we have these different spatial operators. You can see that the Fourier operator, in this case, actually performs substantially better than the finite difference approximations. You can see a lot of numerical dispersion with the three-point operator, less with the five-point operator, and hardly any with the Fourier operator. Now, there's an important message here, and I really want to make sure that you don't misunderstand these results. It doesn't mean the finite difference method is bad, because we actually paid a high price for the much better pseudo-spectral solution: we needed many more floating-point operations to obtain the result. So, there is always a price to pay for accuracy. Now, let's also look quickly at an implementation of this approach in two dimensions. Let's go to another Notebook; you can see this here. If you look at the equation, it's exactly the same, but instead of having only a second derivative with respect to x, we now also have a second derivative with respect to z. It's basically the same kind of function or subroutine that we use; we just have to apply it along the different spatial dimensions.
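The accuracy ordering seen in the animation can also be checked directly on a single derivative, without any time stepping. This sketch, with stencils written via np.roll under a periodic-boundary assumption and variable names of my own, compares the three operators on a Gaussian with a known analytic second derivative:

```python
import numpy as np

def d2_3pt(f, dx):
    # Three-point second-derivative stencil, periodic boundaries.
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def d2_5pt(f, dx):
    # Five-point second-derivative stencil, periodic boundaries.
    return (-np.roll(f, -2) + 16.0 * np.roll(f, -1) - 30.0 * f
            + 16.0 * np.roll(f, 1) - np.roll(f, 2)) / (12.0 * dx**2)

def d2_fourier(f, dx):
    # Pseudo-spectral second derivative.
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(-k**2 * np.fft.fft(f)))

# Gaussian test function with a known analytic second derivative.
nx, L, a, x0 = 128, 10.0, 2.0, 5.0
dx = L / nx
x = np.arange(nx) * dx
f = np.exp(-a * (x - x0) ** 2)
d2_true = (4.0 * a**2 * (x - x0) ** 2 - 2.0 * a) * f

# Same grid, same function; only the spatial operator differs.
err3 = np.max(np.abs(d2_3pt(f, dx) - d2_true))
err5 = np.max(np.abs(d2_5pt(f, dx) - d2_true))
errF = np.max(np.abs(d2_fourier(f, dx) - d2_true))
```

On this grid the maximum error drops sharply from the three-point to the five-point to the Fourier operator, mirroring the dispersion behaviour in the animation.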
You can see here that this is exactly the same function as we used in the one-dimensional case. We initialize a source, and, again, we compare with a five-point operator, here in two dimensions. Now, let's see what happens. We run the algorithm, and here, we actually don't show the entire physical domain, only a quarter of it. On the left-hand side, you see the Fourier implementation; on the right-hand side, the finite difference implementation. The source is always at the lower corner. What we observe is that the Fourier method delivers an isotropic wavefield, which is what we expect: this is an acoustic wave equation, so, like clapping your hands, you would expect the same signal in all directions. Now, this is not the case for the finite difference method. We actually calculated this when we discussed the finite difference approximation: the result is most accurate in an orientation 45 degrees with respect to the two grid axes, while in the directions of the grid axes, we see a lot of dispersion. This is the important message to conclude this analysis of the two-dimensional Fourier method: because we have an exact derivative, an exact interpolation in space, the error is actually isotropic for the Fourier method. That has important implications because, for example, in nature, there is actually physical anisotropy. If you want to study physical anisotropy, you'd better use a method that does not have an anisotropic error, in order not to confuse numerical anisotropy with physical anisotropy. That's actually the reason why, when the pseudo-spectral method was invented and used for the first time in seismology, it was immediately used to study anisotropic wave propagation. So, I think this is a very elegant way of solving the wave equation numerically.
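In two dimensions, the same one-dimensional idea is simply applied along each spatial dimension in turn. This is a minimal sketch under a periodic-domain assumption; the helper names fourier_d2 and laplacian_2d are mine, not the notebook's:

```python
import numpy as np

def fourier_d2(f, d, axis):
    # Pseudo-spectral second derivative along one axis of a 2-D array.
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=d)
    shape = [1, 1]
    shape[axis] = n                  # broadcast k along the other axis
    ff = np.fft.fft(f, axis=axis)
    return np.real(np.fft.ifft(-k.reshape(shape) ** 2 * ff, axis=axis))

def laplacian_2d(p, dx, dz):
    # d2p/dx2 + d2p/dz2, with x along axis 1 and z along axis 0.
    return fourier_d2(p, dx, axis=1) + fourier_d2(p, dz, axis=0)
```

As a sanity check, for p = sin(x)·sin(z) on a periodic 2π-by-2π grid, the Laplacian comes back as -2p to machine precision.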
Again, the drawback is that we need many more floating-point operations to solve our problems, so more computation time. The advantage is that we need fewer grid points per wavelength, so the spatial discretization can be coarser than with a standard finite difference approximation if you want to achieve the same accuracy. What does that mean? It means you need less memory, and that was very important a few decades ago, when random-access memory was very, very expensive. Now, computers are much faster, and because parallelization is much more difficult, or inefficient, with the global communication schemes required for the pseudo-spectral method, the method is actually not really in use today for solving fully three-dimensional problems.