Hello. As I mentioned at the end of lecture seven, we have a Python script that simply reproduces all the tables I populated. I shared it on Discord, so you have it and can play with it. I just want to walk you very quickly through the code and then run a few cases.

As usual, I'm importing some libraries: numpy, cmath for complex math, and time in case we want to measure execution time. You might notice I'm also importing warnings, because sometimes we get certain warnings and I simply want to ignore them. If you want to see the warnings, just comment that line out; I'm keeping it so the warnings don't show up.

For the so-called fixed parameters, I'm using a spot of 100 and a strike of 80. If you remember, I said we work with one specific strike, but in log-strike space; that's why I take the log of it. The interest rate is five percent and the dividend yield is one percent. For the FFT parameters, you can of course use any N you want; here I'm running the case N = 2^12. For the step size eta I'm using 0.25, and for alpha I'm using 1.5. As I said, you can play with these; when I was populating the tables, I used various different values for them.

Let me run this with Shift+Enter. The step size in log-strike space, lambda, comes from the constraint I mentioned: lambda = 2*pi / (N * eta). That's what I have over here. For the choice of beta there were two options; as you see, I have both here, but I commented out the first one and I'm using the second, which sets beta to the log of the strike K.

The model under consideration is Heston. Notice that for each model I set its own parameters: GBM has one parameter, VG has three, and Heston has five. Depending on which model you pick, the code chooses from these.

Now, the generic CF, the generic characteristic function: depending on the model, it creates the characteristic function for you in a generic setup. That means you pick the model and it generates that model's characteristic function. The characteristic functions implemented here come exactly from the slides at the end of lecture seven. The only thing you may notice is that for VG, the case nu = 0 requires a different implementation. You have to be careful with this, because at nu = 0 you run into an indeterminate form and need L'Hopital's rule. I will leave that one as an exercise for you; I just want to make sure you understand that the nu = 0 case has to be handled analytically. You may ask, do I ever run with nu equal to zero? Typically you make it some tiny positive number so you don't run into it, but if it does become zero, that case should be taken care of.

Now, if you remember, at the end of lecture five, my apologies, lecture six, I said that if you're interested in just one strike at a time, there is, as I write here, no need for the fast Fourier transform: you can just evaluate the integral directly, which is the exact implementation of what we had at the end of that lecture. If you have one strike, there is no need to go any further.
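Before moving on to the multi-strike case, here is a minimal sketch of what such a generic characteristic function can look like. The function name generic_cf, the argument names (S0, r, q, T, params), and the ordering of the per-model parameters are my own assumptions, not necessarily those of the lecture code; the Heston part uses the standard trap-free formulation.

```python
import numpy as np

def generic_cf(u, params, S0, r, q, T, model):
    """Characteristic function of log(S_T) under the chosen model (a sketch)."""
    if model == 'GBM':
        sig = params[0]
        mu = np.log(S0) + (r - q - 0.5 * sig**2) * T
        return np.exp(1j * u * mu - 0.5 * sig**2 * u**2 * T)

    elif model == 'VG':
        sig, nu, theta = params
        if nu == 0:
            # nu -> 0 needs the analytical (L'Hopital) limit; that case is the
            # exercise mentioned in the lecture, so it is not implemented here.
            raise NotImplementedError("VG with nu = 0: take the limit analytically")
        omega = np.log(1.0 - theta * nu - 0.5 * sig**2 * nu) / nu
        mu = np.log(S0) + (r - q + omega) * T
        return np.exp(1j * u * mu) * (1.0 - 1j * u * theta * nu
                                      + 0.5 * sig**2 * nu * u**2) ** (-T / nu)

    elif model == 'Heston':
        kappa, theta, sig, rho, v0 = params
        d = np.sqrt((rho * sig * 1j * u - kappa)**2 + sig**2 * (1j * u + u**2))
        g = (kappa - rho * sig * 1j * u - d) / (kappa - rho * sig * 1j * u + d)
        A = 1j * u * (np.log(S0) + (r - q) * T)
        B = (theta * kappa / sig**2) * ((kappa - rho * sig * 1j * u - d) * T
            - 2.0 * np.log((1.0 - g * np.exp(-d * T)) / (1.0 - g)))
        C = (v0 / sig**2) * (kappa - rho * sig * 1j * u - d) \
            * (1.0 - np.exp(-d * T)) / (1.0 - g * np.exp(-d * T))
        return np.exp(A + B + C)
```

The argument u can be a scalar or a numpy array of complex points, which is exactly what the FFT step below will feed it.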
But if you want to price more than one strike, then you need the efficiency of the FFT. So if you just want one strike, you can use this evaluate-integral routine; with more than one, I have this generic FFT, which is the exact implementation from the slides: set up the x vector, pass it through the FFT, take the real part of what comes back, and multiply by the damping multiplier (a sketch of this step appears at the end of this note). Setting up the x vector is this part here; then you pass the x vector through the FFT, and once you get y out, you take its real part and apply the damping. You have the code and can go through it. The only thing I would add is that sometimes I try to take advantage of the factorization; I'm not sure whether I did that here, but you can go through the code and see.

Let me run this with Shift+Enter. Now, emphasizing what the model is, I call the FFT, pass those parameters, and, given the choice of beta, the first entry coming out is the one I'm interested in. Let's run this.

Look at what I've done here: because I'm pricing a single strike, I intentionally ran both versions, the one using the FFT and the one that just calls the evaluate-integral routine, and compared them against each other. The option via FFT for strike 80: the option premium is this, and the execution time was this. The option via integration for the same strike 80 gives exactly the same premium, as it has to. It turns out the integration time is actually slightly longer, because I'm doing the explicit implementation, so you're still better off using the FFT here. As you see, the results are exactly the same either way, but the FFT is much faster.

You have the code, so play with it yourself, and as I mentioned in lecture seven, make sure you go through various different values of N, alpha, and eta and convince yourself what the optimal choices for those three would be.

Now, in my next lectures, which will be on model calibration, I'm going to use this engine as a pricing engine and go to the market. Say I look at the options on Apple stock; I then try to find a parameter set, for Heston for example, such that plugging it into Heston creates a surface that closely matches the option surface coming from Apple stock. This process of reverse engineering is what I call calibration. I have seven lectures on model calibration that I'll walk you through next. Thank you so much.
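For reference, here is the sketch of the FFT step mentioned above. It is a minimal reconstruction in my own naming (generic_fft, plus a maturity T that the parameter list above doesn't spell out), reusing generic_cf from the earlier sketch; the quadrature weights here are simple trapezoidal ones, which may differ from what the lecture code uses.

```python
import numpy as np

def generic_fft(params, S0, K, r, q, T, alpha, eta, n, model):
    N = 2 ** n
    lam = 2.0 * np.pi / (N * eta)       # step in log-strike space (the constraint)
    beta = np.log(K)                    # second choice of beta from the lecture

    v = eta * np.arange(N)              # grid in the Fourier variable
    km = beta + lam * np.arange(N)      # log-strike grid, km[0] = log(K)

    # Fourier transform of the damped call price (Carr-Madan)
    psi = np.exp(-r * T) \
        * generic_cf(v - (alpha + 1.0) * 1j, params, S0, r, q, T, model) \
        / (alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)

    # x vector: quadrature weights times the integrand, shifted by beta
    w = eta * np.ones(N)
    w[0] *= 0.5                         # trapezoidal end correction
    x = np.exp(-1j * beta * v) * psi * w

    y = np.fft.fft(x)                   # pass the x vector through the FFT
    prices = np.exp(-alpha * km) / np.pi * np.real(y)   # real part, then damping
    return km, prices

# Usage sketch: with beta = log(K), the strike of interest sits in the first
# entry, prices[0]. The Heston parameters below are purely illustrative.
# km, prices = generic_fft((2.0, 0.04, 0.3, -0.7, 0.04), 100, 80, 0.05, 0.01,
#                          1.0, 1.5, 0.25, 12, 'Heston')
# print(prices[0])
```

Because beta is set to log(K), the target strike lands in the first output entry, which is why the walkthrough above reads off the first element after calling the FFT.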