So, here's a small exercise for you. You remember the auction data on the clocks which we worked with a while ago? I hope you still have it; otherwise you'll find it in the data folder, right? First of all, I want you to run Random Forest on it. Okay? The reason I want you to run Random Forest on this is that you'll see the accuracy is pretty good. Okay. I want you to score the data and put it in a separate file. You can decide how to split it into training and validation, right? Then use different cutoff values for classifying whether you will win the auction or not. As you change the cutoff, you should see your error going down and then going up. It's well known that a cutoff value of 0.5 on the probability gives you the lowest error. You can also see your sensitivity, your specificity, your precision. Precision behaves the same way, right? Specificity behaves the same way. Okay. But you may stop at a different place: you might go up to 0.5, you may go to 0.6, or you could keep the cutoff as low as you want, right?

So, ignore the first row, because this is a data table I created. What the table I want you to reproduce is saying is this: if I change the classification cutoff to 10 percent, 20 percent, 30 percent, 40 percent, 50 percent, 60 percent, what would the error be? What would the sensitivity, specificity, and precision be? If you want to do this with other datasets as well, that's fine, but this one is for your submission.

How would you do it? Well, here's the auction data. I have given you, in these three slides, how to do it: you're reading the data, you're running the model, you're saving that scored data in a CSV file with the probabilities, and then you're using it to come up with these confusion matrices yourself in Excel and creating the specificity and sensitivity scores. Okay? It's a long exercise, but it's worth doing.
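[For reference, here is a minimal sketch of the workflow in Python with pandas and scikit-learn. The file name "eBayAuctions.csv", the target column "Competitive?", the 60/40 split, and the number of trees are all assumptions for illustration, not part of the assignment; swap in whatever your slides use.]

```python
# Sketch of the exercise: fit a random forest, score the validation data,
# save the scores to CSV, and tabulate metrics for cutoffs 0.1 through 0.6.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Read the auction data (file name and target column are assumptions).
df = pd.read_csv("eBayAuctions.csv")
X = pd.get_dummies(df.drop(columns=["Competitive?"]))  # one-hot encode categoricals
y = df["Competitive?"]

# Split into training and validation sets (60/40 is just one reasonable choice).
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.4, random_state=1
)

# Run the random forest and score the validation records with probabilities.
rf = RandomForestClassifier(n_estimators=500, random_state=1)
rf.fit(X_train, y_train)
prob = rf.predict_proba(X_valid)[:, 1]  # probability of the positive class

# Save the scored data to a separate CSV so the confusion matrices
# can also be built by hand in Excel.
scored = X_valid.copy()
scored["actual"] = y_valid.values
scored["prob"] = prob
scored.to_csv("auction_scored.csv", index=False)

# Reproduce the table: error, sensitivity, specificity, precision by cutoff.
for cutoff in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]:
    pred = (prob >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_valid, pred, labels=[0, 1]).ravel()
    error = (fp + fn) / (tn + fp + fn + tp)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    print(f"cutoff={cutoff:.1f}  error={error:.3f}  sensitivity={sensitivity:.3f}  "
          f"specificity={specificity:.3f}  precision={precision:.3f}")
```

The loop at the end does in code what the Excel step does by hand: at each cutoff it builds a confusion matrix from the saved probabilities and reads off the same four numbers the table asks for.]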