Welcome back. Here, we will close out the Notebook starting with question number 5. All we ask here is, for the original dataset, scale the values using one of the following, StandardScaler or MinMaxScaler, and then compare the error calculated on the test set. Now, we want to make sure whenever we do our scaling that we use the fit_transform method only on the training set, and then, using that fit from the training data, we just call transform on the test data. We don't call fit_transform on our test set. The first thing that we're going to want to do is import our StandardScaler as well as our MinMaxScaler. We're going to set up a dictionary with standard as the key for the StandardScaler and minmax as the key for the MinMaxScaler, initiating each one of those, and pull out scalers.items as just a reminder of how the .items functionality works. We now have our dictionary tuples, where the first value of each tuple is going to be the key and the second value is going to be the value, and that's just going to be standard and StandardScaler. So the scaler_label variable is going to point to this key, standard, and the scaler variable is going to point to our scaler object. We run our for loop. We're going to set our training set equal to the fit_transform of our x_train, using this scaler, again the second value, which is just the value from our dictionary. So we fit_transform our x_train to get our training set, and then we just run transform to get our new test set. We run LR, the linear regression that was initiated up here, and we fit it to our transformed training set along with our y_train. We don't have to do any transformation there, and then we get our predictions on our test set, just calling LR.predict.
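A minimal, self-contained sketch of the loop described above. The original notebook's data isn't available here, so a synthetic regression dataset and illustrative variable names stand in for it:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the notebook's dataset
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scalers = {"standard": StandardScaler(), "minmax": MinMaxScaler()}
errors = {}

for scaler_label, scaler in scalers.items():
    # Fit the scaler on the training data only...
    X_train_s = scaler.fit_transform(X_train)
    # ...then reuse that same fit to transform the test data.
    X_test_s = scaler.transform(X_test)

    lr = LinearRegression().fit(X_train_s, y_train)
    preds = lr.predict(X_test_s)

    key = scaler_label + " scaling"
    errors[key] = mean_squared_error(y_test, preds)

# Convert to a Series and print each label with its error
errors = pd.Series(errors)
for key, error_val in errors.items():
    print(f"{key}: {error_val:.4f}")
```

Because ordinary linear regression is invariant to affine rescaling of the features, both entries come out (numerically) identical, which is the result the lecture points out.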
We're going to create a variable called key, which will just be the scaler_label, standard or minmax, with the string scaling added on, and then we're going to set errors, that's the dictionary we initiated up here, at this key equal to the mean_squared_error of y_test and our predictions. Then, we're just going to create a series out of that. Then, we're going to have, as we did before, an index for each one of our values, and then we're just going to print them all out. For key, error_val in errors.items, again getting that tuple, we print out our new keys as well as our error values, which will just be these mean squared error values. So we run this and we see that we get the same error for each. If we look back to error_df, that's going to be the same value that we got here for the test set. The idea being that most of the time when you're working with linear regression, scaling won't actually affect the outcome, won't affect your prediction. This doesn't hold true once we talk about Ridge and Lasso regression, which we'll see in just a bit. But we see that it won't have an effect when you just do it with plain linear regression. Finally, we just want to plot the predictions versus actuals for one of our models. We're going to call sns.set_context, set_style, and set_palette; these are just different ways to ensure that your seaborn plots, and this will work for your matplotlib plots as well, all print out in some clean style defined by each one of these: context, style, and palette. We're going to initiate our plt.axes, so our bounding box, and we're just going to call ax.scatter on our y_test and y_test_pred, which we defined much earlier; we have y_test_pred all the way up here. The point being here is just to look at this plot. It's a scatter plot of our actual outcome variables versus our predicted outcome variables.
If we got it exactly right, they should all be on one diagonal. Alpha equals 0.5 just ensures that each one of our scatter points is a bit transparent. Then, we're going to set different options: our xlabel, our ylabel, and our title. We run this and we see each one of our predictions is very close to that diagonal line, which would tell us that we did a fairly good job of predicting, whereas when they're far off, as we see in the top left and a bit all the way to the right, then we were a little bit off on those predictions. That closes out this Notebook and I look forward to seeing you back in the lecture. Thank you.
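The predictions-versus-actuals plot can be sketched as follows. Since the notebook's y_test and y_test_pred aren't available here, synthetic stand-ins are used, and plain matplotlib stands in for the seaborn styling calls:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins for the notebook's actuals and predictions
rng = np.random.default_rng(0)
y_test = rng.normal(size=100)
y_test_pred = y_test + rng.normal(scale=0.2, size=100)

ax = plt.axes()  # the bounding box for our plot
ax.scatter(y_test, y_test_pred, alpha=0.5)  # alpha=0.5 makes points transparent

# A perfect model would put every point on this diagonal
lims = [y_test.min(), y_test.max()]
ax.plot(lims, lims, color="red")

ax.set(xlabel="Actual", ylabel="Predicted", title="Predicted vs. Actual")
plt.savefig("pred_vs_actual.png")
```

Points hugging the diagonal indicate good predictions; points far from it, as in the top left of the lecture's plot, are the cases the model got wrong.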