In the previous video, we looked at training a deep neural network to learn to predict the next value in a windowed sequence. We also looked at some basic hyperparameter tuning to pick a good learning rate for gradient descent, which then allowed us to further improve the model. In this video, we'll wrap up this week by going through the workbook that shows you how to do all of that.

We'll run this code to see our current version of TensorFlow. As before, if you're using something earlier than 2.0, you'll see it reported here. Use the code cell above to install the latest version of TensorFlow, or a nightly version like I'm using. Running this block will set up the data series and all the constants, such as the window size. This code will create the windowed dataset as before. And here is our DNN, where we have three layers in a Sequential model. The first has ten neurons activated by ReLU, the second is the same, and the third is a single dense neuron giving us back the predicted value. We'll compile it with mean squared error loss and stochastic gradient descent as the optimizer. After 100 epochs it's done, and we can plot the forecast against the data. Then we can print the mean absolute error. Don't worry if you get a different value; remember, there's going to be some random noise in the dataset.

If we run this code block, we'll retrain over 100 epochs, but we'll use the callback to call the learning rate scheduler, which will then adjust the learning rate at each epoch. When it's done, we can plot the loss against the learning rates. We can then inspect the lower part of the curve, before it becomes unstable, and come up with a value. In this case it looks to be about two notches to the left of 10 to the minus 5, so I'll say it's 8 times 10 to the minus 6, or thereabouts. I'll then retrain, and this time I'll do it for 500 epochs with that learning rate. When it's done, I can plot the loss against the epoch, so we can see how the loss progressed over training time. We can see that it fell sharply and then flattened out. But again, if we remove the first ten epochs, we'll see the later ones more clearly, and it still shows the loss smoothly decreasing at 500 epochs, so it's actually still learning. Let's now plot the forecast against the data, and we can see that the predictions still look pretty good. And when we print out the value of the mean absolute error, we've improved even further over the earlier value.

So that wraps up this week. Go through the workbook yourself and experiment with different neural network definitions, changing around the layers and so on, to see if you can make it even better. Next week we're going to take this to the next level by using recurrent neural networks, which have sequencing capabilities built in. I'll see you there.
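The workbook code itself isn't reproduced in the transcript, so the sketch below shows roughly what the steps described above look like in TensorFlow/Keras. It is a minimal illustration, not the workbook's exact code: the `windowed_dataset` helper, the synthetic sine series, and constants like `shuffle_buffer_size` are stand-ins I'm assuming here, and the workbook's real series, window size, and learning rates may differ.

```python
import numpy as np
import tensorflow as tf

# Illustrative constants -- the workbook's actual values may differ.
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    """Turn a 1-D series into shuffled (window, next_value) training pairs."""
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    return ds.batch(batch_size).prefetch(1)

# A synthetic sine wave stands in for the workbook's noisy series.
series = np.sin(np.arange(0, 100, 0.1)).astype(np.float32)
dataset = windowed_dataset(series, window_size, batch_size, shuffle_buffer_size)

# Three-layer DNN: two hidden layers of 10 ReLU neurons, one output neuron.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=[window_size]),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1)
])

# The LearningRateScheduler callback raises the learning rate a little each
# epoch, so we can plot loss against learning rate and pick a stable value.
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10 ** (epoch / 20))
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9))
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule], verbose=0)

# After inspecting the loss/learning-rate plot, retrain for 500 epochs with the
# chosen rate -- roughly 8e-6 in the video.
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.SGD(learning_rate=8e-6, momentum=0.9))
history = model.fit(dataset, epochs=500, verbose=0)

# Forecast one step at a time and measure the mean absolute error.
forecast = np.array([
    model.predict(series[t:t + window_size][np.newaxis], verbose=0)[0, 0]
    for t in range(len(series) - window_size)])
mae = tf.keras.metrics.mean_absolute_error(series[window_size:], forecast).numpy()
print("MAE:", mae)
```

For the learning-rate plot mentioned above, something like `plt.semilogx(history.history["lr"], history.history["loss"])` on the first training run is the usual way to visualize the curve before picking the value; again, this is the generic technique rather than a quote of the workbook's plotting code.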