Let's take a look at running some statistical forecasting on the synthetic dataset that you've been working with. This should give us a baseline that we'll then see if we can beat with machine learning. You saw the details in the previous video, and you'll go through the workbook in this one.

Before you start, make sure you're running Python 3 and using an environment that provides a GPU. Some of the code requires TensorFlow 2.0 to be installed, so make sure that you have it. This code will print out your version. If you have something earlier than 2.0, you'll need to install the latest. To install it, use this code. At the time of recording, TensorFlow 2.0 was at beta 1; you can check the latest install instructions on TensorFlow.org for the updated version. Once it's done, you'll see a message to restart the runtime; make sure that you do that. Check that you still have a Python 3 GPU runtime and run the script again. You should see that 2.0 is now installed.

The next code block condenses a lot of what you saw in the previous lessons. It creates a time series with trend, seasonality, and noise, which you can see in the graph here. To create a training and validation set split, we simply split the array containing the data at index 1,000, and we chart both pieces. In the training set, the seasonality is maintained and the series is still trending upwards; it also contains some noise. The validation set is similar, and while the charts may appear different, check out the x-axis: we've zoomed in quite a bit on it, but it's the same pattern.

Now let's start with some naive prediction. The first, super simple forecast is to predict that the value at the next time step will simply equal the value at the current one. So we'll create a forecast, called naive_forecast in the notebook, that simply copies the series at time t minus 1. When we plot it, we see the original series in blue and the predicted one in orange. It's hard to make out, so let's zoom in a little on the start of the data, at that sharp climb you can see. Zoomed in, we can see that the orange data is just one time step after the blue data. This code then prints the mean squared and mean absolute errors: we get 61.8 and 5.9, respectively. We'll call that our baseline.

So now let's get a little smarter and try a moving average. In this case, the value at time t will be the average of the 30 points prior to it. This gives us a nice smoothing effect. If we print out the error values for this, we get values higher than those for the naive prediction. Remember, for errors, lower is better, so this is actually worse than the naive prediction we made earlier.

So let's try a little trick to improve this. Since the seasonality of this data is one year, or 365 days, let's take a look at the difference between the data at time t and the data from 365 days before. When we plot that, we can see that the seasonality is gone and we're left with just the trend and the noise. So now, if we calculate a moving average on this differenced data, we'll see a relatively smooth moving average that isn't impacted by seasonality. Then, if we add back the past values to this moving average, we'll start to see a pretty good prediction: the orange line is quite close to the blue one. If we calculate the errors on this, you'll see that we have a better value than the baseline. We're definitely heading in the right direction. Rough sketches of each of these steps follow below.
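If you want to follow along in a notebook, a minimal sketch of that version check and upgrade cell might look like the following. The exact pip target reflects what was current at the time of recording and is an assumption on my part, so check TensorFlow.org for the up-to-date instructions.

```python
import tensorflow as tf

# Print the installed TensorFlow version; anything below 2.0 needs an upgrade.
print(tf.__version__)

# At the time of recording, the upgrade in Colab was (assumption -- the exact
# package version has changed since; see tensorflow.org for current docs):
# !pip install tensorflow==2.0.0-beta1
```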
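Here's a rough sketch of that series-generation block, assuming the trend, seasonality, and noise helpers from the previous lessons. The slope, amplitude, noise level, and seed are illustrative choices, not the only valid ones.

```python
import numpy as np
import matplotlib.pyplot as plt

def trend(time, slope=0):
    # A straight line with the given slope.
    return slope * time

def seasonal_pattern(season_time):
    # An arbitrary repeating pattern, as in the earlier lessons.
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))

def seasonality(time, period, amplitude=1, phase=0):
    # Repeat the pattern once per period.
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)

def noise(time, noise_level=1, seed=None):
    # Gaussian noise scaled by noise_level.
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level

time = np.arange(4 * 365 + 1, dtype="float32")  # four years of daily data
series = 10 + trend(time, 0.05) + seasonality(time, period=365, amplitude=40)
series += noise(time, noise_level=5, seed=42)

plt.plot(time, series)
plt.show()
```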
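The split at index 1,000 and the naive forecast can be sketched like this, continuing from the sketch above. Slicing the series from split_time - 1 shifts everything forward by one step, which is exactly the "tomorrow equals today" idea, and the Keras metric helpers give us MSE and MAE directly.

```python
import tensorflow as tf

split_time = 1000
time_train, x_train = time[:split_time], series[:split_time]
time_valid, x_valid = time[split_time:], series[split_time:]

# Naive forecast: the prediction for time t is just the value at t - 1.
naive_forecast = series[split_time - 1:-1]

print(tf.keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(tf.keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
```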
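The moving average step can be sketched with a simple helper; the 30-point window matches the walkthrough, and the function name moving_average_forecast is just an illustrative choice.

```python
import numpy as np
import tensorflow as tf

def moving_average_forecast(series, window_size):
    # Forecast the mean of the last window_size values.
    # (With window_size=1, this degenerates to the naive forecast.)
    forecast = []
    for t in range(len(series) - window_size):
        forecast.append(series[t:t + window_size].mean())
    return np.array(forecast)

moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
print(tf.keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())
```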
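And the differencing trick might be sketched like this; the window size on the differenced series is an illustrative choice.

```python
# Remove the yearly seasonality by differencing against 365 days earlier.
diff_series = series[365:] - series[:-365]
diff_time = time[365:]

# Moving average of the differenced series, aligned to the validation period.
# (The 50-point window is an illustrative choice.)
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]

# Add back the values from 365 days earlier to restore trend and seasonality.
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
print(tf.keras.metrics.mean_absolute_error(
    x_valid, diff_moving_avg_plus_past).numpy())
```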
But all we did there was add back the raw historic values, which are very noisy. What if, instead, we added in a moving average of those historic values, so that we're effectively using two different moving averages? Now our prediction curve is a lot less noisy, and the predictions are looking pretty good. If we measure the overall error, the numbers agree with our visual inspection: the error has improved further (a sketch of this step follows at the end). That was a pretty simple introduction to using some mathematical methods to analyze a series and get a basic prediction, and with a bit of fiddling, we got a pretty decent one too. Next week, you'll look at using what you've learned from machine learning to see if you can improve on it.
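Finally, here's one rough sketch of that two-moving-averages step, continuing from the sketches above. The 370/360 offsets and the 10-point window are illustrative choices that take a short window roughly centered on t - 365 and keep the arrays aligned with the validation set.

```python
# Smooth the historic values with a second moving average (a 10-point window
# roughly centered on t - 365) instead of adding them back raw.
diff_moving_avg_plus_smooth_past = (
    moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg
)
print(tf.keras.metrics.mean_absolute_error(
    x_valid, diff_moving_avg_plus_smooth_past).numpy())
```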