Hello everyone, in this lecture we will be talking about parameter estimation, specifically for the recruitment data. The objective is to fit an auto-regressive process to the recruitment data, which is the number of new fish over a period of 453 months, ranging from 1950 until 1987, and we will be using the Yule-Walker equations in matrix form to estimate the parameters of the fitted model. The data set is called rec; it is a monthly time series of the number of new fish, and it comes from the astsa package, which accompanies the book Time Series Analysis and Its Applications: With R Examples, by Shumway and Stoffer.

If you plot the data, which we are going to do, I will show you the code in a minute, this is what we get: the recruitment time series. Then we look at the ACF and the PACF, the auto-correlation function and the partial auto-correlation function. If you look at the auto-correlation function, there is some kind of decay, but also some cyclic behavior, right? But when you look at the partial auto-correlation function, we see two significant lags, up to lag two, and maybe nothing significant afterwards. That gives us the idea that an AR(2) process might be suitable for the recruitment data set. So we are going to use the parsimony principle; in other words, we are going to choose the simplest explanation that fits the evidence, taking the simplest of the competing theories to be preferred. In this case the PACF gives us a plausible auto-regressive model of order two that can be fitted to this data set, and we are, of course, going to use the Yule-Walker equations in matrix form to estimate the parameters. Before we look at the code, let me give you the idea: we first subtract the mean from Xt, so the series is shifted to have mean zero.
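The lecture's code is in R, but the PACF idea above can be sketched in a few lines of Python with numpy (the function `sample_pacf` below is my own illustration, not the lecture's code): the partial autocorrelation at lag k is the last coefficient of the best-fitting AR(k) model, which is exactly why the PACF of an AR(p) process cuts off after lag p.

```python
import numpy as np

def sample_pacf(x, max_lag):
    """Sample partial autocorrelations via successive Yule-Walker fits:
    the PACF at lag k is the last coefficient of the best AR(k) fit."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # shift to mean zero, as in the lecture
    n = len(x)
    # sample autocovariances c_0, ..., c_max_lag, then autocorrelations r_k
    c = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    r = c / c[0]
    pacf = []
    for k in range(1, max_lag + 1):
        # R[i, j] = r_{|i-j|}; solve the order-k Yule-Walker system
        R = np.array([[r[abs(i - j)] for j in range(k)] for i in range(k)])
        phi = np.linalg.solve(R, r[1:k + 1])
        pacf.append(phi[-1])              # last coefficient = PACF at lag k
    return np.array(pacf)
```

On a long simulated AR(2) series, the first two partial autocorrelations come out large while later ones hover near zero, matching the cut-off pattern the lecture reads off the PACF plot.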
Our order is going to be 2. Remember the Yule-Walker equations in matrix form, R phi = b, where phi, the vector we are trying to find, holds the coefficients of our model. We can get R and b from the sample auto-correlation coefficients. In other words, we take the ACF of the process we are looking at, which in this case is the recruitment data shifted by its mean. We do not want to plot it, we just want the ACF values, and we assign them to r. So r as a vector holds our sample auto-correlations, and we put those values into the matrix R. But notice the following: the entry R[i, j] is filled in by looking at the index |i - j|. For example, R[1, 2] corresponds to |1 - 2| = 1, so R[1, 2] = r1, and R[2, 1] is also r1, since R is a symmetric matrix. That is how we fill in the matrix R, using two nested for loops and checking whether i equals j: when i equals j we are on the diagonal, where the entries are r0 = 1, and everywhere else we use r at the absolute value of i minus j. The right-hand side, the column vector b, is the transpose of (r1, ..., rp). And then we solve for phi, right? If R phi = b, all we have to say is solve(R, b), and we put the solution into phi hat. So that solves the equation R phi = b, and our model is then the following: we have found phi hat, we know the mean, we calculate phi0 from it using this expression, and we plug everything in, and that becomes our fitted model. Let's look at the code. This is RStudio, a popular platform for R, and here we have the R code, which is also available to you in the reading section of this lesson.
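The whole procedure described above, subtract the mean, compute the sample autocorrelations, build R and b, and solve R phi = b, can be sketched compactly. This is a minimal Python/numpy illustration of the matrix-form Yule-Walker estimator, not the lecture's R script (the function name `yule_walker_ar` is my own):

```python
import numpy as np

def yule_walker_ar(x, p):
    """Estimate AR(p) coefficients by the Yule-Walker equations in matrix
    form: build R and b from sample autocorrelations, then solve R phi = b."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()                     # subtract the mean, as in the lecture
    n = len(xc)
    # sample autocorrelations r_0, r_1, ..., r_p of the shifted series
    c = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(p + 1)])
    r = c / c[0]
    # R[i, j] = r_{|i-j|}: 1s on the diagonal, symmetric off the diagonal
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    b = r[1:p + 1]                        # right-hand side (r_1, ..., r_p)
    return np.linalg.solve(R, b)          # phi hat
```

Run on a long series simulated from a known AR(2), the returned phi hat lands close to the true coefficients, which is the consistency property that justifies the method.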
So, if you go to the reading and extract this code, you can copy it into your own environment, either plain R or RStudio. The console is going to be right here, and the output is going to be shown on the right-hand side. Let's go step by step. Say I am here: if I press Run, it runs mydata = rec, so I take rec and assign it to mydata. Then I am on the next line; let's run the plot. This plots the recruitment data, and I can zoom in on it. So this is our time series, the time plot: it starts in 1950 and goes until 1987, and it has some kind of cyclic behavior, which we are for now ignoring, since we are doing model fitting for auto-regressive processes. Let's run it again, okay, and then we go to the next line and run; this time it partitions the screen. We get the ACF, the same one we showed in the presentation, and the PACF, and when we look at them on the right-hand side, we see that, all right, there is some cyclic behavior in the ACF, but the PACF suggests that AR(2) might be a good fit. So we set p equal to 2, which is going to be our order, and then we calculate r by taking acf() of the shifted process and extracting its autocorrelation values. Once we obtain r, let's print it out: these values are r1 and r2, okay. Then we run again, and the matrix R is defined. Note that it is not hard-coded as two by two; in general it is p by p, which in this case happens to be two by two. We run it, so we have our R, and we fill in R using r at the absolute value of i minus j, and if you print R, you can see it is r0, r1, r1, r0. The sample auto-correlation coefficient at lag zero is always 1, so we have 1s on the diagonal and r1 off the diagonal. All right, let's get the vector b, which is the transpose of r; this is basically (r1, r2).
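To make the |i - j| indexing trick concrete, here is a tiny worked example in Python/numpy (the particular values of r1 and r2 are made up for illustration; the lecture prints the actual ones computed from the rec series):

```python
import numpy as np

# Suppose the printed sample autocorrelations were r1 = 0.9 and r2 = 0.8
# (hypothetical stand-ins, not the lecture's actual values):
r = np.array([1.0, 0.9, 0.8])       # r0, r1, r2

p = 2
# R[i, j] = r_{|i-j|}: r0 = 1 on the diagonal, r1 off the diagonal
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
b = r[1:p + 1]                      # b = (r1, r2)
phi_hat = np.linalg.solve(R, b)     # same role as solve(R, b) in the R code
```

With these numbers, R is [[1, 0.9], [0.9, 1]], and the 2-by-2 system can even be checked by hand: phi1 hat = (r1 - r1 r2) / (1 - r1^2) and phi2 hat = (r2 - r1^2) / (1 - r1^2).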
And now it is time to solve. We solve for phi hat, which is our approximation to the phis: the first entry is phi1 hat, the second is phi2 hat. Next, let's obtain the constant and the variance. Here we compute the auto-covariance function of the process at lag zero, which is the variance of the process, and from it the estimated noise variance, variance hat, which comes out as 94.17131. And now it is time to calculate the constant, using the formula we obtained: if you plug in, phi0 hat becomes 7.033. In other words, in our model the constant is going to be 7.033, the coefficients are as shown, and the variance of the random noise is 94.17131. All right, so we have p, the order, which is 2, and our fitted model, with phi0 hat, phi1 hat, phi2 hat, and Zt, where Zt is our random noise with mean 0 and variance 94.17131. So what have you learned? You have learned to fit an auto-regressive process of order two to the recruitment data set from the astsa package, and we did this using the Yule-Walker equations in matrix form.
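The last two quantities follow directly from the fitted coefficients: for an AR(2) with a constant, the stationary mean satisfies phi0 = mean * (1 - phi1 - phi2), and the Yule-Walker noise variance is c0 * (1 - phi1 r1 - phi2 r2), where c0 is the lag-zero sample autocovariance. A hedged sketch in Python (all the numeric values below are made-up stand-ins, not the estimates computed in the lecture):

```python
import numpy as np

# Hypothetical stand-ins for the quantities computed from the rec series:
xbar = 62.0                       # sample mean of the series
phi = np.array([1.33, -0.44])     # fitted AR(2) coefficients phi1, phi2
c0 = 100.0                        # sample autocovariance at lag 0 (variance)
r = np.array([0.92, 0.78])        # sample autocorrelations r1, r2

phi0 = xbar * (1 - phi.sum())     # constant: phi0 = mean * (1 - phi1 - phi2)
sigma2 = c0 * (1 - phi @ r)       # noise variance: c0 * (1 - phi1 r1 - phi2 r2)
```

Plugging in the lecture's actual mean, coefficients, and autocovariances in place of these stand-ins is what produces the constant 7.033 and the noise variance 94.17131 reported above.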