Hello everyone. In this lecture we will do another parameter estimation, this time on a dataset called JohnsonJohnson from the astsa package. The objective is to fit an AR(p) model to the quarterly earnings in dollars per Johnson & Johnson share from 1960 to 1980, and along the way we will use the Yule-Walker equations in matrix form to estimate the parameters of the fitted model.

The Johnson & Johnson data comes from the package astsa. It records the quarterly earnings in dollars per share from 1960 to 1980. It is also available among the datasets that come with R, and it originates from the book "Time Series Analysis and Its Applications: With R Examples" by Shumway and Stoffer.

If we plot the data, which we are going to do, the code is available in the Jupyter notebook on Johnson & Johnson. If you haven't opened it up yet, we will open it up at this point; we will be working on that code in a minute. For now, let me show you the plot. This is the time plot of the Johnson & Johnson quarterly earnings. As you can see, there is definitely a trend: the mean level is going up, and as it goes up from 1960 to 1980, the variation also increases. There is a systematic difference in variation and a systematic difference in trend, which means this dataset is definitely not a stationary dataset. In other words, we cannot just fit some stationary AR model to it; what we need to do in this case is transform the dataset first.

Let me give you one famous transformation, called the log return of a time series. If Xt is a time series and you look at the ratio Xt/Xt-1 and take the logarithm of it, this is called the log return. In other words, if you look at the difference log(Xt) - log(Xt-1), that difference is usually a stationary time series, especially for financial time series.
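As a minimal sketch of the log-return transform just described, here it is in R, using the built-in JohnsonJohnson dataset mentioned above (the astsa version of the same series is named jj):

```r
# Log return: r_t = log(X_t) - log(X_{t-1})
# JohnsonJohnson ships with base R's datasets package, so no extra install is needed.
jj <- JohnsonJohnson
jj.log.return <- diff(log(jj))   # differences of the logged series

# The transformed series has one fewer observation than the original.
length(jj)
length(jj.log.return)
```

Note that `diff(log(jj))` computes exactly log(Xt) - log(Xt-1) for each t, which equals log(Xt/Xt-1).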
In R we can obtain this difference by first taking the log of the dataset and then taking the differences. If you look at the ACF and PACF, the ACF is alternating and decaying, while the PACF shows significant spikes at lags 1, 2, 3, and 4; after lag 4 there are no significant lags. This gives us the idea that we can attempt to fit the Johnson & Johnson log returns with an AR(4) model. Remember the parsimony principle: we try to choose the simplest explanation that fits the evidence, and in this case the PACF points to AR(4). We will do the estimation using the Yule-Walker equations in matrix form, just like before.

Let's look at the code. This is the Johnson & Johnson model-fitting Jupyter notebook; it is available to you, so I encourage you to open it up and work through the code as we go through the video. The first expression gives us a time plot with a title, a color, and a line width of 3. If I run this cell, we obtain the time plot we just talked about.

The next cell in the notebook is about the log returns. If you take the logarithm of the Johnson & Johnson data and then take the differences, that becomes the log return. But remember, if you would like to use the Yule-Walker equations to estimate the parameters, we have to shift the dataset so that it has mean zero. That is what we are doing here: we take the log return, subtract its mean, and the shifted dataset is called jj.log.return.mu.zero, the Johnson & Johnson log-return mean-zero dataset. We run that, and now the dataset is stored inside jj.log.return.mu.zero. Then we partition the output so that we can look at the time plot of the new dataset together with its ACF and its PACF. As you can see, we have the time plot here, then the ACF and the PACF. As we discussed, the PACF suggests we should think of an AR(4) model.
So p is going to be 4. In the next cell we initialize r as NULL, assign the sample ACF values to r, and print r. Since p = 4, r holds r1, r2, r3, and r4. We then define our matrix, capital R, which is a 4 by 4 matrix, a p by p matrix, and update its entries. We start with every entry equal to 1, so the diagonal stays 1, and then we update the non-diagonal entries and print R to see what it looks like. If we do that, we obtain the following matrix: it has ones on the main diagonal and it is symmetric; this is r1, this is r2, and so forth.

Next we define our column vector b, which is the transpose of r: we had r as a row here, and b is its transpose. We then solve the system R phi = b; this is where we use the Yule-Walker equations to estimate our coefficients phi, with the estimates denoted phi hat. Doing that, we obtain our phi hat: phi1 hat, phi2 hat, phi3 hat, and phi4 hat.

Next we estimate the variance. For the variance we need the sample autocovariance at lag zero, so we call the acf function with type "covariance" and take the first value, because that is c(0). We use that c(0) in the formula for the variance and define var.hat, the estimate of the variance, which comes out to 0.0141. Then it is time to find the constant phi0; we call its estimate phi0.hat, and it comes out to 0.0797.

The last cell uses cat to print everything together: the constant, the coefficients, and the estimated variance. So what do we get? We obtain that the order of the fitted model is 4, and the fitted model is for Rt. What is Rt? Rt is the log return, the logarithm of the ratio of Xt to Xt-1.
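Putting the steps above together, here is a sketch of the whole Yule-Walker computation in matrix form (assuming the built-in JohnsonJohnson dataset; variable names like phi.hat and var.hat follow the lecture, and the variance and constant formulas are the standard Yule-Walker ones, sigma^2 = c(0)(1 - sum phi_k r_k) and phi0 = mu(1 - sum phi_k)):

```r
p <- 4
x <- diff(log(JohnsonJohnson))             # log returns
mu <- mean(x)
x0 <- x - mu                               # mean-zero version

r <- acf(x0, plot = FALSE)$acf[2:(p + 1)]  # sample autocorrelations r1..r4

R <- matrix(1, p, p)                       # all ones; the diagonal stays 1
for (i in 1:p)                             # fill in the off-diagonal entries
  for (j in 1:p)
    if (i != j) R[i, j] <- r[abs(i - j)]   # symmetric Toeplitz structure

b <- matrix(r, p, 1)                       # column vector (r1, ..., r4)'
phi.hat <- solve(R, b)                     # Yule-Walker estimates phi1..phi4

c0 <- acf(x0, type = "covariance", plot = FALSE)$acf[1]  # c(0), lag-zero autocovariance
var.hat <- c0 * (1 - sum(r * phi.hat))     # estimated noise variance
phi0.hat <- mu * (1 - sum(phi.hat))        # estimated constant

cat("phi0.hat:", phi0.hat, "\nphi.hat:", phi.hat, "\nvar.hat:", var.hat, "\n")
```

With the lecture's data this should reproduce the reported values, var.hat near 0.0141 and phi0.hat near 0.0797, up to small numerical differences.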
The log return Rt obeys an autoregressive process of order 4, and this is the fitted model, with our constant phi0 hat and coefficients phi1 hat, phi2 hat, phi3 hat, and phi4 hat. And Zt, which is the noise, is normally distributed with mean 0 and variance var.hat, the estimate of the variance being 0.0141.

So what have we learned? We have learned how to fit an AR(4) model, not to the Johnson & Johnson quarterly earnings themselves, but to their log returns. The data originally comes from the astsa package, and we did the fitting by using the Yule-Walker equations in matrix form.
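If you want a quick sanity check on the hand-computed estimates, base R's ar.yw fits an AR model by Yule-Walker directly (it demeans the series by default; its innovation-variance estimate uses a slightly different finite-sample convention, so expect values close to, but not identical to, the matrix computation above):

```r
x <- diff(log(JohnsonJohnson))            # log returns

# Yule-Walker fit of a fixed-order AR(4); aic = FALSE keeps the order at 4
fit <- ar.yw(x, order.max = 4, aic = FALSE)

fit$ar        # compare with phi1 hat, ..., phi4 hat
fit$var.pred  # compare with var.hat (approximately 0.0141)
```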