In this optional lecture, I will talk about mean-square convergence. The objectives are to learn the definition of mean-square convergence and to formulate a necessary and sufficient condition for invertibility of an MA(1) process.

So, let's first define what mean-square convergence is. We have a stochastic process, right? A sequence of random variables X₁, X₂, …, and I'd like to say these random variables converge to some common random variable, call it X. But what do we mean by this convergence if we have random variables? Well, there are a few definitions of convergence for random variables; the one we're going to concentrate on is mean-square convergence. In other words, we're going to say Xₙ converges to some random variable X as n increases if we look at their difference, square it, and take the expectation: E[(Xₙ − X)²]. This is the mean (the expectation) of the square, so it is the "mean square" — some number. If this number goes to zero as n increases, meaning the expectation of the squared difference gets smaller and smaller, then we say Xₙ converges to X in the mean-square sense.

In a previous lecture, we inverted the MA(1) model X_t = Z_t + β Z_{t−1} into an AR(∞) model, writing Z_t as the infinite sum Z_t = Σ_{k=0}^∞ (−β)^k X_{t−k}. What we would like to do now is make sure this right-hand side is convergent in the mean-square sense. How do we do that? We take the partial sums of this infinite series and show that they converge to Z_t in the mean-square sense.

Let's recall the autocovariance function of the MA(1) process: it is zero after lag 1. At lag 0 it is γ(0) = (1 + β²)σ²; at lag 1 it is γ(1) = βσ²; and for negative lags, since the autocovariance is an even function, γ(−k) = γ(k). We're going to use these two quantities, γ(0) and γ(1), below.
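To make the definition concrete, here is a minimal numerical sketch. The sequence below is my own toy example, not from the lecture: Xₙ = X + εₙ/n with standard normal X and εₙ, so E[(Xₙ − X)²] = 1/n², which goes to zero, and Xₙ → X in mean square.

```python
import numpy as np

# Toy illustration of mean-square convergence (illustrative choice, not
# from the lecture): X_n = X + eps_n / n, with X, eps_n ~ N(0, 1).
# Then E[(X_n - X)^2] = E[eps_n^2] / n^2 = 1 / n^2 -> 0 as n grows.

rng = np.random.default_rng(0)
reps = 200_000                      # Monte Carlo replications

def ms_error(n: int) -> float:
    """Estimate E[(X_n - X)^2] by simulation."""
    eps = rng.standard_normal(reps)
    diff = eps / n                  # X_n - X = eps_n / n
    return float(np.mean(diff ** 2))

for n in (1, 2, 4, 8):
    print(n, ms_error(n))           # estimates close to 1/n^2, shrinking to 0
```

Running this shows the estimated mean square dropping roughly like 1/n², which is exactly what the definition requires.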
Okay, let's find the values of β for which the partial sum converges. Notice there is an n here now: we cut the infinite sum off at n, giving the partial sum S_n = Σ_{k=0}^n (−β)^k X_{t−k}, and we have to make sure S_n converges to Z_t as n increases, in the mean-square sense. In other words, we take the difference S_n − Z_t, square it, and take its expectation — the mean square — and we should find the βs for which E[(S_n − Z_t)²] drops to zero as n gets larger and larger.

So we have to do some analytical work here; let's go slowly. First we take the square. Think of S_n as one big term and Z_t as the other term: by the usual (a − b)² formula we get the square of the first term, plus the square of the second term, minus two times their product.

Next we have to expand S_n², the square of a sum. The square of a sum is the sum of the squares of each term, plus two times the pairwise products. One thing to note here is that among the pairwise products we only need to keep adjacent pairs X_{t−k} X_{t−k−1}, with k going from 0 to n − 1, because the autocovariance function is zero after lag 1 — all cross-terms two or more lags apart have expectation zero. And if you multiply the coefficients (−β)^k (−β)^{k+1}, you get an odd power of −β, namely −β^{2k+1}.

In the cross term E[S_n Z_t], Z_t is uncorrelated with all of the X_{t−k} except the first one, X_t, and E[Z_t²] = σ². Since expectation is a linear operator, we can take it inside the sums: each E[X_{t−k}²] is the same for every k — it is the variance, the autocovariance at lag 0 — multiplied by β^{2k}; and for the adjacent pairs, we take the expectation inside and get the expectation of each product E[X_{t−k} X_{t−k−1}].
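The expansion just described can be restated compactly (this is only a restatement of the steps above; S_n denotes the partial sum of the inversion):

```latex
\begin{align}
S_n &= \sum_{k=0}^{n} (-\beta)^k X_{t-k},\\
E\big[(S_n - Z_t)^2\big] &= E[S_n^2] \;-\; 2\,E[S_n Z_t] \;+\; E[Z_t^2],\\
E[S_n^2] &= \gamma(0)\sum_{k=0}^{n}\beta^{2k} \;-\; 2\beta\,\gamma(1)\sum_{k=0}^{n-1}\beta^{2k},\\
E[S_n Z_t] &= E[X_t Z_t] = \sigma^2, \qquad E[Z_t^2] = \sigma^2,
\end{align}
```

where the only surviving cross-covariances are the adjacent ones, γ(1), since γ(k) = 0 for |k| ≥ 2.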
For the cross term, we can put X_t back into the game: X_t = Z_t + β Z_{t−1}, so Z_t X_t = Z_t² + β Z_t Z_{t−1}. Now, the expectation E[X_{t−k}²] is literally γ(0), so we can pull it out. The expectation E[X_{t−k} X_{t−k−1}] is literally γ(1), so we can pull that out too. The expectation E[Z_t²] is σ², so the cross term contributes −2σ²; and Z_t and Z_{t−1} are uncorrelated, so the expectation of their product drops to zero. Combining the −2σ² with the +σ² from E[Z_t²], we are left with −σ².

Now put γ(0) back in, which is (1 + β²)σ², and γ(1), which is βσ², and simplify the expression. A lot of terms cancel, and we obtain that the expectation of the squared difference — the mean square — is E[(S_n − Z_t)²] = σ² β^{2n+2}, where n is the index of the last element in the partial sum.

So what do we want? We want this mean square to go to zero as n gets larger. In other words, the expression we calculated, σ² β^{2n+2}, should drop to zero as n grows. Since σ is constant, this means β^{2n+2} must go to zero, which happens exactly when the absolute value of β is less than one. The conclusion is that we can do this inversion — we can invert the MA(1) process into an AR(∞) process — but we have to make sure the series is convergent, and that convergence holds exactly when |β| < 1. Now remember, |β| < 1 means |−1/β| > 1, and z = −1/β is the zero of the polynomial 1 + βz. So the zero of this polynomial literally lies outside the unit circle. So, what have you learned?
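The closed-form result can be checked by simulation. The sketch below (β, σ, and the sample sizes are my illustrative choices, not from the lecture) simulates the MA(1) process, forms the truncated inversion S_n, and compares the Monte Carlo estimate of E[(S_n − Z_t)²] against σ² β^{2n+2}:

```python
import numpy as np

# Check E[(S_n - Z_t)^2] = sigma^2 * beta^(2n+2) by simulating the MA(1)
# process X_t = Z_t + beta * Z_{t-1} and truncating the AR(infinity)
# inversion Z_t = sum_k (-beta)^k X_{t-k} at lag n.
# beta, sigma, reps, n are illustrative choices.

rng = np.random.default_rng(1)
beta, sigma = 0.6, 1.0
reps, n = 100_000, 5

# reps independent stretches of white noise Z_{t-n-1}, ..., Z_t
Z = sigma * rng.standard_normal((reps, n + 2))
X = Z[:, 1:] + beta * Z[:, :-1]     # columns X_{t-n}, ..., X_t (oldest first)

# partial sum S_n = sum_{k=0}^{n} (-beta)^k X_{t-k}
coefs = (-beta) ** np.arange(n + 1)
S = X[:, ::-1] @ coefs              # reverse columns so column k is X_{t-k}

ms = np.mean((S - Z[:, -1]) ** 2)   # Monte Carlo estimate of the mean square
theory = sigma**2 * beta ** (2 * n + 2)
print(ms, theory)                   # the two numbers should agree closely
```

In fact, the telescoping in the partial sum shows S_n − Z_t = β(−β)ⁿ Z_{t−n−1} exactly, which is why the mean square is exactly σ² β^{2n+2} and why |β| < 1 is both necessary and sufficient.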
You have learned the definition of mean-square convergence, and you have learned the necessary and sufficient condition for invertibility of an MA(1) process.