0:00

Let's talk a little bit about multivariate variances and covariance.

So we're going to define, for

an n by 1 random vector X, the variance of the random vector X

to be the expected value of the outer product of X minus mu

with itself, so where

here mu is the vector expected value of X, that is, mu equals the expected value of X.

0:41

Okay. So the first diagonal entry of this matrix is the expected value of X 1 minus mu 1 squared, and

that's just the ordinary variance of the first element of the vector.

The second diagonal entry of this matrix is

just the expected value of X 2 minus mu 2 squared.

The first off-diagonal element of this matrix, either above the diagonal or

below the diagonal, is just the expected value of the product of X 1 minus mu 1

and X 2 minus mu 2, and

that is exactly the covariance between X 1 and X 2.

So, the ijth element of this matrix is the covariance

between the ith element of the vector X and the jth element of the vector X.

So this quantity is called the variance-covariance matrix.
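As a quick numerical sketch of this definition (using NumPy and made-up simulated data, which the lecture itself does not use), we can estimate the variance-covariance matrix by averaging the outer products of the centered vectors and check that its diagonal holds the ordinary component variances:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 observations of a hypothetical 3-dimensional random vector X
X = rng.normal(size=(100, 3))

mu = X.mean(axis=0)                      # vector mean, estimating E[X]
centered = X - mu                        # each row is x - mu
# Var(X) = E[(X - mu)(X - mu)'] estimated by averaging outer products
S = centered.T @ centered / X.shape[0]

# diagonal entries are the ordinary variances of each component
print(np.allclose(np.diag(S), X.var(axis=0)))   # True
# (i, j) entry is cov(X_i, X_j), so the matrix is symmetric
print(np.allclose(S, S.T))                      # True
```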

And just like the variance calculation for

1:36

univariate random variables has a shortcut formula, the variance calculation for

multivariate random variables also has a shortcut formula.

So the variance of X is the expected value of (X - mu)(X - mu) transpose.

So let's use our rules: that's the expected value of

X X transpose minus X mu transpose minus mu X

transpose plus mu mu transpose.

So that's equal to the expected value of X X transpose, plus some other terms.

For those, this quantity, mu, is not random,

so we can pull it out of the expected value.

And the expected value is a linear operator, so it moves across these sums.

So we can write the middle terms as minus the expected value of X times mu

transpose, minus mu times the expected value of X transpose.

And then mu mu transpose has nothing random in it, so that stays mu mu transpose.

But the expected value of X times mu transpose is just mu mu transpose, because remember mu is defined as the expected value

of X.

And mu times the expected value of X transpose is just mu mu transpose again, so

we get minus mu mu transpose, minus mu mu transpose, plus mu mu transpose.

So we get that the shortcut formula is the expected value of the outer product of

the X's minus the outer product of the expected value of the X's, okay?

So the variance of X is the expected value of X X transpose minus mu mu transpose.

So that's a simple shortcut formula.
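To illustrate the shortcut numerically (again a sketch with NumPy and simulated data, not part of the lecture), the sample version of the direct definition and the sample version of E[X X'] minus mu mu transpose agree exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # rows are draws of the vector X

mu = X.mean(axis=0)
# direct definition: average of the outer products (x - mu)(x - mu)'
direct = (X - mu).T @ (X - mu) / X.shape[0]
# shortcut: E[X X'] - mu mu', in sample form
shortcut = X.T @ X / X.shape[0] - np.outer(mu, mu)

print(np.allclose(direct, shortcut))     # True
```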

The variance has nice properties, not unlike the mean.

It would be nice if the variance was a linear operator, but it's not.

3:28

The variance of a sum is the sum of the variances only when the vectors are independent, or uncorrelated at least.

So, what we can say is, first of all, that

shifts have no effect on the variance.

So if we take the variance of X

shifted by a constant vector b, that's just the variance of X again.

Just like in the univariate case, of course.
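A minimal sketch of this shift invariance (hypothetical NumPy example; here I use np.cov, which expects variables in rows, hence the transposes):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))            # simulated draws of a 3-vector
b = np.array([5.0, -2.0, 100.0])         # arbitrary constant shift

# the sample variance-covariance matrix is unchanged by adding a constant vector
print(np.allclose(np.cov(X.T), np.cov((X + b).T)))   # True
```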

3:59

So the variance doesn't change if we shift. And then another important property is

the variance of A times X: if we have a matrix that we'd like to pull

out of a variance, that is equal to A, variance of X,

A transpose.

So when we pull a matrix or vector out of a variance, it sandwiches the variance.

And so you get, A is the bread, and then the variance of X part is the meat.

So it sort of sandwiches the variance, and

when you pull it out it has to go on both sides, as A and A transpose.
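The sandwich rule Var(AX) = A Var(X) A transpose can also be checked numerically (a sketch with NumPy and simulated data; the matrix A here is made up):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 3))            # simulated draws of a 3-vector
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])         # maps 3-vectors to 2-vectors

Y = X @ A.T                              # each row is A x
# Var(AX) = A Var(X) A' -- A "sandwiches" the variance of X
print(np.allclose(np.cov(Y.T), A @ np.cov(X.T) @ A.T))   # True
```

Note the sandwich is forced by dimensions alone: if X is 3 by 1 and A is 2 by 3, then Var(AX) must be 2 by 2, which is exactly what A (2 by 3) times Var(X) (3 by 3) times A transpose (3 by 2) produces.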

Looking back at the definition one more time, I also want to point out that the

4:42

variance-covariance matrix is clearly symmetric.

If you were to take, for example, the transpose of this matrix,

remember that transposes move inside expected values,

you'll find that you get the expected value of the same exact thing.

So it is symmetric, which is a good thing, because we know that, for

example, the ijth off-diagonal covariance of X i and

X j is equal to the covariance of X j and X i; the bivariate

covariance operator is symmetric in its arguments, so the matrix has to be symmetric.

So it's nice that we can see that property very directly.
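A quick numerical check of that symmetry (sketch only, simulated data with NumPy's np.cov):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 4))            # simulated draws of a 4-vector
S = np.cov(X.T)                          # sample variance-covariance matrix

# cov(X_i, X_j) equals cov(X_j, X_i), so the matrix equals its transpose
print(np.allclose(S, S.T))               # True
```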

So those are some of the key things to note about multivariate variances, or

variances of vectors.

And we'll use these facts a lot throughout the class.

So it would be nice to commit them to memory, especially this formula right here

about pulling a matrix out of a variance calculation.

That's quite useful.