0:01

Let's talk about the conditional distribution associated with the normal distribution. So let me let my vector X be equal to (X1, X2). And without loss of generality, we're just going to assume that we want to know what the conditional distribution of X1 given X2 is.

0:25

So let me write out the answer first. First of all, I'm working under the assumption that, as in almost all of the cases from this section of the class, X is multivariate normally distributed. So I'm going to write out the variance-covariance matrix this way: sigma 1 1, sigma 1 2, sigma 1 2 transposed, and sigma 2 2.

So what I'd like to know is: what's the distribution of X1 given X2? And we know that it's normally distributed, because we've mentioned earlier that all the conditional distributions are normal. But what's the mean and variance? So let me just write out the answers first, and then I'll show you a nifty derivation of this. I'll do the derivation just because I like it. So the expected value of X1, given that X2 took the particular value little x2, is exactly equal to mu 1, okay?

It would be weird if that weren't there. Plus sigma 1 2 sigma 2 2 inverse times little x2 minus mu 2. And then the variance of X1, given that X2 equals little x2, is equal to sigma 1 1, which is what we would hope it would be, because that would be easy, just the same as the marginal variance. But of course, there's got to be other stuff: minus sigma 1 2 sigma 2 2 inverse times, well, you could either write it as sigma 2 1 or sigma 1 2 transpose. I'd like to write it as sigma 1 2 transpose, okay?
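As a quick numerical sanity check (not from the lecture; the bivariate setup and the numbers here are made up for illustration), you can evaluate these two formulas with NumPy and compare them against the familiar scalar bivariate-normal formulas:

```python
import numpy as np

# Hypothetical bivariate example: X = (X1, X2) with made-up mean and
# variance-covariance matrix, partitioned into scalar blocks.
mu1, mu2 = 1.0, -2.0
s11, s12, s22 = 4.0, 1.5, 9.0   # sigma_11, sigma_12, sigma_22

Sigma11 = np.array([[s11]])
Sigma12 = np.array([[s12]])
Sigma22 = np.array([[s22]])

x2 = 0.5  # an observed value of X2

# E[X1 | X2 = x2] = mu_1 + Sigma_12 Sigma_22^{-1} (x2 - mu_2)
cond_mean = mu1 + (Sigma12 @ np.linalg.inv(Sigma22) @ np.array([x2 - mu2]))[0]
# Var(X1 | X2 = x2) = Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_12^T
cond_var = (Sigma11 - Sigma12 @ np.linalg.inv(Sigma22) @ Sigma12.T)[0, 0]

# In the bivariate case these reduce to the textbook scalar formulas
# with correlation rho = sigma_12 / sqrt(sigma_11 * sigma_22).
rho = s12 / np.sqrt(s11 * s22)
assert np.isclose(cond_mean, mu1 + rho * np.sqrt(s11 / s22) * (x2 - mu2))
assert np.isclose(cond_var, s11 * (1 - rho**2))
```

Note that the conditional variance does not depend on the observed x2, only on the blocks of sigma.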

Let's come up with a derivation of this.

2:25

And there's a really clever one. I'm not sure how someone figured this out, but I find it really nice. In fact it was a student in my class who showed me this, because I was deriving it using inverses of partitioned matrices, which is a lengthy bookkeeping procedure, though not hard. And she showed me there's a much easier way to do it. What she said was: define Z equal to X1 plus A X2,

where A is equal to negative sigma 1 2 sigma 2 2 inverse. And then what I would contend is that the covariance between Z and X2 is equal to 0. Let's work that out really quick.

So the covariance of Z and X2 is nothing other than the covariance of X1 plus A X2 with X2, which is equal to the covariance of X1 and X2, plus A times the covariance of X2 with X2, okay? So that's equal to: the covariance of X1 and X2 is sigma 1 2; plus A, which I have defined as negative sigma 1 2 sigma 2 2 inverse; times the covariance of X2 with itself, which is sigma 2 2, okay? So I think you can pretty much see at this point that this equals 0.
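This cancellation can also be verified numerically. Here is a sketch with made-up blocks, where X1 is 1-dimensional and X2 is 2-dimensional, so sigma 1 2 is 1-by-2:

```python
import numpy as np

# Made-up covariance blocks (X1 scalar, X2 two-dimensional).
Sigma12 = np.array([[0.8, -0.3]])      # Cov(X1, X2), shape 1x2
Sigma22 = np.array([[2.0, 0.5],
                    [0.5, 1.0]])       # Var(X2),    shape 2x2

# A = -Sigma_12 Sigma_22^{-1}, as defined in the lecture.
A = -Sigma12 @ np.linalg.inv(Sigma22)

# Cov(Z, X2) = Cov(X1, X2) + A Var(X2) = Sigma_12 + A Sigma_22
cov_Z_X2 = Sigma12 + A @ Sigma22

assert np.allclose(cov_Z_X2, 0.0)      # Z is uncorrelated with X2
```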

So Z is independent of X2; since Z and X2 are jointly normal, zero covariance actually implies independence. So the distribution of Z given X2 is just the distribution of Z, disregarding X2, because Z is independent of X2. So let's calculate, and we know that Z is normal, because it's a linear combination of normals. So let's calculate the expected value of Z.

5:07

The expected value of Z is the expected value of X1 plus A X2, which is mu 1 plus A mu 2. And the variance of Z is the variance of X1, plus the cross terms A times the covariance of X2 and X1 and the covariance of X1 and X2 times A transpose, plus A times the variance of X2 times A transpose, where the variance of X2 is sigma 2 2. Now let's fill some of those in: the expected value is mu 1 minus sigma 1 2 sigma 2 2 inverse mu 2. And in that last variance term the negative signs cancel out, and the sigma 2 2 inverse sigma 2 2 sigma 2 2 inverse collapses, so it's just going to be sigma 1 2 sigma 2 2 inverse sigma 1 2 transpose. Put together with the cross terms, the variance of Z works out to sigma 1 1 minus sigma 1 2 sigma 2 2 inverse sigma 1 2 transpose, okay?
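Both moments of Z can be sanity-checked by simulation (again with made-up numbers, not from the lecture), drawing from a bivariate normal and forming Z = X1 + A X2 directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up bivariate example: X1 and X2 both scalar.
mu = np.array([1.0, -2.0])
Sigma = np.array([[4.0, 1.5],
                  [1.5, 9.0]])
a = -Sigma[0, 1] / Sigma[1, 1]          # A = -sigma_12 sigma_22^{-1}

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = X[:, 0] + a * X[:, 1]               # Z = X1 + A X2

# E[Z] = mu_1 + A mu_2, and
# Var(Z) = sigma_11 - sigma_12 sigma_22^{-1} sigma_12 transpose.
assert np.isclose(Z.mean(), mu[0] + a * mu[1], atol=0.05)
assert np.isclose(Z.var(), Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1], atol=0.1)
```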

5:51

Okay, so let's think about this. The expected value of Z, given X2 equals little x2, is equal to the expected value of Z, but it's also equal to,

6:13

mu 1 minus sigma 1 2 sigma 2 2 inverse mu 2, okay?

So we have that much so far. But we can also write out the expected value of Z using the definition of Z, which we've written up here. And that works out so that we can write this as the expected value of X1 plus A X2, given X2 equals little x2, okay?

So we can then write this as the expected value of X1 given X2, plus A times the expected value of X2 given X2. And the expected value of X2 given X2 is just little x2.

Okay, so now take this equation and this equation, and we can solve for the expected value of X1 given X2.
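In symbols, the two equations being combined here are the following (with A = -Sigma 1 2 Sigma 2 2 inverse as defined above):

```latex
\begin{align*}
\mathbb{E}[Z \mid X_2 = x_2] &= \mathbb{E}[Z] = \mu_1 + A\mu_2
   && \text{($Z$ independent of $X_2$)}\\
\mathbb{E}[Z \mid X_2 = x_2] &= \mathbb{E}[X_1 \mid X_2 = x_2] + A x_2
   && \text{(definition of $Z$)}\\[4pt]
\Rightarrow\quad
\mathbb{E}[X_1 \mid X_2 = x_2] &= \mu_1 + A\mu_2 - A x_2
   = \mu_1 - A(x_2 - \mu_2)
   = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2).
\end{align*}
```

Substituting the definition of A in the last line recovers exactly the conditional-mean formula stated at the start.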

7:42

And then you can use the same technique to derive the variance. So the variance of Z, given X2 equals little x2, is exactly equal to the variance of Z, since Z is independent of X2. And we see right up here that the variance of Z is sigma 1 1 minus sigma 1 2 sigma 2 2 inverse sigma 1 2 transpose.

But then we also know that it's the variance of X1 plus A X2, given X2, because we're simply rewriting that statement with the definition of Z. And by the rules of variances, we can then apply the variance formula, expand, and solve again. And I'm going to ask you to complete the steps for homework.
