0:00

>> We are now going to discuss the multivariate normal distribution. The multivariate normal distribution is a very important distribution in finance. It crops up in many different applications including, for example, mean-variance analysis and asset allocation, as well as geometric Brownian motion and the Black-Scholes model. So we say an n-dimensional vector, X, is multivariate normal with mean vector mu and covariance matrix Sigma if the PDF of X is given to us by this quantity here. Okay, so the PDF is equal to 1 over (2 pi) to the power of n over 2 times the determinant of the covariance matrix raised to the power of a half, all times the exponential of this quantity up here. And we write that X is MVN_n(mu, Sigma). The little subscript n here denotes the dimensionality of the vector X.
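As a concrete illustration (this code is a sketch, not part of the lecture), the density formula just described can be evaluated directly with NumPy and checked against SciPy's built-in implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    """Density of MVN_n(mu, Sigma) at x, computed straight from the formula."""
    n = len(mu)
    diff = x - mu
    # Normalizing constant: (2 pi)^(n/2) times det(Sigma)^(1/2)
    norm_const = (2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5
    # Quadratic form (x - mu)' Sigma^{-1} (x - mu) in the exponent
    quad = diff @ np.linalg.solve(Sigma, diff)
    return np.exp(-0.5 * quad) / norm_const

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.3, 0.7])
print(mvn_pdf(x, mu, Sigma))                  # hand-rolled formula
print(multivariate_normal(mu, Sigma).pdf(x))  # SciPy's implementation
```

The two printed values agree, confirming the formula as stated.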

The standard multivariate normal has mean vector mu equal to 0 and variance-covariance matrix equal to the n-by-n identity matrix. And in this case, the X_i's are independent. We can actually see that, because in this case we can write the joint PDF of X as being equal to the product, i equals 1 to n, of 1 over root 2 pi times e to the minus a half x_i squared. And that follows just from this line here, because mu equals zero, so this term disappears, and Sigma is just the identity. So, in fact, you just end up with a sum of x_i squared divided by 2. So, as we saw in an earlier module on multivariate distributions, if the joint PDF factorizes into a product of marginal PDFs, then the random variables are independent. Okay.
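This factorization is easy to verify numerically. The sketch below (using illustrative values, not from the lecture) compares the joint standard multivariate normal density at a point with the product of the univariate standard normal densities of its coordinates:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Standard multivariate normal: mu = 0, Sigma = identity.
n = 3
x = np.array([0.5, -1.2, 0.8])

joint = multivariate_normal(np.zeros(n), np.eye(n)).pdf(x)
# Product over i of (1 / sqrt(2 pi)) * e^{-x_i^2 / 2}
product_of_marginals = np.prod(norm.pdf(x))

print(joint, product_of_marginals)  # the two values coincide
```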

The moment generating function of X is given to us by this quantity here. So phi subscript X of s is actually a function of s, okay, this vector s, and it's the expected value of e to the s transpose X. Okay, and this is equal to e to the s transpose mu plus a half s transpose Sigma s. Now, you're probably familiar with this in the one-dimensional case, which we'll just recover here. Suppose X is really just a scalar random variable. Then the moment generating function of X is equal to the expected value of e to the sX, and it's equal to e to the s mu plus a half sigma squared s squared. And this is the case where X is normal with mean mu and variance sigma squared. So this is the moment generating function of the scalar normal random variable. This is its generalization to a multivariate normal random vector, X.
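One way to convince yourself of the multivariate formula is by simulation. This sketch (with illustrative parameter values) compares the closed-form MGF, e to the s transpose mu plus a half s transpose Sigma s, with a Monte Carlo estimate of the expectation of e to the s transpose X:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.1, -0.2])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
s = np.array([0.2, 0.4])

# Closed form: phi_X(s) = exp(s' mu + (1/2) s' Sigma s)
closed_form = np.exp(s @ mu + 0.5 * s @ Sigma @ s)

# Monte Carlo estimate of E[exp(s' X)] from a large sample
X = rng.multivariate_normal(mu, Sigma, size=1_000_000)
mc_estimate = np.exp(X @ s).mean()

print(closed_form, mc_estimate)  # agree to within sampling error
```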

Okay. So, recall the partition we saw in an earlier module. We can break X into two blocks of vectors, X_1 and X_2, as such. We can extend this notation naturally, so we can write mu as (mu_1, mu_2) and Sigma as the block matrix (Sigma_11, Sigma_12; Sigma_21, Sigma_22), and they are the mean vector and covariance matrix of (X_1, X_2). So we have the following results on the marginal and conditional distributions of X. The marginal distribution of a multivariate normal random variable is itself normal. In particular, the marginal distribution of X_i is multivariate normal with mean vector mu_i and variance-covariance matrix Sigma_ii. So, for example, X_1 is multivariate normal, in fact it has k components, with parameters mu_1 and Sigma_11. And similarly, X_2 is multivariate normal with parameters mu_2 and Sigma_22, and this has n minus k components.
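In code, taking a marginal really is just reading off the corresponding blocks of mu and Sigma. This sketch (with made-up numbers for illustration) slices out the blocks for X_1 and sanity-checks them against the sample mean and covariance of simulated draws:

```python
import numpy as np

rng = np.random.default_rng(42)

# Joint distribution of X = (X1, X2), with X1 the first k components.
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])
k = 2

# Marginal of X1 is MVN(mu_1, Sigma_11): just read off the blocks.
mu1, Sigma11 = mu[:k], Sigma[:k, :k]

# Check by simulation: sample the joint, keep only the first k coordinates.
X = rng.multivariate_normal(mu, Sigma, size=500_000)
X1 = X[:, :k]
print(mu1, X1.mean(axis=0))       # sample mean close to mu_1
print(Sigma11)
print(np.cov(X1, rowvar=False))   # sample covariance close to Sigma_11
```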

And we have here an example of the bivariate normal density function, where the correlation between X_1 and X_2 is 80%. If we rotate the surface, you can see that, with a correlation of 80 percent, large values of X_1 are associated with large values of X_2, and small values of X_1 are associated with small values of X_2.

So we can also talk about the conditional distribution, assuming Sigma is positive definite. The conditional distribution of the multivariate normal distribution is also multivariate normal. In particular, X_2, given that X_1 equals little x_1, is multivariate normal with mean vector mu 2.1 and variance-covariance matrix Sigma 2.1, where mu 2.1 is given to us by this expression here, and Sigma 2.1 is given to us by this expression here.

And we can get some intuition for this result by just imagining the following situation. So we've got X_1 down here, we have X_2 over here, and imagine we plot some points from X_1 and X_2; if you like, we generate X_1 and X_2 from some distribution, from the bivariate normal distribution in particular. So the mean of X_1 is, let's say, mu_1, and the mean of X_2 is mu_2. Okay. Now what if I tell you that we observed that X_1 was equal to this little value x_1? Well, if that's the case, then you can come up here and you'll see that X_2 is more likely than not to be in this region; I'll circle it right here. So, in fact, you would expect the conditional mean, given X_1 equals little x_1, to be maybe somewhere around here, and this would be mu 2.1, okay? Likewise, you can see, just from this picture, that the variance of X_2 would have shrunk, because knowing something about X_1 would give us information about X_2, and that would decrease our uncertainty about the location of X_2. And in fact, this expression here tells us how to actually do that mathematically. So those are the conditional distributions: a conditional distribution of a multivariate normal is again multivariate normal.

We also mention that the linear combination, AX plus a, of a multivariate normal random vector X is normally distributed with mean A times the expected value of X plus little a, and covariance matrix A times the covariance of X times A transpose.
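This closure under linear transformations is also easy to check numerically. The sketch below (with illustrative values for A, a, mu, and Sigma) computes the predicted mean and covariance of Y = AX + a and compares them to the sample mean and covariance of simulated draws:

```python
import numpy as np

rng = np.random.default_rng(7)

mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 2.0]])
A = np.array([[1.0, 1.0],
              [0.5, -1.0]])
a = np.array([3.0, 0.0])

# Y = A X + a is normal with mean A mu + a and covariance A Sigma A'.
mean_Y = A @ mu + a
cov_Y = A @ Sigma @ A.T

# Check by simulation.
X = rng.multivariate_normal(mu, Sigma, size=500_000)
Y = X @ A.T + a
print(mean_Y, Y.mean(axis=0))      # sample mean close to A mu + a
print(cov_Y)
print(np.cov(Y, rowvar=False))     # sample covariance close to A Sigma A'
```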
