In the last lecture, we talked about what a multivariate expected value is: if X is a random vector or matrix, then the expected value E[X] is just the associated vector or matrix of componentwise expected values, so the i-th component of E[X] is E[Xi]. We also talked about the distinction between how multivariate and univariate expected values are calculated.

So let's now talk about some of the properties of expected values in this setting. If X is a random vector or matrix and A is a conformable matrix or vector of constants, then the expected value is linear in the sense that we can pull the constant matrix out: E[AX] = A E[X]. And if we have two random vectors, say X and Y, then the expected value of their sum is the sum of the expected values: E[X + Y] = E[X] + E[Y]. Another useful property that comes up with matrices and vectors is that the expected value of a transpose is the transpose of the expected value: E[X'] = (E[X])'. And the final thing I would say is that, because the trace is a linear operator, the trace of the expected value of a random matrix is the expected value of the trace: tr(E[X]) = E[tr(X)].

All of these just go to show that the expected value, which we know from the univariate case is a linear operator, carries all of those linear-operator properties over to the multivariate case. We're not going to go through the proofs, because the point of this class isn't to recreate multivariate mathematical statistics. These properties are pretty much all you really need to know about multivariate expected values for this class. In the next lecture, we're going to talk about the variance: multivariate variances and covariances.
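The four properties above are easy to check empirically with a Monte Carlo sketch, not from the lecture itself: draw many samples, take sample means, and compare both sides of each identity. The variable names, dimensions, and distributions below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # number of Monte Carlo draws

# X and Y: n draws of random 3-vectors with known means
X = rng.normal(loc=[1.0, 2.0, 3.0], size=(n, 3))
Y = rng.normal(loc=[4.0, 5.0, 6.0], size=(n, 3))
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])  # constant 2x3 matrix

EX = X.mean(axis=0)  # sample estimate of E[X]
EY = Y.mean(axis=0)  # sample estimate of E[Y]

# 1. Linearity: E[AX] = A E[X]
assert np.allclose((X @ A.T).mean(axis=0), A @ EX, atol=0.05)

# 2. Additivity: E[X + Y] = E[X] + E[Y]
assert np.allclose((X + Y).mean(axis=0), EX + EY, atol=0.05)

# M: n draws of a random 2x2 matrix, for the transpose and trace properties
M = rng.normal(size=(n, 2, 2))

# 3. Transpose: E[M'] = (E[M])'
assert np.allclose(np.transpose(M, (0, 2, 1)).mean(axis=0),
                   M.mean(axis=0).T, atol=0.05)

# 4. Trace: E[tr(M)] = tr(E[M])
assert np.allclose(np.trace(M, axis1=1, axis2=2).mean(),
                   np.trace(M.mean(axis=0)), atol=0.05)
```

The loose tolerance (0.05) just accounts for Monte Carlo error in the sample means; with 200,000 draws each estimate is accurate to a few thousandths.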