Properties of expected value and variance for continuous random variables. They are very similar to the corresponding properties for discrete random variables, so let us discuss them briefly. First of all, let us define variance for continuous random variables. In fact, the formula that defines variance for a continuous random variable is exactly the same as for discrete random variables: the variance of X is the expected value of (X minus the expected value of X), squared, that is Var X = E[(X − EX)²]. Of course, if we know how to calculate expected values, then we can find the expected value of this random variable as well.

Now let us discuss the properties of expected value and variance. First, the expected value of a sum of two random variables is the sum of their expected values. Second, the expected value of cX is equal to c times the expected value of X, where c is a constant. These two properties together are called linearity of expected value: from the linear-algebraic point of view, expected value is a linear operator from random variables to numbers. These properties follow directly from the corresponding properties of the integral, so I will skip the proof here. By the way, the expected value of a constant is that constant. This is very similar to the corresponding property for discrete random variables, and the proof is obvious. It follows immediately that the variance of a constant is equal to zero.

It is also useful to discuss how to find the expected value of a random variable that is a function of another random variable. Let X be a random variable and Y = f(X), where f is some function from real numbers to real numbers. Then we can find the expected value of Y in terms of the probability density function of X in the following way: it is the integral over the whole real line of f(x) times p(x) dx, where p(x) is the probability density function of X.

Now let us discuss the geometric meaning of variance. Let us consider two probability density functions: this is the probability density function of the first random variable, and this is the probability density function of the second random variable.
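The formulas above can be checked numerically. As a sketch (the lecture contains no code; the exponential density with rate 1 is just an illustrative choice, not from the lecture), the following computes E[X], Var X = E[(X − EX)²], E[f(X)] via the integral formula, and a linearity check, all with numerical integration:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical example density: exponential with rate 1, support [0, +inf)
p = lambda x: np.exp(-x)

# Expected value: E[X] = integral of x * p(x) dx over the support
EX, _ = quad(lambda x: x * p(x), 0, np.inf)

# Variance by its definition: Var X = E[(X - EX)^2]
VarX, _ = quad(lambda x: (x - EX) ** 2 * p(x), 0, np.inf)

# E[f(X)] = integral of f(x) * p(x) dx, here with f(x) = x^2
EX2, _ = quad(lambda x: x ** 2 * p(x), 0, np.inf)

# Linearity check: E[3X + 5] should equal 3 * E[X] + 5
E_lin, _ = quad(lambda x: (3 * x + 5) * p(x), 0, np.inf)

print(EX)     # ≈ 1.0
print(VarX)   # ≈ 1.0
print(EX2)    # ≈ 2.0
print(E_lin)  # ≈ 8.0, matching 3 * 1 + 5
```

For this density the exact values are known (E[X] = 1, Var X = 1, E[X²] = 2), so the integrals can be verified against them.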
Which probability density function do you think defines the random variable with the larger variance? We can reasonably expect that the variance of Y is larger than the variance of X, because for X the expected value is somewhere here, and the probability of finding a value of X far from the expected value, for example here or here, is very small. But for Y we can draw the expected value somewhere in the middle, for example here, and we see that even far from the expected value there is still some non-negligible probability of getting that value of the random variable Y. So we can say that the random variable X is more compact, while Y is more spread out and has a wider probability density function. This gives us some intuition about the variances of these variables, so we can write, and this is actually correct, that the variance of Y is larger than the variance of X. We can make this conclusion because the scales of the graphs that show these two probability density functions are rather close to each other, so we can compare the densities in terms of their width. Now let us discuss how a probability density function changes when we transform the variable.
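The intuition above, that a wider density has a larger variance, can also be checked numerically. As a sketch (the two normal densities with standard deviations 0.5 and 2 are hypothetical choices standing in for the "compact" and "wide" curves drawn in the lecture):

```python
import numpy as np
from scipy.integrate import quad

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std. deviation sigma."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

p_X = lambda x: normal_pdf(x, 0.0, 0.5)  # "compact" density, like X in the lecture
p_Y = lambda x: normal_pdf(x, 0.0, 2.0)  # "wide" density, like Y in the lecture

def variance(p):
    # First E[X], then Var X = E[(X - EX)^2], integrating over the whole real line
    m, _ = quad(lambda x: x * p(x), -np.inf, np.inf)
    v, _ = quad(lambda x: (x - m) ** 2 * p(x), -np.inf, np.inf)
    return v

print(variance(p_X))  # ≈ 0.25 (= 0.5^2)
print(variance(p_Y))  # ≈ 4.0  (= 2^2), so the wider density has the larger variance
```

For a normal density the variance is exactly sigma squared, which is what the integrals recover: the wide curve's variance is sixteen times that of the compact one.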