We discussed discrete random variables, which take only a finite or countable number of values with non-zero probabilities, and continuous random variables, which have a probability density function. However, it is possible to consider random variables that are neither discrete nor continuous. Sometimes such random variables are useful for statistical modeling, so let us discuss them briefly.

We begin with a continuous random variable X distributed according to the standard normal distribution; its probability density function is the familiar bell curve. As you can see, this random variable can take both positive and negative values. Now let us introduce a new random variable that equals X whenever X is positive, and equals zero whenever X is negative. This new random variable can be defined in the following way: Y = max(0, X), the maximum of the two numbers 0 and X. This is an example of defining a new random variable as a function of a previous random variable. However, as we will see, not every function gives us a new variable that has a probability density function.

Indeed, let us ask: what is the probability that Y equals 0? Y equals 0 if and only if X is less than or equal to 0, so this probability equals the probability that X is less than or equal to 0. By the symmetry of the density this probability is one-half: if a probability density function is symmetric, then the probability of the left half is the same as the probability of the right half, and the two probabilities have to sum to one, so each half has probability one-half. Thus P(Y = 0) = 1/2. This means that the random variable Y cannot have a probability density function defined at the point 0.
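The claim P(Y = 0) = 1/2 is easy to check numerically. Below is a minimal Monte Carlo sketch (the sample size and seed are arbitrary choices, not from the lecture) that draws standard normal values, applies Y = max(0, X), and counts how often Y is exactly zero:

```python
import random

# Monte Carlo check that P(Y = 0) = 1/2 for Y = max(0, X),
# where X is a standard normal random variable.
random.seed(42)          # arbitrary seed for reproducibility
n = 100_000              # arbitrary sample size

samples = [max(0.0, random.gauss(0.0, 1.0)) for _ in range(n)]

# Every negative draw of X is clipped to exactly 0, so the
# fraction of zeros estimates P(X <= 0) = 1/2.
p_zero = sum(1 for y in samples if y == 0.0) / n
print(f"estimated P(Y = 0) = {p_zero:.3f}")  # close to 0.5
```

A point mass of probability 1/2 at a single value is exactly the discrete-like behavior that rules out a density at 0.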
Indeed, if we try to find the probability density function of Y at the point 0, we have to consider the limit, as Delta y tends to zero, of the probability that Y lies in the segment from zero to Delta y, divided by Delta y. The probability in the numerator is greater than or equal to the probability that Y equals 0, which is one-half. So the numerator is bounded from below by one-half, while the denominator tends to zero and is arbitrarily small. If you divide a value that is bounded from below by a value that is arbitrarily small, you get an arbitrarily large value. So this limit is infinity, and the probability density function is not defined at zero.

On the other hand, consider a segment on the positive semi-axis, say from x_0 to x_0 + Delta x with x_0 positive. There our random variable Y coincides with X, so the probability that Y lies in this segment is the same as the corresponding probability for X. It means that on the positive semi-axis we can define a probability density function for Y, and it is the same as the density of X: the right half of the standard normal density. In particular, the probability that Y equals any particular point greater than zero is zero, as it should be for random variables with a probability density function.

So we have a random variable of a different kind: at the point 0 it behaves like a discrete random variable, while for positive values it behaves like a continuous random variable. We cannot define a probability density function for Y. However, we can define a cumulative distribution function. For negative values of y, we see that Y cannot take values less than y, so for all negative y the value of this CDF equals 0.
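The divergence of the ratio P(0 <= Y <= Delta y) / Delta y can also be seen with exact values. Since Y is never negative, P(0 <= Y <= Delta y) = P(X <= Delta y) = Phi(Delta y), where Phi is the standard normal CDF, computable from the error function. This sketch (the helper name `phi_cdf` is my own, not from the lecture) prints the ratio for shrinking Delta y:

```python
import math

def phi_cdf(x):
    # Standard normal CDF via the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Since Y >= 0 always, P(0 <= Y <= dy) = P(Y <= dy) = P(X <= dy),
# and this probability includes the point mass P(Y = 0) = 1/2.
for dy in (0.1, 0.01, 0.001):
    ratio = phi_cdf(dy) / dy   # numerator >= 1/2, denominator shrinks
    print(f"dy = {dy:>6}: P(0 <= Y <= dy) / dy = {ratio:.1f}")
```

The ratio grows roughly like 1/(2 Delta y), confirming that the limit defining a density at 0 is infinite.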
The CDF is zero up to the point zero, but at zero it has a jump. Indeed, F_Y(0) equals the probability that Y is less than or equal to 0. As Y cannot take negative values, this is the same as the probability that Y equals 0, which, as we found before, equals one-half. So at zero the CDF jumps to the level one-half. After the jump, the CDF can be calculated in the same way. For example, for some point y_0 greater than zero, F_Y(y_0) equals the probability that Y is less than or equal to y_0, and this probability can be split into two: either Y equals 0, or Y is greater than 0 but less than or equal to y_0. The second probability is given by the probability density function, as the integral of the density from zero to y_0. So the corresponding CDF is flat at zero for negative y, jumps to one-half at zero, and then increases smoothly toward one. Thus, we see that we can define this somewhat unusual random variable by specifying its cumulative distribution function.
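Putting the pieces together, the CDF described above is F_Y(y) = 0 for y < 0 and F_Y(y) = Phi(y) for y >= 0, since for y >= 0 we have P(Y <= y) = P(X <= y). A minimal sketch (the function name `cdf_y` is my own) that evaluates it at a few points:

```python
import math

def cdf_y(y):
    # CDF of Y = max(0, X) for standard normal X:
    #   F_Y(y) = 0       for y < 0   (Y never takes negative values)
    #   F_Y(y) = Phi(y)  for y >= 0  (jump of size Phi(0) = 1/2 at zero)
    if y < 0:
        return 0.0
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

print(cdf_y(-1.0))  # 0.0: flat part on the negative semi-axis
print(cdf_y(0.0))   # 0.5: the jump at zero
print(cdf_y(1.0))   # about 0.841: one-half plus the integral of the density from 0 to 1
```

The jump of size 1/2 at zero encodes the discrete part of Y, and the smooth increase for y > 0 encodes its continuous part, so one CDF captures both behaviors at once.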