Continuing our discussion of probability distributions, I now want to discuss the central limit theorem. The central limit theorem, shown here, states that if we take many samples of N items from a larger population that has a normal distribution with a mean mu and a variance sigma squared, then the means of the samples, or the sample means, are themselves normally distributed with a standard deviation given by sigma divided by the square root of N, the number of items in each sample, and the mean value of the sample means is equal to the mean value of the larger population. To see how this arises, let's consider this example that I showed some time ago of the variations of velocity in a turbulent velocity field, where the vertical axis here is the speed in centimeters per second and the horizontal axis is the time in seconds. From this record, we can easily compute an average value, which turns out to be 5.82 centimeters per second, and a standard deviation sigma, which turns out to be approximately 0.53 centimeters per second, as shown here. But what if, instead, I computed the averages of smaller subsamples? For example, I take the first sequence of points here, say 20 values, and I compute an average value. Then I take another 20 values and get an average value here, and another one here. So I get a sequence of averages, which is shown by the red dots here. Similarly, I can compute an average of those averages, and that average, provided I take enough subsamples, turns out to be the same as the average of the larger population. The variation, though, is obviously much reduced: the standard deviation of those red dots about the mean line is much less than the standard deviation of the full record, and is given by this expression here, sigma divided by the square root of N. So this is how this situation arises.
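To make that concrete, here is a minimal sketch of the same experiment with synthetic data: it draws a long record from a normal distribution with the mean and standard deviation quoted above (5.82 and 0.53 centimeters per second, both taken from the velocity example), splits it into subsamples of 20 points, and checks that the standard deviation of the subsample means comes out close to sigma over root N. The specific counts (20 points per subsample, 10,000 subsamples) are illustrative choices, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 5.82, 0.53      # mean and std of the full velocity record (cm/s)
N = 20                      # points per subsample, as in the example
num_subsamples = 10_000     # many subsamples so the averages converge

# Draw one long synthetic "record" and split it into subsamples of N points.
record = rng.normal(mu, sigma, size=num_subsamples * N)
sample_means = record.reshape(num_subsamples, N).mean(axis=1)

# The mean of the subsample means matches the population mean,
# and their spread is reduced to roughly sigma / sqrt(N).
print(f"mean of sample means: {sample_means.mean():.3f}")
print(f"std of sample means:  {sample_means.std():.4f}")
print(f"sigma / sqrt(N):      {sigma / np.sqrt(N):.4f}")
```

With these numbers, sigma over root 20 is about 0.12 centimeters per second, much smaller than the 0.53 of the raw record, which is exactly the reduced scatter of the red dots.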
To show that in an example, let's suppose we have batches of concrete, manufactured by a factory, that contain random amounts of impurities, where the mean amount of impurities is 5.0 grams and the standard deviation is 1.5 grams. Suppose we take samples of 50 batches each. What is the probability that the average impurity in one of those samples of 50 batches is greater than 5.3 grams? Which of these probabilities is it? In this case, we'll assume that the central limit theorem applies to the subsamples, and therefore their mean value is also going to be 5.0 grams. Their standard deviation, though, is equal to the standard deviation of the population, 1.5, divided by the square root of the number of batches in each sample, the square root of 50, which is equal to 0.212 grams. Now we calculate a normalized parameter, z equals x minus mu over sigma. That is equal to 5.3, because we want to know the probability of being greater than 5.3, minus 5.0, the mean, divided by the standard deviation of the sample means, which we've just computed to be 0.212, which gives 1.42. In other words, this is 1.42 standard deviations from the mean. So from the table we can look up the value. At a value of 1.42, which is approximately here, the probability we want is the probability of this area here, in other words, the probability that this particular average is more than 1.42 standard deviations above the mean. If we look that up in the table, we find that R(z), the area beyond a particular value, at 1.42 is equal to 0.0808, or, rounding off and multiplying by 100, 8.1%. So the correct answer is B: there is an 8.1% probability that the average of any one sample is greater than 5.3 grams. One final note about this: a good rule of thumb is that the central limit theorem is usually okay if the number of items in each sample is greater than about 30. And this concludes my discussion of the central limit theorem.