In the earlier lessons we looked at various distributions. In this module, we're going to look at how we can sample from these distributions. In most practical scenarios, distributions are complex enough that it is difficult to sample from them directly, but they can often be evaluated at a given point. The simplistic strategy of sampling uniformly fails for a couple of reasons. One, it quickly becomes inefficient as the number of dimensions grows: in high-dimensional spaces there are vast regions of nothingness, and most of the probability density is concentrated in a small region. Ideally, we want to sample from regions in space where both the function value f(x) and the probability of that value, p(x), are high, so that the contribution of the term f(x)p(x) is high. We will look at various ways to perform efficient sampling here. This is useful not only for understanding how distributions are sampled in practice in a package like SciPy, but it also helps you write your own custom distributions, should that need arise. It's worth pointing out that the techniques listed here are mostly applicable to univariate distributions, but they are useful to understand since they form the components, or foundations, of more sophisticated techniques. We will look at the techniques in increasing order of complexity.

We'll start with the simplest technique: sampling from a discrete distribution. Given a uniform continuous distribution, how do we go from that to a uniform discrete distribution? In order to sample data from a particular distribution, we can start with the uniform continuous distribution between 0 and 1, denoted as U[0,1]. It yields values that are distributed uniformly between 0 and 1, and the probability of picking a value less than some a between 0 and 1 is given by p(x<a)=a.
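As a quick sanity check of the property p(x&lt;a)=a, here is a minimal sketch (using Python's standard-library `random` module; the threshold value 0.3 is just an illustrative choice):

```python
import random

random.seed(0)

# Draw many samples from U[0, 1] and check that the fraction of
# samples below a threshold a matches p(x < a) = a.
a = 0.3
n_samples = 100_000
samples = [random.random() for _ in range(n_samples)]
fraction_below = sum(s < a for s in samples) / n_samples
print(round(fraction_below, 2))  # close to 0.3
```

With enough samples, the empirical fraction converges to a, as the law of large numbers suggests.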
But because there are infinitely many possibilities between 0 and 1, the probability of picking any one particular value a is essentially 0. If instead we have a discrete uniform distribution over n values, denoted as U{0, 1, ..., n-1}, then the probability of choosing any particular discrete value a is given by p(x=a)=1/n, since this is a discrete distribution with a finite number of elements in the set. One way to convert the continuous uniform distribution U[0,1] to the discrete distribution U{0, 1, ..., n-1} is through the transformation X = ⌊nU⌋, where the brackets denote rounding the value down to the nearest integer. If we do that, we have generated a uniform discrete distribution over the values 0 to (n-1).

To generate an arbitrary discrete distribution, we can again use the continuous distribution U[0,1] that we saw earlier, sample from it, and apply a transformation such that we get our desired distribution. In the figure below, we've divided our uniform continuous distribution U[0,1] into uneven intervals of lengths p1, p2, p3, ..., pn; in this case we have five different sections. The only condition on these sections is that their lengths must sum to one. If we want to sample from an arbitrary discrete set of elements a1, a2, ..., an, with probabilities p1, p2, ..., pn, all we have to do is sample from the uniform distribution U[0,1]: whenever the sample falls in interval i, we pick the corresponding set element ai, which happens with probability pi. For example, suppose we want to pick element a2, where p1 spans 0 to 0.2 and p2 spans 0.2 to 0.5. The length of the interval p2 is 0.3, so the probability that a random sample from the uniform distribution lands in this interval is 30%. In other words, the probability that element a2 will be selected when we sample is 30%.
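The two constructions above can be sketched as follows (a minimal illustration using Python's standard library; the element names and probabilities are just example values):

```python
import random
import itertools
import bisect

random.seed(1)

# Discrete uniform over {0, ..., n-1} via X = floor(n * U).
def discrete_uniform(n):
    return int(n * random.random())  # int() floors, since n * U is in [0, n)

# Arbitrary discrete distribution: divide [0, 1] into consecutive
# intervals of lengths p1, ..., pn and pick a_i when a uniform draw
# falls inside interval i.
def discrete_sample(elements, probs):
    cumulative = list(itertools.accumulate(probs))  # interval right endpoints
    u = random.random()
    index = bisect.bisect_left(cumulative, u)  # locate the interval u falls in
    return elements[index]

elements = ["a1", "a2", "a3"]
probs = [0.2, 0.3, 0.5]
draws = [discrete_sample(elements, probs) for _ in range(100_000)]
print(round(draws.count("a2") / len(draws), 2))  # close to 0.3, as in the example
```

Using the cumulative sums with a binary search (`bisect`) makes each draw O(log n) rather than a linear scan over the intervals.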
Now, let's look at another method that can be used to generate slightly more complex distributions. This is the inverse transform method. This method uses the cumulative distribution function, or CDF, and the inverse of the CDF to generate our desired distribution. If the variable Y is generated by applying a function F to X, such that we get Y = F(X), this implies that we can apply an inverse transformation to Y, if it exists, to obtain X. Here, the function F has to be invertible. In this case we say that the function is bijective, which means we can apply F⁻¹(Y) and retrieve X. Now, the steps required to generate the new distribution from an existing distribution are shown below.
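Before going through the steps, here is a minimal sketch of the inverse transform idea. The exponential distribution is an assumed example (not from the text): its CDF F(x) = 1 - exp(-λx) has the closed-form inverse F⁻¹(u) = -ln(1 - u)/λ, so feeding U[0,1] draws through F⁻¹ yields exponentially distributed samples.

```python
import random
import math

random.seed(2)

# Inverse transform sampling for the exponential distribution with
# rate lam: apply the inverse CDF, F^{-1}(u) = -ln(1 - u) / lam,
# to a uniform draw u ~ U[0, 1].
def sample_exponential(lam):
    u = random.random()
    return -math.log(1.0 - u) / lam

lam = 2.0
samples = [sample_exponential(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # the exponential distribution has mean 1 / lam = 0.5
```

The same recipe works for any distribution whose CDF can be inverted, either analytically or numerically.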