Hello, bonjour, dia duit, hei, welcome to the fourth week of Statistical Mechanics: Algorithms and Computations, from the Physics Department of École normale supérieure. This week we will advance our understanding along two directions. From the physical point of view, we have so far thoroughly studied the first pillar of statistical mechanics: the equiprobability principle. Time has come to consider the second pillar, which comes from the Maxwell distribution of velocities and gives the Boltzmann distribution of energy. This second pillar will allow us to heal the dissymmetry between the molecular dynamics simulations, that is, the solution of Newton's equations of motion, and the Monte Carlo approach. So far, molecular dynamics had velocities and positions, whereas Monte Carlo only had positions. What we learn today will allow us to overcome this dissymmetry between molecular dynamics and Monte Carlo. To get there, we must deepen our understanding of sampling, and this is the second direction along which we will advance this week. So far, sampling meant that we threw pebbles, placed disks, or fixed clothes-pins on a washing line. Now, we will connect this sampling problem with the underlying process of integration. The sampling problem will accompany us also in this week's tutorial, tutorial 4. We will start with a very simple Saturday-night problem: on your free evening you have many options, and you can give them probabilities. You can go out with friends, see your family, do homework, do sports. All of these options have statistical weights, and you may think at first sight that the problem is writing down these statistical weights, but it isn't. The difficult part is doing the sampling.
In tutorial 4, you will learn how to sample this distribution along different approaches, and how to do it if you have not only 4 or 5 options but thousands, millions, or billions: how to sample such a discrete distribution efficiently, or even optimally, and how to take the continuum limit. All of these are central subjects of daily life at work or study but, as you see, also for the night life. In this week's homework, homework session 4, you will again be concerned with the connection between sampling and integration. In fact, you will compute the volume of the sphere not in three dimensions, like here, but in 100 or 200 dimensions; this is what Monte Carlo methods are really good for. You will see for yourself how the sampling approach really goes at top speed in high dimensions and how you can evaluate an integral there. You see, there are lots of exciting developments in week 4 of our course, so let's get started. As we just mentioned, time has come to connect sampling to integration. Let us look back at the children's game: pi/4 came out as the ratio of the volume of the circle to the volume of the square. More precisely, we find that the number of hits divided by the number of trials is approximately equal to the ratio of two integrals: the integral over dx and dy on the square of O(x,y) pi(x,y), divided by the integral over the square of pi(x,y). In this equation, O(x,y) is a function, an observable; it is 1 inside the circle and 0 outside, and pi(x,y) is the uniform probability distribution on the square. By throwing pebbles, we in fact compute the integral in dx dy of O(x,y) pi(x,y). But notice: on the left side the probability distribution pi(x,y) is absent; it is the samples that are drawn from this distribution. Furthermore, the dimension of the integral (the fact that the integral on the right is in two dimensions) does not appear on the left: there are no multiple sums along all dimensions.
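The children's game just described can be sketched in a few lines of code. This is a minimal sketch in the spirit of the course's pebble-game programs (the function name direct_pi and the trial count are our own choices for illustration): the hit ratio, multiplied by the volume of the square, estimates pi.

```python
import random

def direct_pi(n_trials):
    """Throw pebbles uniformly on the square [-1, 1]^2 and count hits,
    i.e. samples for which the observable O(x, y) equals 1 (inside the circle)."""
    n_hits = 0
    for _ in range(n_trials):
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if x ** 2 + y ** 2 < 1.0:  # O(x, y) = 1 inside the unit circle, 0 outside
            n_hits += 1
    # hits / trials ~ pi/4, the ratio of the two integrals
    return 4.0 * n_hits / n_trials

estimate = direct_pi(100000)
```

Note that nowhere in the loop does the dimension of the integral appear as a nested sum; this is the point made above, and it is why the same approach survives in 100 or 200 dimensions.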
This explains why Monte Carlo sampling methods excel in high dimensions. For integrals, we have substitution rules: changes of variables. We will next see how these changes of variables relate to the sampling approach. Our example will be the Gaussian integral: the integral from -infinity to infinity of dx / sqrt(2 pi) exp(-x^2 / 2). The value of the integral is I = 1, and to show it, generations of students have learned to square the integral, so we write I^2 = (integral in x) times (integral in y). Now let's do a first substitution: we go from x and y to r and phi, with x = r cos(phi) and y = r sin(phi). This gives the following integral. Now we do a second substitution, from r to psi, and then finally we do a substitution from exp(-psi) to Upsilon, and we arrive at the formula I^2 = integral from 0 to 2pi of dphi / (2 pi) times the integral from 0 to 1 of dUpsilon. We can do these two integrals, and this confirms that I^2 = 1, which means that I = 1. We have computed the Gaussian integral. But in fact we don't care about the value of this integral; what we do is use our relationship between integration and sampling. We can see that phi is in fact a random number between 0 and 2 pi, and Upsilon is a random number between 0 and 1. So now let's take a little crab's walk back from the variables phi and Upsilon (the samples phi and Upsilon) to x and y. The samples phi and Upsilon give samples phi and psi, then phi and r, and finally x and y. Let us discuss this as what it really is, namely an algorithm: gauss_test.py. Its graphic version (gauss_test_movie.py) allows you to see that the flat input distributions of phi and Upsilon get transformed into distributions of x and y that are Gaussians. So we have a distribution exp(-x^2/2) and a distribution exp(-y^2/2), and the two are independent. What you see here allows you to understand that the substitution rules (the changes of variables in integrals) also apply to the samples.
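The crab's walk above can be retraced in code. This is a sketch in the spirit of gauss_test.py (the function name and sample counts are our own; the chain of transformations Upsilon -> psi -> r -> (x, y) is exactly the one from the derivation):

```python
import random
import math

def gauss_test():
    """Transform two uniform samples into two independent Gaussian samples,
    following the crab's walk: Upsilon -> psi -> r -> (x, y)."""
    phi = random.uniform(0.0, 2.0 * math.pi)        # uniform angle in [0, 2 pi)
    Upsilon = random.random()                       # uniform in [0, 1)
    psi = -math.log(1.0 - Upsilon)                  # exponential; 1 - Upsilon avoids log(0)
    r = math.sqrt(2.0 * psi)                        # radius with pi(r) = r exp(-r^2 / 2)
    return r * math.cos(phi), r * math.sin(phi)     # two independent Gaussians

samples = [gauss_test() for _ in range(50000)]
mean_x = sum(x for x, y in samples) / len(samples)  # should be close to 0
var_x = sum(x ** 2 for x, y in samples) / len(samples)  # should be close to 1
```

The flat distributions of phi and Upsilon go in, and samples of x and y with distribution exp(-x^2/2) come out, exactly as in the movie version of the program.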
Before going on, please take a moment to download, run and modify the programs we just discussed. On the Coursera website, you'll find the program gauss_test.py. Understand that phi and Upsilon are numbers, samples, and that these samples are transformed into numbers, samples of x and y taken from a Gaussian distribution. Take the program apart: in the graphic version of gauss_test.py you see the variable Upsilon; it is a uniform random number between 0 and 1. Check that it is actually uniform. Then there is an intermediate variable psi = -log(Upsilon). What is its distribution? Check it out, and compare it to the analytical calculation we did a few minutes ago. Now let us consider vectors or lists of Gaussians, not just a single one. We start in two dimensions, with x and y two independent Gaussians. This is shown here in this program, and note that we have gone from our home-grown version of gauss_test.py to the Python version, random.gauss. It uses basically the same algorithm, called the Box-Muller method. Output of gauss_2d_movie.py is shown here. You see a first sample, a second sample, a third sample, and so on. What you see is a cloud of points. Each point is an independent Gaussian in x and an independent Gaussian in y. Note that the distribution (the cloud) is isotropic: it is invariant under rotations. This situation is unique; only Gaussians can do it. Remember that we already had distributions of x and y in the children's game: it was a square, and it changed shape when it was rotated. To understand why this distribution is isotropic, look at the old program gauss_test.py: you see that there is a variable phi, which is uniform between 0 and 2 pi, and the point (x, y) is oriented along this axis. Another choice of phi would simply rotate the point (x, y) around the origin. You can also check this directly in our little crab's walk we took a few minutes ago.
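The rotation invariance of the two-dimensional Gaussian cloud can be probed numerically. This is an illustrative check of our own (not one of the course programs): we generate the cloud with random.gauss, rotate every point by an arbitrary angle, and verify that the points remain evenly spread over the quadrants.

```python
import random
import math

# The cloud of gauss_2d_movie.py: each point is an independent Gaussian in x
# and an independent Gaussian in y.
samples = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(50000)]

# Rotate every point by an arbitrary angle alpha (0.7 is an arbitrary choice).
alpha = 0.7
rotated = [(x * math.cos(alpha) - y * math.sin(alpha),
            x * math.sin(alpha) + y * math.cos(alpha)) for x, y in samples]

# If the cloud is isotropic, about one quarter of the rotated points
# falls into the first quadrant.
frac = sum(1 for x, y in rotated if x > 0.0 and y > 0.0) / len(rotated)
```

Try the same check with points drawn uniformly from a square: after rotation, the quadrant counts no longer come out equal, because the square is not invariant under rotations.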
You see, we arrived from independent distributions in x and y at a distribution: the integral from 0 to 2 pi of dphi / (2 pi), times an integral in r. There is no function depending on phi, which means that the variable phi is uniform and independent of r. We will considerably deepen and generalize this for higher dimensions in a few minutes. Gaussians are unique in that independent distributions in x and y give an isotropic distribution in two dimensions. This property is general for Gaussians in any dimension. The 3D case is shown here in gauss_3d.py: each point is a vector (x, y, z), and all three components are independent Gaussians. Output of the movie version is shown here: one point, two points, three points, and so on. You see that the distribution is isotropic, and in the movie version we can even turn the distribution around and look at it from all angles. Shall we be more adventurous? Yes, of course: let's give this distribution a haircut. Let's take each point (x, y, z) and renormalize it to one. So we take (x, y, z), three independent Gaussians, and divide this vector by the square root of (x^2 + y^2 + z^2). Now we have random points on the unit sphere, and as the distribution initially was isotropic, it remains isotropic on the unit sphere. So we found an algorithm to put random pebbles onto the surface of the sphere. Understand that isotropy does not simply mean that the points are on the surface of the sphere; they are so by construction, because we renormalized each point (x, y, z) to have radius 1. Isotropy means that the distribution of points (the probability distribution) is uniform on the surface of the sphere. Here is the program direct_surface.py in general dimensions. It puts points randomly, with a uniform distribution, on the surface of the hypersphere in d dimensions. Now let us follow mathematically what is going on. We have to use the concept of sample transformation. Remember: sample transformation means that the changes of variables in the integral apply directly to the samples.
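The haircut described above fits in a few lines. This is a sketch in the spirit of direct_surface.py, written for general dimension d (the function name follows the course program; the details are our own): d independent Gaussians, normalized to unit length.

```python
import random
import math

def direct_surface(d):
    """Random point with a uniform distribution on the surface of the
    unit hypersphere in d dimensions: take d independent Gaussians and
    renormalize the vector to radius 1."""
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(xi ** 2 for xi in x))      # the haircut: divide by |x|
    return [xi / norm for xi in x]

point = direct_surface(3)
```

Because the Gaussian cloud is isotropic, the renormalized points are uniformly distributed on the surface; starting from, say, uniform samples in a cube and renormalizing would put the points on the sphere, but not uniformly.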
So the integral that we are concerned with is the integral dx_0 dx_1 .. dx_(d-1) exp(-x_0^2 / 2) exp(-x_1^2 / 2) .. exp(-x_(d-1)^2 / 2). Now let's do a change of variables from x_0, .., x_(d-1) to r, the radius variable, and an angular variable Omega, the solid angle. Integrated over the whole space, dOmega gives 2 pi in two dimensions and 4 pi in three dimensions. Now see that the integrand depends only on r: it is equal to exp(-r^2 / 2), and the differential dx_0 dx_1 .. dx_(d-1) yields r^(d-1) dOmega dr. So, for our d Gaussians, we end up with a distribution: the integral dr r^(d-1) exp(-r^2 / 2) times the integral over dOmega. It is two independent integrals, one in r and one in Omega. There is no function depending on Omega, which shows that we are isotropic in angles on the surface of the sphere in d dimensions. Finally, let's do a trick: let's renormalize the radius r to be on the surface, r = 1, and then blow it up again to have a distribution pi(r) = r^(d-1) between 0 and 1, rather than r^(d-1) exp(-r^2/2) from 0 to infinity. What you obtain is written in this program, direct_sphere_3d.py, and it is a random point inside the three-dimensional unit sphere. Output is shown here: one point, two points, three points, and we can turn it around: a random point inside the three-dimensional unit sphere. There is also a general program that allows you to sample a random point inside the d-dimensional hypersphere. Before continuing, please take a moment to download, run and modify the programs we discussed in this section; that means: take them apart and put them back together. On the Coursera website, you'll find the programs direct_sphere_3d.py and direct_surface_3d.py. These short and sweet programs are marvelous, and they should have their place in the national museums all over the world. Molecular dynamics concerns positions and velocities, whereas the Monte Carlo method considers only the positions.
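The renormalize-then-blow-up trick can be sketched for general d. This is a sketch in the spirit of the general direct-sphere program mentioned above (the function name and details are our own): the direction comes from d Gaussians, and the radius is sampled from pi(r) proportional to r^(d-1) on [0, 1] by the sample transformation r = Upsilon^(1/d), with Upsilon uniform in [0, 1].

```python
import random
import math

def direct_sphere(d):
    """Uniform random point inside the d-dimensional unit ball.
    Direction: d independent Gaussians, renormalized (isotropic).
    Radius: pi(r) proportional to r^(d-1) on [0, 1], sampled as
    Upsilon ** (1/d) for Upsilon uniform in [0, 1]."""
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(xi ** 2 for xi in x))
    r = random.uniform(0.0, 1.0) ** (1.0 / d)
    return [r * xi / norm for xi in x]

point = direct_sphere(3)
```

The power 1/d is again a change of variables applied directly to a sample: if Upsilon is uniform, then r = Upsilon^(1/d) has the cumulative distribution r^d, hence the density d r^(d-1).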
Why the velocities disappear from our Monte Carlo program, and how we can make them come back, deserves a most thorough answer. To incorporate velocities into statistical mechanics, we again use the equiprobability principle. For hard disks in a box, during an event-driven molecular dynamics simulation, the energy is equal to the kinetic energy, and it is fixed. The kinetic energy is equal to 1/2 m times the sum of the velocities squared: v_0^2 + .. + v_(N-1)^2. In this equation, each velocity has two components, v_0 = (v_0x, v_0y), so that the square v_0^2 is equal to v_0x^2 + v_0y^2. You see that the sum over the squares is fixed, just like on a circle or on the surface of a sphere, only in higher dimensions: any legal set of velocities is a point on the 2N-dimensional hypersphere of radius sqrt(2 E_kin / m). For 4 particles in a box, this forms a vector of 8 components (an 8-dimensional vector) on the surface of the 8-dimensional hypersphere. The equiprobability principle applied to these velocities means that the statistical weight pi(v_0, v_1, .., v_(N-1)) must be a constant if the vector (v_0, .., v_(N-1)) is on the surface of this hypersphere, and 0 otherwise. This means that the set of velocities is a random vector on the surface of the 2N-dimensional hypersphere. Fortunately, we just became experts in the subject of sampling a random point on the surface of a hypersphere, using 2N independent Gaussian random numbers. Remember that the algorithm direct_surface.py involves a rescaling of velocities, but this rescaling becomes unnecessary in high dimensions if the variance of the Gaussian is chosen correctly.
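Sampling a legal set of velocities is therefore exactly the direct_surface.py construction, applied to the dN velocity components. The sketch below (function name and parameter choices are our own, for illustration) draws dN Gaussians and rescales them so that the kinetic energy 1/2 m sum v_i^2 comes out exactly equal to E_kin:

```python
import random
import math

def sample_velocities(N, E_kin, m=1.0, d=2):
    """Equiprobable velocities for N particles in d dimensions: a random
    vector on the surface of the (d N)-dimensional hypersphere of radius
    sqrt(2 E_kin / m), obtained from d N Gaussians plus one rescaling."""
    v = [random.gauss(0.0, 1.0) for _ in range(d * N)]
    norm = math.sqrt(sum(vi ** 2 for vi in v))
    scale = math.sqrt(2.0 * E_kin / m) / norm       # fix the kinetic energy
    return [scale * vi for vi in v]

# 4 disks in a box (d = 2): an 8-component velocity vector.
velocities = sample_velocities(4, E_kin=8.0)
```

For large N the rescaling factor approaches 1 when the Gaussians already have the correct variance, which is the point made just above and developed next.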
In our case, we have to choose each component v_x or v_y according to a probability pi(v_x) proportional to exp(-v_x^2 / (2 sigma^2)), where sigma = sqrt((2/m) (E_kin / (2N))), or more generally, in d dimensions rather than two: sigma = sqrt((2/m) (E_kin / (dN))). Look here at the distribution of the radius r in the algorithm direct_surface.py before rescaling, and see that it becomes sharper and sharper with increasing particle number, right at the correct value. This means that each velocity component is a Gaussian random number. This is the Maxwell distribution. It is really famous, yet we have derived it by ourselves. The total kinetic energy divided by dN is the mean energy per degree of freedom, and it is equal to one half kB times T, where T is the temperature in Kelvin and kB is the famous Boltzmann constant. We find that the variance sigma^2 of the Gaussians describing the velocity distribution in our system is given by sigma^2 = kB T / m. And we finally arrive at the probability distribution of each velocity component of each particle: pi(v_x) proportional to exp(-m v_x^2 / (2 kB T)). Now remember that the velocity of each particle was given as v = (v_x, v_y), so the distribution of the absolute value of the velocity is given by pi(v) proportional to v exp(-m v^2 / (2 kB T)), because the volume element dv_x dv_y can be written as 2 pi v dv. In three dimensions, we do the same thing with (v_x, v_y, v_z): the probability distribution for each component is unchanged, it is a Gaussian, but the probability distribution of the absolute value of the velocity is now proportional to v^2 exp(-m v^2 / (2 kB T)), as shown here. We can compare the Maxwell distribution that we just derived with the simulation results obtained by event_disk_box.py in the second week, and check that the histogram of each velocity component of an individual particle is a Gaussian, proportional to exp(-m v^2 / (2 kB T)). Furthermore, we can check that the probability distribution of the absolute value of the velocity of each particle is also
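The relation sigma^2 = kB T / m can be checked numerically. This is a small illustrative sketch of our own (not a course program), in reduced units kB = m = 1: each velocity component is drawn as a Gaussian with sigma = sqrt(T), and the mean energy per degree of freedom should come out as kB T / 2.

```python
import random

# Maxwell distribution in reduced units (kB = m = 1), T chosen arbitrarily.
T = 2.0
sigma = T ** 0.5                                    # sigma = sqrt(kB T / m)

# Sample many velocity components from the Maxwell distribution.
v = [random.gauss(0.0, sigma) for _ in range(100000)]

# Mean kinetic energy per degree of freedom: should approach kB T / 2 = 1.0.
mean_energy = 0.5 * sum(vi ** 2 for vi in v) / len(v)
```

The same kind of histogramming, applied to the velocities recorded by event_disk_box.py, is what produces the comparison with the simulation mentioned above.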
given by v exp(-m v^2 / (2 kB T)). So we find that, in order to introduce velocities into the Monte Carlo scheme, we have to sample the velocities, independently from the particle positions, from a Gaussian, with some rescaling. We have thus established the complete symmetry between the molecular dynamics approach and the Monte Carlo sampling approach. Finally, let us look again at the Maxwell distribution, which has the form exp(-energy / (kB T)). This is for one particle and one component, but it also applies to a subsystem of particles inside a larger simulation box, for example cut off by boxes that allow the exchange of energy and momentum. Such a subsystem has an energy E that may vary, and the probability distribution pi(E) is proportional to exp(-E / (kB T)). We have arrived at the Boltzmann distribution, the second pillar of statistical mechanics, and we have done it all by ourselves. In conclusion, we have studied in this fourth lecture of Statistical Mechanics: Algorithms and Computations the intricate relationship between sampling and integration: the change of variables in integration was mirrored in the process of sample transformation. The most important such change of variables, or the most important sample transformation, appeared in many dimensions, from Gaussians to an isotropic distribution. This magnificent relationship between Gaussians and isotropic distributions was very well understood by the founders of statistical mechanics, and it produced the Maxwell distribution and, by generalization, the Boltzmann distribution. It is now time for you to study the simple programs that come with this lecture. The math is simple and the concepts are clear, yet the implications are far-reaching. What we discussed here will be considerably deepened in this week's tutorial and homework sessions. Next week we'll take our first steps into the field of quantum physics and quantum statistical mechanics. In the meantime, have fun with tutorial 4 and homework session 4.
And see you again next week at Statistical Mechanics: Algorithms and Computations.