Hello, bonjour, grüss Gott, laba diena, welcome to the 9th week of Statistical Mechanics: Algorithms and Computations from the Physics Department of École Normale Supérieure. During our course, we have mostly concentrated on equilibrium statistical mechanics. Our algorithms allowed us to compute partition functions, energies and condensate fractions, but they were rarely concerned with time-dependent phenomena. Physical time played a minor role, except in the molecular-dynamics calculations for the hard disks, and Monte Carlo time really was of secondary importance: for example, the Wolff cluster algorithm that we discussed last week had a dynamics that had nothing in common with the physical dynamics of the Ising model. In this lecture, we explore time-dependent phenomena, and strangely enough we are right back with the local Monte Carlo algorithm, as it often provides an excellent model for the time evolution of a physical system. So what we are looking for is an efficient implementation of markov_ising.py: in dynamical Monte Carlo, we are stuck with a given dynamical rule, and we want to implement this rule as efficiently as possible. In this lecture we will treat markov_ising.py and study the faster-than-the-clock implementation, and also its main limitation, called the futility problem.

In this week's tutorial we plunge into the great and very general method called simulated annealing. Simulated annealing is a tool for solving difficult optimization problems, with or without a relation to physics. It consists in reformulating the optimization problem in terms of an artificial physical system whose ground state at zero temperature, or at infinite pressure, contains the solution to the original task. This state is then approached through a slow decrease in temperature or a slow increase in pressure. We will apply simulated annealing to the famous 13-spheres problem, which Isaac Newton already worked on. Simulated annealing gives a great solution. In this week's homework session - the final homework session of this course - we will also be concerned with simulated annealing, in the 13-spheres problem and in the equally famous traveling salesman problem. So, let's get started with week 9 of Statistical Mechanics: Algorithms and Computations.

To begin with, let us consider a magnetic field h, as shown here, and a single Ising spin whose dynamics we want to study. We set the temperature equal to 1, and the energy of this spin is equal to -h if it is up and to +h if it is down. In short, the energy of sigma is -h * sigma, where sigma can be plus or minus one. To simulate the dynamics, we use the Metropolis algorithm with acceptance probability min(1, pi_new / pi_old). So the probability to flip the spin from minus to plus is equal to one, because the statistical weight of the new configuration is larger than the statistical weight of the old configuration, and the probability to flip from +1 to -1 is equal to min(1, pi_new / pi_old) = exp(-beta h) / exp(beta h) = exp(-2 beta h). In Python, this gives the following program, naive_spin_dynamics.py, where we have set beta = 1. Now notice that if at time t the spin is down, it will certainly be up at time t+1. This is the essence of Metropolis dynamics, and we don't discuss here whether this is realistic. In any case, this algorithm satisfies detailed balance, and in the long run - at long times - the two configurations up and down will be visited with their corresponding statistical weights pi_up and pi_down. So now, let's look at the movie version of naive_spin_dynamics.py.
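Here is a minimal sketch of this single-spin Metropolis dynamics, in the spirit of naive_spin_dynamics.py (not necessarily the exact course listing; the run length t_max, the initial spin and the field value are assumptions):

    import random, math

    beta = 1.0                       # temperature set to 1, as in the lecture
    h = math.log(6.0) / 2.0          # field value (the one chosen later in the lecture)
    sigma = 1                        # start in the up configuration
    t_max = 20                       # number of time steps (assumption)
    for t in range(t_max):
        # the proposed flip sigma -> -sigma changes the energy by Delta E = 2 h sigma
        # and is accepted with probability min(1, exp(-beta * Delta E))
        if random.uniform(0.0, 1.0) < math.exp(-2.0 * beta * h * sigma):
            sigma = -sigma
        print(t, sigma)

A down spin is thus flipped with probability one, and an up spin with probability exp(-2 beta h), exactly as in the acceptance rule above.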
Here is the output of naive_spin_dynamics.py at high temperature: there are a lot of flips. And here is the output at lower temperature: for a long time nothing happens and the spin stays in the up position, then there is a flip, immediately followed by a back-flip, and then followed by a long desert of the spin being up. And here is the dynamics of our system at very low temperature: nothing going on, just an occasional flip, and our computer heating up without any result.

Now remember: the probability to flip from down to up is equal to one, and the probability to flip from up to down is equal to exp(-2h). So, to make progress, we choose a particular value of h; in fact, we choose h = log(6)/2. The probability to flip from up to down is then exp(-2 * log(6)/2) = exp(-log(6)) = 1/6. So look at this flip-die: it has five neutral faces and one face that says "flip". Simulating the Metropolis algorithm at magnetic field log(6)/2 is like rolling this die, so let's roll it... First trial: no result, the spin does not flip. Second trial. Third trial... we are getting quite tired... no result... Fifth attempt. Sixth... no. Seventh... so now we have to flip the spin. In Python this gives the following program, naive_throw.py, and here is the output of this program.

So now, let us program this problem a little more intelligently than we have done before. The probability of throwing a blank face of our beautiful flip-die is equal to 5/6. The probability of throwing twice a blank face is equal to (5/6)^2, and the probability of throwing k times a blank face, followed once by the flip face, is given by (5/6)^k times 1/6, that is, (5/6)^k * (1 - 5/6), which is equal to (5/6)^k - (5/6)^(k+1). So this can be made into a tower-sampling problem between 0 and 1, with horizontal lines at 5/6, (5/6)^2, (5/6)^3, ... We throw a pebble into this tower: if it falls, for example, into the upper box, we have a new flip right away; if it falls into the second box, we have to wait once; if it falls into the third box, we have to wait twice. To find out the number of times we have to wait, we have to compute the box, which means (5/6)^(k+1) < ran < (5/6)^k. Taking logarithms, we find that k is the integer part of the logarithm of the random number divided by the logarithm of 5/6. And this means - in our naive_throw program - that the time after which we have a new flip is given by t_flip = ... So, in Python this gives the program fast_throw.py. Its output is indistinguishable from that of the naive program, but it advances one iteration per accepted flip, and not per time step; it is for this reason that it is called a faster-than-the-clock algorithm - of course, a very simple one.

We may now write the fast_spin_dynamics program, as shown here. The spin will remain exactly one step in the down configuration (sigma = -1) and, on average, six steps in the up configuration, so that the magnetization, on average, will be (6 - 1)/7 = 5/7. We can obtain the same result also by a pedestrian approach.

So now, before continuing, please take a moment to download, run and modify the programs we just discussed. There were the die-throwing programs naive_throw.py and fast_throw.py, which implemented the faster-than-the-clock idea; the output of the two programs is completely indistinguishable. And then we had the related programs naive_spin_dynamics.py and fast_spin_dynamics.py, which implemented the dynamics of a single spin in a magnetic field.
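Here is a small sketch that contrasts the naive die-rolling loop with its faster-than-the-clock version, in the spirit of naive_throw.py and fast_throw.py (not the exact course listings; the run length t_max is an assumption):

    import random, math

    t_max = 10000                    # total number of time steps (assumption)

    # naive version: one die throw (one random number) per time step
    flips_naive = []
    for t in range(t_max):
        if random.uniform(0.0, 1.0) < 1.0 / 6.0:
            flips_naive.append(t)

    # faster-than-the-clock version: sample the waiting time directly.
    # P(k blank throws, then a flip) = (5/6)^k * (1/6), so the box index is
    # k = int(log(ran) / log(5/6)), with ran uniform in (0, 1)
    flips_fast = []
    t = -1
    while True:
        ran = random.uniform(0.0, 1.0)
        t += int(math.log(ran) / math.log(5.0 / 6.0)) + 1
        if t >= t_max:
            break
        flips_fast.append(t)

    # both lists contain, on average, t_max / 6 flip times
    print(len(flips_naive), len(flips_fast))

The statistics of the two lists of flip times are identical, but the fast version uses one random number per accepted flip instead of one per time step.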
From a single spin in a field, we now pass on to a full-fledged simulation of the Ising model on N sites, in the faster-than-the-clock flavor. We consider a configuration sigma of all the spins, which depends on time, and we denote by sigma_k the configuration that is obtained from sigma by flipping spin number k. The energy change associated with this spin flip is delta_E = E[sigma_k] - E[sigma]. In the Metropolis algorithm, the probability to go from sigma to sigma_k is given by (1/N) min(1, exp(-beta delta_E)); the 1/N stands for the probability of sampling spin number k. In the Ising model and its variants (for example spin glasses), at low temperature most of the spin flips are rejected. So, let's roll up our sleeves and implement the faster-than-the-clock approach for the Ising model. Let's avoid all rejections.

But there is a problem. In the die-rolling algorithm fast_throw.py, we noticed that the important quantity was not 1/6, the probability to have a flip, but rather 5/6, the probability of doing nothing. Now, in our N-site Ising model we also have to compute the probability of doing nothing, but as we have N choices for doing something, computing this probability takes a bit of time. So we now compute lambda, the probability of doing nothing (which was 5/6 before): it is given by 1 minus the sum of all the flip probabilities we just wrote down a few seconds ago, and we can implement the faster-than-the-clock approach through the tower sampling from lambda, lambda^2, lambda^3, ...

In Python, this is implemented in the program dynamic_ising.py, and you see that there are two tower samplings. The first tower sampling is in lambda, it runs from 0 to 1, and it computes after which time we accept a new spin flip. Then we have to decide which spin to flip, and this again is a discrete probability; it is done in a second tower sampling, from lambda to 1, and it is there that we compute k, the index of the spin that is going to flip. The output of dynamic_ising.py is completely indistinguishable from that of markov_ising.py, and the only problem is that one iteration of this algorithm takes of the order of N operations; besides that, there is no rejection in dynamic_ising.py.

The complete recalculation of lambda - the probability of doing nothing - can be avoided, because flipping a spin somewhere in the system does not change the local environment elsewhere. The environments of a spin with respect to its neighbors fall into a finite number of classes, each characterized by the number of up spins and down spins that are neighbors of a given spin, and flipping a spin somewhere in the system just changes the numbers of members of each class. This was first implemented by Bortz, Kalos, and Lebowitz in 1975. They called their algorithm the n-fold way, where n stands for the number of classes.
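Before coming to the n-fold way, here is a sketch of the N-tower version just described, in the spirit of dynamic_ising.py (not the exact course listing; the lattice size L, the inverse temperature beta and the number of accepted flips are assumptions):

    import random, math

    L = 6                            # linear lattice size (assumption)
    N = L * L
    beta = 1.0                       # inverse temperature (assumption)
    # neighbor table for an L x L lattice with periodic boundary conditions
    nbr = {i: ((i // L) * L + (i + 1) % L, (i + L) % N,
               (i // L) * L + (i - 1) % L, (i - L) % N) for i in range(N)}
    S = [random.choice([1, -1]) for _ in range(N)]

    t = 0
    for flip in range(100):
        # flip probabilities p_k = (1/N) min(1, exp(-beta * Delta E_k))
        p = []
        for k in range(N):
            dE = 2.0 * S[k] * sum(S[j] for j in nbr[k])
            p.append(min(1.0, math.exp(-beta * dE)) / N)
        sum_p = sum(p)
        lam = 1.0 - sum_p            # probability of doing nothing (assumed > 0)
        # first tower sampling: after how many steps is the next flip accepted
        t += int(math.log(random.uniform(0.0, 1.0)) / math.log(lam)) + 1
        # second tower sampling (equivalent to the tower from lambda to 1):
        # which spin flips, with weights p_0, ..., p_{N-1}
        Upsilon = random.uniform(0.0, 1.0) * sum_p
        k, cumulative = 0, p[0]
        while cumulative < Upsilon:
            k += 1
            cumulative += p[k]
        S[k] *= -1
        print(t, sum(S))             # time and total magnetization

Each iteration of this loop produces one accepted spin flip, but still costs of the order of N operations, which is exactly the limitation that the n-fold way removes.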
Let's take as an example the Ising model in two dimensions with periodic boundary conditions. As you see here, the number of classes is 10, from an up spin with four up neighbors to a down spin with four down neighbors. The tower of N probabilities can thus be reduced to a tower of 10 classes of spin flips, provided you keep track of the number of spins in each class. So, in this example, the central spin that we want to flip has just one plus neighbor, so it lives in class 3; flipping it brings it into class 8 and also changes the classes of the neighboring spins. This very elementary book-keeping is written up in dynamic_ising_patch.py. With respect to the original version (dynamic_ising.py), where each spin flip involved of the order of N operations, we can now do one spin flip, even for a large system, in of the order of one operation. At low temperature, this algorithm dynamic_ising_patch.py is much faster than the original markov_ising.py, but remember that it implements exactly the same dynamics, moving from one accepted spin flip to the next accepted spin flip.

So, before continuing, please take a moment to download, run and modify the two programs we discussed in this section. First there was dynamic_ising.py, which worked with a tower of N probabilities, and then the nice n-fold way algorithm, first implemented by Bortz, Kalos, and Lebowitz in 1975, which goes from the tower of N probabilities to a tower of 10 probabilities. The complexity of this algorithm is O(1) per accepted spin flip.

So, in conclusion, we have given in this lecture an introduction to faster-than-the-clock algorithms, which implement the dynamics of our system efficiently. These algorithms spend effort per accepted move, and no longer per attempted move. But going from the word "attempted" to the word "accepted" involves book-keeping. This book-keeping also means that in many applications these algorithms are not as good as they seem to be: to go from a configuration a to a configuration b, you may have to do some book-keeping, as we showed in the Ising model, and then on the next move you may simply go back from b to a. This is called the futility problem. To overcome this futility problem, many people have implemented quite elaborate algorithms that even increase the expense per accepted move.

In this week's tutorial we will take up the study of dynamical algorithms from another point of view, that of simulated annealing. Simulated annealing is a great optimization algorithm. It also shows that the sampling approach is not only related to integration, as we have said for many weeks now, but can also be used for searching and for orientation in complicated spaces. So have fun with this week's tutorial and homework sessions, and see you again next week, in the final week of Statistical Mechanics: Algorithms and Computations, for a discussion of the alpha and the omega of Monte Carlo, followed by a résumé and by a big party.
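As a small illustration of the class book-keeping behind the n-fold way, here is a sketch (not the course's dynamic_ising_patch.py; the lattice size and the 0-to-9 class numbering are assumptions and may differ from the lecture's numbering):

    import random

    L = 6                            # linear lattice size (assumption)
    N = L * L
    # neighbor table for an L x L lattice with periodic boundary conditions
    nbr = {i: ((i // L) * L + (i + 1) % L, (i + L) % N,
               (i // L) * L + (i - 1) % L, (i - L) % N) for i in range(N)}
    S = [random.choice([1, -1]) for _ in range(N)]

    def spin_class(k):
        # classes 0..4: up spin with 0..4 up neighbors; classes 5..9: down spin
        n_up = sum(1 for j in nbr[k] if S[j] == 1)
        return n_up if S[k] == 1 else 5 + n_up

    # number of spins in each of the 10 classes
    count = [0] * 10
    for k in range(N):
        count[spin_class(k)] += 1

    # flip one spin: only its own class and those of its 4 neighbors change
    k = random.randint(0, N - 1)
    affected = [k] + list(nbr[k])
    for j in affected:
        count[spin_class(j)] -= 1
    S[k] *= -1
    for j in affected:
        count[spin_class(j)] += 1

    print(count, sum(count) == N)    # the 10 class counts still add up to N

With these counts at hand, the tower of N flip probabilities collapses into a tower of 10 class probabilities, and each update touches only five class memberships, which is what makes the cost O(1) per accepted spin flip.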