3.3.5. Using Simulated Annealing Techniques

Problem

You face a discrete optimization problem which is obviously NP-hard. Such a problem has a large space of feasible solutions and a goal function, and you are required to find a solution with the smallest possible value of the goal function.

Solution

Simulated annealing is one of the metaheuristics for global optimization problems. To use simulated annealing you have to define the space of feasible solutions, the goal function and the solution mutation. (Until the discussion section, all the examples refer to the traveling salesman problem.)

The space of feasible solutions is the set of all possible solutions to the problem. For example, the feasible solutions of TSP are all the Hamiltonian cycles in the complete graph. Another solution space is the set of all permutations of vertices, where each permutation represents an order of vertex traversal. Strictly speaking, these two solution spaces are different, but for our purposes they are almost the same, and the second representation is much more convenient to use. In our program we should define a solution class (like "Sol").

The goal function is a function defined on the set of solutions that gives a score to each solution. The score is usually an integer or float value and measures how good the solution is. We will consider a minimization problem, so: smaller score = better solution. Very often the goal function can be taken straight from the problem statement. From the programmer's perspective, it is necessary to create a method "GetScore" in the "Sol" class.

In this recipe we will refer to mutation in two ways:
1. A mutation sample is a particular perturbation of a solution which preserves its feasibility.
2. A mutation procedure is a way to generate a random mutation sample for a given solution.

An example of a mutation procedure for the TSP problem is: choose two random vertices in the tour and swap them.
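The "Sol" class with its "GetScore" method might look like the following C++ sketch. The tiny instance (points on a line, distance = |a - b|) and the constant V are illustrative assumptions, not part of the recipe:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Toy TSP instance: V vertices placed on a line, distance = |a - b|.
const int V = 5;
double dist(int a, int b) { return fabs(a - b); }

// A feasible solution: a permutation of vertices giving the traversal order.
struct Sol {
    vector<int> perm;
    Sol() { for (int i = 0; i < V; i++) perm.push_back(i); }

    // Goal function: total length of the closed tour (smaller = better).
    double GetScore() const {
        double total = 0;
        for (int i = 0; i < V; i++)
            total += dist(perm[i], perm[(i + 1) % V]);
        return total;
    }
};
```

On this toy instance the identity permutation scores 1+1+1+1+4 = 8: four unit steps plus the closing edge from vertex 4 back to vertex 0.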
There are V*(V-1)/2 mutation samples; each is fully described by two indices (vertex numbers, or indices in the permutation, to swap) and is produced with equal probability. Note that changing a random element of the permutation to a random vertex is an incorrect mutation, since the result is not a permutation and hence it breaks feasibility.

The score of a mutation is the score of the perturbed solution minus the score of the initial solution.

It is very important that the space of solutions is connected by mutation, which means that any solution can be produced from any other solution in a finite number of mutations with positive probability. It is also quite cool that the score of a mutation can often be calculated much faster than the score of a solution (for TSP it is O(1) instead of O(V)). The mutation code is the most important part of the whole simulated annealing solution.

I will assume that there is a class of mutation samples (like "Mut") which has three methods:
1. generation ("Init" with a "Sol" parameter);
2. getting delta = score difference ("GetScore");
3. perturbing the solution ("Apply" with a "Sol" parameter).

The greedy optimization approach is the following:
1. Take any initial solution.
2. Generate a mutation for the current solution.
3. If the mutated solution is better than the current one, switch to it: if (score of mutation < 0) solution := mutated solution.
4. Unless the stop condition is met, go to step 2.

Usually the greedy approach quickly converges to a local minimum, i.e. it finds a solution which is not globally best, but any single mutation makes it worse. Of course, this local minimum can be very far from the globally best solution. That's why all the heuristics like simulated annealing were created =)

Simulated annealing works almost the same way, but it allows the solution to get worse with some probability. The greater the score difference is, the smaller this probability is. Also, the temperature parameter is introduced.
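Before temperature enters the picture, here is one possible C++ sketch of the "Mut" class and the greedy loop described above. Only the Sol/Mut interface comes from the text; the toy line-metric instance is an assumption, and the delta computation is kept as a full O(V) rescore for clarity (a real TSP solver would recompute only the four affected edges to get O(1)):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>
using namespace std;

// Toy TSP instance: V vertices on a line, distance = |a - b|.
const int V = 6;
double dist(int a, int b) { return fabs(a - b); }

struct Sol {
    vector<int> perm;
    Sol() { for (int i = 0; i < V; i++) perm.push_back(i); }
    double GetScore() const {
        double total = 0;
        for (int i = 0; i < V; i++)
            total += dist(perm[i], perm[(i + 1) % V]);
        return total;
    }
};

// Mutation sample: swap the vertices at two random positions of the tour.
struct Mut {
    int i, j;
    void Init(const Sol& s) {             // 1. generation
        i = rand() % V;
        j = rand() % V;
    }
    double GetScore(const Sol& s) const { // 2. delta = score difference
        Sol t = s;                        // O(V) rescore kept for clarity; only
        swap(t.perm[i], t.perm[j]);       // the edges around i and j change
        return t.GetScore() - s.GetScore();
    }
    void Apply(Sol& s) const {            // 3. perturbing the solution
        swap(s.perm[i], s.perm[j]);
    }
};

// The greedy approach: accept a mutation only if it improves the score.
Sol greedy(Sol s, int iters) {
    for (int it = 0; it < iters; it++) {
        Mut m;
        m.Init(s);
        if (m.GetScore(s) < 0) m.Apply(s);
    }
    return s;
}
```

On the line metric any closed tour is at least twice the spread of the points (here 2*5 = 10), so the greedy loop can never go below that bound; it can, however, stop in a local minimum above it.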
The temperature characterizes the approximate score of mutations that are accepted with reasonable probability. Initially the temperature is high, so that almost any mutation is accepted with probability close to 100%. Then the temperature is "cooled", so bad mutations are accepted less and less frequently. At the end the temperature is very small, so that almost no bad mutation is ever accepted; at this point simulated annealing works like the greedy solution and descends to some local minimum.

Here is pseudocode for simulated annealing:
1. Take any initial solution.
2. Generate a mutation for the current solution. Let "delta" be its score.
3. If delta < 0, switch to the mutated solution. Otherwise switch to the mutated solution with probability equal to exp(-delta / temperature).
4. Unless the stop condition is met, go to step 2.

The acceptance probability used here originates from the laws of some physical systems; it is the most commonly used formula for calculating the acceptance probability from delta and temperature. The behaviour of the temperature parameter during annealing is determined by the so-called cooling schedule.
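Putting it all together, the annealing loop might look like the sketch below, using a geometric cooling schedule (the temperature decays exponentially from t0 down to t1 over the run). The Sol/Mut classes follow the text; the toy instance, the choice of schedule and all parameter values are illustrative assumptions that in a real marathon match would be tuned against the time limit:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>
using namespace std;

// Toy TSP instance: V vertices on a line, distance = |a - b|.
const int V = 6;
double dist(int a, int b) { return fabs(a - b); }

struct Sol {
    vector<int> perm;
    Sol() { for (int i = 0; i < V; i++) perm.push_back(i); }
    double GetScore() const {
        double total = 0;
        for (int i = 0; i < V; i++)
            total += dist(perm[i], perm[(i + 1) % V]);
        return total;
    }
};

struct Mut {
    int i, j;
    void Init(const Sol& s) { i = rand() % V; j = rand() % V; }
    double GetScore(const Sol& s) const {
        Sol t = s;
        swap(t.perm[i], t.perm[j]);
        return t.GetScore() - s.GetScore();
    }
    void Apply(Sol& s) const { swap(s.perm[i], s.perm[j]); }
};

// Simulated annealing with a geometric cooling schedule from t0 down to t1.
Sol anneal(Sol s, int iters, double t0, double t1) {
    for (int it = 0; it < iters; it++) {
        double temp = t0 * pow(t1 / t0, (double)it / iters);
        Mut m;
        m.Init(s);
        double delta = m.GetScore(s);
        // Improvements are always accepted; a worsening is accepted
        // with probability exp(-delta / temp).
        if (delta < 0 || rand() / (RAND_MAX + 1.0) < exp(-delta / temp))
            m.Apply(s);
    }
    return s;
}
```

Note that every accepted mutation keeps the solution feasible: the result is always a permutation, so the tour score can never drop below the line-metric lower bound of 2*(V-1).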