3.3.5. Using Simulated Annealing Techniques
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by syg96)
Very well written. I learned a lot, thank you! The comment about genetic algorithms was very funny and matches well with my personal experience.

Can you provide some intuition as to why you would use ITERS_WARMUP? So essentially you are being greedy at the start, then non-greedy, and then greedy towards the end of your schedule. I guess this gives you a faster start?

Also, I didn't understand what you mean by "synchronization".
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by dimkadimon)
To be honest, I tried to use GA in the TCO01 and TCO02 marathons, and it was actually working (i.e. converging somewhere), but it worked worse than even the simplest SA. The cool thing about GA is that its process does not depend on time (no temperature as in SA). Maybe GA requires more time to produce good results, I don't know. I would be happy if some "marathon jedi" wrote a GA recipe that really showed how to make GA work better than SA.

Below you can read why this time synchronization mechanism was invented. Now it occurs to me that instead of doing all those ugly things I can simply write:
        // every ITERS_PER_SYNC iterations (a power of two), recompute the fraction of the time budget used
        if (!(iters & (ITERS_PER_SYNC - 1)))
            done = (gettime() - starttime) / hastime;

I'll investigate how it affects the solution and edit the code=)

Synchronization with time means: temperature = F(work time). Note that the function parameter is the work time instead of the iteration number. So I want SA to take precisely 9.7 seconds on a marathon with a 10-second TL=) It is quite strange if SA ends in 5 seconds and the other 5 are wasted, or if it doesn't have time to end properly. It is useless to restart SA from the very beginning, since the first iterations effectively randomize the solution (the temperature is very high).
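For concreteness, "temperature = F(work time)" with the exponential schedule can look something like the sketch below; gettime(), starttime and hastime are the same names as in the snippet above, while TMAX and TMIN are hypothetical per-problem bounds:

    // Sketch: temperature driven by elapsed time rather than by iteration number
    // (needs <cmath> for pow; TMAX/TMIN are placeholder constants).
    double done = (gettime() - starttime) / hastime;      // fraction of the time budget used, 0..1
    double temperature = TMAX * pow(TMIN / TMAX, done);   // exponential decay from TMAX down to TMIN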
During the first ITERS_WARMUP iterations SA works with the maximal temperature (it is not greedy, but almost random). These iterations are required to estimate how much time is consumed per iteration. Then it is assumed that all iterations take approximately the same time. So if we did K iterations and spent T time on them, and the time limit is L, then we can do about K*(L-T)/T more iterations. As you see, there is a division by T. That's why this "synchronization" scheme is enabled only after the first several iterations.
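To make the arithmetic concrete, a rough sketch of that estimate (my paraphrase, not the exact original code; K, T and L follow the names above):

    // After K = ITERS_WARMUP warmup iterations that took T seconds, assume a
    // constant cost per iteration and plan the remaining budget within limit L.
    double T = gettime() - starttime;                     // time spent on the warmup iterations
    long long K = ITERS_WARMUP;                           // iterations done so far
    long long itersLeft = (long long)(K * (L - T) / T);   // about K*(L-T)/T more iterations fit
    long long itersTotal = K + itersLeft;                 // total budget that drives the cooling schedule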
This modification actually makes the start slower. The first ITERS_WARMUP iterations are almost wasted. But I tweak this constant so that the time spent on the warmup is about 0.1 - 0.5 seconds. So I do not lose much=) But I gain stability in the cooling schedule.
By the way, the assumption that all iterations take the same time is often wrong. For example, a 2-opt TSP mutation takes O(1) time to generate and to calculate the score, but O(V) time to apply. That's why iterations with a big temperature will take significantly more time than iterations with a small temperature (the mutation acceptance ratio is different).
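As an illustration of that asymmetry (just a sketch, not code from the recipe), with the tour stored as a vector of city indices the score change of a 2-opt move is O(1), while applying the move reverses a whole segment:

    #include <vector>
    #include <algorithm>

    // Score change of the 2-opt move that reverses tour[i+1..j]
    // (minimization, i < j, segment does not wrap); d is the distance matrix.
    double delta2opt(const std::vector<int>& tour,
                     const std::vector<std::vector<double>>& d, int i, int j) {
        int n = (int)tour.size();
        int a = tour[i], b = tour[i + 1];
        int c = tour[j], e = tour[(j + 1) % n];
        return (d[a][c] + d[b][e]) - (d[a][b] + d[c][e]);   // O(1)
    }

    // Actually performing the move is O(V) in the worst case because of the reversal.
    void apply2opt(std::vector<int>& tour, int i, int j) {
        std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
    }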
But as I wrote above, all this ugliness is completely unnecessary=) Thanks for the question!
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by syg96)
This would be a great standalone recipe on SA, but we have a preceding one on hill climbing (that's what you call "greedy", right?), and this recipe has a lot of good points (like the connectivity of the solution space under mutation) that can be applied to HC as well.

I suggest that we split your recipe into two parts: the part that can be applied to both SA and HC I'll add to the HC recipe (marking you as a co-author), and the SA recipe will concentrate on the differences between SA and HC, the choice of cooling schedule, examples of SA, etc. This will make this pair of recipes more balanced.
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by dimkadimon)
OK, I wiped that ugly code from the SA code and edited the post. Now the temperature is simply determined by the current time.
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by Nickolas)
Yeah, hill climbing is called greedy here=)

The terms and definitions can be unified across hill climbing, simulated annealing and genetic algorithms, but not much more. I think it is a bad idea to remove any single tip from this recipe only because it is also true for hill climbing. For example, all the tips about mutation for SA should be in one place, not scattered across several recipes, so it would be bad to move the tip about the well-connectivity of the solution space out of here (though a copy is of course OK).

I cannot delete the "solution space", "goal function" and "mutation" paragraphs from the "Solution" section, since each of them:
1. Gives a simple example for the TSP case. I feel that it is important to provide simple examples throughout the whole recipe, because otherwise it would be too abstract.
2. Defines the programming interface of the concept, i.e. the class/method you have to write to implement it. The C++ SA code uses precisely these definitions (the names of the classes and methods and their signatures are exactly the same), which makes the code easier to understand.

The best thing I can do is replace the "greedy approach" paragraph with a single reference to your recipe, and reformat the SA pseudocode to make it more similar to your HC pseudocode.

Unifying the terms now is rather complicated... Here is a list of the major differences in terminology:
1. state = (feasible) solution.
2. neighbor/transformation = mutation sample/procedure.
3. maximum problem <-> minimum problem
We can either ignore them, since the correspondences are clear, or edit the recipes to unify them completely.

As you see I'm not very happy about the idea of cutting this recipe... Sorry. Is the standalone style of the recipe really bad?
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by Nickolas)
If hill climbing = greedy SA, i.e. a subset of SA, then why not remove the recipe on hill climbing completely?

EDIT: I just had a look at the hill climbing recipe, and it looks quite different from this one: it provides further examples and insights. So now I am leaning towards keeping both recipes. But you guys must make sure that you use consistent language.
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by syg96)
Thanks for the explanation.

I could never get SA with cooling to work well. Perhaps the biggest difficulty for me is finding the correct cooling schedule: not too fast and not too slow. Instead my preferred version of SA is something like this:

1. Start with an initial solution
For all possible mutations
  2. Perform a mutation
  3. If score of the mutated solution > 0.9 * score of current solution: keep the mutation
4. If no single mutation increased your score then revert to the previously best solution


0.9 is just a constant and can be changed to anything from 0.7 to 0.99. Note that you can keep mutations in step 3 without ever increasing the score. Hence step 4 is important, as it ensures that you get back to a good state.
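In C++-ish form it would look something like the sketch below (Solution, Mutation, initialSolution(), mutations(), applyMutation(), score() and timeLeft() are just placeholders; I assume a maximization problem with positive scores):

    const double C = 0.9;                                // the acceptance constant mentioned above
    Solution cur = initialSolution(), best = cur;
    while (timeLeft()) {
        bool improved = false;                           // did any single mutation increase the score?
        for (const Mutation& m : mutations(cur)) {       // "for all possible mutations"
            Solution next = applyMutation(cur, m);       // step 2
            if (score(next) > C * score(cur)) {          // step 3: keep even mildly worse solutions
                if (score(next) > score(cur)) improved = true;
                cur = next;
            }
            if (score(cur) > score(best)) best = cur;    // remember the best solution seen so far
        }
        if (!improved) cur = best;                       // step 4: revert to the previously best solution
    }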
Re: 3.3.5. Using Simulated Annealing Techniques (response to post by dimkadimon)
Well... The algorithm you prefer seems to be a modified hill climbing rather than simulated annealing.

This is because the number 0.9 is constant. I think cooling is essential for SA. Did you try making this constant change over the run? Something like C = pow(0.7, 1-T) or C = 0.7 + 0.3*T.

Also, the deterministic nature of the transition condition is not very good. There may be some paths in the solution space that you never consider because the maximal regress along them is more than x0.9; in that case your algorithm has no chance of jumping over the obstacle. SA is also unlikely to overcome such a barrier, but it can. The probability of doing so decreases as the maximal barrier in the solution space grows, but on the other hand, if you increase the number of cooling iterations, the probability of passing through the barrier increases. So there is always some hope with a nondeterministic transition rule=)
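For reference, the usual nondeterministic transition rule looks roughly like this (for a maximization problem; rnd01() is a placeholder for a uniform random number in [0, 1)):

    #include <cmath>

    // Metropolis-style acceptance: always take improvements, take regressions
    // with probability exp(delta / temperature), where delta = score(next) - score(cur).
    bool accept(double delta, double temperature) {
        if (delta >= 0) return true;
        return rnd01() < std::exp(delta / temperature);  // delta < 0 here, so the probability is < 1
    }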

The idea of reverting to some previous solution is sometimes used in both SA and HC. I've seen it on Wikipedia but have never tried it myself.

I've never tweaked the cooling schedule yet; I've just used the exponential schedule. The maximal/minimal temperature can be determined as advised in the recipe (based on the scale of the mutation score).
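Roughly, picking the bounds from the scale of mutation score changes can look like this (typicalDelta and smallDelta are placeholders for estimates of a typical and a barely noticeable score regression on the concrete problem):

    // Choose TMAX so a typical bad move is accepted about half the time at the start,
    // and TMIN so only tiny regressions pass near the end; then plug them into the
    // exponential schedule T = TMAX * pow(TMIN / TMAX, done). Needs <cmath> for log.
    double TMAX = typicalDelta / log(2.0);    // exp(-typicalDelta / TMAX) ~ 0.5
    double TMIN = smallDelta / log(100.0);    // exp(-smallDelta / TMIN) ~ 0.01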

Of course I cannot say that your approach is worse than simple SA unless we compare them on some problem. Maybe it is really better?...