Post your Approach
7th place provisionally

That was a pretty nifty problem, and a really good fit for the 3-day duration. Congrats to the organizers and to nika for a pretty dominant performance!

I did pure Simulated Annealing, heavily optimized (though apparently not enough, judging by some example results from other competitors).

My SA move is simple: choose a cell purely at random and flip it (if it had a knight, remove the knight; otherwise add one).

At first, I used O(cell neighbors) operations for applying the flip and undoing the flip (in case the move wasn't accepted).
This meant around 8 + 8 operations per iteration.
However, the number of accepted moves is significantly lower than the total number of tried moves, so it is better to have an ultra-fast 'ComputeMoveDelta' function and a slower 'ApplyMove' one (which is called only for accepted moves).

I stored the delta for a flip in each cell (potential[cell] = if this cell were flipped, by how much would the score change?). This leads to an O(1) 'ComputeMoveDelta' function. Whenever I applied a move (by flipping a cell), I had to update the potential value for the neighbors of the neighbors of that cell (flipping a cell affects the potential of up to 33 other cells in the worst case). This was done only for accepted moves, however.
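
In rough Java, the bookkeeping looks something like this (a simplified sketch only, assuming the penalty being minimized is the sum of |attack count - target| over all cells; the names are illustrative, not my actual code):

    import java.util.Arrays;

    class PotentialBoard {
        final int s;                 // board side
        final boolean[] knight;      // knight[cell] = is a knight placed here
        final int[] diff;            // diff[cell] = attack count - target
        final int[] potential;       // potential[cell] = penalty change if this cell were flipped
        final int[][] nbr;           // precomputed knight neighbours of each cell
        static final int[] DR = {-2, -2, -1, -1, 1, 1, 2, 2};
        static final int[] DC = {-1, 1, -2, 2, -2, 2, -1, 1};

        PotentialBoard(int s, int[] target) {
            this.s = s;
            knight = new boolean[s * s];
            diff = new int[s * s];
            potential = new int[s * s];
            nbr = new int[s * s][];
            for (int r = 0; r < s; r++)
                for (int c = 0; c < s; c++) {
                    int cell = r * s + c, cnt = 0;
                    int[] tmp = new int[8];
                    for (int k = 0; k < 8; k++) {
                        int nr = r + DR[k], nc = c + DC[k];
                        if (nr >= 0 && nr < s && nc >= 0 && nc < s) tmp[cnt++] = nr * s + nc;
                    }
                    nbr[cell] = Arrays.copyOf(tmp, cnt);
                    diff[cell] = -target[cell];          // empty board: every attack count is 0
                }
            for (int cell = 0; cell < s * s; cell++) potential[cell] = recompute(cell);
        }

        // O(neighbours): penalty change if 'cell' were flipped right now.
        int recompute(int cell) {
            int sign = knight[cell] ? -1 : 1;            // removing vs. adding a knight
            int delta = 0;
            for (int n : nbr[cell]) delta += Math.abs(diff[n] + sign) - Math.abs(diff[n]);
            return delta;
        }

        // O(1): the 'ComputeMoveDelta' called for every tried move.
        int moveDelta(int cell) { return potential[cell]; }

        // The slower 'ApplyMove', called only for accepted moves: flip the cell, update the
        // neighbours' diffs, then refresh the potential of every cell that could have changed
        // (the neighbours of the neighbours, which also covers the flipped cell itself).
        void applyMove(int cell) {
            int sign = knight[cell] ? -1 : 1;
            knight[cell] = !knight[cell];
            for (int n : nbr[cell]) diff[n] += sign;
            for (int n : nbr[cell])
                for (int nn : nbr[n]) potential[nn] = recompute(nn);
        }
    }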

That was pretty much it. If we color the board like a chessboard, the black cells and the white cells are completely independent (a knight always attacks cells of the opposite color, so the counts on black cells depend only on the knights placed on white cells, and vice versa), which means we actually have 2 disjoint subproblems. I tried to use this fact, to no avail.
I also tried to guide the SA somehow (by not choosing the cell purely at random) but couldn't find anything that worked.

Re: Post your Approach (response to post by CatalinT)
My approach was very similar, but with a few differences, which I will describe later.

Somehow, instead of picking a random cell, trying the flips sequentially (from left to right and top to bottom, for example) produced much better results...

CatalinT, if you have some time try to make this change and see if it helps.
I couldn't find a good explanation...
Re: Post your Approach (response to post by wleite)
Thanks wleite, great idea!
Tried it locally, huge difference ... (-400 on seed 3).

Congrats on the 2nd provisional place, must have been a tough match for Java :)
Re: Post your Approach (response to post by CatalinT)
@CatalinT and other participants
How do you "choose a cell purely randomly", and how do you flip it? I am a beginner in marathon matches, please help me. I had little time to think about this, but I learned about your approach during the contest. How do you write that condition? I want to know under which condition you flip a cell to a knight. I only flipped cells to knights and forgot about flipping a knight cell back to empty.
At first my solution picked a cell at random and checked whether it should become a knight or not (whether it gives more score or not). But how should we check whether a knight can be placed in a cell? And after flipping a cell to a knight, how and why should we flip it back to an empty cell again (this is the part I forgot)? Please help me. Thank you.
Re: Post your Approach (response to post by CatalinT)
When doing the problem, I was pretty sure that running SA twice (once for each disjoint sublattice) would converge faster, but I didn't actually check whether that's true.

I'm a novice at SA, but I have done Monte Carlo a few times in my life, and the convergence timescales were always exponential in the size, so I imagine that halving the size should take the square root of the convergence time (if the time grows like exp(c*N), then exp(c*N/2) = sqrt(exp(c*N))).

My main problem was the time lost copying the result whenever the score beat the previous best. I try to do it only once every S^2 improved values found, but I could be missing some better scores this way (and it still takes a significant amount of time).
Re: Post your Approach (response to post by CatalinT)
Once again thank you for a nice problem Nickolas. I am not usually a fan of short MM competitions, but this one was just right. If it had been much longer I would probably have lost interest.

That's a great optimisation CatalinT! - Why didn't I think of that? ;)

I did Hill Climbing with random restarts. My moves were the same - flipping a knight in a cell. After trying all flips, I shuffle the best solution using 1 to S^(3/2) random flips. I used O(cell neighbours) operations for checking whether a flip improves the score, and the same for applying the flip.

I also used a few optimisation tricks:

- One dimensional arrays for everything. This actually made the code faster and much easier to read
- Precomputed array of neighbours for each cell
- Instead of computing abs(currentCount_ij-expectedCount_ij) every time, I incrementally update Diff_ij = currentCount_ij - expectedCount_ij (see the sketch after this list)
- Faster random number generator (Java only). Instead of Math.random() I used Random.nextInt()
- Fixed scanning order (like wleite suggested) instead of picking random cells. I suspect this gave better scores only because choosing random numbers takes time, which is better spent on running more iterations.
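
Putting the climbing loop and the tricks above together, a simplified sketch could look like this (assuming the penalty is the sum of |attack count - target| over all cells; the constants and names are made up, not my actual code):

    import java.util.Arrays;
    import java.util.Random;

    class ClimbAndShuffle {
        final int s;
        final boolean[] knight, bestKnight;
        final int[] diff, bestDiff;          // Diff = currentCount - expectedCount, kept incrementally
        final int[][] nbr;                   // precomputed knight neighbours, one-dimensional indexing
        int score, bestScore;
        final Random rnd = new Random(1);    // Random.nextInt() instead of Math.random()
        static final int[] DR = {-2, -2, -1, -1, 1, 1, 2, 2};
        static final int[] DC = {-1, 1, -2, 2, -2, 2, -1, 1};

        ClimbAndShuffle(int s, int[] target) {
            this.s = s;
            knight = new boolean[s * s];
            bestKnight = new boolean[s * s];
            diff = new int[s * s];
            bestDiff = new int[s * s];
            nbr = new int[s * s][];
            for (int r = 0; r < s; r++)
                for (int c = 0; c < s; c++) {
                    int cell = r * s + c, cnt = 0;
                    int[] tmp = new int[8];
                    for (int k = 0; k < 8; k++) {
                        int nr = r + DR[k], nc = c + DC[k];
                        if (nr >= 0 && nr < s && nc >= 0 && nc < s) tmp[cnt++] = nr * s + nc;
                    }
                    nbr[cell] = Arrays.copyOf(tmp, cnt);
                    diff[cell] = -target[cell];
                    score += target[cell];       // empty-board penalty
                }
            bestScore = score;
            System.arraycopy(diff, 0, bestDiff, 0, diff.length);
        }

        int flipDelta(int cell) {                // O(neighbours) check
            int sign = knight[cell] ? -1 : 1, d = 0;
            for (int n : nbr[cell]) d += Math.abs(diff[n] + sign) - Math.abs(diff[n]);
            return d;
        }

        void flip(int cell) {                    // O(neighbours) apply
            score += flipDelta(cell);
            int sign = knight[cell] ? -1 : 1;
            knight[cell] = !knight[cell];
            for (int n : nbr[cell]) diff[n] += sign;
        }

        void run(long millis) {
            long end = System.currentTimeMillis() + millis;
            while (System.currentTimeMillis() < end) {
                boolean improved = true;         // climb: fixed scanning order, take every improving flip
                while (improved) {
                    improved = false;
                    for (int cell = 0; cell < s * s; cell++)
                        if (flipDelta(cell) < 0) { flip(cell); improved = true; }
                }
                if (score < bestScore) {         // remember the best solution found so far
                    bestScore = score;
                    System.arraycopy(knight, 0, bestKnight, 0, knight.length);
                    System.arraycopy(diff, 0, bestDiff, 0, diff.length);
                }
                score = bestScore;               // restart: shuffle the best with 1..S^(3/2) random flips
                System.arraycopy(bestKnight, 0, knight, 0, knight.length);
                System.arraycopy(bestDiff, 0, diff, 0, diff.length);
                int shakes = 1 + rnd.nextInt(Math.max(1, (int) Math.pow(s, 1.5)));
                for (int i = 0; i < shakes; i++) flip(rnd.nextInt(s * s));
            }
        }
    }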


- How did you guys work out the best annealing schedule for Simulated Annealing?
- What is the best score possible for seed=3?


- How would your approach change if other chess pieces were involved?
- For a given S, what is the minimum number of knights needed such that each cell has a count >= k? For k=1 this is probably studied already, as it is similar to classic board-domination problems (such as covering the board with queens). However, k>1 may be quite interesting
- The same as above, but for other chess pieces.
Re: Post your Approach (response to post by 7ania7)
12th place provisionally

The main structure of my solution was: generate an initial solution greedily and then improve it by making small changes to it. I tried multiple combinations for both the 1st and 2nd part:

1) Generating an initial solution greedily

I first tried the obvious approach: starting from an empty board, keep flipping the cell which brings the largest score improvement. Soon after, I found a better approach for this phase: just repeatedly iterate top-to-bottom, left-to-right, and flip cells if they improve the score (I made a small change to also allow flips which don't change the score; the starting score improves significantly with it, but the final score not so much).

2) Improving the solution

Attempt 1: Flip some random cells and then repair the solution greedily, using hill climbing (by repeatedly flipping the cell which brings the best score improvement, breaking ties randomly). This was my 1st submission.

Attempt 2: A SA approach very similar to the one described by CatalinT. But the overall scores didn't improve, so I abandoned it. I came back to it today, after reading CatalinT's post, and tinkered with the params a bit more. I could get significant score improvements (from -100 to even -1K) on large tests (N>400), but the scores on smaller tests were worse than my final submit. I should probably revisit my general SA approach after this match :)

Attempt 3: Choose a small rectangular piece of the board and use beam search to find the best set of cells to flip in order to improve the score. It was better on smaller tests, but too slow on large tests (i.e. I could run too few iterations).

Attempt 4 (finalized approx. 5 hours before the end of the contest): Choose a random cell which does not have its target met. Then use beam search over several levels to select the next cells to flip. The next cell to flip is initially one of the neighbors of the chosen cell. At the next levels, it must be one of the cells having a neighbor in common with the previously flipped cell. This change gave me a +0.3% absolute score improvement, but my score on the leaderboard increased from 720K to ~845K. I then realized the absolute scores were very close. My solution was mostly limited by the number of iterations it could make, so I tried to improve its speed. I even reimplemented it to use persistent data structures to avoid copying and recomputing stuff among states (took me another 2 hours), but that was actually slower :( The final improvement I made was to reduce the beam size for large tests, to allow more iterations in these cases.

@7ania7: Regarding copying, I kept the state as an array of bits, so whenever the best solution improved, I only needed to copy N^2/32 ints (it's at least better than not doing anything).
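
For reference, a tiny sketch of the bit-array idea (my own illustration, not the actual code): with one bit per cell, saving the best solution is a single copy of N*N/32 ints.

    class BitBoard {
        final int n;                 // board side
        final int[] state, best;     // 1 bit per cell

        BitBoard(int n) {
            this.n = n;
            int words = (n * n + 31) >> 5;
            state = new int[words];
            best = new int[words];
        }

        void flip(int cell)         { state[cell >> 5] ^= 1 << (cell & 31); }

        boolean hasKnight(int cell) { return ((state[cell >> 5] >>> (cell & 31)) & 1) != 0; }

        // called only when the current score beats the best one found so far
        void saveBest()             { System.arraycopy(state, 0, best, 0, state.length); }
    }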

What I noticed while participating in this lightning match was that I had very little time to try things out, because no weekend day was included in the 3-day match duration. And squeezing some coding and idea-testing in between regular work hours and sleep was very hard :)
Re: Post your Approach (response to post by CatalinT)
1st place provisional

My solution has two stages.
The first one is exactly what CatalinT described: instant move evaluation, slower updates. I have some idea where the 33 comes from, but I just did 8x8 worst-case updates, on average much fewer than that. This whole thing runs for the first 2 seconds and can get reasonably close to the optimal solution.

The second stage does the following until time runs out: choose a 16x16 subgrid randomly, run 8K iterations of exactly the same SA (with only a minor change in the temperature calculation), and revert the board to the last best state obtained in the process (regressions are reverted, but moves that lead to the same score are kept). This process can converge to a near-optimal solution given enough time, which is more than 10 seconds for the bigger tests. EDIT: here I managed to use the fact that black and white cells are independent; the color of the allowed moves is chosen randomly beforehand.
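
A small sketch of how the cells for one local run could be selected (illustrative only; the window handling, seed and names are mine): a random 16x16 window, restricted to the colour class chosen for that run.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    class SubgridPicker {
        final Random rnd = new Random(7);

        // Returns the cells whose flips are allowed during one 8K-iteration local run.
        List<Integer> pickCells(int s) {
            int w = Math.min(16, s);
            int r0 = rnd.nextInt(s - w + 1);
            int c0 = rnd.nextInt(s - w + 1);
            int colour = rnd.nextInt(2);             // chosen once, before the local run starts
            List<Integer> cells = new ArrayList<>();
            for (int r = r0; r < r0 + w; r++)
                for (int c = c0; c < c0 + w; c++)
                    if (((r + c) & 1) == colour) cells.add(r * s + c);
            return cells;
        }
    }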

After the contest I tried wleite's idea, and not only is it a positive change in itself, it also replaces relatively expensive random-generator calls with a simple iteration, which is a huge deal for the harder tests. I got almost -500 on seed 3 without any other tweaks.

Here are the best scores obtained after running this solution for a few minutes:
seed score
1    23
2    598
3    88698
4    3594
5    5144
6    21087
7    23455
8    62631
9    43554
10   24841
Re: Post your Approach (response to post by CatalinT)
My approach was very simple, just pure SA, with single flips (i.e. putting a knight in an empty cell, or removing it, if it was already there) at each iteration.

My solution maintains an array with the gain (or loss) that flipping each cell would produce, so the flip evaluation is extremely fast.
When a flip is accepted, it updates all the affected cells' gain/loss values.

As I mentioned before, using a sequential scan, instead of picking a random cell helps the solution to converge much faster.
And it was not mainly because of the speed gain (of not calling the random generator).
Running local tests with fixed number of iterations (no time control), the "sequential scan" still produced much better results.
I tried other scan patterns, like a spiral from the center and from the borders, "zig zag" with different lengths, etc.
All of them produced worse or similar results, so I kept the sequential scan, which was much simpler (and faster).

My last couple of improvements came from small code optimizations (removing the use of doubles and pre-calculating the log values used in the SA).
That increased the speed a lot (~3x), producing a very small improvement in terms of absolute scores (about 0.05%), but a huge boost in the standings, as raw scores seem to be very close.
My final submission was able to run about 700M iterations.
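
My guess at what the log pre-calculation amounts to (a sketch, not the actual code): fill a table with logarithms once, so the hot loop never calls Math.log or Math.exp. A worsening flip with penalty increase delta is then accepted when delta < -temperature * log(u), i.e. with probability exp(-delta/T).

    import java.util.Random;

    class Acceptance {
        static final int TABLE = 1 << 16;
        final double[] logTable = new double[TABLE];
        final Random rnd = new Random(1);

        Acceptance() {
            // logs of evenly spaced points in (0,1); indexed randomly, they behave
            // like the log of a uniform(0,1) sample
            for (int i = 0; i < TABLE; i++) logTable[i] = Math.log((i + 0.5) / TABLE);
        }

        boolean accept(int delta, double temperature) {
            if (delta <= 0) return true;                 // improving or neutral flips: always take them
            return delta < -temperature * logTable[rnd.nextInt(TABLE)];
        }
    }

    // Used inside the sequential scan, roughly:
    //   for (int cell = 0; cell < S * S; cell++)
    //       if (acceptance.accept(flipDelta(cell), temperature)) flip(cell);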

I tried flipping 2 cells at a time, but that didn't help.
Also implemented different ways of building an initial solution (before starting SA), but got similar or better results starting from an empty board.
Re: Post your Approach (response to post by CatalinT)
4th place

This is my second time becoming a red coder, and I am very happy!!

I used SA. A move (neighbor) flips the knight in one cell.
Like other competitors, I kept the score change for flipping each cell, and recalculated the affected values whenever a move was accepted.

I took this one step further. For each possible score-change amount, I stored the corresponding moves in an array (one bucket per amount), and updated these arrays whenever a move was accepted. Then, at each step, I first calculate the threshold on the score change at which a move will certainly be accepted, and then randomly select only from among the moves that are guaranteed to be accepted. This removed the useless random selection of moves that would be rejected, and made the search faster.
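
A reconstruction of the bucket idea (my own sketch; the names, the delta range and the minimisation convention are assumptions): every cell sits in the bucket of its current flip delta, so a move can be drawn uniformly from the cells that the current threshold is guaranteed to accept, and no random draw is wasted on a move that would be rejected.

    import java.util.Random;

    class DeltaBuckets {
        static final int MAX_DELTA = 8;          // assumption: one flip changes at most 8 attack counts by 1
        final int numDeltas = 2 * MAX_DELTA + 1;
        final int[][] bucket;                    // bucket[d] = cells whose flip delta is d - MAX_DELTA
        final int[] size;                        // current size of each bucket
        final int[] posInBucket;                 // where each cell sits inside its bucket
        final int[] deltaOf;                     // shifted delta of each cell
        final Random rnd = new Random(123);

        DeltaBuckets(int cells) {
            bucket = new int[numDeltas][cells];
            size = new int[numDeltas];
            posInBucket = new int[cells];
            deltaOf = new int[cells];
        }

        void add(int cell, int delta) {          // O(1)
            int d = delta + MAX_DELTA;
            deltaOf[cell] = d;
            posInBucket[cell] = size[d];
            bucket[d][size[d]++] = cell;
        }

        void remove(int cell) {                  // O(1): swap with the last element of its bucket
            int d = deltaOf[cell], p = posInBucket[cell];
            int last = bucket[d][--size[d]];
            bucket[d][p] = last;
            posInBucket[last] = p;
        }

        void update(int cell, int newDelta) {    // called for every cell whose delta changed
            remove(cell);
            add(cell, newDelta);
        }

        // Pick uniformly among all cells whose flip delta is <= threshold
        // (the threshold comes from the temperature and one random draw), or -1 if none exists.
        int pickAcceptable(int threshold) {
            int hi = Math.min(numDeltas - 1, threshold + MAX_DELTA);
            int total = 0;
            for (int d = 0; d <= hi; d++) total += size[d];
            if (total == 0) return -1;
            int r = rnd.nextInt(total);
            for (int d = 0; ; d++) {
                if (r < size[d]) return bucket[d][r];
                r -= size[d];
            }
        }
    }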