ACM ICPC analysis
I've posted my solutions to some of the problems on my blog: I'll update it as I solve more of the problems.
Re: ACM ICPC analysis (response to post by bmerry)
> I'm pretty sure I've seen either this exact problem or a very similar one before.

I'm pretty sure too.
Re: ACM ICPC analysis (response to post by bmerry)
The ICPC Live analysts did prepare solution presentations for all tasks, but sadly, due to scheduling issues, most of them remained unaired.

Here are the slides I prepared for problem I:

For problem I, the team from Comenius U was the only one that had the intended solution. A few other teams had randomized solutions that are hard to break, and the rest (including Iowa State who had the fastest time) just got lucky that the test data did not kill their particular heuristic.

For problem E, I came up with the same solution Bruce already describes nicely in his blog: this is basically just DFA minimization. (As for the worst case, there are at most nk iterations, because we only split the original equivalence classes into smaller and smaller ones, and at the end there are at most nk of them.)

There is also an alternate solution to E based on hashing. Basically, we can just generate and hash everything reachable from a given vertex starting via a given edge in 1, 2, ..., sufficiently many steps, and then compare two vertices by comparing their cyclic sequences of hashes. This is basically a poor man's approximation of the automata-based solution. It's probably easier to come up with, but finding a usable hash function is still a very non-trivial exercise.
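As a generic sketch of the refinement idea from the automata-based solution: start with one class per "kind" of state and repeatedly split classes by the classes of successors until nothing changes. This is plain Moore-style DFA minimization on a toy automaton (the function and the toy inputs below are mine; the actual Maze problem additionally compares neighbour classes as cyclic sequences):

```python
def minimize(n, delta, accepting):
    """Moore-style refinement: states start partitioned by acceptance and are
    repeatedly split by the classes of their successors, until stable.
    Each round only splits existing classes, which is why the iteration count
    is bounded by the final number of classes."""
    cls = [1 if s in accepting else 0 for s in range(n)]
    while True:
        # Signature of a state: its own class plus its successors' classes.
        sigs = [(cls[s],) + tuple(cls[t] for t in delta[s]) for s in range(n)]
        renum = {sig: i for i, sig in enumerate(sorted(set(sigs)))}
        new = [renum[sigs[s]] for s in range(n)]
        if new == cls:
            return cls
        cls = new

# Toy automaton: 0 -> 1 -> 2 -> 2 over a one-letter alphabet, {1, 2} accepting.
# States 1 and 2 are equivalent; state 0 is not.
classes = minimize(3, [[1], [2], [2]], {1, 2})
```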
Re: ACM ICPC analysis (response to post by ffao)
Surveillance (Problem K) was also a repeat from USACO Feb 2010 (Covering the Corral):
Re: ACM ICPC analysis (response to post by tehqin)
The USACO analysis has a nice alternative solution, that takes linear time after initial sorting:
Re: ACM ICPC analysis (response to post by bmerry)
I guess I'll post all my comments here, instead of at Blogger.

This problemset was ridiculously, brutally hard. I knew this was the case beforehand, but even I underestimated just how much of a massacre it would be. (I predicted 3 unsolved problems and that the winner would solve 8.) Sorry. :(

A (Baggage):
We were absolutely stunned nobody got this. As mentioned at the closing ceremonies, some of us predicted this would be the FIRST problem solved. I personally test-solved it in about 20 minutes, 10 of those offline.

It seems to be a combination of a few things:
- The group psychology of the Finals. Everybody pivots to the problems they see being solved. With so many other solved problems out there, nobody was brave enough to give A a try on their own.
- It requires some fiddling on the computer (running a brute-force solution for low N) before you start, without a guarantee that this will lead to results. This is similar to Hey, Better Bettor last year. In an ACM environment with 1 computer for 3 people, this seems to be the kiss of death.
- There's the added uncertainty of not necessarily knowing, before submitting, if you've got the SHORTEST solution. After brute-forcing the low-N cases it seems pretty obvious that the minimum number of moves is always N, and a proof (if contestants aren't brave) isn't super difficult. However, you wouldn't know this BEFORE you grab precious computer time.

Future ACM teams, there's a procedural lesson to be learned here. Not every problem can be completely solved offline before you grab the computer. Think of how valuable a solution to A would have been in this year's Finals - when only 4 problems were enough for a medal! - and EVERY TEAM missed out on it. :(

E (Maze):
I was the author, and it was indeed disturbing to see that bmerry had written exactly the same problem 7 years ago. Arg. My other problem, H, turned out to be highly similar to a problem from SWERC 2008, First Knight. It's getting so hard to come up with original ideas. :(

My intended solution was hashing; I didn't think of the equivalence-class reduction method. Nice.

F (Messenger):
Messenger's instability is actually worse than bmerry stated, because AFAIK there's no guarantee that any epsilon will work. In the equation sqrt(a_1) + sqrt(a_2) + ... + sqrt(a_50000) - sqrt(b) = x, there's no nice way to guarantee x doesn't lie in (0, 1e-15) or worse. I have no idea how bad x can get, aside from statistical arguments, which are always suspect in number theory.

So we considered putting a stability guarantee into the problem statement but, frankly, I'm getting sick of doing it. Hundreds of similar problems are asked without proper guarantees, judge data never contains these ultra-extreme cases (since judge solutions use doubles just like everyone else), and there are probably FAR more people who get confused by those guarantees than people who know why they're "necessary". Sigh.

Incidentally, there's a much nicer solution to Messenger that avoids the nasty floating-point hazards of ternary search and quadratic equations:
- Test for "impossible" (which only happens if B's total path length is shorter than the distance from A's start point to B's end point). Use an epsilon, of course.
- Binary search over [0, B's total path length] on the distance D that the messenger travels (ie, the final answer). If a given D works, any greater D will work, because the messenger is as fast as B.
- For a given D, you can see if it works by giving B a head-start of D time. From then on, you're moving A and B simultaneously and seeing if they ever come within D (plus epsilon) of each other. This isn't too hard, and requires comparing up to 50000*2 pairs of line segments. For a given pair, you can subtract one velocity from the other to convert the problem into Point-Line distance. No quadratic or ternary search required.
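The per-pair check in the last step can be sketched as follows: after subtracting one velocity from the other, the relative position of the two moving points traces a straight segment, so the closest approach is a point-to-segment distance from the origin (function names below are mine):

```python
import math

def min_dist_origin_to_segment(p, q):
    """Minimum distance from the origin to the segment p-q."""
    px, py = p
    qx, qy = q
    dx, dy = qx - px, qy - py
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:
        return math.hypot(px, py)
    # Project the origin onto the segment's supporting line, clamp to [0, 1].
    t = max(0.0, min(1.0, -(px * dx + py * dy) / seg2))
    return math.hypot(px + t * dx, py + t * dy)

def closest_approach(a0, va, b0, vb, T):
    """Minimum distance between points a0 + va*t and b0 + vb*t for t in [0, T].
    Subtracting velocities turns this into a point-to-segment query."""
    rx, ry = a0[0] - b0[0], a0[1] - b0[1]   # relative position at t = 0
    vx, vy = va[0] - vb[0], va[1] - vb[1]   # relative velocity
    return min_dist_origin_to_segment((rx, ry), (rx + vx * T, ry + vy * T))
```

For instance, a point starting at (0, 0) moving right and a stationary point at (0, 1) never get closer than 1.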

G (Metal):
This was a beautiful algorithmic challenge, and my favorite problem of the set. I'll describe my short but tricky solution, which was O(N^3).

The key idea is that 2SAT can be treated as a graph problem. 2SAT on variables (a_1, ..., a_n) fails iff the directed graph on 2n nodes induced by all the implications (eg, clause "(a_3 or !a_5)" creates an edge from !a_3 to !a_5 and an edge from a_5 to a_3) has a path from a_i to !a_i and !a_i to a_i for some i. But if you're just adding clauses to a 2SAT instance in a predictable order, you can find the first clause at which 2SAT fails by adding weights to all the edges and solving a maxmin path problem from a_i to !a_i and !a_i to a_i for each i. The "cost" of any path here is the minimum of the edge costs, not the sum.
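The maxmin ("bottleneck") path computation mentioned above can be done with a Floyd-Warshall variant: the cost of a path is the minimum edge weight along it, and we maximize that over paths. A minimal sketch (names are mine):

```python
def maxmin_all_pairs(n, edges):
    """All-pairs maxmin (bottleneck) paths on a directed graph.
    edges: list of (u, v, w) directed edges with weight w."""
    NEG = float('-inf')
    d = [[NEG] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = float('inf')   # the empty path has no constraining edge
    for u, v, w in edges:
        d[u][v] = max(d[u][v], w)
    # Floyd-Warshall with (max, min) in place of (min, +).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = max(d[i][j], min(d[i][k], d[k][j]))
    return d

# Example: 0 -> 2 directly has bottleneck 1, but going via 1 gives min(5, 3) = 3.
d = maxmin_all_pairs(3, [(0, 1, 5), (1, 2, 3), (0, 2, 1)])
```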

bmerry already described the high-level idea behind Metal solutions (ie, pick a higher partition bound, pick a lower partition bound, run 2SAT). Using that idea, my pseudocode is:
- Add all the 2SAT edges that could be induced by any lower partition bound into a graph G (with edge weights taken from the disparities), and find the all-pairs maxmin path (ie, with Floyd-Warshall).
- Pick upper partition bounds (B) in decreasing order.
- For each B, find the maximum lower partition bound A at which the 2SAT instance fails. That is, A is the maximum over i of min(d(a_i, !a_i), d(!a_i, a_i)). A+B is a candidate for the final answer.
- As you lower B, add the new edges into G (with cost infinity) and update the maxmins. This takes O(N^2) time, but only does real work for N of the values B. Justification below.

All the O(N^3) or O(N^3 log N) solutions I've seen rely on the same observation that at most N of the B values are "important". The reason is that as you lower B, you're creating a 2-coloring problem, tracking elements that must lie in opposite partitions. However, only N meaningful updates to this problem can happen, because those meaningful updates will combine two connected sets of points into one. You can ignore all the other values of B (since the underlying 2SAT graph will have no new implied edges). An equivalent statement is that the only candidates you need to consider for B are the edge weights on a maximum spanning tree (well, plus one more, a niggling detail).
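The maximum-spanning-tree observation at the end of the paragraph above can be sketched with plain Kruskal on descending weights (names are mine; treating the graph as undirected for the 2-coloring view):

```python
def max_spanning_tree_weights(n, edges):
    """Kruskal over descending weights; returns the edge weights picked for a
    maximum spanning tree. Per the observation above, these are the only
    'important' candidate values for the upper bound B (plus one more value)."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    picked = []
    for w, u, v in sorted(((w, u, v) for u, v, w in edges), reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:          # this edge merges two components: a real update
            parent[ru] = rv
            picked.append(w)
    return picked

# Example: only the weights 5 and 4 merge components; weight 2 changes nothing.
weights = max_spanning_tree_weights(3, [(0, 1, 4), (1, 2, 2), (0, 2, 5)])
```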

It seems possible to beat O(N^3) with a better data structure. The only step that is keeping my algorithm at O(N^3) is the updating of G with new edges. Right now I'm just doing it in the stupidest way, costing O(N^2) per edge. Is there a structure that can add edges to a directed graph and update the 2N important maxmin paths d(a_i, !a_i) and d(!a_i, a_i) in under amortized O(N^2) per edge? It seems likely, IMO.

By the way, while we were hoping for teams to get O(N^3 log N) or better, the time limits on this problem were such that an O(N^4) solution with a fast 2SAT implementation could still get accepted. It's hard to distinguish unoptimized O(N^3 log N) solutions from optimized O(N^4) solutions with a reasonable margin of error.
Re: ACM ICPC analysis (response to post by bmerry)
If you want to look at actual code, my solutions to F and G are in the edit.
Re: ACM ICPC analysis (response to post by bmerry)
A cool linear solution to problem K, due to ante:

For every wall X, calculate the farthest wall Y (to the right) such that X and Y lie in some common interval. We say that "X can jump to Y". Fix any wall and naively find the number of jumps it takes to cover the entire polygon; let this number be K. We now know that the answer is K or K-1, but we don't know which one. Let the shortest jump we used above be from A to B; its length is O(N/K). It can be proved that some optimal solution starts at a wall in the (cyclic) interval [A, B], and checking each of those O(N/K) candidates takes O(K), for O(K)*O(N/K) = O(N) in total.


Problem B (simpler discrete):

Note on the discrete case: you can completely bypass the convex hull dp trick by noting that you only need to consider the most tasty W dishes of weight 1, the most tasty floor(W/2) dishes of weight 2, ..., the most tasty floor(W/k) dishes of weight k, and so on. In total, we need to consider at most O(W log W) dishes for the optimal solution and do a straightforward knapsack on them in O(W^2 log W) time. The seemingly high complexity is mitigated by the simplest possible, cache-friendly inner loop in the knapsack.
Re: ACM ICPC analysis (response to post by bmerry)
What really upset me this year was problem K. It had supposedly appeared in USACO, and even in this year's Slovak Correspondence Seminar in Programming (KSP)... and anyway, it sounds too common to be new, or at least unlikely not to be well known, which is well reflected in the submissions for it.

That aside, good for Comenius U :D

There was a heavy shitstorm in CF comments about how the problems weren't very original, that randomized solutions shouldn't pass unless intended on WF (yes, I), that if the judges really expected just half of the problems to be solved, then they could've taken away some unoriginal ones (yes, K) and possibly replaced them with something easier etc. I'm surprised not to see anything like it here :D
Re: ACM ICPC analysis (response to post by SnapDragon)
Problem G was really interesting. We (a couple of TopCoder members and I) were discussing it and managed to come up with everything for an O(N^3 log N) solution. I like it even more now after seeing the 2SAT trick for O(N^3).

The intended solution to I was also nice, but other than that I didn't like the problemset. It is surprising that this year's ICPC suffered from a lack of original, simple-enough problems.
Re: ACM ICPC analysis (response to post by Zuza)
@Zuza: I don't see how this helps: you say you reduce it to considering O(W log W) dishes, but the limit for the number of dishes is much smaller than O(W log W) anyway.
Re: ACM ICPC analysis (response to post by bmerry)
I've posted my solution for J. In general, I'm making new posts rather than editing old ones.
Re: ACM ICPC analysis (response to post by bmerry)
It was a bit simpler to code, at least for me.
Re: ACM ICPC analysis (response to post by Zuza)
But how do you do the knapsack in O(W² log W)? If you have W log W dishes, each of which can be used up to W times and from W different previous weights, that gives O(W³ log W).
Re: ACM ICPC analysis (response to post by bmerry)
The entire point is that instead of the D dishes with diminishing tastiness, I extract the optimal W log W "single portions" that can be used only once. I'll give an example:

Let the entire weight W = 12, and let's look at the following dishes of weight 2:
a) init T = 100, delta = -40
b) init T = 90, delta = -25

We know that we won't use more than 6 "portions" of weight 2 (because W = 12). So the optimal "portions" of weight 2 have tastiness 100, 90, 65, 60, 40, 20. We don't need to consider any other "portion" of weight 2. In any case, the idea should be correct; it was accepted during the contest.
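A minimal sketch of the whole idea (names are mine): expand each dish into single portions with diminishing tastiness, keep only the top floor(W/k) portions of each weight k, and run a plain 0/1 knapsack over the survivors. Since each dish's portions are decreasing, the best m portions of a weight class are always prefixes of dishes, so treating them as independent 0/1 items is safe.

```python
from heapq import nlargest

def best_tastiness(W, dishes):
    """dishes: list of (weight, initial_tastiness, delta), where the i-th
    portion of a dish has tastiness initial_tastiness + i * delta."""
    portions = {}                              # weight -> list of portion values
    for w, t, d in dishes:
        limit = W // w                         # can never use more than W//w portions
        portions.setdefault(w, []).extend(t + i * d for i in range(limit))

    NEG = float('-inf')
    best = [NEG] * (W + 1)
    best[0] = 0
    for w, vals in portions.items():
        for v in nlargest(W // w, vals):       # only the top W//w portions can matter
            for c in range(W, w - 1, -1):      # standard 0/1 knapsack update
                if best[c - w] > NEG:
                    best[c] = max(best[c], best[c - w] + v)
    return max(best)

# The example above: W = 12, two weight-2 dishes (100, -40) and (90, -25).
# The surviving portions are 100, 90, 65, 60, 40, 20, and all six fit.
answer = best_tastiness(12, [(2, 100, -40), (2, 90, -25)])
```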