|| Problem Statement
I can see how an O(n^3) DP can calculate the optimal value:
For each segment (i,j), try every possible splitting point p; the candidate cost(i,j) is cost(i,p) + cost(p,j) + the length of (i,j). Minimize over p.
But this is O(n^3), n is around 1000, and the time limit is 1 sec.
Any ideas on improvement?
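In case it helps, here is a sketch of that cubic DP (assuming the usual string-cutting setup, where each cut costs the length of the piece it is made in; the function name and signature are my own):

```python
def min_cost_cubic(length, cuts):
    """Minimum total cost to make all cuts, each cut costing the
    length of the piece it is made in. O(n^3) interval DP."""
    pos = [0] + sorted(cuts) + [length]   # segment endpoints, with sentinels
    m = len(pos) - 1
    # F[i][j] = min cost to make every cut strictly inside (pos[i], pos[j])
    F = [[0] * (m + 1) for _ in range(m + 1)]
    for span in range(2, m + 1):          # number of pieces in the interval
        for i in range(m - span + 1):
            j = i + span
            F[i][j] = pos[j] - pos[i] + min(
                F[i][k] + F[k][j] for k in range(i + 1, j))
    return F[0][m]

print(min_cost_cubic(14, [4, 7, 10]))  # 28
```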
||You can reduce it to another problem.
You are to construct a binary tree with n leaves such that
Sum[i=1..n] weight(i) * depth(i)
is minimized, where weight(i) and depth(i) are the weight and the depth of the i-th leaf. The weights are given in the input.
Here weight(i) would be the length of the i-th segment, and depth(i) is how many times it has been broken.
I think this is similar to Huffman codes.
EDIT: there is also the constraint that the leaves must be placed in order from left to right. This makes it different from Huffman codes.
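For a concrete check of the reduction (a toy helper of my own), take ordered leaf weights 4, 3, 3, 4: sum(weight * depth) over the leaves of a tree equals the total cost of the corresponding sequence of breaks.

```python
def weighted_depth(tree, depth=0):
    """Sum of weight * depth over the leaves of a binary tree given
    as nested pairs, with integer leaf weights."""
    if isinstance(tree, int):
        return tree * depth               # a leaf contributes weight * depth
    left, right = tree
    return weighted_depth(left, depth + 1) + weighted_depth(right, depth + 1)

# Two trees over the ordered leaf weights 4, 3, 3, 4:
balanced = ((4, 3), (3, 4))   # break in the middle, then each half
skewed = ((4, (3, 3)), 4)     # merge the middle pair first, then leftward
print(weighted_depth(balanced))  # 28
print(weighted_depth(skewed))    # 30
```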
||hmm...that sort of makes sense, but I can't just apply Huffman's algorithm:
In Huffman's algorithm I can pick any 2 subtrees (the 2 with the least weight) and merge them into a bigger tree.
In this problem, after the splitting, the leaves must give you the original string in an in-order traversal. So I can't just pick the 2 with the least weight to make a subtree.
Is that true?
||If you stick to the problem statement's notations, where M is the number of breaks, I guess you are talking about a tree with M+1 leaves.
I'm almost certain that the greedy O(M) algorithm "just merge together the adjacent subtrees that have the minimal sum" is correct.
Edit : my 'proof' was indeed false, as usual when I'm only 'almost' sure of its correctness :-)
Hmm... I may not be understanding your approach. I am guessing you are saying: do Huffman's algorithm, but only consider merging adjacent blocks, and select the pair that is minimal (results in the minimum merged segment)?
Let's take a look at this:
4 7 10
So we have a string of length 14, and splitting at 4,7,10:
Split: 0 4 7 10 14
Length: 4 3 3 4
The minimal adjacent pair is 3+3, so the last split (top-down) should be at 7.
That reduces to:
0 4 10 14
4 6 4
Symmetric, so say the second-to-last split is at 4.
That gives us (from the top down) 10, 4, 7 as splitting points,
with cost 14 + 10 + 6 = 30.
But splitting (from the top down) at 7, 4, 10 gives cost 14 + 7 + 7 = 28.
It could just be that I am miscalculating or misunderstood.
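Here is a quick sketch of that greedy (merge the adjacent pair with the minimal sum; the helper name is mine), which reproduces the 30 above, while the optimal cost on these lengths is 28:

```python
def greedy_cost(lengths):
    """Repeatedly merge the adjacent pair with the smallest sum
    (leftmost on ties) and add each merged sum to the total cost."""
    total = 0
    lengths = list(lengths)
    while len(lengths) > 1:
        i = min(range(len(lengths) - 1),
                key=lambda k: lengths[k] + lengths[k + 1])
        merged = lengths[i] + lengths[i + 1]
        total += merged
        lengths[i:i + 2] = [merged]
    return total

print(greedy_cost([4, 3, 3, 4]))  # 30, but the optimum is 28
```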
||I've tried several approaches that don't work so far :
- The DP in O(N^3) seems to give the correct answers, but times out.
- Do the DP in O(N^3), but optimize with a ternary search when looking for the best place to cut a string, obtaining an O(N^2 log N) DP. This doesn't work: there are examples where it doesn't give the same results as the plain DP.
- Greedy algo: each time, cut the substring at the place that gives the two most balanced substrings in terms of length... That doesn't work either.
There is actually a relatively simple O(n^2) DP solution to this problem, very similar to your O(n^3) solution.
Let F[a][b] be the minimum cost to make all cuts from a to b inclusive.
In the standard O(n^3) solution:
F[a][b] = min(F[a][c-1] + F[c+1][b] + length(a, b)) - for every c from a to b;
Let P[a][b] be the c for which F[a][b] is minimized. It can be shown that:
F[a][b] = min(F[a][c-1] + F[c+1][b] + length(a, b)) - for every c from P[a][b - 1] to P[a + 1][b];
Some people call this "Knuth's optimization". A very similar problem is finding an optimal BST; you can have a look at that problem. A reference to CLRS is given in the forum there.
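A hedged sketch of the O(n^2) version (my own indexing: intervals over segment endpoints 0, the cut positions and the total length, instead of the cut indices a..b above, but the restriction of c to [P[a][b-1], P[a+1][b]] is the same idea):

```python
def min_cost_knuth(length, cuts):
    """O(n^2) string-cutting DP using Knuth's optimization: the optimal
    split of an interval is searched only between the optimal splits of
    its two one-smaller subintervals."""
    pos = [0] + sorted(cuts) + [length]
    m = len(pos) - 1
    F = [[0] * (m + 1) for _ in range(m + 1)]
    P = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(m):
        P[i][i + 1] = i + 1    # convention for cut-free intervals
    for span in range(2, m + 1):
        for i in range(m - span + 1):
            j = i + span
            best, bestk = None, None
            lo = max(i + 1, P[i][j - 1])
            hi = min(j - 1, P[i + 1][j])
            for k in range(lo, hi + 1):   # the narrowed candidate range
                v = F[i][k] + F[k][j]
                if best is None or v < best:
                    best, bestk = v, k
            F[i][j] = best + pos[j] - pos[i]
            P[i][j] = bestk
    return F[0][m]

print(min_cost_knuth(14, [4, 7, 10]))  # 28
```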
||I got ACC with this method. Thanks! This is pretty nice :-)
I should definitely read the full Cormen when I've got time..
||I see, Thanks!
Edit: Interestingly, I had already solved this problem, but the time limit was set to 30 sec, so the O(n^3) method passed xD. But this problem gets me now, there's no escaping efficient algorithms!
||for every c from P[a][b - 1] to P[a + 1][b];
You could still have P[a][b-1] = a+1 and P[a+1][b] = b-1 and have to make O(n) checks, right?
Is there some trick to prove that the total amortized complexity is O(n^2)?
Aww, I checked, this is a starred problem in CLRS (15.5-4)... "STARRED!" This is gonna take me a few days to do...
||The proof of n^2 might go something like this:
1. Let's suppose that the inequality P[a][b - 1] <= P[a][b] <= P[a + 1][b] is proven.
2. Now fix an interval length and go through the intervals of that length from left to right. For the interval [1, x] we traverse the candidate indexes from P[1][x - 1] to P[2][x]. For the interval [2, x + 1] we traverse the indexes from P[2][x] to P[3][x + 1]. For the interval [3, x + 2], the indexes from P[3][x + 1] to P[4][x + 2], and so on. In other words, every time we continue from where we left off on the previous interval.
3. So all intervals of a fixed length are calculated with only one pass of the indexes from 1 to n.
4. Since the number of distinct lengths is obviously n, the overall complexity is n^2.
Hope this answers your question. At least it is the way I see it :)
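A small empirical check of the inequality in step 1 (my own helper; P[i][j] below is the leftmost optimal split, computed with the plain cubic DP on the string-cutting costs):

```python
import random

def split_table(pos):
    """Cubic DP over endpoints pos; returns P with P[i][j] = leftmost
    optimal split index of the interval (pos[i], pos[j])."""
    m = len(pos) - 1
    F = [[0] * (m + 1) for _ in range(m + 1)]
    P = [[0] * (m + 1) for _ in range(m + 1)]
    for span in range(2, m + 1):
        for i in range(m - span + 1):
            j = i + span
            best, bestk = None, None
            for k in range(i + 1, j):
                v = F[i][k] + F[k][j]
                if best is None or v < best:
                    best, bestk = v, k
            F[i][j] = best + pos[j] - pos[i]
            P[i][j] = bestk
    return P

random.seed(1)
for _ in range(50):
    pos = [0] + sorted(random.sample(range(1, 100), 8)) + [100]
    m = len(pos) - 1
    P = split_table(pos)
    for span in range(3, m + 1):          # both neighbour intervals must exist
        for i in range(m - span + 1):
            j = i + span
            assert P[i][j - 1] <= P[i][j] <= P[i + 1][j]
print("monotonicity held on all random tests")
```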
||Yes, I think I actually understood it. Thanks, that's a very neat trick.
||Nice, I think I get it too.
You're quite impressive :-)
I looked at Knuth's paper: Optimum Binary Search Trees, Acta Informatica 1, 14-25 (1971). His proof about the optimal BST... well, it didn't make much sense to me. Can someone explain the idea behind R[i][j-1] <= R[i][j] <= R[i+1][j] in a more elaborate way? Thanks!