A first draft of the recipe

Here's a first draft of the article that I am writing: "Introducing Dynamic Programming".
I would like some initial feedback from the organizers and other people:

Introducing Dynamic Programming

Dynamic Programming is a technique that efficiently solves a large variety of algorithmic problems that require optimal solutions and whose inputs are too large to be solved in reasonable time by brute force or backtracking. A person who handles this technique well can easily tackle many problems that are encountered both in TopCoder competitions and in real-life programming.

So what exactly is Dynamic Programming, and how can we describe it?
There is no single clear definition of this technique. It can be characterized as an algorithmic technique that is usually based on a starting state of the problem and a recurrent formula, or relation, between successive states. A state of the problem usually represents a sub-solution, i.e. a partial solution or a solution based on a subset of the given input. The states are built one by one, based on the previously built states.

Let’s now consider a very simple problem that will help us better understand the details discussed further on:
Given a list of N coins with values (V1, V2, …, VN) and a total sum S, find the minimum number of coins whose sum is S (we can use as many coins of each type as we want), or report that it is not possible to select coins in such a way that they sum up to S.

For this problem a ‘state’, let’s call it SP, would represent the solution for a partial sum P, where P<=S.

A Dynamic Programming (DP for short) solution would thus start with an initial state S0 and then build the succeeding states based on the previously found ones. In the above problem, a state SQ that precedes SP would be one for which the sum Q is lower than P, thus representing a solution for a sum smaller than P. One starts with the trivial state S0, then builds the states S1, S2, S3, and so on, until the final state SS is built, which actually represents the solution of the problem. One should note that a state cannot be processed until all of the preceding states have been processed – this is another important characteristic of the DP technique.

The last, but not least, detail to discuss is finding the relation between states that allows us to build the next states. For simple problems this relation is quite easy to observe, but for complex problems we may need to perform some additional operations or transformations to reach such a relation.
Let’s again consider the sample problem described above. Consider a sum P made of coins c1, c2, …, cp. P can only be reached from a smaller sum Qi by adding a coin ci to it, so that Qi + ci = P. Thus there is a limited number of states that can lead to the succeeding state SP. The minimum number of coins that sum up to P is then the minimum, over all such states SQi, of the number of coins of SQi plus one coin, the coin ci.
The existence of a limited number of states, and the fact that each state is reachable from a limited number of states, leads to a polynomial number of steps in a DP solution, and thus to a polynomial complexity for solutions using this technique.
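In symbols (writing M(P) for the minimum number of coins that sum to P; this notation is mine, not part of the draft), the relation above reads:

```latex
M(0) = 0, \qquad
M(P) = \min_{\,i \,:\, V_i \le P} \bigl( M(P - V_i) + 1 \bigr)
\quad \text{for } 0 < P \le S,
```

with M(P) taken to be infinite when no selection of coins reaches P.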

We will now summarize the main characteristics of the DP technique:
- The existence of states that can be ordered.
- The existence of a limited number of states.
- The existence of a recurrent formula, or relation, between successive states.
- A polynomial complexity of the solutions.

So when you start solving a problem, look for the presence of these characteristics to get a rough idea of whether the problem can be solved using Dynamic Programming.

We will start our discussion with the sample problem considered above.

Let’s summarize what we have found till now:
1. A state of the problem represents the minimum set of coins that sum up to a partial sum, i.e. a sum that is not greater than S.
2. There are at most S reachable states.
3. Each state is achievable by adding a coin to a preceding state.

All these properties indicate that the problem can be solved using DP, and we basically have all the required “ingredients” to construct the complete solution. We now give a brief description of this solution:
- We basically start with state S0 (representing the sum 0), which has an empty set of coins.
- Then we consecutively compute the states S1, S2, S3, and so on, until we compute the final state SS.
- The optimal set of coins for each new state Si is obtained by adding a coin to a previously computed state Sj. So we just need to consider all previous states j, from 0 to i-1 inclusive, and take the one that has the smallest number of coins and for which there’s a coin c such that j+c=i.

We end the discussion of this problem by analyzing a simple case: the set of coins is {1, 3, 5} and the sum S is 11.

- First of all we note that for state 0 (sum 0) we have found a solution with a minimum number of 0 coins.
- We then proceed to sum 1, which is achieved by adding the coin 1 to sum 0, thus obtaining a solution of 1 coin for this partial sum 1.
- Then we proceed to the next state - sum 2. We can again use only coin 1 to achieve this sum. So the solution for this sum consists of 2 coins.
- Now we proceed to sum 3. We need to analyze 2 coins: 1 and 3, because both are lower than or equal to 3 and thus can be used to achieve it. If we use the coin 1, we obtain a solution of 3 coins by adding it to the solution for state S2. We obtain a better solution if we add the coin 3 to the zero sum, represented by state S0. So we end up with a solution of just 1 coin for state S3.
- For the sum of 4 we get a solution of 2 coins – 1+3.
- Proceeding this way we obtain a final solution of 3 coins: 5+5+1.
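The hand computation above can be checked with a short self-contained sketch (the helper name minCoinTable is mine, not from the draft):

```cpp
#include <vector>
using namespace std;

// Minimum number of coins for every sum 0..S; INF marks unreachable sums.
vector<int> minCoinTable(const vector<int>& coins, int S)
{
    const int INF = 1000000000;
    vector<int> numCoins(S + 1, INF);
    numCoins[0] = 0;  // sum 0 needs no coins
    for (int i = 1; i <= S; i++)
        for (int c : coins)
            if (c <= i && numCoins[i - c] + 1 < numCoins[i])
                numCoins[i] = numCoins[i - c] + 1;
    return numCoins;
}

// minCoinTable({1, 3, 5}, 11) yields
// {0, 1, 2, 1, 2, 1, 2, 3, 2, 3, 2, 3},
// matching the walkthrough: 1 coin for sums 3 and 5, 3 coins for sum 11.
```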

int Coins(int Sum, vector<int> Coins)
{
    int N = Coins.size();
    const int INF = 1e+9;
    vector<int> numCoins;
    for (int i = 0; i <= Sum; i++)
        numCoins.push_back(INF);

    numCoins[0] = 0;
    for (int i = 1; i <= Sum; i++)
        for (int c = 0; c < N; c++)
            if (Coins[c] <= i && numCoins[i - Coins[c]] + 1 < numCoins[i])
                numCoins = numCoins[i - Coins[c]] + 1;

    return (numCoins[Sum] < INF ? numCoins[Sum] : -1);
}
Re: A first draft of the recipe (response to post by Dumitru)
There is a bug in the line
numCoins = numCoins[i - Coins[c]] + 1;

I guess it should be
numCoins[i] = numCoins[i - Coins[c]] + 1;
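With that fix applied (and assuming the loop near the top of the draft fills the vector with INF, as its missing body suggests; I've also renamed the parameter to coins so it doesn't shadow the function name), the complete function might read:

```cpp
#include <vector>
using namespace std;

int Coins(int Sum, vector<int> coins)
{
    int N = coins.size();
    const int INF = 1000000000;
    // Every sum starts out "unreachable" before the DP runs.
    vector<int> numCoins(Sum + 1, INF);
    numCoins[0] = 0;  // the empty set of coins sums to 0
    for (int i = 1; i <= Sum; i++)
        for (int c = 0; c < N; c++)
            if (coins[c] <= i && numCoins[i - coins[c]] + 1 < numCoins[i])
                numCoins[i] = numCoins[i - coins[c]] + 1;
    return (numCoins[Sum] < INF ? numCoins[Sum] : -1);
}
```

For the example in the draft, Coins(11, {1, 3, 5}) returns 3.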
Re: A first draft of the recipe (response to post by )

This is an important question regarding operators and operands. I think we should use a dynamic programming approach.

We have n operands and m operators. For example:
operands = {a,b,c,d}
operators = {*, -, +, /}

We must put operators between the operands and create all of the possible combinations:
F1 = a+b+c+d
F2 = a+b+c-d
F3 = a+b-c-d
…
Fi = a*b+c/d

I could not implement the program.
Thank you very much.
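One way to generate the formulas themselves: for each of the n-1 gaps between operands, independently pick one of the m operators, which yields m^(n-1) strings; simple recursion suffices for this part. A sketch (all function and variable names here are my own invention):

```cpp
#include <string>
#include <vector>
using namespace std;

// Build every string that interleaves the operands with one operator per
// gap; there are m^(n-1) results for n operands and m operators.
void buildFormulas(const vector<string>& operands,
                   const vector<char>& operators,
                   size_t gap, string current,
                   vector<string>& out)
{
    if (gap == operands.size() - 1) {  // every gap has been filled
        out.push_back(current);
        return;
    }
    for (char op : operators)
        buildFormulas(operands, operators, gap + 1,
                      current + op + operands[gap + 1], out);
}

vector<string> allFormulas(const vector<string>& operands,
                           const vector<char>& operators)
{
    vector<string> out;
    if (!operands.empty())
        buildFormulas(operands, operators, 0, operands[0], out);
    return out;
}
```

For the example above, allFormulas({"a", "b", "c", "d"}, {'*', '-', '+', '/'}) produces 4^3 = 64 strings, from "a*b*c*d" through "a/b/c/d".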