Since I got a comment asking about it, I thought I would write down what I know about dynamic programming.
What is it?
When I think of dynamic programming, I think of filling in an array, using already-filled-in elements as part of the inputs to each new element. However, Wikipedia tells me that the term is actually much more generic. Dynamic programming is an optimisation technique for solving recursive problems where each recursion leads to a ‘smaller’ problem, and where, while following the recursion, specific sub-problems occur multiple times.
The optimisation technique is one of avoiding calculating the results of repeated sub-problems more than once. This covers memoization, which is the act of first checking whether a given problem has already been solved and is in a cache; if so, return the cached result, otherwise calculate the result and store it in the cache. I personally think of memoization separately, with dynamic programming being only what Wikipedia calls bottom-up dynamic programming.
Bottom-up dynamic programming is about realising that if your cache is some form of array, and there is a (preferably easy) order in which you can traverse the entries of that array such that all sub-problems required to fill in a given entry are already filled in, you can do away with the recursion entirely. Just fill in the cache ‘bottom-up’, from the smallest sub-problems to the actual problem you want to solve. The code to do this is almost always simpler and cleaner, and it avoids potential stack overflows for large inputs. It also usually performs a bit better, because the way you access the cache table is more likely to be sequential, giving better hit ratios in the CPU cache, not to mention avoiding all the function call overhead.
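As a minimal sketch of the two styles (using Fibonacci numbers, which aren’t one of the examples below, just the smallest recursion with repeated sub-problems I know of):

```python
from functools import lru_cache

# Top-down memoization: still recursive, but repeated sub-problems
# are looked up in a cache instead of being recomputed.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: fill in from the smallest sub-problems upward, no recursion.
# Each entry only needs the previous two, so two variables replace the array.
def fib_bottom_up(n):
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(n - 1):
        prev, cur = cur, prev + cur
    return cur
```

Both compute the same values; the bottom-up version is shorter, iterative, and never risks blowing the stack.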
Why use it?
Well, for Wikipedia’s general term, you use dynamic programming because otherwise the code will perform too slowly. Specifically, you want to use it when the recursion repeats sub-problems not once or twice, but perhaps exponentially many times in the size of the original problem. Even if the number of repeats isn’t so huge, if it is greater than constant, dynamic programming gives you a significant win in terms of how large an input you can handle within a given time limit. For bottom-up dynamic programming, you use it because it reduces the amount of code you have to write (good for coding competitions) and usually performs better (sometimes good for coding competitions).
Why not use it?
For general dynamic programming, you don’t use it because either a) it isn’t applicable or b) there is a better approach. Sometimes a problem for which you can work out a recursive solution is still too slow even with dynamic programming. For a coding competition, given you know that a solution exists, this is sometimes a sign that some non-recursive approach can be used to solve the problem (although for me it is often the case that I simply haven’t got the best recursive formulation, and looking at the problem differently will solve it with dynamic programming).
For bottom-up dynamic programming, you don’t want to use it if, for a given input, the sub-problems actually required to solve it are only a small subset of the total array you would need to use as a cache (memoization using a dictionary for a cache instead of an array is the better option in that case). An exception is when you are given multiple inputs and the same fully populated array can be used for all of them. Otherwise, bottom-up dynamic programming is what you want when the recursion for a problem is dense in all possible sub-problems.
How to use it?
And here comes the tough question… Dynamic programming as a technique is so simple that most of the time how to use it isn’t the problem. The problem is ‘how do I turn problem A into a recursive problem with repeated sub-problems, where I can then take advantage of dynamic programming to solve it efficiently?’.
So all I can do is give a few hints and examples.
First hint: consider the input size (this is applicable to coding competitions only, obviously). If the product of the size of the range for each of the inputs is greater than about 50 million, then your problem either can’t be solved by dynamic programming, or you need to work out some way to simplify it first.
Second hint: a problem which contains a list of objects, where the maximum list length is ~16 or less, might be a recursion on whether each element is included or not. In that case one of your array indexes will be a number constructed with each bit corresponding to one of the elements. Usually in these cases the recursion is over removing one element, and you can fill in the array in ascending numeric order, as every number which has an additional bit set is greater than the current number. So this is not a case where you need to use memoization instead of bottom-up dynamic programming.
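To make the bitmask idea concrete, here is a sketch on an illustrative problem (assigning n workers to n tasks at minimum total cost, with a made-up cost matrix): dp[mask] is the cheapest way to cover the set of tasks in mask using the first popcount(mask) workers, and masks are processed in ascending numeric order so every mask-with-one-bit-removed is already filled in.

```python
def min_assignment_cost(cost):
    """cost[w][t] = cost of giving task t to worker w; returns minimum total."""
    n = len(cost)
    INF = float('inf')
    dp = [INF] * (1 << n)
    dp[0] = 0  # no tasks assigned costs nothing
    for mask in range(1, 1 << n):
        # The number of bits set tells us which worker is being placed next.
        worker = bin(mask).count('1') - 1
        for task in range(n):
            if mask & (1 << task):
                # Remove one bit: a strictly smaller mask, already computed.
                prev = dp[mask ^ (1 << task)]
                if prev + cost[worker][task] < dp[mask]:
                    dp[mask] = prev + cost[worker][task]
    return dp[(1 << n) - 1]
```

Note the array index is the bitmask itself, and the plain ascending `for mask in range(...)` loop is the whole traversal order — no recursion needed.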
Third hint: if your recursion involves 4 or more parameters, you might be doing it wrong. This is not a definite rule, but coding competition problems rarely require more than 2 indexes for their dynamic programming cache arrays, and very rarely more than 3.
Fourth hint: sometimes a direct attack on the problem will not yield a useful recursive definition, but the problem can be broken into a number of smaller problems, each of which can be solved quickly using dynamic programming (often all at once), with the results then recombined to give the final answer.
Here are a few classic problems.
1) How many ways can a string (specified) be made by taking characters in order from a document (also specified)?
2) How many ways can you choose k items from a set of n?
3) Given n items of varied value and (positive integer) size, and a (moderate) limit on the total size you can take with you, what is the maximum value you can take with you?
4) Given a grid of cells which are either filled or empty, what is the largest square of empty cells which can be found?
A1) So what recursion can we define to solve the problem? How about defining f(n, k) as ‘how many ways can the first n characters of the string be made by taking characters in order from the first k characters of the document?’. Then f(n, k) = f(n, k-1) + (document[k-1]==string[n-1] ? f(n-1, k-1) : 0). We’ll need termination conditions as well, so f(0, k)=1 for k>=0 and f(n, 0)=0 for n>=1. If the string is ‘aaaaaaaaaaaaaaaaaaa’ and the document is 500 repetitions of the character a, this recursion will quite obviously explode, as every call makes two recursive calls and the minimum depth before hitting a termination condition is the length of the shorter string. Repeated sub-problems clearly occur frequently. We could use memoization, or we could do bottom-up dynamic programming. This problem is a special case for bottom-up dynamic programming, because the recursion only refers to elements with one smaller value of k. Therefore for each cell in an array f[n, k] we only need the values in column k-1 to work it out. So if n is the row and k the column, we can do the columns one at a time, and each new column needs only the column before it, nothing earlier. Therefore rather than using a full rectangular array, we can use 2 column arrays, current and next, filling in next from current, then switching them, repeating until we reach the column we need. This is especially useful if the document is very long, because it saves a lot of memory.
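The two-column trick just described might look like this (cur holds column k-1 of the table, nxt is filled in as column k):

```python
def count_ways(string, document):
    """How many ways can `string` be made by taking characters
    in order from `document`?"""
    n = len(string)
    cur = [1] + [0] * n        # column k=0: f(0,0)=1, f(i,0)=0 for i>=1
    for ch in document:        # advance one column per document character
        nxt = [1] + [0] * n    # f(0,k)=1 for every k
        for i in range(1, n + 1):
            # f(i,k) = f(i,k-1), plus f(i-1,k-1) when the characters match
            nxt[i] = cur[i] + (cur[i - 1] if ch == string[i - 1] else 0)
        cur = nxt
    return cur[n]
```

Memory is O(length of string) regardless of how long the document is, which is the point of keeping only two columns.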
A2) f(n, k) = f(n-1, k) + f(n-1, k-1), with f(n, 0)=1 and f(n, k)=0 for k>n. I included this question because it is the ‘choose’ function, and values of the choose function are the binomial coefficients, which are famously described by Pascal’s triangle. Thus when you write a bottom-up dynamic programming solution you end up implementing a Pascal’s triangle generator. It also happens to be practically the same problem as the one before, in the case where the string and document are repetitions of the same character.
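A bottom-up version really is just a Pascal’s triangle generator, one row at a time:

```python
def choose(n, k):
    """Binomial coefficient via bottom-up Pascal's triangle."""
    if k < 0 or k > n:
        return 0
    row = [1]  # row 0 of the triangle
    for _ in range(n):
        # Each new row: 1 at the ends, sums of adjacent pairs between.
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row[k]
```

Only the previous row is ever needed, the same one-column-back structure as A1.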
A3) Define f(n, s) to be the maximum value of a subset of the first n items with total size less than or equal to s. Then f(n, s) = max(f(n-1, s), f(n-1, s-sizes[n-1]) + values[n-1]), with termination conditions f(0, s)=0 and f(n, s)=-infinity if s<0. This scenario is a bit different, because the function potentially recurses to negative indexes, which would make our cache array problematic. We can solve that by simply dropping the right-hand side of the max when s-sizes[n-1] is less than 0. I included this example because it uses max. Quite often a coding competition problem which requires you to optimise some scenario will end up as a dynamic programming implementation where each entry requires choosing the maximum or minimum of multiple options.
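A bottom-up sketch of A3 (the classic 0/1 knapsack). Since f(n, ·) only refers to f(n-1, ·), one row suffices if we sweep s downward, and the loop bound `size - 1` is exactly the ‘drop the right-hand side when s - sizes[n-1] < 0’ guard:

```python
def knapsack(values, sizes, limit):
    """Maximum total value with total size <= limit, each item used at most once."""
    dp = [0] * (limit + 1)  # f(0, s) = 0 for all s
    for value, size in zip(values, sizes):
        # Sweep s downward so dp[s - size] still holds the previous row's value.
        for s in range(limit, size - 1, -1):
            dp[s] = max(dp[s], dp[s - size] + value)
    return dp[limit]
```

Note this is why the hint about input sizes matters: the table is (number of items) x (size limit) entries, which is fine for a ‘moderate’ limit but not for a huge one.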
A4) Define f(x, y) to be the maximum-sized square of empty cells with its bottom-right corner at position x, y. Then f(x, y) = 1 + min(f(x, y-1), f(x-1, y), f(x-1, y-1)) if grid[x, y] is empty, otherwise 0. Termination condition: f(x, y)=0 where x or y < 0. This works because the first two terms of the min give the side length minus 1 of the largest squares ending one cell up and one cell left, and the third gives the side length minus 1 of the square ending at the diagonally adjacent cell; it shouldn’t take too much convincing to see that a square one larger than the smallest of these three is guaranteed to be all empty, and it obviously can’t be larger than that. Finally, no specific value of f(x, y) is itself the solution to the problem. The solution is the maximum over all of them. This is a case of hint number 4.
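A4 as bottom-up code, tracking the maximum as the table is filled (here grid[y][x] is True when the cell is empty; the x == 0 or y == 0 branch stands in for the f = 0 termination at negative indexes):

```python
def largest_empty_square(grid):
    """Side length of the largest all-empty square in a 2D grid of booleans."""
    if not grid:
        return 0
    h, w = len(grid), len(grid[0])
    f = [[0] * w for _ in range(h)]  # f[y][x]: largest square ending at (x, y)
    best = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                if x == 0 or y == 0:
                    f[y][x] = 1
                else:
                    f[y][x] = 1 + min(f[y][x - 1], f[y - 1][x], f[y - 1][x - 1])
                best = max(best, f[y][x])  # the answer is the max over all cells
    return best
```

Row-major order guarantees the three neighbours needed for each cell are already filled in, so plain nested loops are the whole traversal.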
And that’s about all I can write up for tonight. I might add some more hints and examples another time.