18. Dynamic Programming

In CSCI 3110, we discussed how to use dynamic programming to obtain efficient polynomial-time algorithms for a number of optimization problems. The starting point of any such algorithm is a recurrence that describes an optimal solution to the problem. Based on this recurrence, it is easy to obtain a naïve recursive algorithm, but this algorithm usually has exponential running time. For problems that can be solved in polynomial time using dynamic programming, the exponential running time of the naïve algorithm is a result of solving the same subproblems over and over again during the evaluation of the recurrence. The key idea behind dynamic programming is to avoid this re-evaluation of subproblems by building a table that stores all solutions to subproblems that have already been computed. If the number of different subproblems to be evaluated is only polynomial and computing the solution for a given subproblem from solutions of smaller subproblems takes polynomial time, then this leads to a polynomial-time algorithm.
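To make this contrast concrete, here is a small Python sketch that compares a naïve recursion with a memoized version storing already-computed subproblem solutions in a table. It uses the Fibonacci numbers purely as an illustration; they are not one of the problems discussed in these notes, and the function names are chosen only for this example.

```python
def fib_naive(n):
    """Naïve recursion: the same subproblems are re-evaluated over and
    over again, so the running time is exponential in n."""
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)


def fib_memo(n, table=None):
    """Memoized recursion: the solution of every subproblem is stored in
    a table the first time it is computed, so each subproblem is solved
    only once and the running time becomes linear in n."""
    if table is None:
        table = {}
    if n <= 1:
        return n
    if n not in table:
        table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]


print(fib_memo(40))  # 102334155; fib_naive(40) computes the same value far more slowly
```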

When using dynamic programming to solve NP-hard problems efficiently, we cannot expect the same degree of success, that is, we cannot expect the running time of the algorithm to become polynomial. Nevertheless, if a naïve recursive algorithm for an NP-hard problem evaluates the same set of subproblems over and over again, then dynamic programming can still be used to speed up the algorithm by making sure that each subproblem is solved only once. The resulting algorithm is hopefully faster, even though it will still take exponential time.

A serious caveat when applying dynamic programming to NP-hard problems is that the number of subproblems to be considered is exponential. Thus, the table the dynamic programming algorithm constructs has exponential size, that is, the algorithm uses exponential space. Given the gap between the processing power of modern CPUs and the amount of memory available on current computers, exponential space is a much more serious obstacle in practice than exponential time. As a result, the applicability of dynamic programming to NP-hard problems in practice is somewhat limited.

The exponential explosion of the space requirements can be alleviated in two ways. First, if we can design a parameterized algorithm for the problem based on dynamic programming, then the space usage of the algorithm is exponential only in the parameter but polynomial in the input size. Second, the next chapter will discuss the method of inclusion-exclusion, which can be combined with dynamic programming techniques to solve NP-hard problems in exponential time but only polynomial space.

In this chapter, we illustrate dynamic programming for NP-hard problems using two classical problems. The first is the travelling salesman problem. This problem can be solved trivially in \(O(n!)\) time. Using dynamic programming, we obtain a significant improvement of the running time to \(O^*(2^n)\). The second problem we study is the Steiner tree problem. This problem has a naïve \(O^*(2^n)\)-time solution by computing a minimum spanning tree for every subset of the vertices of the graph that contains all required vertices. Using dynamic programming, we obtain a parameterized algorithm with running time \(O^*(3^k)\), where \(k\) is the number of required vertices.
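As a rough preview of the first of these results (the details appear later in the chapter), the following Python sketch implements the classical Held-Karp style recurrence behind the \(O^*(2^n)\)-time algorithm for the travelling salesman problem. It assumes the input is a complete distance matrix `dist` and that the tour starts and ends at vertex 0; these choices, and the function name, are made only for this illustration.

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for the travelling salesman problem.

    dist is an n x n matrix of pairwise distances (n >= 2); the tour
    starts and ends at vertex 0.  The table has one entry per pair
    (S, j), where S is a set of vertices and j in S is the last vertex
    visited, so the algorithm runs in O(2^n * n^2) time and uses
    O(2^n * n) space.
    """
    n = len(dist)

    # dp[(S, j)]: length of a shortest path that starts at vertex 0,
    # visits exactly the vertices in S, and ends at vertex j (0 not in S).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}

    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)

    # Close the tour by returning from the last vertex to vertex 0.
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))


# Example: a 4-vertex instance; the optimal tour 0-1-3-2-0 has length 80.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80
```

Each of the \(2^{n-1}\) vertex subsets is paired with at most \(n\) possible endpoints, and each table entry is computed in \(O(n)\) time, which gives the claimed \(O^*(2^n)\) running time; the sketch also makes the exponential space usage discussed above explicit.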


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.