8.4. Fixed-Parameter Tractability

Let us return to the question already touched upon in the previous section: Can we decouple the exponential running time of an algorithm that solves an NP-hard optimization problem exactly from the input size and instead limit the exponential explosion to some hardness parameter?

The first question we need to answer is: What makes a good parameter? There are many possible answers to this question. From a purely theoretical point of view, it may be interesting to understand which structural properties of an input make a problem hard or easy on this input. For example, the treewidth of a graph captures how tree-like a graph is. It turns out that a very large number of NP-hard graph problems can be solved in \(O(f(k) n)\) time, where \(k\) is the treewidth of the given graph. Thus, if \(k\) is small, that is, if the graph is very tree-like, we can solve these problems in linear time, in spite of their NP-hardness.

These results are theoretically interesting and the techniques necessary to prove them are clever. From a practical point of view, however, these algorithms are often unsatisfactory. What reason do we have to believe that real-world graphs have small treewidth, that is, are very tree-like? More natural parameter choices are often driven by the problem we want to solve. Throughout these notes, we choose the objective function value as the parameter. For the vertex cover problem, we thus want to find an algorithm with running time \(O^*(f(k))\), where \(k\) is the size of the computed vertex cover. We can argue that this is a natural parameter because, if the vertex cover is not small, we probably do not care about an exact solution anyway.

So how do we design an algorithm with running time \(O^*(f(k))\) for the vertex cover problem? The first observation, which applies to all NP-hard problems, is that we can focus on the decision version of the problem: Given a pair \((G,k)\), where \(G\) is an undirected graph and \(k\) is an integer, we only want to decide whether \(G\) has a vertex cover of size at most \(k\). Such a decision algorithm is easily augmented to also return a vertex cover of size at most \(k\) if such a vertex cover exists. Given such a decision algorithm, say with running time \(O^*\bigl(c^k\bigr)\) for some constant \(c\), we can compute an optimal vertex cover by first trying to find a vertex cover of size \(0\), then of size \(1\), then of size \(2\), and so on until we finally obtain an affirmative answer. If \(k\) is the size of the computed vertex cover, this linear guessing approach takes \(\sum_{i=0}^k O^*\bigl(c^i\bigr) = O^*\bigl(c^k\bigr)\) time, that is, the final iteration dominates the running time of the whole algorithm. What is left is to design an algorithm that takes \(O^*\bigl(c^k\bigr)\) time to decide whether a given graph has a vertex cover of size at most \(k\). Again, we use branching to do this (a short code sketch of both the decision procedure and this wrapper follows the case analysis below):

  • If \(k < 0\), we answer "No". There clearly does not exist a vertex cover of negative size. So assume from here on that \(k \ge 0\).

  • If \(E = \emptyset\), we answer "Yes" (and return the empty set) because the empty set is a valid vertex cover and has size \(0 \le k\).

  • Otherwise, we once again choose a vertex \(v\) of maximum degree. We make two recursive calls to decide whether \(G - v\) has a vertex cover of size at most \(k-1\) and whether \(G - N[v]\) has a vertex cover of size at most \(k - |N(v)|\).

    If the first recursive call answers "Yes" (and returns a vertex cover \(C_v\) of \(G - v\) of size at most \(k - 1\)), then we answer "Yes" and return \(C_v \cup \{v\}\), which is a vertex cover of \(G\) of size at most \(k\).

    If the second recursive call answers "Yes" (and returns a vertex cover \(C_{N(v)}\) of \(G - N[v]\) of size at most \(k - |N(v)|\)), then we answer "Yes" and return \(C_{N(v)} \cup N(v)\), which is a vertex cover of \(G\) of size at most \(k\).

    If both recursive calls answer "No", then we answer "No". This is correct because any vertex cover \(C\) of \(G\) includes \(v\) or all vertices in \(N(v)\). If \(|C| \le k\) and \(v \in C\), then \(C \setminus \{v\}\) is a vertex cover of \(G - v\) of size at most \(k-1\), that is, the first recursive call returns "Yes". If \(|C| \le k\) and \(v \notin C\), then \(N(v) \subseteq C\) and \(C \setminus N(v)\) is a vertex cover of \(G - N[v]\) of size at most \(k - |N(v)|\), that is, the second recursive call returns "Yes". Thus, if both recursive calls return "No", then \(G\) has no vertex cover of size at most \(k\).
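To make the case analysis above concrete, here is a minimal Python sketch of the decision procedure together with the linear guessing wrapper described earlier. The graph representation (an adjacency dictionary mapping each vertex to the set of its neighbours) and the names has_vertex_cover, remove_vertices, and min_vertex_cover are illustrative choices, not taken from the text; the sketch favours clarity over efficiency.

    # A minimal sketch of the O*(2^k) branching algorithm, assuming the graph
    # is an adjacency dictionary mapping each vertex to the set of its neighbours.

    def remove_vertices(graph, removed):
        """Return a copy of the graph with the given vertices deleted."""
        return {u: graph[u] - removed for u in graph if u not in removed}

    def has_vertex_cover(graph, k):
        """Return a vertex cover of size at most k if one exists, else None."""
        if k < 0:
            return None                   # no vertex cover of negative size
        if all(not graph[u] for u in graph):
            return set()                  # no edges left: the empty set suffices
        v = max(graph, key=lambda u: len(graph[u]))   # vertex of maximum degree
        neighbours = set(graph[v])

        # Branch 1: include v, recurse on G - v with budget k - 1.
        cover = has_vertex_cover(remove_vertices(graph, {v}), k - 1)
        if cover is not None:
            return cover | {v}

        # Branch 2: include N(v), recurse on G - N[v] with budget k - |N(v)|.
        cover = has_vertex_cover(remove_vertices(graph, neighbours | {v}),
                                 k - len(neighbours))
        if cover is not None:
            return cover | neighbours

        return None                       # no vertex cover of size at most k

    def min_vertex_cover(graph):
        """Linear guessing: try k = 0, 1, 2, ... until the decision succeeds."""
        k = 0
        while True:
            cover = has_vertex_cover(graph, k)
            if cover is not None:
                return cover
            k += 1

    # Example: a 5-cycle has a minimum vertex cover of size 3.
    cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
    print(min_vertex_cover(cycle))        # e.g. {0, 2, 3}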

The key observation now is that the parameter of both recursive calls is at most \(k-1\) because \(|N(v)| \ge 1\). Thus, the running time of this algorithm is given by the recurrence \(T(n,k) = O(n) + 2T(n,k-1)\), which has the solution \(T(n,k) = O^*\bigl(2^k\bigr)\).
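To see why the recurrence has this solution, unroll it: each level of the recursion tree at most doubles the number of subproblems and decreases the parameter by at least one, and the base cases also take at most \(O(n)\) time, so

\[ T(n,k) = O(n)\bigl(1 + 2 + 4 + \cdots + 2^k\bigr) + 2^{k+1} \cdot O(n) = O\bigl(2^k n\bigr) = O^*\bigl(2^k\bigr). \]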

Theorem 8.4: The vertex cover problem can be solved in \(O^*\bigl(2^k\bigr)\) time, where \(k\) is the size of the computed vertex cover.

If you paid attention, you should have an obvious question: We used the same branching strategy as in the improved exponential-time algorithm, but we only obtained a bound of \(O^*\bigl(2^k\bigr)\) on the running time of the parameterized algorithm, while the running time of the exponential-time algorithm was \(O^*\bigl(1.619^n\bigr)\). Why is that?

The reason is that the size of the input to the second recursive call is at most \(n - 2\) because both \(v\) and the vertices in \(N(v)\) are removed from the input to this recursive call. The parameter, on the other hand, may be reduced by only \(1\) in this branch because only the vertices in \(N(v)\), but not \(v\), are included in the vertex cover, and \(N(v)\) may consist of a single vertex. To obtain better parameterized algorithms for the vertex cover problem, we need more advanced branching rules, which we discuss in Chapter 16.

So what does the term "fixed-parameter tractability" refer to? Intuitively, it captures the idea that if we consider only inputs for which the hardness parameter is bounded by a constant, then we can solve the problem on such inputs in polynomial time: the problem becomes "tractable" on such inputs. Technically, an algorithm with running time \(O\bigl(n^{f(k)}\bigr)\) also satisfies this condition: If \(k\) is a constant, then the running time of the algorithm is polynomial. To consider a problem fixed-parameter tractable, however, we impose a stricter requirement:

The input to any parameterized algorithm for some problem \(P\) consists of a pair \((I, k)\), where \(I\) is an instance of \(P\) and \(k\) is an upper bound on the hardness parameter of \(I\).

A problem \(P\) parameterized by some parameter \(k\) is fixed-parameter tractable if there exists an algorithm that solves \(P\) on any parameterized instance \((I, k)\) in \(O(f(k) \cdot n^c)\) time, where \(n\) is the size of \(I\), \(f\) is an arbitrary computable function, and \(c\) is a constant.

In other words, we allow the running time of the algorithm to have an exponential or worse dependence on the parameter \(k\), as expressed by the function \(f(k)\), but the running time must depend only polynomially on the input size. More importantly, these two factors must be independent of each other: the degree of the polynomial must not depend on the parameter.

The motivation for this restrictive definition of fixed-parameter tractability is that algorithms with running time \(O\bigl(n^{f(k)}\bigr)\) are efficient only for very small values of \(k\): the degree of the polynomial grows with the parameter. For example, for \(k = 10\) and a modest input size of \(n = 10{,}000\), a running time of \(n^k = 10^{40}\) is hopeless, whereas a running time of \(2^k \cdot n \approx 10^7\) is perfectly practical.

