16.4.2.7. Analysis

We have argued as part of the discussion of the different cases that the branching rule in each case is correct. Thus, the algorithm correctly decides whether the input graph has a vertex cover of size at most \(k\). To prove the following theorem, we need to analyze the algorithm's running time.

Theorem 16.17: It takes \(O^*\bigl(1.33^k\bigr)\) time to decide whether a graph \(G\) has a vertex cover of size at most \(k\) and, if so, compute a minimum vertex cover of \(G\).

Proof: Note that each invocation needs to compute the connected components of \(G\) (to check whether Case 1 applies) and the degrees of all vertices in \(G\) (to eliminate isolated vertices and to choose between Cases 2–6). This can clearly be done in \(O(n + m)\) time, that is, in at most \(c \cdot (n + m)\) time for a sufficiently large constant \(c\). To simplify some of the arguments in the remainder of the proof, we assume that the cost of each invocation \(I\) on a graph \(G\) with \(n\) vertices and \(m\) edges is exactly \(c \cdot (n + m)\). Clearly, this can only increase the total cost of the algorithm. Thus, if we can prove that the cost of the algorithm is \(O^*\bigl(1.33^k\bigr)\) under this assumption, then it is also \(O^*\bigl(1.33^k\bigr)\) if the cost of each invocation is at most \(c \cdot (n + m)\).
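
The following Python sketch illustrates this linear-time bookkeeping under the assumption that the graph is stored as an adjacency-list dictionary; the representation and function name are ours and are not part of the algorithm's pseudocode.

```python
from collections import deque

def degrees_and_components(adj):
    """Compute vertex degrees and connected components in O(n + m) time.

    `adj` maps each vertex to a list of its neighbours (an adjacency list).
    This is the per-invocation bookkeeping assumed to cost c * (n + m) in
    the proof: the degrees decide between Cases 2-6, and the components
    decide whether Case 1 applies.
    """
    degree = {v: len(neighbours) for v, neighbours in adj.items()}

    components, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        # Breadth-first search collects one connected component.
        component, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            component.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(component)
    return degree, components
```

For example, on the adjacency lists `{0: [1], 1: [0], 2: []}` this returns the degrees `{0: 1, 1: 1, 2: 0}` and the components `[[0, 1], [2]]`.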

Now consider the recursion tree of the algorithm. For any invocation \(I\), let \(q_I\) be the maximum number of applications of Case 5 on any path from \(I\) to a descendant invocation. The remainder of the proof is divided into three parts:

  • First we prove that we can eliminate all invocations whose input instances \((G,k)\) satisfy \(k < 0\) from the recursion tree without decreasing the total cost of all invocations in the recursion tree by more than a constant factor. Thus, it suffices to prove that the total cost of all invocations whose input instances \((G,k)\) satisfy \(k \ge 0\) is \(O^*\bigl(1.33^k\bigr)\).

  • The second part of the proof shows that the total cost of all descendant invocations of an invocation \(I\) with input \((G,k)\), including \(I\), is at most \(c \cdot 2^{q_I} \cdot (4 \cdot 1.33^k - 1) \cdot (n + m)\), where \(n\) is the number of vertices in \(G\), \(m\) is the number of edges in \(G\), and \(c\) is the same constant as above.

  • The third part of the proof shows that \(q_I \le 2\) for every invocation \(I\). Thus, the total running time of the algorithm is \(O^*\bigl(1.33^k\bigr)\), as claimed.

Eliminating invocations with negative parameter: First note that no invocation with negative parameter makes any recursive calls. Thus, the parent invocation of any invocation with negative parameter has non-negative parameter. By charging each invocation with negative parameter to its parent invocation, we therefore charge all invocations with negative parameter to invocations with non-negative parameter. To prove that eliminating all invocations with negative parameter decreases the total cost of all invocations in the recursion tree by no more than a constant factor, it suffices to prove that the total cost of all invocations with negative parameter charged to some invocation \(I\) with non-negative parameter is only a constant factor greater than the cost of \(I\) itself.

So consider any invocation \(I\) with input \((G,k)\) such that \(k \ge 0\).

  • If \(I\) applies Case 1, then either \(I\) makes no recursive calls or all of these recursive calls have non-negative parameter because Case 1 makes recursive calls only if the number of connected components \(t\) of \(G\) is no greater than \(k\). The parameter of every child invocation of \(I\) is \(k - t + 1\) in this case. Thus, if \(I\) applies Case 1, it is not charged for any invocations with negative parameter.

  • If \(I\) applies one of Cases 2–6, then observe that \(I\) makes at most three recursive calls in each of these cases. Thus, \(I\) is charged for at most three invocations with negative parameter. Since \(I\)'s cost is \(c \cdot (n + m)\) and each of its child invocations has cost at most \(c \cdot (n + m)\), the total cost of all invocations with negative parameter charged to \(I\) is at most three times the cost of \(I\).
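
In other words, if we write \(C_{\ge 0}\) for the total cost of all invocations with non-negative parameter and \(C_{<0}\) for the total cost of all invocations with negative parameter (notation used only in this remark), then summing the above charges over all invocations with non-negative parameter gives

\[C_{<0} \le 3 \cdot C_{\ge 0} \qquad\text{and hence}\qquad C_{\ge 0} + C_{<0} \le 4 \cdot C_{\ge 0}.\]

Thus, eliminating the invocations with negative parameter decreases the total cost by at most a factor of \(4\).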

The cost of invocations with non-negative parameter: We use induction on \(k\) to prove that the cost of all descendant invocations of an invocation \(I\) with input \((G,k)\) and with parameter \(k \ge 0\), including the cost of \(I\) itself, is bounded by \(c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m)\).

If \(I\) makes no recursive calls with non-negative parameter, then its cost is \(c \cdot (n + m) \le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m)\) because \(2^{q_I} \ge 1\) and, for \(k \ge 0\), \(4 \cdot 1.33^k - 1 \ge 1\).

So assume that \(I\) makes \(t \ge 1\) recursive calls \(I_1, \ldots, I_t\) with non-negative parameters. This implies that \(k > 0\) because any recursive call that \(I\) makes has a parameter less than \(k\). Let \((G_1,k_1), \ldots, (G_t,k_t)\) be the inputs of \(I_1, \ldots, I_t\). For \(1 \le j \le t\), let \(n_j\) be the number of vertices in \(G_j\), and let \(m_j\) be the number of edges in \(G_j\). Also note that \(q_{I_j} \le q_I\) for all \(1 \le j \le t\) because \(I_j\) is a descendant invocation of \(I\).

  • If \(t = 1\), then observe that \(n_1 \le n\), \(m_1 \le m\), and \(k_1 < k\), no matter which case the invocation \(I\) applies. Thus, by the inductive hypothesis, the total cost of the invocation \(I_1\) and all its descendant invocations is bounded by \(c \cdot 2^{q_{I_1}} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n_1 + m_1)\). By adding the cost of \(I\), we obtain that the total cost of all descendant invocations of \(I\) is bounded by

    \[\begin{aligned} c \cdot 2^{q_{I_1}} &\cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n_1 + m_1) + c \cdot (n + m)\\ &\le c \cdot 2^{q_{I_1}} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n + m) + c \cdot (n + m)\\ &\le c \cdot 2^{q_{I_1}} \cdot 4 \cdot 1.33^{k-1} \cdot (n + m)\\ &\le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m), \end{aligned}\]

    where the last inequality follows because \(q_{I_1} \le q_I\) and for \(k \ge 1\), \(4 \cdot 1.33^k - 1 \ge 4 \cdot 1.33^{k-1}\).

  • If \(t \ge 2\), then we distinguish which case \(I\) applies:

    • If \(I\) applies Case 1, then \(G_1, \ldots, G_t\) are the connected components of \(G\) and \(k_1 = \cdots = k_t \le k - 1\). Since the connected components partition the vertices and edges of \(G\), we have \(\sum_{j=1}^t n_j = n\) and \(\sum_{j=1}^t m_j = m\). By the inductive hypothesis, the total cost of all descendant invocations of each child invocation \(I_j\) is bounded by \(c \cdot 2^{q_{I_j}} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n_j + m_j)\). By summing these costs and adding the cost of \(I\), we obtain that the total cost of all descendant invocations of \(I\) is bounded by

      \[\begin{aligned} \sum_{j=1}^t &\left[ c \cdot 2^{q_{I_j}} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n_j + m_j) \right] + c \cdot (n + m)\\ &\le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n + m) + c \cdot (n + m)\\ &\le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m), \end{aligned}\]

      where the first inequality follows because \(\sum_{j=1}^t n_j = n\), \(\sum_{j=1}^t m_j = m\), and for all \(1 \le j \le t\), \(q_{I_j} \le q_I\). The second inequality follows from the same argument as in the case when \(t = 1\).

    • If \(I\) applies Case 2, 3, 4, or 6, then, by the inductive hypothesis, the total cost of all descendant invocations of \(I\) is bounded by

      \[\begin{aligned} \sum_{j=1}^t &\left[ c \cdot 2^{q_{I_j}} \cdot \bigl(4 \cdot 1.33^{k_j} - 1\bigr) \cdot (n_j + m_j) \right] + c\cdot(n + m)\\ &\le \sum_{j=1}^t \left[ c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^{k_j} - 1 \bigr) \cdot (n + m) \right] + c\cdot(n + m). \end{aligned}\]

      As part of the discussion of Cases 2, 3, 4, and 6, we proved that each of these cases has a branching number no greater than \(1.33\), that is, \(\sum_{j=1}^t 1.33^{k_j} \le 1.33^k\) in each case (a numerical check of this condition is sketched after this case analysis). Thus,

      \[\begin{aligned} \sum_{j=1}^t &\left[ c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^{k_j} - 1\bigr) \cdot (n + m) \right] + c\cdot(n + m)\\ &= \sum_{j=1}^t \left[ c \cdot 2^{q_I} \cdot 4 \cdot 1.33^{k_j} \cdot (n + m) \right] - t \cdot c \cdot 2^{q_I} \cdot (n + m) + c \cdot (n + m)\\ &\le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m), \end{aligned}\]

      where the last inequality follows because \(\sum_{j=1}^t 1.33^{k_j} \le 1.33^k\) and \(t \ge 2\).

    • Finally, consider the case when \(I\) applies Case 5. In this case, \(t = 2\) because any invocation that applies Case 5 makes two recursive calls and we assume here that \(t \ge 2\). Moreover, \(q_{I_j} < q_I\) for every child invocation \(I_j\) of \(I\) because \(I\) itself applies Case 5. By the inductive hypothesis, the total cost of all descendant invocations of each child invocation \(I_j\) of \(I\) is bounded by \(c \cdot 2^{q_{I_j}} \cdot \bigl(4 \cdot 1.33^{k_j} - 1\bigr) \cdot (n_j + m_j)\). By summing these costs and adding the cost of \(I\), we obtain that the total cost of all descendant invocations of \(I\) is bounded by

      \[\begin{aligned} \sum_{j=1}^2 &\left[ c \cdot 2^{q_{I_j}} \cdot \bigl(4 \cdot 1.33^{k_j} - 1\bigr) \cdot (n_j + m_j) \right] + c\cdot(n + m)\\ &\le \sum_{j=1}^2 \left[ c \cdot 2^{q_I-1} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n + m) \right] + c\cdot(n + m)\\ &= c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^{k-1} - 1\bigr) \cdot (n + m) + c\cdot(n + m)\\ &\le c \cdot 2^{q_I} \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m), \end{aligned}\]

      where the last inequality once again follows by the same arguments as in the case when \(t = 1\).
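
The branching-number condition used above for Cases 2, 3, 4, and 6 can be checked numerically. If a case decreases the parameter by \(d_j\) in its \(j\)-th recursive call, then \(\sum_{j=1}^t 1.33^{k_j} \le 1.33^k\) holds for every \(k\) exactly when \(\sum_{j=1}^t 1.33^{-d_j} \le 1\), that is, when the branching number (the unique root \(b > 1\) of \(\sum_{j=1}^t b^{-d_j} = 1\)) is at most \(1.33\). The following sketch computes this root by bisection; the example branching vectors are purely illustrative and are not taken from the discussion of Cases 2–6.

```python
def branching_number(deltas, lo=1.0, hi=4.0, tol=1e-9):
    """Return the unique b > 1 with sum(b**(-d) for d in deltas) == 1.

    `deltas` is a branching vector (d_1, ..., d_t) with t >= 2: the j-th
    recursive call decreases the parameter by d_j >= 1.  The condition
    sum(1.33**(k - d_j)) <= 1.33**k holds for all k iff this root is <= 1.33.
    `hi` must upper-bound the root (4.0 suffices for the examples below).
    """
    f = lambda b: sum(b ** (-d) for d in deltas) - 1  # strictly decreasing in b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid  # root lies above mid
        else:
            hi = mid
    return hi

# Illustrative branching vectors (not the ones from Cases 2-6):
# (1, 2) gives ~1.6180 > 1.33, whereas (2, 3) and (1, 5) both give ~1.3247 <= 1.33.
for deltas in [(1, 2), (2, 3), (1, 5)]:
    print(deltas, round(branching_number(deltas), 4))
```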

Bounding \(\boldsymbol{q_I}\): The final part of the proof is to show that \(q_I \le 2\) for every invocation \(I\). Consider any invocation \(I_1\) and any descendant invocation \(I_2\) of \(I_1\). Let \((G_1, k_1)\) be the input of \(I_1\), and let \((G_2, k_2)\) be the input of \(I_2\). Then observe that \(G_2\) is a proper subgraph of \(G_1\). By Lemma 16.18 below, \(G_2\) cannot be \(d\)-regular if \(G_1\) is connected and \(d\)-regular. Thus, any path from some invocation \(I\) to a leaf of the recursion tree contains at most one invocation whose input graph is connected and \(4\)-regular and at most one invocation whose input graph is connected and \(3\)-regular. Since Case 5 is applied only when the input graph is connected and either \(3\)-regular or \(4\)-regular, this shows that \(q_I \le 2\). ▨
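
To make the conclusion explicit: for the initial invocation on \((G, k)\), the second and third parts bound the total cost of all invocations with non-negative parameter by \(c \cdot 2^2 \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m)\), and by the first part the total cost of all invocations is at most a constant factor (at most \(4\)) larger, that is, at most

\[4 \cdot c \cdot 2^2 \cdot \bigl(4 \cdot 1.33^k - 1\bigr) \cdot (n + m) \le 64\,c \cdot 1.33^k \cdot (n + m) = O\bigl(1.33^k \cdot (n + m)\bigr),\]

which is \(O^*\bigl(1.33^k\bigr)\), as claimed in Theorem 16.17.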

Lemma 16.18: A proper subgraph of a connected \(d\)-regular graph is not \(d\)-regular, for any \(d \ge 1\).

Proof: Let \(G\) be a connected \(d\)-regular graph for some \(d \ge 1\), and let \(G'\) be a proper subgraph of \(G\).

We can assume that \(G'\) is not empty because otherwise it is clearly not \(d\)-regular for any \(d \ge 1\). Thus, there exists a vertex \(w \in G'\). Since \(G'\) is a proper subgraph of \(G\), it lacks a vertex or an edge of \(G\); because every vertex of \(G\) has degree \(d \ge 1\), a missing vertex has at least one incident edge in \(G\), and such an edge cannot belong to \(G'\). In either case, there exists an edge \((u,v)\) of \(G\) that is not an edge of \(G'\).

If \(u \in G'\), then note that \(\deg_{G'}(u) < \deg_G(u) = d\) because the edge \((u,v)\) is incident to \(u\) in \(G\) but not in \(G'\). Thus, \(G'\) is not \(d\)-regular.

If \(u \notin G'\), then note that there exists a path \(\langle w = x_1, \ldots, x_\ell = u \rangle\) from \(w\) to \(u\) in \(G\) because \(G\) is connected. Since \(w \in G'\) and \(u \notin G'\), there exists an index \(i\) such that \(x_i \in G'\) but \(x_{i+1} \notin G'\). Then \(\deg_{G'}(x_i) < \deg_G(x_i) = d\) because the edge \((x_i, x_{i+1})\) is incident to \(x_i\) in \(G\) but not in \(G'\), as \(x_{i+1} \notin G'\). Thus, once again, \(G'\) is not \(d\)-regular. ▨
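
The lemma can be sanity-checked on a small example. The following Python sketch (using a plain adjacency-list dictionary and helper names of our own choosing) verifies the conclusion for the proper subgraphs of \(K_4\) obtained by deleting a single vertex or a single edge; it is an illustration, not a replacement for the proof above.

```python
from itertools import combinations

def is_regular(adj, d):
    """True if the graph is non-empty and every vertex has degree d.

    (The non-emptiness requirement matches the convention used in the
    proof of Lemma 16.18.)
    """
    return bool(adj) and all(len(neighbours) == d for neighbours in adj.values())

def delete_edge(adj, u, v):
    """Return a copy of the graph with the edge (u, v) removed."""
    return {x: [y for y in ys if (x, y) not in ((u, v), (v, u))]
            for x, ys in adj.items()}

def delete_vertex(adj, u):
    """Return a copy of the graph with vertex u and its incident edges removed."""
    return {x: [y for y in ys if y != u] for x, ys in adj.items() if x != u}

# K4 is connected and 3-regular ...
k4 = {v: [w for w in range(4) if w != v] for v in range(4)}
assert is_regular(k4, 3)

# ... but every proper subgraph obtained by deleting a single edge or a
# single vertex fails to be 3-regular, as Lemma 16.18 predicts.
for u, v in combinations(range(4), 2):
    assert not is_regular(delete_edge(k4, u, v), 3)
for u in range(4):
    assert not is_regular(delete_vertex(k4, u), 3)
print("all single-deletion proper subgraphs of K4 are non-3-regular")
```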

