13. LP Rounding
Among the techniques for obtaining approximation algorithms we discuss in this course, LP rounding is the only one that explicitly solves the LP relaxation of the problem at hand and then uses the computed fractional solution to obtain a feasible solution that is a good approximation of an optimal solution. The lower bound this technique is based on, assuming we are solving a minimization problem, is that \(\mathrm{OPT}_f \le \mathrm{OPT}\), where \(\mathrm{OPT}_f\) denotes the objective function value of an optimal fractional solution of the LP relaxation. Thus, if we can prove that the solution we obtain via rounding has an objective function value of at most \(c \cdot \mathrm{OPT}_f\), then it is a \(c\)-approximation of an optimal solution.
There are many clever approaches to LP rounding. Here, we discuss only two. The first one is to exploit half-integrality of an optimal solution of the LP relaxation: Some problems have the remarkable property that there exists an optimal solution of the LP relaxation that assigns values to all variables that are multiples of \(\frac{1}{2}\). By rounding these values up, we obtain an integral solution whose objective function value is at most twice \(\mathrm{OPT}_f\). We will see this idea in action for the vertex cover and multiway vertex cut problems in Section 13.1.
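To make the rounding step concrete, the following Python sketch carries it out for vertex cover on the triangle graph \(K_3\). The half-integral fractional solution is hard-coded (assigning \(\frac{1}{2}\) to every vertex is optimal for this instance and would normally come from an LP solver); the graph and variable names are illustrative assumptions, not notation from the text.

```python
# Rounding a half-integral LP solution for vertex cover (illustrative sketch).
# The fractional solution below is assumed to come from an LP solver; for the
# triangle graph K_3, assigning 1/2 to every vertex is optimal (value 3/2).
edges = [(0, 1), (1, 2), (0, 2)]
x_frac = {0: 0.5, 1: 0.5, 2: 0.5}  # half-integral optimal LP solution

# Round up: include every vertex v with x_v >= 1/2 in the cover.
cover = {v for v, x in x_frac.items() if x >= 0.5}

# Feasibility: each edge constraint x_u + x_v >= 1 forces at least one
# endpoint to have fractional value >= 1/2, so that endpoint is in the cover.
assert all(u in cover or v in cover for u, v in edges)

# Cost guarantee: every chosen vertex contributes >= 1/2 to the LP objective,
# so |cover| <= 2 * OPT_f (here, 3 <= 2 * 1.5).
assert len(cover) <= 2 * sum(x_frac.values())
```

On this instance the rounded cover has size \(3\) while \(\mathrm{OPT} = 2\), so the bound \(2 \cdot \mathrm{OPT}_f = 3\) is attained with equality.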
The second approach to LP rounding we discuss is randomization. Given an optimal solution \(\hat x\) of the LP relaxation of the ILP we want to solve, let \(\{\hat x_v\} = \hat x_v - \lfloor \hat x_v \rfloor\) be the fractional part of \(\hat x_v\). To obtain an integral solution \(\tilde x\), we treat these fractional parts as probabilities and randomly choose \(\tilde x_v\) from the set \(\{ \lfloor \hat x_v \rfloor, \lfloor \hat x_v \rfloor + 1 \}\), choosing the first value with probability \(1 - \{\hat x_v\}\) and the second value with probability \(\{\hat x_v\}\). In other words, we obtain \(\tilde x_v\) by rounding \(\hat x_v\) up with probability \(\{\hat x_v\}\) and rounding \(\hat x_v\) down with probability \(1 - \{\hat x_v\}\). A single application of this procedure usually does not produce a feasible solution, but we can repeat it several times and combine the resulting solutions appropriately to obtain a feasible solution of low expected cost. We illustrate randomized rounding using the set cover and multiway edge cut problems in Section 13.2.
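The per-coordinate rounding step can be sketched as follows; the function name and the fixed seed are assumptions for illustration, and the problem-specific repetition and combination steps are omitted. The key property the sketch checks is that the rounded value has expectation equal to the fractional value.

```python
import math
import random

def randomized_round(x_hat, rng):
    """Round each coordinate independently: floor(x) + 1 with probability
    equal to the fractional part of x, floor(x) otherwise, so that
    E[x_tilde_v] = x_hat_v for every coordinate v."""
    return [math.floor(x) + (1 if rng.random() < x - math.floor(x) else 0)
            for x in x_hat]

x_hat = [0.3, 1.7, 2.0]
rng = random.Random(0)  # fixed seed, for reproducibility only

# Each rounded coordinate is floor(x_hat_v) or floor(x_hat_v) + 1 ...
x_tilde = randomized_round(x_hat, rng)
assert all(t in (math.floor(x), math.floor(x) + 1)
           for t, x in zip(x_tilde, x_hat))

# ... and the empirical mean over many trials approaches x_hat.
trials = 20000
totals = [0.0] * len(x_hat)
for _ in range(trials):
    for v, t in enumerate(randomized_round(x_hat, rng)):
        totals[v] += t
means = [s / trials for s in totals]
assert all(abs(m - x) < 0.02 for m, x in zip(means, x_hat))
```

Note that an integral coordinate (here \(\hat x_v = 2.0\)) has fractional part \(0\) and is therefore never changed by the rounding.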
In some cases, it is possible to derandomize such a randomized algorithm, that is, to turn it into a deterministic algorithm that computes a solution of the same quality in the worst case, not just in expectation. The method of conditional expectations is one way to achieve this for some problems. We illustrate this technique in Section 13.3 using the maximum satisfiability problem as an example.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.