13.3.1.2. A Good Algorithm for Small Clauses
Our second randomized algorithm uses linear programming and LP rounding. For each clause \(C_j\), let us define two sets \(C_j^+\) and \(C_j^-\): \(C_j^+\) is the set of variables \(x_i\) that occur in un-negated form in \(C_j\) (\(x_i \in C_j\)), and \(C_j^-\) is the set of variables \(x_i\) that occur in negated form in \(C_j\) (\(\bar x_i \in C_j\)). For example, if \(C_j = x_1 \vee \bar x_2 \vee x_3\), then \(C_j^+ = \{x_1, x_3\}\) and \(C_j^- = \{x_2\}\). Then the ILP formulation of MAX-SAT is
\[\begin{gathered} \text{Maximize } \sum_{j=1}^m w_{C_j}z_j\\ \begin{aligned} \text{s.t. } \sum_{x_i \in C_j^+} y_i + \sum_{x_i \in C_j^-} (1 - y_i) &\ge z_j && \forall 1 \le j \le m\\ y_i &\in \{0, 1\} && \forall 1 \le i \le n\\ z_j &\in \{0, 1\} && \forall 1 \le j \le m. \end{aligned} \end{gathered}\tag{13.13}\]
The variable \(y_i\) indicates whether \(x_i = \textrm{true}\). The variable \(z_j\) indicates whether the truth assignment \(x\) satisfies the clause \(C_j\).
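To make the formulation concrete, here is a minimal sketch that encodes (13.13) for a small example and solves its LP relaxation (introduced in the next paragraph) using SciPy's `linprog`. The four-clause formula \((x_1 \vee x_2) \wedge (\bar x_1 \vee x_3) \wedge (\bar x_2 \vee \bar x_3) \wedge x_1\), its unit weights, and all variable names below are made up for this illustration.

```python
# Sketch: encode (13.13) for a made-up example and solve its LP relaxation.
import numpy as np
from scipy.optimize import linprog

n, m = 3, 4  # number of variables x_i and clauses C_j
# Each clause as (C_j^+, C_j^-, w_{C_j}) with 0-based variable indices.
clauses = [([0, 1], [],     1.0),   # x1 v x2
           ([2],    [0],    1.0),   # ~x1 v x3
           ([],     [1, 2], 1.0),   # ~x2 v ~x3
           ([0],    [],     1.0)]   # x1

# LP variables: y_1..y_n followed by z_1..z_m. linprog minimizes,
# so we minimize -sum_j w_j z_j to maximize sum_j w_j z_j.
c = np.concatenate([np.zeros(n), [-w for _, _, w in clauses]])

# The clause constraint of (13.13), rewritten in <= form:
#   -sum_{i in C_j^+} y_i + sum_{i in C_j^-} y_i + z_j <= |C_j^-|
A_ub = np.zeros((m, n + m))
b_ub = np.zeros(m)
for j, (pos, neg, _) in enumerate(clauses):
    A_ub[j, pos] = -1.0
    A_ub[j, neg] = 1.0
    A_ub[j, n + j] = 1.0
    b_ub[j] = len(neg)

# Relaxation: 0 <= y_i <= 1 and 0 <= z_j <= 1 instead of y_i, z_j in {0, 1}.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * (n + m))
y_hat, z_hat = res.x[:n], res.x[n:]
print("OPT_f =", -res.fun)  # optimal objective value of the LP relaxation
```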
We solve the LP relaxation of (13.13). Given the resulting solution \(\bigl(\hat y, \hat z\bigr)\), we define a random truth assignment \(\hat x\) by setting \(\hat x_i = \textrm{true}\) with probability \(\hat y_i\). As in the previous section, we use \(\hat W\) to denote the total weight of the clauses satisfied by \(\hat x\) and \(\hat W_j\) to denote the contribution of the \(j\)th clause \(C_j\) to \(\hat W\).
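Continuing the sketch above (again with made-up names), the rounding step and an empirical estimate of \(E\bigl[\hat W\bigr]\) might look as follows; averaging over many independent roundings approximates the expectation analyzed below.

```python
# Randomized rounding: set each x_i = true independently with
# probability y_hat[i], then sum the weights of the satisfied clauses.
import random

def round_and_evaluate(y_hat, clauses):
    x = [random.random() < yi for yi in y_hat]  # Pr[x_i = true] = y_hat[i]
    return sum(w for pos, neg, w in clauses
               if any(x[i] for i in pos) or any(not x[i] for i in neg))

# Averaging many independent roundings estimates E[W_hat].
trials = 10_000
avg = sum(round_and_evaluate(y_hat, clauses) for _ in range(trials)) / trials
print("estimate of E[W_hat]:", avg)
```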
Lemma 13.15: If every clause in \(F\) has size at most \(k\), then
\[E[\hat W] \ge \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot \mathrm{OPT}.\]
Proof: Consider an arbitrary clause \(C_j\) of size \(h \le k\). We can assume w.l.o.g. that \(C_j = x_1 \vee \cdots \vee x_h\) because we can ensure this by replacing each negated variable in \(C_j\) with its negation throughout \(F\) and by renumbering the variables appropriately. Each variable is rounded to \(\hat x_i = \textrm{false}\) with probability \(1 - \hat y_i\), and the variables are rounded independently. Thus, \(C_j\) is not satisfied with probability \(\prod_{i=1}^h \bigl(1 - \hat y_i\bigr)\). By the inequality of the arithmetic and geometric means, we have
\[\prod_{i=1}^h \bigl(1 - \hat y_i\bigr) \le \left(\frac{\sum_{i=1}^h \bigl(1 - \hat y_i\bigr)}{h}\right)^h = \left(1 - \frac{\sum_{i=1}^h \hat y_i}{h}\right)^h \le \left(1 - \frac{\hat z_j}{h}\right)^h,\]
where the last inequality follows from the constraint \(\sum_{i=1}^h \hat y_i \ge \hat z_j\) in (13.13). Thus, the probability that \(C_j\) is satisfied is at least \(1 - \Bigl(1 - \frac{\hat z_j}{h}\Bigr)^h\).
Now let
\[g(z) = 1 - \left(1 - \frac{z}{h}\right)^h.\]
This is a concave function with \(g(0) = 0\) and \(g(1) = 1 - \bigl(1 - \frac{1}{h}\bigr)^h\). Since a concave function lies above the line segment connecting any two points on its graph, \(g\) lies above the segment from \((0, g(0))\) to \((1, g(1))\) on the interval \([0, 1]\). Thus,
\[g(z) \ge \left[1 - \left(1 - \frac{1}{h}\right)^h\right] \cdot z\]
for all \(0 \le z \le 1\). In particular, the probability that \(C_j\) is satisfied by \(\hat x\) is at least \(\Bigl[1 - \bigl(1 - \frac{1}{h}\bigr)^h\Bigr] \cdot \hat z_j\), which is at least \(\Bigl[1 - \bigl(1 - \frac{1}{k}\bigr)^k\Bigr] \cdot \hat z_j\) because \(\bigl(1 - \frac{1}{h}\bigr)^h \le \bigl(1 - \frac{1}{k}\bigr)^k\) for all \(h \le k\).
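As a quick numeric sanity check of this chord argument (an illustrative aside, not part of the proof), one can verify on a grid that \(g\) dominates the linear lower bound for several clause sizes:

```python
# Check g(z) = 1 - (1 - z/h)^h >= [1 - (1 - 1/h)^h] * z on a grid in [0, 1].
for h in [1, 2, 3, 5, 10]:
    slope = 1 - (1 - 1 / h) ** h
    assert all(1 - (1 - z / h) ** h >= slope * z - 1e-12
               for z in (i / 100 for i in range(101)))
```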
This gives
\[E[\hat W_j] \ge\left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot w_{C_j} \hat z_j,\]
and, by linearity of expectation,
\[\begin{aligned} E[\hat W] &\ge \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot \sum_{j=1}^m w_{C_j} \hat z_j\\ &= \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot \mathrm{OPT}_f\\ &\ge \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot \mathrm{OPT}, \end{aligned}\]
where the equality holds because \(\bigl(\hat y, \hat z\bigr)\) is an optimal solution of the LP relaxation, with objective value \(\mathrm{OPT}_f\), and the final inequality holds because \(\mathrm{OPT}_f \ge \mathrm{OPT}\). ▨
Corollary 13.16: For any formula \(F\), \(E\bigl[\hat W\bigr] \ge \bigl(1 - \frac{1}{e}\bigr) \cdot \mathrm{OPT}\).
Proof: Let \(k\) be the maximum size of any clause in \(F\). Since \(\bigl(1 - \frac{1}{k}\bigr)^k \le \frac{1}{e}\) for all \(k \ge 1\), Lemma 13.15 gives
\[E[\hat W] \ge \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \cdot \mathrm{OPT} \ge \left(1 - \frac{1}{e}\right) \cdot \mathrm{OPT}.\ \text{▨}\]
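To see what the guarantee is worth for concrete clause sizes, the following lines (illustrative only) tabulate the factor \(1 - \bigl(1 - \frac{1}{k}\bigr)^k\):

```python
import math

for k in [1, 2, 3, 4]:
    print(k, 1 - (1 - 1 / k) ** k)
# k=1: 1.0   k=2: 0.75   k=3: ~0.7037   k=4: ~0.6836
print("limit:", 1 - 1 / math.e)  # ~0.6321
```

The factor is largest for small \(k\), which is why this LP-rounding algorithm is particularly good for formulas with small clauses, as the section title suggests.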
