4.1. LP Duality
Consider a maximization LP \(P\) in canonical form:
\[\begin{gathered} \text{Maximize } cx\\ \begin{aligned} \text{s.t. }Ax &\le b\\ x &\ge 0 \end{aligned} \end{gathered}\tag{4.1}\]
In the context of LP duality, we call \(P\) the primal LP. Its corresponding dual LP is
\[\begin{gathered} \text{Minimize } b^Ty\\ \begin{aligned} \text{s.t. }A^Ty &\ge c^T\\ y &\ge 0. \end{aligned} \end{gathered}\tag{4.2}\]
If you write both LPs in tabular form, similar to a tableau, then the dual LP is obtained by transposing the primal LP:
\[\begin{gathered} \begin{array}{c|cccc} & c_1 & c_2 & \cdots & c_n\\ \hline b_1 & a_{11} & a_{12} & \cdots & a_{1n}\\ b_2 & a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ b_m & a_{m1} & a_{m2} & \cdots & a_{mn}\\ \hline & x_1 & x_2 & \cdots & x_n\\ \end{array}\\ \textbf{Primal} \end{gathered} \qquad \Longleftrightarrow \qquad \begin{gathered} \begin{array}{c|cccc} & b_1 & b_2 & \cdots & b_m\\ \hline c_1 & a_{11} & a_{21} & \cdots & a_{m1}\\ c_2 & a_{12} & a_{22} & \cdots & a_{m2}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ c_n & a_{1n} & a_{2n} & \cdots & a_{mn}\\ \hline & y_1 & y_2 & \cdots & y_m\\ \end{array}\\ \textbf{Dual} \end{gathered}\]
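Because the transformation is purely mechanical, it is easy to carry out programmatically. The following sketch (using NumPy, with made-up LP data not taken from the text) builds the dual's data by transposing the primal's table:

```python
import numpy as np

# Primal (4.1): maximize c x subject to A x <= b, x >= 0 (toy data).
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])    # m = 3 constraints, n = 2 variables
b = np.array([4.0, 6.0, 5.0])  # right-hand sides
c = np.array([3.0, 2.0])       # objective coefficients

# Dual (4.2): minimize b^T y subject to A^T y >= c, y >= 0.
A_dual = A.T     # the dual's constraint matrix is the transpose
obj_dual = b     # the primal's right-hand sides become the dual's objective
rhs_dual = c     # the primal's objective becomes the dual's right-hand sides
```

Note that nothing beyond transposition is needed: the \(m\) primal constraints turn into \(m\) dual variables, and the \(n\) primal variables turn into \(n\) dual constraints.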
Since every column of the dual is the transpose of a row of the primal and vice versa, it is often convenient to associate each dual variable \(y_i\) (corresponding to the \(i\)th column of the dual) with the \(i\)th constraint of the primal (corresponding to the \(i\)th row of the primal). Similarly, each primal variable \(x_j\) (corresponding to the \(j\)th column of the primal) is associated with the \(j\)th constraint of the dual (corresponding to the \(j\)th row of the dual). This association is important, for example, in the theory of complementary slackness.
A common obstacle to understanding LP duality is the urge to ask "what the dual means". This question is particularly tempting when expressing other optimization problems as LPs and then constructing their duals, for example, when using the primal-dual schema. It is sometimes possible to attach an intuitive interpretation to the dual, and such an interpretation can be very helpful. However, as Vijay Vazirani, one of the leading researchers in approximation algorithms, once put it, much of the power of LP duality stems from the fact that it is a purely mechanical transformation of the given LP. Asking what the dual means is sometimes helpful but is often the wrong thing to do. Take this advice seriously.
The first observation, useful but not difficult to prove, is that the objective function value of any feasible solution of the dual LP is an upper bound on the objective function value of any feasible solution of the primal LP.
Lemma 4.1 (Weak Duality): Let \(\hat x\) be a feasible solution of the primal LP (4.1), and let \(\hat y\) be a feasible solution of the dual LP (4.2). Then
\[c\hat x \le b^T\hat y.\]
Proof: We have
\[\begin{aligned} \sum_{j=1}^n c_j\hat x_j &\le \sum_{j=1}^n \left( \sum_{i=1}^m a_{ij} \hat y_i \right) \hat x_j && \left(\text{by (4.2), }c_j \le \sum_{i=1}^m a_{ij}\hat y_i\ \forall 1 \le j \le n\right)\\ &= \sum_{i=1}^m \left( \sum_{j=1}^n a_{ij} \hat x_j \right) \hat y_i\\ &\le \sum_{i=1}^m b_i \hat y_i && \left(\text{by (4.1), }\sum_{j=1}^n a_{ij}\hat x_j \le b_i\ \forall 1 \le i \le m\right).\ \text{▨} \end{aligned}\]
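Weak duality is easy to verify numerically. The snippet below (a toy example; the LP data and the two feasible solutions are made up, not from the text) checks feasibility of one primal and one dual solution and then compares their objective values:

```python
import numpy as np

# Toy data for the primal: maximize c x subject to A x <= b, x >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
b = np.array([4.0, 6.0, 5.0])
c = np.array([3.0, 2.0])

x_hat = np.array([1.0, 1.0])        # a feasible primal solution
y_hat = np.array([3.0, 0.0, 0.0])   # a feasible dual solution

assert np.all(A @ x_hat <= b) and np.all(x_hat >= 0)    # primal feasibility
assert np.all(A.T @ y_hat >= c) and np.all(y_hat >= 0)  # dual feasibility

# Weak duality (Lemma 4.1): c x_hat <= b^T y_hat.
print(c @ x_hat, b @ y_hat)  # 5.0 and 12.0, so the inequality holds
```

Here the gap between \(5\) and \(12\) is large; by Corollary 4.2 below, closing the gap entirely would certify that both solutions are optimal.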
An immediate consequence of weak duality is the following corollary:
Corollary 4.2: Let \(\hat x\) be a feasible solution of the primal LP (4.1) and let \(\hat y\) be a feasible solution of the dual LP (4.2). If \(c\hat x = b^T\hat y\), then both \(\hat x\) and \(\hat y\) are optimal solutions of their respective LPs.
In Section 4.2, we use this corollary to prove that the Simplex Algorithm computes an optimal solution of the LP it is given as input, from which we can deduce the following result:
Theorem 4.3 (Strong Duality): Let \(\hat x\) be a feasible solution of the primal LP (4.1) and let \(\hat y\) be a feasible solution of the dual LP (4.2). \(\hat x\) is an optimal solution of (4.1) and \(\hat y\) is an optimal solution of (4.2) if and only if
\[c \hat x = b^T \hat y.\]
Proof: By Corollary 4.2, if \(c \hat x = b^T \hat y\), then \(\hat x\) and \(\hat y\) are optimal solutions of (4.1) and (4.2), respectively.
Conversely, the correctness proof of the Simplex Algorithm given in Section 4.2 shows that there exist solutions \(\tilde x\) and \(\tilde y\) of (4.1) and (4.2), respectively, such that \(c \tilde x = b^T \tilde y\). By Lemma 4.1,
\[c \hat x \le b^T \tilde y = c \tilde x \le b^T \hat y.\]
If \(c \hat x < b^T \hat y\), this implies that
\[c \hat x < c \tilde x \quad \text{or} \quad b^T \tilde y < b^T \hat y\]
(or both). Thus, at least one of \(\hat x\) and \(\hat y\) is not an optimal solution. ▨
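Strong duality can also be observed numerically by solving a primal-dual pair with an off-the-shelf LP solver. The sketch below uses SciPy's `linprog` (assuming SciPy is available) on made-up data; since `linprog` minimizes, the maximization primal is negated, and the dual's \(\ge\) constraints are flipped into \(\le\) form:

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

# Toy data: maximize c x subject to A x <= b, x >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
b = np.array([4.0, 6.0, 5.0])
c = np.array([3.0, 2.0])

# Primal: linprog minimizes, so negate the objective.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b^T y subject to A^T y >= c, y >= 0,
# i.e., -A^T y <= -c in linprog's <= form.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

primal_opt = -primal.fun  # undo the negation of the objective
dual_opt = dual.fun

# Strong duality (Theorem 4.3): the optimal objective values coincide.
assert abs(primal_opt - dual_opt) < 1e-6
```

Up to solver tolerance, the two optimal objective values agree, exactly as Theorem 4.3 guarantees.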

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.