7.5.2. The Hungarian Algorithm
Based on Lemma 7.17, the following strategy finds a minimum-weight perfect matching in a bipartite graph. We start by setting \(M = \emptyset\), \(\pi_w = 0\) for all \(w \in W\), and \(\pi_u = \min_{w \in W} c_{u,w}\) for all \(u \in U\). We use \(\hat x\) to refer to the assignment of values in \(\{0,1\}\) to the variables in the primal LP that represents \(M\), that is, \(\hat x_e = 0\) if \(e \notin M\) and \(\hat x_e = 1\) if \(e \in M\). Given the one-to-one correspondence between \(M\) and \(\hat x\), we consider \(M\) and \(\hat x\) to be one and the same, which justifies calling \(M\) a primal solution, that is, a solution of (7.2).
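To make the initialization concrete, here is a small Python sketch; the names U, W, and c (a dictionary mapping each pair \((u, w)\) to the cost \(c_{u,w}\)) are our own illustrative choices and not part of the text:

```python
def initialize(U, W, c):
    """Start with the empty matching and the initial potentials."""
    M = set()                                # M = empty matching
    pi = {w: 0 for w in W}                   # pi_w = 0 for all w in W
    for u in U:
        pi[u] = min(c[u, w] for w in W)      # pi_u = min_{w in W} c_{u,w}
    return M, pi
```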
Our initial choice of \(M\) is an infeasible primal solution (because it contains fewer than \(n\) edges), but \(\pi\) is easily seen to be a feasible dual solution. Complementary slackness holds because \(M = \emptyset\), so, trivially, all edges in \(M\) are tight.
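Dual feasibility can be verified directly from the initial choice of \(\pi\): for every edge \((u, w)\),

\[
\pi_u + \pi_w \;=\; \Bigl(\min_{w' \in W} c_{u,w'}\Bigr) + 0 \;\le\; c_{u,w}.
\]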
The algorithm now proceeds in \(n\) phases. Each phase increases the size of \(M\) by one while ensuring that \(M\) is a matching and every edge in \(M\) is tight. Thus, after the \(n\)th phase, we obtain a perfect matching. Since each of its edges is tight, it is a minimum-weight perfect matching, by Lemma 7.17.
Remember our tool to increase the size of a matching \(M\): We find an augmenting path \(P\) and then replace \(M\) with \(M \oplus P\). Now we have to be careful, though. We want to ensure that the matching contains only tight edges, so we need to pick a tight augmenting path, an augmenting path composed of tight edges. The problem is that there may not exist a tight augmenting path for the current matching \(M\), even if \(M\) is not a perfect matching. When this happens, we adjust the potential function to create a tight augmenting path while also ensuring that all edges in the current matching \(M\) remain tight.
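As a sketch of the augmentation step, continuing the illustrative Python from above, replacing \(M\) with \(M \oplus P\) is simply a symmetric difference of edge sets; here \(P\) is assumed to be given as a set of edges:

```python
def augment(M, P):
    """Replace M with the symmetric difference of M and P.

    Edges of P outside M enter the matching, edges of P inside M leave it;
    since P is an augmenting path, the matching grows by one edge.
    """
    return M.symmetric_difference(P)
```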
We perform this adjustment of \(\pi\) in iterations. Each phase consists of up to \(n + 1\) iterations. Each iteration looks for a tight augmenting path for \(M\). If it finds such a path \(P\), it replaces \(M\) with \(M \oplus P\), and it is the last iteration in this phase: We have achieved the goal of increasing the size of \(M\) by one while keeping all edges in \(M\) tight. If the iteration does not find a tight augmenting path, it adjusts \(\pi\) to create more tight edges. Then we start another iteration. As we will see, it takes at most \(n\) such incremental adjustments of the potential function \(\pi\) to create a tight augmenting path. Hence the upper bound of \(n + 1\) on the number of iterations in each phase. We will also see that each iteration takes \(O\bigl(n^2\bigr)\) time. With \(n\) phases and up to \(n + 1\) iterations per phase, this gives a running time of \(O\bigl(n^4\bigr)\) for the whole algorithm.
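In other words, the total running time is the product of these three bounds:

\[
\underbrace{n}_{\text{phases}} \cdot \underbrace{(n+1)}_{\text{iterations per phase}} \cdot \underbrace{O\bigl(n^2\bigr)}_{\text{time per iteration}} \;=\; O\bigl(n^4\bigr).
\]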
Note that the grouping of the iterations of the algorithm into phases is only conceptual, a vehicle for analyzing the algorithm. (This will change when we develop a more efficient implementation of the algorithm in Section 7.5.3.) An alternative description of the algorithm is that, as long as \(|M| < n\), it repeats the following steps: It tries to find a tight augmenting path \(P\) for \(M\). If this succeeds, it replaces \(M\) with \(M \oplus P\). Otherwise, it adjusts \(\pi\) to create more tight edges.
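A minimal sketch of this alternative description, building on the initialize and augment functions above and using two hypothetical placeholder helpers, find_tight_augmenting_path and adjust_potentials, for the two steps described in the rest of this section, might look as follows:

```python
def hungarian(U, W, c):
    """Sketch of the overall loop; the two helpers below are placeholders for
    the search for a tight augmenting path and the potential adjustment."""
    n = len(U)
    M, pi = initialize(U, W, c)
    while len(M) < n:
        P = find_tight_augmenting_path(M, pi, U, W, c)   # placeholder helper
        if P is not None:
            M = augment(M, P)                            # |M| grows by one
        else:
            pi = adjust_potentials(M, pi, U, W, c)       # create more tight edges
    return M
```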
It remains to describe the search for \(P\) in each iteration, and the adjustment of \(\pi\) if this search fails.
