6.8. Repeated Capacity Scaling*

The key to the strongly polynomial running time of the minimum mean cycle cancelling algorithm is that after every \(O(nm \lg n)\) iterations, the flow along at least one edge becomes fixed and does not change anymore. In a sense, the algorithm discovers the optimal flow along individual edges incrementally. The repeated capacity scaling algorithm uses a similar idea of discovering the solution one label at a time. In contrast to the minimum mean cycle cancelling algorithm, however, it is the vertex potentials (or rather the difference between pairs of vertex potentials) in an optimal potential function that are identified one at a time: The optimal flow is computed only at the end of the algorithm from the computed potential function. Thus, the repeated capacity scaling algorithm solves the dual problem first and then constructs the primal solution at the end. It is the only minimum-cost flow algorithm we discuss that has this property.

Another difference from the minimum mean cycle cancelling algorithm is that, in that algorithm, the discovery of the optimal edge flows in certain iterations was purely a tool used in its analysis: The algorithm itself was completely oblivious to the flow values along edges in the graph becoming fixed at certain points during its execution. In contrast, the repeated capacity scaling algorithm explicitly checks whether the difference between two vertex potentials is guaranteed to remain fixed for the remainder of the algorithm and exploits this to guarantee fast termination.

The repeated capacity scaling algorithm assumes that the input network is strongly connected, that all edges are uncapacitated, and that every edge has a non-negative cost.

We organize the discussion of the repeated capacity scaling algorithm as follows:

  • We start in Section 6.8.1 by showing how to compute a minimum-cost flow from an optimal potential function. This step can be formulated as a maximum flow problem and can thus be carried out in \(O\bigl(n^2\sqrt{m}\bigr)\) time using the highest vertex preflow push algorithm. With this procedure for converting an optimal potential function into a minimum-cost flow in hand, we can focus on computing an optimal potential function in the remainder of the discussion.

  • As the name of the repeated capacity scaling algorithm suggests, it runs the capacity scaling algorithm repeatedly, on progressively smaller subgraphs. Since all edges are assumed to be uncapacitated, we can simplify the capacity scaling algorithm a little. In particular, there is no need for the initialization step at the beginning of each phase that ensures all edges in \(G^{f,\Delta}\) have non-negative residual costs. This also leads to a slightly better running time of the algorithm. We discuss this in Section 6.8.2.

  • In Section 6.8.3, we present the main property of the capacity scaling algorithm on which the repeated capacity scaling algorithm is based: Once the flow along an edge \((x,y)\) becomes large enough relative to \(\Delta\) in a given \(\Delta\)-phase of the algorithm, we can guarantee that the final flow produced by the capacity scaling algorithm sends a non-zero amount of flow along this edge. Using complementary slackness, we will be able to determine the difference between \(\pi_x\) and \(\pi_y\) for any optimal potential function \(\pi\). That is, even though we don't know an optimal flow or an optimal potential function yet, we learn \(\pi_x - \pi_y\) the moment we discover that the edge \((x,y)\) carries a large amount of flow.

  • Section 6.8.4 discusses the complete capacity scaling algorithm. The main strategy can be summarized as follows: We run capacity scaling until we either discover a minimum-cost flow and, along with it, the optimal potential function we are really interested in, or we discover an edge that carries a large amount of flow in the current \(\Delta\)-phase. When we discover such an edge, we contract it: We remove it and replace its endpoints by a single vertex. Then we recursively run the algorithm on the resulting smaller graph \(G'\) to find an optimal potential function \(\pi'\) in \(G'\). We will show how to construct an optimal potential function in \(G\) from this potential function \(\pi'\).

  • Section 6.8.5 establishes the correctness of the repeated capacity scaling algorithm.

  • Section 6.8.6 bounds its running time.
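
To make the step from Section 6.8.1 concrete, here is a minimal Python sketch of recovering a minimum-cost flow from an optimal potential function: by complementary slackness, only edges with zero reduced cost may carry flow, so it suffices to route all supplies through the subgraph of these "tight" edges by solving a single maximum flow problem. The function names, the edge-list representation, and the reduced-cost convention \(c_{xy} + \pi_x - \pi_y\) are illustrative assumptions, not this text's notation, and a simple Edmonds-Karp routine stands in for the highest vertex preflow push algorithm.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    # Edmonds-Karp stand-in for preflow push: repeatedly augment along
    # a shortest s-t path in the residual network until none exists.
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in list(cap[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= delta       # push delta along the path ...
            cap[v][u] += delta       # ... and update reverse residual arcs
        total += delta

def flow_from_potentials(n, edges, b, pi):
    # Recover a minimum-cost flow from optimal potentials pi.
    # edges: (x, y, cost) triples; b[v]: supply (> 0) or demand (< 0).
    # Complementary slackness: an edge may carry flow only if its
    # reduced cost c + pi[x] - pi[y] is zero, so we route all supplies
    # through the subgraph of such tight edges via one max flow.
    supply = sum(max(b[v], 0) for v in range(n))
    cap = defaultdict(lambda: defaultdict(int))
    tight = []
    for x, y, c in edges:
        if c + pi[x] - pi[y] == 0:
            cap[x][y] = supply       # effectively uncapacitated
            tight.append((x, y))
    s, t = n, n + 1                  # super-source and super-sink
    for v in range(n):
        if b[v] > 0:
            cap[s][v] = b[v]
        elif b[v] < 0:
            cap[v][t] = -b[v]
    if max_flow(cap, s, t) < supply:
        raise ValueError("potentials are not optimal")
    return {(x, y): supply - cap[x][y] for x, y in tight}
```

For instance, with edges \((0,1)\) and \((1,2)\) of cost 1, edge \((0,2)\) of cost 3, supplies \(b = (2, 0, -2)\), and potentials \(\pi = (0, 1, 2)\), only \((0,1)\) and \((1,2)\) are tight, and the recovered flow sends two units along each of them.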
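
The contraction step from Section 6.8.4 and the lifting of a potential function back to \(G\) can likewise be sketched. The edge-list representation, the vertex remapping, and the identity \(\pi_y = \pi_x + c_{xy}\) used to extend the potential across the contracted edge (which follows from complementary slackness under the reduced-cost convention \(c_{xy} + \pi_x - \pi_y\); flip signs for the opposite convention) are illustrative assumptions; Sections 6.8.4 and 6.8.5 give the precise construction and its correctness proof.

```python
def contract_edge(n, edges, b, x, y):
    # Contract edge (x, y): merge vertex y into vertex x. Self-loops
    # created by the contraction are dropped, parallel edges are kept,
    # and the supplies of x and y are combined. Returns the contracted
    # instance together with the old-to-new vertex mapping.
    remap, new = {}, 0
    for v in range(n):
        if v == y:
            continue
        remap[v] = new
        new += 1
    remap[y] = remap[x]
    new_edges = [(remap[u], remap[v], c) for u, v, c in edges
                 if remap[u] != remap[v]]
    new_b = [0] * (n - 1)
    for v in range(n):
        new_b[remap[v]] += b[v]
    return n - 1, new_edges, new_b, remap

def lift_potentials(pi_contracted, remap, x, y, c_xy):
    # Extend a potential function on the contracted graph back to the
    # original graph. Every vertex inherits the potential of its image;
    # the contracted edge (x, y) carries flow, so its reduced cost is
    # zero, which pins down pi[y] = pi[x] + c_xy.
    n = len(remap)
    pi = [pi_contracted[remap[v]] for v in range(n)]
    pi[y] = pi[x] + c_xy
    return pi
```

On the triangle with edges \((0,1)\) of cost 5, \((1,2)\) of cost 2, and \((2,0)\) of cost 4, contracting \((0,1)\) drops the resulting self-loop and leaves a two-vertex graph; lifting the contracted potentials \((0, 2)\) then assigns vertex 1 the potential \(0 + 5 = 5\).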


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.