6.9. Enhanced Capacity Scaling*
Like the repeated capacity scaling algorithm, the enhanced capacity scaling algorithm assumes that the network is uncapacitated and strongly connected, and that all edges have non-negative costs.
Let us start by reviewing the main characteristics of the capacity scaling and repeated capacity scaling algorithms:
The main shortcoming of the capacity scaling algorithm is that it guarantees only a bound of \(\lfloor\lg U\rfloor\) on the number of scaling phases, which may not be polynomial in \(n\) or \(m\) if the edge capacities are very large. Moreover, the capacity scaling algorithm cannot deal with non-integral edge capacities at all.
The repeated capacity scaling algorithm ensures that in each recursive call, it runs the capacity scaling algorithm for only \(O(\lg n)\) phases. Since the algorithm makes at most \(n\) recursive calls, the total number of phases of the capacity scaling algorithm executed by the repeated capacity scaling algorithm is thus \(O(n \lg n)\).
On an uncapacitated network, each phase of the capacity scaling algorithm may perform up to \(O(n)\) flow augmentations. Thus, the total number of flow augmentations performed by the repeated capacity scaling algorithm can be as high as \(O\bigl(n^2 \lg n\bigr)\). This is the main bottleneck of the repeated capacity scaling algorithm.
It is difficult to reduce the number of phases below \(O(n \lg n)\). However, we can reduce the total number of flow augmentations across all phases from \(O\bigl(n^2 \lg n\bigr)\) to \(O(n \lg n)\). This results in an algorithm that is faster by a factor of \(n\).
The main source of inefficiency of the repeated capacity scaling algorithm is that whenever it makes a recursive call on a compressed graph \(G''\), it computes an optimal potential function \(\pi''\) from scratch, completely ignoring any information about an optimal flow or optimal potential function it may have gained in the current invocation (other than finding an edge that can be contracted).
The main idea of the enhanced capacity scaling algorithm is to hold on to this information and, in a sense, make each recursive call pick up where the parent invocation left off. It is much easier to do this in a non-recursive fashion. Thus, instead of making a number of recursive calls, each of which applies the capacity scaling algorithm, the enhanced capacity scaling algorithm consists of a single application of capacity scaling. While it runs, the algorithm keeps track of the set of "fixed" edges, the ones that the repeated capacity scaling algorithm would have contracted. In the context of the enhanced capacity scaling algorithm, these edges are called "abundant" edges because they carry an abundant amount of flow.
To guarantee that the number of scaling phases is only \(O(n \lg n)\), the enhanced capacity scaling algorithm does not necessarily decrease \(\Delta\) by a factor of exactly \(2\) from one phase to the next. It decreases \(\Delta\) by at least a factor of \(2\) in each phase, but whenever the absolute excess of every vertex is much smaller than the current value of \(\Delta\), the value of \(\Delta\) in the next phase is chosen to be much smaller than \(\frac{\Delta}{2}\). Intuitively, this allows the enhanced capacity scaling algorithm to skip phases that would not accomplish anything because no vertex has sufficient excess or deficit.
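As a rough sketch of this phase-skipping idea (not the precise rule used by the algorithm), the choice of the next scaling parameter might look as follows; the constant `skip_factor` and the exact form of the rule are illustrative assumptions:

```python
def next_delta(delta, excesses, skip_factor=8):
    """Choose the scaling parameter for the next phase (illustrative sketch).

    Normally the next value is delta / 2.  If every absolute excess is
    already much smaller than delta, jump directly to a value proportional
    to the largest absolute excess, skipping the phases in between.
    The constant skip_factor is an assumption for illustration only.
    """
    max_excess = max((abs(e) for e in excesses), default=0)
    if max_excess == 0:
        return 0  # all excesses are settled; no further phases are needed
    # Skip ahead whenever delta / 2 would still dwarf every excess.
    return min(delta / 2, skip_factor * max_excess)
```

With `skip_factor = 8`, a phase with \(\Delta = 1024\) and maximum absolute excess \(3\) would continue with \(\Delta = 24\) rather than \(512\), skipping the intermediate phases in which no vertex has enough excess to trigger an augmentation.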
Possibly the most baffling way in which the enhanced capacity scaling algorithm differs from the successive shortest paths, capacity scaling, and repeated capacity scaling algorithms is that it may turn excess vertices into deficit vertices and vice versa. The other three algorithms never do this because they move at most \(\min(e_x, -e_y)\) units of flow from an excess vertex \(x\) to a deficit vertex \(y\) in each flow augmentation. It is difficult to justify intuitively why it helps to sometimes move too much flow from an excess vertex to a deficit vertex, something that needs to be corrected subsequently to obtain a valid flow.
For the description of the enhanced capacity scaling algorithm, it will be convenient to number the scaling phases starting from \(0\) and to refer to the value of \(\Delta\) during the \(i\)th scaling phase as \(\Delta^{(i)}\). Similarly, we refer to the pseudo-flow and potential function at the beginning of the \(i\)th scaling phase as \(f^{(i)}\) and \(\pi^{(i)}\), respectively, and to the excess of every vertex \(x\) at the beginning of the \(i\)th scaling phase as \(e^{(i)}_x\).
The discussion of the enhanced capacity scaling algorithm is now organized as follows:
- We start in Section 6.9.1 by introducing the concepts of abundant edges and abundant components. As already mentioned, the former are edges that carry a high amount of flow relative to the current value of \(\Delta\). They are the equivalent of "fixed" edges in the repeated capacity scaling algorithm. The abundant subgraph is the subgraph of the flow network whose edge set is the set of all abundant edges. The abundant components are the connected components of the abundant subgraph.

  A key property of abundant edges is that once they become abundant, they stay abundant until the algorithm exits. Thus, the abundant subgraph only grows over time, and abundant components may merge to form larger abundant components. This will be crucial.
- In Section 6.9.2, we introduce the concepts of active vertices, low-excess vertices, and high-excess vertices and discuss their properties. An active vertex is simply a vertex with sufficiently high excess. These vertices are further divided into low-excess and high-excess vertices based on how high their excess values are.
- After introducing these key concepts, we have the terminology necessary to describe the enhanced capacity scaling algorithm in detail in Section 6.9.3.
- Section 6.9.3 also introduces the two key invariants that the algorithm maintains. In Section 6.9.4, we prove that the algorithm maintains these invariants.
- This enables us to prove the correctness of the algorithm in Section 6.9.5.
- Finally, we analyze the running time of the algorithm in Section 6.9.6.
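To make the classifications from Sections 6.9.1 and 6.9.2 concrete, the following sketch marks abundant edges, groups them into abundant components using a union-find structure, and classifies vertices by excess. The thresholds (`threshold_factor * n * delta` for abundance and the `low`/`high` multiples of \(\Delta\) for the vertex classes) are illustrative assumptions, not the constants used in the analysis:

```python
def abundant_components(n, edges, flow, delta, threshold_factor=8):
    """Group vertices into abundant components (illustrative sketch).

    An edge is abundant when it carries a lot of flow relative to the
    current scaling parameter delta; the threshold used here,
    threshold_factor * n * delta, is an assumption for illustration.
    Returns, for each vertex, the representative of its component.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    bound = threshold_factor * n * delta
    for u, v in edges:
        if flow[(u, v)] >= bound:  # the edge is abundant
            union(u, v)

    return [find(x) for x in range(n)]


def classify_vertices(excesses, delta, low=0.5, high=4.0):
    """Label each vertex "inactive", "low", or "high" (illustrative sketch).

    A vertex is active when its absolute excess is at least low * delta;
    active vertices with absolute excess at least high * delta count as
    high-excess, the rest as low-excess.  The multipliers low and high
    are assumptions for illustration.
    """
    labels = []
    for e in excesses:
        a = abs(e)
        if a >= high * delta:
            labels.append("high")
        elif a >= low * delta:
            labels.append("low")
        else:
            labels.append("inactive")
    return labels
```

Note that `abundant_components` never removes an edge from a component: since abundant edges stay abundant, rerunning it with a larger abundant edge set can only merge components, which matches the monotone growth of the abundant subgraph described above.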

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.