6.6. Capacity Scaling

For small integral edge costs and/or edge capacities, the negative cycle cancelling and successive shortest paths algorithms are reasonably efficient and have the benefit of being very simple. If the edge costs and edge capacities are large integers, the efficiency of these algorithms suffers. If they are real numbers, these algorithms may not terminate at all, much as augmenting path maximum flow algorithms may fail to terminate unless the augmenting paths are chosen carefully.

The next algorithm we consider still assumes that the edge capacities are integers, but it exponentially increases the range of edge capacities for which the algorithm is efficient, by reducing the dependence of the running time on the maximum edge capacity from \(U\) to \(\lg U\). This improvement is achieved using capacity scaling, an interesting technique that has applications in many areas of algorithm design. The same scaling technique can also be applied to edge costs (see, e.g., Section 10.3 of Network Flows: Theory, Algorithms, and Applications), but we do not discuss this further here.

The problem with the successive shortest paths algorithm is that it may send only a small amount of flow from a node with positive excess to a node with negative excess in each iteration. This is why it may take up to \(\frac{Un}{2}\) iterations to terminate.

The capacity scaling algorithm operates in phases and uses a strategy very similar to the successive shortest paths algorithm in each phase. However, in early phases, the algorithm guarantees that the amount of flow sent from an excess node to a deficit node in each augmentation is large. In particular, the excess node has a sufficiently large excess and the deficit node has a sufficiently large deficit. Once there is no more path from an excess node with large enough excess to a deficit node with large enough deficit, the algorithm begins the next phase, in which it is willing to consider node pairs with lower excess and deficit.
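
To make this phase structure concrete, the following Python sketch implements one version of the algorithm; the function name, the edge-list interface, and the balance array are invented for illustration and are not taken from the text. Writing \(\Delta\) for the current scaling threshold, the sketch restricts every shortest-path computation to arcs of the residual network with residual capacity at least \(\Delta\), maintains node potentials so that Dijkstra's algorithm can be run on reduced costs, and, following the standard presentation of capacity scaling, saturates every arc of this restricted residual network with negative reduced cost at the start of each phase so that the potentials remain valid after \(\Delta\) is halved.

import heapq
from math import inf

def min_cost_flow_capacity_scaling(n, edges, balance):
    # Hypothetical interface: n nodes numbered 0..n-1, edges given as (u, v, capacity, cost)
    # with integer capacities and costs, balance[v] > 0 at excess (supply) nodes and
    # balance[v] < 0 at deficit (demand) nodes. Returns the cost of the flow it finds.
    graph = [[] for _ in range(n)]        # residual network; arcs stored as [to, cap, cost, rev]

    def add_arc(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    for u, v, cap, cost in edges:
        add_arc(u, v, cap, cost)

    excess = list(balance)                # current excess e(v)
    potential = [0] * n                   # potentials keep reduced costs of usable arcs >= 0
    total_cost = 0
    U = max([cap for _, _, cap, _ in edges] + [1])
    delta = 1 << (U.bit_length() - 1)     # largest power of two not exceeding U

    while delta >= 1:
        # Start of a phase: saturate every arc with residual capacity >= delta whose
        # reduced cost is negative, so that Dijkstra on reduced costs remains valid.
        for u in range(n):
            for arc in graph[u]:
                v, cap, cost, rev = arc
                if cap >= delta and cost + potential[u] - potential[v] < 0:
                    excess[u] -= cap
                    excess[v] += cap
                    total_cost += cap * cost
                    graph[v][rev][1] += cap
                    arc[1] = 0

        while True:
            sources = [v for v in range(n) if excess[v] >= delta]
            sinks = [v for v in range(n) if excess[v] <= -delta]
            if not sources or not sinks:
                break                     # no suitable pair left: begin the next phase

            # Multi-source Dijkstra on reduced costs, restricted to arcs with
            # residual capacity at least delta.
            dist = [inf] * n
            parent = [None] * n           # parent[v] = (u, index of the arc u -> v used)
            heap = [(0, s) for s in sources]
            for s in sources:
                dist[s] = 0
            heapq.heapify(heap)
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u]:
                    continue
                for i, (v, cap, cost, rev) in enumerate(graph[u]):
                    if cap >= delta:
                        nd = d + cost + potential[u] - potential[v]
                        if nd < dist[v]:
                            dist[v] = nd
                            parent[v] = (u, i)
                            heapq.heappush(heap, (nd, v))

            reachable_sinks = [t for t in sinks if dist[t] < inf]
            if not reachable_sinks:
                break                     # no path to a large-deficit node: next phase
            t = min(reachable_sinks, key=lambda v: dist[v])

            # Update potentials (capped at dist[t]) so reduced costs stay non-negative.
            for v in range(n):
                potential[v] += min(dist[v], dist[t])

            # Send delta units of flow along the shortest path into t.
            v = t
            while parent[v] is not None:
                u, i = parent[v]
                arc = graph[u][i]
                arc[1] -= delta
                graph[v][arc[3]][1] += delta
                total_cost += delta * arc[2]
                v = u
            excess[v] -= delta            # v is the excess node the path started at
            excess[t] += delta

        delta //= 2

    return total_cost

In the last phase, \(\Delta = 1\), so the inner loop behaves like the successive shortest paths algorithm and the sketch returns an optimal flow whenever a feasible flow exists.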

As we will see, the algorithm terminates after \(\lfloor \lg U \rfloor\) phases and performs \(O(n + m)\) augmentations in each phase. The cost of each augmentation is once again dominated by the cost of finding a shortest path from an excess node to a deficit node in (a subgraph of) the residual network and is thus \(O(n \lg n + m)\) using Dijkstra's algorithm. Thus, the capacity scaling algorithm takes \(O((n \lg n + m)(n + m)\lg U) = O\bigl(\bigl(nm \lg n + m^2\bigr) \lg U\bigr)\) time, which is better than the \(O\bigl(U\bigl(n^2 \lg n + nm\bigr)\bigr)\) time bound of the successive shortest paths algorithm as long as \(U = \omega\Bigl(\frac{m}{n}\lg\frac{m}{n}\Bigr)\). In particular, the algorithm performs better than the successive shortest paths algorithm on sparse graphs, even if the edge capacities are fairly small.
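
To see where this threshold on \(U\) comes from, it helps to compare the two bounds directly (a rough calculation that ignores constant factors):

\[
\frac{\bigl(nm \lg n + m^2\bigr)\lg U}{U\bigl(n^2 \lg n + nm\bigr)} = \frac{m\bigl(n \lg n + m\bigr)\lg U}{Un\bigl(n \lg n + m\bigr)} = \frac{m \lg U}{Un}.
\]

Capacity scaling is thus the faster of the two whenever \(\frac{m}{n}\lg U < U\), that is, whenever \(\frac{U}{\lg U} > \frac{m}{n}\), which is the case for \(U = \omega\bigl(\frac{m}{n}\lg\frac{m}{n}\bigr)\).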

Being a variant of the successive shortest paths algorithm, the capacity scaling algorithm also assumes that there exists an uncapacitated path between every pair of vertices, and that all node supply balances and edge capacities are integers.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.