6.8.6. Analysis

To finish the discussion of the repeated capacity scaling algorithm, we need to analyze its running time. We start by bounding the number of phases the capacity scaling algorithm executes before it discovers an optimal potential function or an edge \((x,y)\) with \(f_{x,y} > 4\Delta\).

Lemma 6.52: Consider any flow \(f\) during the \(\Delta\)-phase of the capacity scaling algorithm. If a vertex \(x\) has supply balance \(b_x \ge 4\Delta n^2 + 2\Delta n\), then there exists an out-edge \((x, y)\) of \(x\) whose flow value is \(f_{x,y} > 4\Delta\).

Proof: By Corollary 6.45, the excess of every node in \(G\) is less than \(2\Delta n\) during the \(\Delta\)-phase; in particular, \(e_x < 2\Delta n\). Thus, if \(b_x \ge 4\Delta n^2 + 2\Delta n\), then the total out-flow of \(x\) is

\[\sum_{y \in V} f_{x,y} \ge \sum_{y \in V} (f_{x,y} - f_{y,x}) = b_x - e_x > 4\Delta n^2.\]

Since \(x\) has at most \(n\) neighbours, this implies that there exists an out-edge \((x, y)\) of \(x\) whose flow value is \(f_{x,y} > \frac{4 \Delta n^2}{n} = 4\Delta n \ge 4\Delta\). ▨
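
For concreteness, with \(n = 10\) and \(\Delta = 1\), the hypothesis of the lemma reads \(b_x \ge 4 \cdot 10^2 + 2 \cdot 10 = 420\). Since \(e_x < 2\Delta n = 20\), node \(x\) sends

\[\sum_{y \in V} f_{x,y} > 420 - 20 = 400\]

units of flow across at most \(10\) out-edges, so some out-edge carries more than \(40 = 4\Delta n \ge 4\Delta\) units.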

Corollary 6.53: If the capacity scaling algorithm runs for at least \(3\lg n + 4\) phases, then in the \((3\lg n + 4)\)th phase, there exists an edge \((x, y)\) with \(f_{x,y} > 4\Delta\), where \(\Delta\) is the scaling parameter of this phase.

Proof: The choice of \(\Delta\) in the first phase of the algorithm is

\[\Delta_0 = C = \max_{x \in V} |b_x|\]

because all edges are uncapacitated. Since the total supply of all supply nodes equals the total demand of all demand nodes, this implies that

\[\Delta_0 \le n \cdot \max_{x \in V} b_x.\]

Let \(z\) be a node such that \(b_z = \max_{x \in V} b_x\). Then \(\Delta_0 \le n b_z\).
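
Indeed, whether the maximum absolute supply balance is attained by a supply node or by a demand node, it is bounded by the total supply (which equals the total demand), and this total is spread over at most \(n\) supply nodes:

\[\Delta_0 = \max_{x \in V} |b_x| \le \sum_{x \in V : b_x > 0} b_x \le n \cdot \max_{x \in V} b_x = n b_z.\]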

The value of \(\Delta\) in the \((3\lg n + 4)\)th phase is

\[\Delta = \frac{\Delta_0}{2^{3 \lg n + 3}} = \frac{\Delta_0}{8 \cdot 2^{3 \lg n}} = \frac{\Delta_0}{8n^3} \le \frac{n b_z}{8n^3} = \frac{b_z}{8n^2}.\]

Thus,

\[b_z \ge 8\Delta n^2 \ge 4\Delta n^2 + 2\Delta n.\]

By Lemma 6.52, this shows that some out-edge \((z, y)\) of \(z\) satisfies \(f_{z,y} > 4\Delta\) during the \((3\lg n + 4)\)th phase of the algorithm. ▨

By Corollary 6.53, the capacity scaling algorithm runs for at most \(3 \lg n + 4\) phases before it discovers an optimal potential function or an edge \((x, y)\) such that \(f_{x,y} > 4\Delta\). We are now ready to finish the analysis of the repeated capacity scaling algorithm:

Theorem 6.54: The repeated capacity scaling algorithm computes a minimum-cost flow in an uncapacitated network in \(O((n \lg n + m)n^2 \lg n)\) time. For a capacitated network, its running time is \(O((n \lg n + m)m^2 \lg n)\).

Proof: First assume that \(G\) is uncapacitated. Then observe that Step 1 of the algorithm makes at most \(n\) recursive calls because every time we make a recursive call, the graph \(G''\) we recurse on has one fewer vertex than the current graph \(G\).

By Corollary 6.53, each recursive call runs the capacity scaling algorithm for \(O(\lg n)\) phases. As discussed in Section 6.6.3, each phase of the capacity scaling algorithm takes \(O\bigl(n^2 \lg n + nm\bigr)\) time.1 Thus, the cost per recursive call is \(O((n \lg n + m)n\lg n)\). Since we make at most \(n\) recursive calls, the total cost of Step 1 is \(O\bigl((n \lg n + m)n^2 \lg n\bigr)\).
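
Written as a single chain, using \(n^2 \lg n + nm = n(n \lg n + m)\), the cost of Step 1 is

\[\underbrace{n}_{\text{recursive calls}} \cdot \underbrace{O(\lg n)}_{\text{phases per call}} \cdot \underbrace{O\bigl(n(n \lg n + m)\bigr)}_{\text{cost per phase}} = O\bigl((n \lg n + m)\, n^2 \lg n\bigr).\]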

By Lemma 6.43, Step 2 takes \(O\bigl(n^2 \sqrt{m}\bigr)\) time. Since this is less than the cost of Step 1, the total running time of the repeated capacity scaling algorithm on an uncapacitated network is \(O\bigl((n \lg n + m)n^2 \lg n\bigr)\).

For a capacitated network, we can apply the transformation from Section 6.3.3 to obtain an equivalent uncapacitated network with at most \(n + m\) vertices and at most \(2m\) edges. By the first part of the theorem, the running time on this uncapacitated network is \(O((n \lg n + m)m^2 \lg n)\). (Technically, the first part of the theorem only implies a running time of \(O(m \lg n \cdot m^2 \lg n)\). However, recall that the single-source shortest paths problem in the transformed network can still be solved in \(O(n \lg n + m)\) time even though the network has \(n + m\) vertices (see Exercise 6.3). Thus, the cost per flow augmentation remains \(O(n \lg n + m)\). Only the number of augmentations increases from \(O(n^2 \lg n)\) to \(O(m^2 \lg m) = O(m^2 \lg n)\).) ▨
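
To spell out the substitution, assume (as is standard) that every vertex is incident to at least one edge, so \(n = O(m)\), \(n + m = O(m)\), and \(\lg(n + m) = O(\lg n)\) because \(m \le n^2\). The analysis of the uncapacitated case then yields \(O\bigl((n + m)^2 \lg(n + m)\bigr) = O\bigl(m^2 \lg n\bigr)\) flow augmentations on the transformed network, each costing \(O(n \lg n + m)\) by Exercise 6.3, for a total of

\[O\bigl(m^2 \lg n\bigr) \cdot O(n \lg n + m) = O\bigl((n \lg n + m)\, m^2 \lg n\bigr).\]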

1. For the capacity scaling algorithm, we proved that the cost per phase is \(O((n + m)(n \lg n + m))\), but that analysis was based on the phase executing \(O(n + m)\) flow augmentations because the total absolute excess of all vertices is at most \(2\Delta n + 4\Delta m\). By Corollary 6.45, the total absolute excess of all vertices in the \(\Delta\)-phase is at most \(2\Delta n\) when running the algorithm on an uncapacitated network. Thus, the number of flow augmentations is only \(O(n)\), reducing the cost per phase to \(O\bigl(n^2 \lg n + nm\bigr)\).
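
As a quick sanity check of these augmentation counts, note that each augmentation in the \(\Delta\)-phase sends \(\Delta\) units of flow from a node with excess at least \(\Delta\) and therefore reduces the total absolute excess by at least \(\Delta\). Thus, the number of augmentations per phase is at most

\[\frac{2\Delta n}{\Delta} = 2n = O(n) \quad \text{in the uncapacitated case} \qquad \text{and} \qquad \frac{2\Delta n + 4\Delta m}{\Delta} = 2n + 4m = O(n + m) \quad \text{in the capacitated case.}\]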

