Before You Start
Overview
1. Overview
   1.1. Topics Covered
   1.2. Prerequisites
   1.3. A Note on Graphs
Linear Programming
2. Linear Programming
   2.1. Linear Programs
   2.2. Shortest Paths as an LP
      2.2.1. The Single-Source Shortest Paths Problem
      2.2.2. Properties of Shortest Paths
      2.2.3. The LP Formulation
   2.3. Minimum Spanning Tree as an ILP
      2.3.1. The Minimum Spanning Tree Problem
      2.3.2. Two Natural ILP Formulations
      2.3.3. An ILP Formulation of Polynomial Size*
   2.4. Complexity of LP and ILP
   2.5. LP Relaxations
      2.5.1. ILPs With Integrality Gap 1
      2.5.2. An ILP Formulation of the MST Problem With Integrality Gap 1
   2.6. Canonical and Standard Form
      2.6.1. Canonical Form
      2.6.2. Standard Form
3. The Simplex Algorithm
   3.1. The Basic Solution and the Basic Feasible Solution
   3.2. Overview of the Algorithm
   3.3. Pivoting
      3.3.1. Finding a Better Solution
      3.3.2. Transforming the LP
   3.4. Initialization of the Algorithm
      3.4.1. Making s Non-Basic
      3.4.2. Dropping s
      3.4.3. Restoring the Objective Function
      3.4.4. Solving the Auxiliary LP
   3.5. An Extended Example and Tableaux
      3.5.1. Initialization
      3.5.2. Optimization
   3.6. Cycling and Bland's Rule*
4. LP Duality and Complementary Slackness
   4.1. LP Duality
   4.2. Correctness of the Simplex Algorithm
   4.3. Complementary Slackness
   4.4. An MST ILP with Integrality Gap 1*
   4.5. The Dual of a Non-Canonical LP
Graph Algorithms
5. Maximum Flows
   5.1. Augmenting Path Algorithms
      5.1.1. The Ford-Fulkerson Algorithm
         5.1.1.1. Ford-Fulkerson May Not Terminate
         5.1.1.2. Correctness of Ford-Fulkerson
         5.1.1.3. The Max-Flow Min-Cut Theorem
      5.1.2. The Edmonds-Karp Algorithm
      5.1.3. Dinitz's Algorithm*
         5.1.3.1. Blocking Flows
         5.1.3.2. Finding a Blocking Flow
   5.2. Preflow-Push Algorithms
      5.2.1. The Generic Preflow-Push Algorithm
         5.2.1.1. Correctness
         5.2.1.2. Running Time Analysis
      5.2.2. The Relabel-To-Front Algorithm*
         5.2.2.1. The Discharge Operation
         5.2.2.2. The Algorithm
         5.2.2.3. Running Time Analysis
         5.2.2.4. Correctness of the Algorithm
      5.2.3. The Highest-Vertex Preflow-Push Algorithm*
         5.2.3.1. Running Time Analysis
   5.3. Existence of Integral Flows
6. Minimum-Cost Flows
   6.1. Characterizations
      6.1.1. Complementary Slackness
      6.1.2. Reduced Costs
      6.1.3. Negative Cycles
   6.2. Decomposing Flows*
      6.2.1. Circulations
      6.2.2. Decomposition into Cycles and Paths
      6.2.3. Flows and Circulations in the Residual Network
   6.3. Simplifying the Network*
      6.3.1. Making the Network Strongly Connected
      6.3.2. Limiting Edge Capacities
      6.3.3. Removing Edge Capacities
      6.3.4. Making Edge Costs Non-Negative
   6.4. Negative Cycle Cancelling
      6.4.1. Finding a Feasible Flow
      6.4.2. Cancelling Negative Cycles
   6.5. Successive Shortest Paths
      6.5.1. The Primal-Dual Schema
      6.5.2. The Algorithm
   6.6. Capacity Scaling
      6.6.1. Overview of the Algorithm
      6.6.2. Correctness
         6.6.2.1. Maintenance of Bounded Excesses
         6.6.2.2. Maintenance of Non-Negative Residual Costs
      6.6.3. Running Time Analysis
   6.7. Minimum Mean Cycle Cancelling*
      6.7.1. Finding a Minimum Mean Cost Cycle
         6.7.1.1. Finding the Minimum Mean Cost
         6.7.1.2. Finding the Cycle
      6.7.2. Bounding the Number of Iterations
         6.7.2.1. Decreasing Minimum Mean Costs
         6.7.2.2. Preliminary Analysis for Integer Costs
         6.7.2.3. Final Running Time Analysis
   6.8. Repeated Capacity Scaling*
      6.8.1. From Potential Function to Flow
      6.8.2. Simplified Capacity Scaling
      6.8.3. Fixing Edges
      6.8.4. The Algorithm
      6.8.5. Correctness Proof
      6.8.6. Analysis
   6.9. Enhanced Capacity Scaling*
      6.9.1. Abundant Edges and Components
      6.9.2. Active Vertices, Low or High Excess
      6.9.3. Detailed Algorithm Description
      6.9.4. Maintenance of the Flow and Excess Invariants
      6.9.5. Correctness of the Algorithm
      6.9.6. Running Time Analysis
         6.9.6.1. Bounding the Number of Phases
         6.9.6.2. Bounding the Number of Flow Augmentations
         6.9.6.3. Putting Things Together
7. Matchings
   7.1. Definitions
   7.2. Maximal Matching
   7.3. Bipartite Matching via Network Flows
      7.3.1. Maximum-Cardinality Matching
      7.3.2. Maximum-Weight Matching
      7.3.3. Minimum-Weight Perfect Matching
   7.4. Bipartite Maximum-Cardinality Matching
      7.4.1. Augmenting Paths
      7.4.2. Alternating Forests and Alternating BFS
      7.4.3. A Simple \(O(nm)\)-Time Algorithm
      7.4.4. The Hopcroft-Karp Algorithm*
         7.4.4.1. The Level Graph
   7.5. Bipartite Minimum-Weight Perfect Matching
      7.5.1. Minimum-Weight Perfect Matching as an LP
      7.5.2. The Hungarian Algorithm
         7.5.2.1. A Single Iteration
         7.5.2.2. Correctness of Each Iteration
         7.5.2.3. Summary of the Algorithm
      7.5.3. A Faster Implementation*
   7.6. Bipartite Maximum-Weight Matching
      7.6.1. An Even Faster Implementation of the Hungarian Algorithm*
         7.6.1.1. The Necessary Data Structures
         7.6.1.2. Initialization of Each Phase
         7.6.1.3. Implementing Each Iteration
         7.6.1.4. Abort When There's No Perfect Matching
         7.6.1.5. Summary
      7.6.2. Maximum-Weight Matching
   7.7. General Maximum-Cardinality Matching
      7.7.1. Edmonds's Algorithm
         7.7.1.1. Finding and Contracting Blossoms
         7.7.1.2. Expanding Blossoms
         7.7.1.3. An Example
      7.7.2. Gabow's Algorithm*
Approximation Algorithms
8. Coping with NP-Hardness
   8.1. The Vertex Cover Problem
   8.2. Approximation Algorithms
   8.3. Exponential-Time Algorithms
   8.4. Fixed-Parameter Tractability
9. Greedy Approximation Algorithms and Layering
   9.1. Bin Packing
      9.1.1. A Simple Greedy Algorithm
      9.1.2. A Family of Tight Inputs
   9.2. Greedy Set Cover
      9.2.1. The Right Greedy Choice
      9.2.2. Proof of Approximation Ratio
      9.2.3. A Family of Tight Inputs
   9.3. Set Cover via Layering
      9.3.1. A Recursive Algorithm
      9.3.2. A Family of Tight Inputs
   9.4. Feedback Vertex Set via Layering*
      9.4.1. Feedback Vertex Set for Cyclomatic Weight Functions
         9.4.1.1. Finite Fields and Finite Vector Spaces
         9.4.1.2. Cyclomatic Weight Functions
         9.4.1.3. A Minimal Feedback Vertex Set is a 2-Approximation
      9.4.2. Feedback Vertex Set for Arbitrary Weight Functions
10. Parametric Pruning
   10.1. Metric k-Center
   10.2. Inapproximability of Metric k-Center
   10.3. Weighted Metric k-Center
11. Dual Fitting
   11.1. Set Cover Revisited
   11.2. Set Multicover*
12. Approximation via the Primal-Dual Schema
   12.1. Relaxed Complementary Slackness
   12.2. Set Cover via the Primal-Dual Schema
   12.3. Uncapacitated Metric Facility Location
13. LP Rounding
   13.1. Half-Integrality
      13.1.1. Vertex Cover
      13.1.2. Multiway Vertex Cut*
   13.2. Randomized Rounding
      13.2.1. Prelude: Randomized Algorithms
      13.2.2. Set Cover
      13.2.3. Multiway Edge Cut*
         13.2.3.1. A "Better" ILP Formulation
         13.2.3.2. A Monte Carlo Algorithm
            13.2.3.2.1. Randomized Rounding: Reduction to a Special Case
            13.2.3.2.2. Randomized Rounding: The Special Case
         13.2.3.3. A Las Vegas Algorithm
   13.3. Derandomization
      13.3.1. A Randomized Monte Carlo Algorithm for MAX-SAT
         13.3.1.1. A Good Algorithm for Large Clauses
         13.3.1.2. A Good Algorithm for Small Clauses
         13.3.1.3. Combining the Two
      13.3.2. A Deterministic Algorithm for MAX-SAT
Exponential-Time Algorithms
14. Introduction to Exponential-Time Algorithms
   14.1. Exponential Time
   14.2. Fixed-Parameter Tractability
   14.3. Road Map
15. Data Reduction and Problem Kernels
   15.1. Kernelization and Fixed-Parameter Tractability
   15.2. Reduction Rules
      15.2.1. Vertex Cover
      15.2.2. Cluster Editing
   15.3. Crown Reduction
      15.3.1. Vertex Cover
      15.3.2. Maximum Satisfiability
   15.4. Kernelization via Linear Programming
   15.5. A Vertex Cover Kernel via Matching*
16. Branching Algorithms
   16.1. Branching Numbers and Branching Vectors
      16.1.1. A Simple Branching Algorithm for the Vertex Cover Problem
      16.1.2. Branching Numbers and Branching Vectors
      16.1.3. Maximum Independent Set
   16.2. Depth-Bounded Search
      16.2.1. Vertex Cover and (Not) Independent Set
      16.2.2. Cluster Editing
      16.2.3. Closest String
   16.3. Better Branching Rules
      16.3.1. Vertex Cover
      16.3.2. Satisfiability*
         16.3.2.1. A First Simple Algorithm
         16.3.2.2. Reducing the Running Time Using Autarkies
   16.4. Combining Recursive Calls
      16.4.1. Cluster Editing
         16.4.1.1. Case 1
         16.4.1.2. Case 2
         16.4.1.3. Case 3
      16.4.2. Vertex Cover*
         16.4.2.1. Case 1
         16.4.2.2. Case 2
         16.4.2.3. Case 3
         16.4.2.4. Case 4
         16.4.2.5. Case 5
         16.4.2.6. Case 6
         16.4.2.7. Analysis
17. Measure and Conquer
   17.1. Maximum Independent Set
   17.2. Feedback Vertex Set
      17.2.1. Extending Independent Sets
      17.2.2. Generalized Neighbours
      17.2.3. The Algorithm
      17.2.4. Correctness
      17.2.5. Analysis
   17.3. Dominating Set and Set Cover*
      17.3.1. The Algorithm
      17.3.2. Analysis
18. Dynamic Programming
   18.1. Travelling Salesman
   18.2. Steiner Tree
      18.2.1. A Simple Algorithm
      18.2.2. Simplifying the Input
      18.2.3. A Dynamic Programming Solution
19. Inclusion-Exclusion
   19.1. The Inclusion-Exclusion Principle
   19.2. Hamiltonian Cycle
      19.2.1. Deciding Whether a Hamiltonian Cycle Exists
      19.2.2. Finding a Hamiltonian Cycle
   19.3. Graph Colouring
      19.3.1. Deciding Whether a \(k\)-Colouring Exists
      19.3.2. Finding a \(k\)-Colouring
20. Iterative Compression
   20.1. Vertex Cover
   20.2. Feedback Vertex Set
Appendix
21. NP-Hardness
   21.1. Decision vs Optimization
   21.2. Models of Computation
   21.3. P
   21.4. NP
   21.5. NP-Hardness and NP-Completeness
   21.6. Polynomial-Time Reductions
22. A Simple Union-Find Data Structure
Algorithms II
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.