Before You Start
  1. Overview
    1.1. Topics Covered
    1.2. Prerequisites
    1.3. A Note on Graphs
Linear Programming
  2. Linear Programming
    2.1. Linear Programs
    2.2. Shortest Paths as an LP
      2.2.1. The Single-Source Shortest Paths Problem
      2.2.2. Properties of Shortest Paths
      2.2.3. The LP Formulation
    2.3. Minimum Spanning Tree as an ILP
      2.3.1. The Minimum Spanning Tree Problem
      2.3.2. Two Natural ILP Formulations
      2.3.3. An ILP Formulation of Polynomial Size*
    2.4. Complexity of LP and ILP
    2.5. LP Relaxations
      2.5.1. ILPs With Integrality Gap 1
      2.5.2. An ILP Formulation of the MST Problem With Integrality Gap 1
    2.6. Canonical and Standard Form
      2.6.1. Canonical Form
      2.6.2. Standard Form
  3. The Simplex Algorithm
    3.1. The Basic Solution and the Basic Feasible Solution
    3.2. Overview of the Algorithm
    3.3. Pivoting
      3.3.1. Finding a Better Solution
      3.3.2. Transforming the LP
    3.4. Initialization of the Algorithm
      3.4.1. Making s Non-Basic
      3.4.2. Dropping s
      3.4.3. Restoring the Objective Function
      3.4.4. Solving the Auxiliary LP
    3.5. An Extended Example and Tableaux
      3.5.1. Initialization
      3.5.2. Optimization
    3.6. Cycling and Bland's Rule*
  4. LP Duality and Complementary Slackness
    4.1. LP Duality
    4.2. Correctness of the Simplex Algorithm
    4.3. Complementary Slackness
    4.4. An MST ILP with Integrality Gap 1*
    4.5. The Dual of a Non-Canonical LP
Graph Algorithms
  5. Maximum Flows
    5.1. Augmenting Path Algorithms
      5.1.1. The Ford-Fulkerson Algorithm
        5.1.1.1. Ford-Fulkerson May Not Terminate
        5.1.1.2. Correctness of Ford-Fulkerson
        5.1.1.3. The Max-Flow Min-Cut Theorem
      5.1.2. The Edmonds-Karp Algorithm
      5.1.3. Dinitz's Algorithm*
        5.1.3.1. Blocking Flows
        5.1.3.2. Finding a Blocking Flow
    5.2. Preflow-Push Algorithms
      5.2.1. The Generic Preflow-Push Algorithm
        5.2.1.1. Correctness
        5.2.1.2. Running Time Analysis
      5.2.2. The Relabel-To-Front Algorithm*
        5.2.2.1. The Discharge Operation
        5.2.2.2. The Algorithm
        5.2.2.3. Running Time Analysis
        5.2.2.4. Correctness of the Algorithm
      5.2.3. The Highest-Vertex Preflow-Push Algorithm*
        5.2.3.1. Running Time Analysis
    5.3. Existence of Integral Flows
  6. Minimum-Cost Flows
    6.1. Characterizations
      6.1.1. Complementary Slackness
      6.1.2. Reduced Costs
      6.1.3. Negative Cycles
    6.2. Decomposing Flows*
      6.2.1. Circulations
      6.2.2. Decomposition into Cycles and Paths
      6.2.3. Flows and Circulations in the Residual Network
    6.3. Simplifying the Network*
      6.3.1. Making the Network Strongly Connected
      6.3.2. Limiting Edge Capacities
      6.3.3. Removing Edge Capacities
      6.3.4. Making Edge Costs Non-Negative
    6.4. Negative Cycle Cancelling
      6.4.1. Finding a Feasible Flow
      6.4.2. Cancelling Negative Cycles
    6.5. Successive Shortest Paths
      6.5.1. The Primal-Dual Schema
      6.5.2. The Algorithm
    6.6. Capacity Scaling
      6.6.1. Overview of the Algorithm
      6.6.2. Correctness
        6.6.2.1. Maintenance of Bounded Excesses
        6.6.2.2. Maintenance of Non-Negative Residual Costs
      6.6.3. Running Time Analysis
    6.7. Minimum Mean Cycle Cancelling*
      6.7.1. Finding a Minimum Mean Cost Cycle
        6.7.1.1. Finding the Minimum Mean Cost
        6.7.1.2. Finding the Cycle
      6.7.2. Bounding the Number of Iterations
        6.7.2.1. Decreasing Minimum Mean Costs
        6.7.2.2. Preliminary Analysis for Integer Costs
        6.7.2.3. Final Running Time Analysis
    6.8. Repeated Capacity Scaling*
      6.8.1. From Potential Function to Flow
      6.8.2. Simplified Capacity Scaling
      6.8.3. Fixing Edges
      6.8.4. The Algorithm
      6.8.5. Correctness Proof
      6.8.6. Analysis
    6.9. Enhanced Capacity Scaling*
      6.9.1. Abundant Edges and Components
      6.9.2. Active Vertices, Low or High Excess
      6.9.3. Detailed Algorithm Description
      6.9.4. Maintenance of the Flow and Excess Invariants
      6.9.5. Correctness of the Algorithm
      6.9.6. Running Time Analysis
        6.9.6.1. Bounding the Number of Phases
        6.9.6.2. Bounding the Number of Flow Augmentations
        6.9.6.3. Putting Things Together
  7. Matchings
    7.1. Definitions
    7.2. Maximal Matching
    7.3. Bipartite Matching via Network Flows
      7.3.1. Maximum-Cardinality Matching
      7.3.2. Maximum-Weight Matching
      7.3.3. Minimum-Weight Perfect Matching
    7.4. Bipartite Maximum-Cardinality Matching
      7.4.1. Augmenting Paths
      7.4.2. Alternating Forests and Alternating BFS
      7.4.3. A Simple \(O(nm)\)-Time Algorithm
      7.4.4. The Hopcroft-Karp Algorithm*
        7.4.4.1. The Level Graph
    7.5. Bipartite Minimum-Weight Perfect Matching
      7.5.1. Minimum-Weight Perfect Matching as an LP
      7.5.2. The Hungarian Algorithm
        7.5.2.1. A Single Iteration
        7.5.2.2. Correctness of Each Iteration
        7.5.2.3. Summary of the Algorithm
      7.5.3. A Faster Implementation*
    7.6. Bipartite Maximum-Weight Matching
      7.6.1. An Even Faster Implementation of the Hungarian Algorithm*
        7.6.1.1. The Necessary Data Structures
        7.6.1.2. Initialization of Each Phase
        7.6.1.3. Implementing Each Iteration
        7.6.1.4. Abort When There's No Perfect Matching
        7.6.1.5. Summary
      7.6.2. Maximum-Weight Matching
    7.7. General Maximum-Cardinality Matching
      7.7.1. Edmonds's Algorithm
        7.7.1.1. Finding and Contracting Blossoms
        7.7.1.2. Expanding Blossoms
        7.7.1.3. An Example
      7.7.2. Gabow's Algorithm*
Approximation Algorithms
  8. Coping with NP-Hardness
    8.1. The Vertex Cover Problem
    8.2. Approximation Algorithms
    8.3. Exponential-Time Algorithms
    8.4. Fixed-Parameter Tractability
  9. Greedy Approximation Algorithms and Layering
    9.1. Bin Packing
      9.1.1. A Simple Greedy Algorithm
      9.1.2. A Family of Tight Inputs
    9.2. Greedy Set Cover
      9.2.1. The Right Greedy Choice
      9.2.2. Proof of Approximation Ratio
      9.2.3. A Family of Tight Inputs
    9.3. Set Cover via Layering
      9.3.1. A Recursive Algorithm
      9.3.2. A Family of Tight Inputs
    9.4. Feedback Vertex Set via Layering*
      9.4.1. Feedback Vertex Set for Cyclomatic Weight Functions
        9.4.1.1. Finite Fields and Finite Vector Spaces
        9.4.1.2. Cyclomatic Weight Functions
        9.4.1.3. A Minimal Feedback Vertex Set is a 2-Approximation
      9.4.2. Feedback Vertex Set for Arbitrary Weight Functions
  10. Parametric Pruning
    10.1. Metric k-Center
    10.2. Inapproximability of Metric k-Center
    10.3. Weighted Metric k-Center
  11. Dual Fitting
    11.1. Set Cover Revisited
    11.2. Set Multicover*
  12. Approximation via the Primal-Dual Schema
    12.1. Relaxed Complementary Slackness
    12.2. Set Cover via the Primal-Dual Schema
    12.3. Uncapacitated Metric Facility Location
  13. LP Rounding
    13.1. Half-Integrality
      13.1.1. Vertex Cover
      13.1.2. Multiway Vertex Cut*
    13.2. Randomized Rounding
      13.2.1. Prelude: Randomized Algorithms
      13.2.2. Set Cover
      13.2.3. Multiway Edge Cut*
        13.2.3.1. A "Better" ILP Formulation
        13.2.3.2. A Monte Carlo Algorithm
          13.2.3.2.1. Randomized Rounding: Reduction to a Special Case
          13.2.3.2.2. Randomized Rounding: The Special Case
        13.2.3.3. A Las Vegas Algorithm
    13.3. Derandomization
      13.3.1. A Randomized Monte Carlo Algorithm for MAX-SAT
        13.3.1.1. A Good Algorithm for Large Clauses
        13.3.1.2. A Good Algorithm for Small Clauses
        13.3.1.3. Combining the Two
      13.3.2. A Deterministic Algorithm for MAX-SAT
Exponential-Time Algorithms
  14. Introduction to Exponential-Time Algorithms
    14.1. Exponential Time
    14.2. Fixed-Parameter Tractability
    14.3. Road Map
  15. Data Reduction and Problem Kernels
    15.1. Kernelization and Fixed-Parameter Tractability
    15.2. Reduction Rules
      15.2.1. Vertex Cover
      15.2.2. Cluster Editing
    15.3. Crown Reduction
      15.3.1. Vertex Cover
      15.3.2. Maximum Satisfiability
    15.4. Kernelization via Linear Programming
    15.5. A Vertex Cover Kernel via Matching*
  16. Branching Algorithms
    16.1. Branching Numbers and Branching Vectors
      16.1.1. A Simple Branching Algorithm for the Vertex Cover Problem
      16.1.2. Branching Numbers and Branching Vectors
      16.1.3. Maximum Independent Set
    16.2. Depth-Bounded Search
      16.2.1. Vertex Cover and (Not) Independent Set
      16.2.2. Cluster Editing
      16.2.3. Closest String
    16.3. Better Branching Rules
      16.3.1. Vertex Cover
      16.3.2. Satisfiability*
        16.3.2.1. A First Simple Algorithm
        16.3.2.2. Reducing the Running Time Using Autarkies
    16.4. Combining Recursive Calls
      16.4.1. Cluster Editing
        16.4.1.1. Case 1
        16.4.1.2. Case 2
        16.4.1.3. Case 3
      16.4.2. Vertex Cover*
        16.4.2.1. Case 1
        16.4.2.2. Case 2
        16.4.2.3. Case 3
        16.4.2.4. Case 4
        16.4.2.5. Case 5
        16.4.2.6. Case 6
        16.4.2.7. Analysis
  17. Measure and Conquer
    17.1. Maximum Independent Set
    17.2. Feedback Vertex Set
      17.2.1. Extending Independent Sets
      17.2.2. Generalized Neighbours
      17.2.3. The Algorithm
      17.2.4. Correctness
      17.2.5. Analysis
    17.3. Dominating Set and Set Cover*
      17.3.1. The Algorithm
      17.3.2. Analysis
  18. Dynamic Programming
    18.1. Travelling Salesman
    18.2. Steiner Tree
      18.2.1. A Simple Algorithm
      18.2.2. Simplifying the Input
      18.2.3. A Dynamic Programming Solution
  19. Inclusion-Exclusion
    19.1. The Inclusion-Exclusion Principle
    19.2. Hamiltonian Cycle
      19.2.1. Deciding Whether a Hamiltonian Cycle Exists
      19.2.2. Finding a Hamiltonian Cycle
    19.3. Graph Colouring
      19.3.1. Deciding Whether a \(k\)-Colouring Exists
      19.3.2. Finding a \(k\)-Colouring
  20. Iterative Compression
    20.1. Vertex Cover
    20.2. Feedback Vertex Set
Appendix
  21. NP-Hardness
    21.1. Decision vs Optimization
    21.2. Models of Computation
    21.3. P
    21.4. NP
    21.5. NP-Hardness and NP-Completeness
    21.6. Polynomial-Time Reductions
  22. A Simple Union-Find Data Structure

Algorithms II

1. Overview


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.