21. NP-Hardness

Designing algorithms is (to a large degree) about finding the fastest possible way to solve a given problem \(\Pi\). To verify that we have found the fastest possible algorithm, we need to prove that there is no faster algorithm to solve \(\Pi\) than the one we have found. For example, we can prove that \(\Omega(n \lg n)\) is the best possible running time achievable for sorting a sequence of \(n\) elements if we use only comparisons to decide in which order the input elements should be arranged. Merge Sort, Quicksort, and Heap Sort all run in \(O(n \lg n)\) time and use only comparisons to determine the correct order of the input elements. In that sense, they are optimal.
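The \(\Omega(n \lg n)\) bound mentioned above follows from a standard decision-tree counting argument, sketched here: any comparison-based sorting algorithm must distinguish between all \(n!\) possible orderings of its input, and its possible executions form a binary decision tree with one leaf per ordering.

```latex
% A binary tree of height h has at most 2^h leaves, and the tree must have
% at least n! leaves, one per input permutation. Therefore
\[
  2^h \ge n!
  \quad\Longrightarrow\quad
  h \ge \lg(n!) = \sum_{i=1}^{n} \lg i
  \ge \sum_{i=\lceil n/2 \rceil}^{n} \lg i
  \ge \frac{n}{2} \lg \frac{n}{2}
  = \Omega(n \lg n).
\]
```

The height \(h\) of the tree is the number of comparisons made in the worst case, so every comparison-based sorting algorithm performs \(\Omega(n \lg n)\) comparisons on some input.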

Interestingly, for most problems that we don't know how to solve in polynomial time (any polynomial will do), we also don't know how to prove that no polynomial-time algorithm for them exists. The best we know how to do is to provide evidence that it is, in a formal mathematical sense, highly unlikely that such an algorithm exists. NP-hardness is the tool we use to do this.

This appendix gives a brief overview of NP-hardness.

  • We start, in Section 21.1, by defining decision problems and exploring their relationship to optimization problems.

  • In Section 21.2, we briefly touch on models of computation, which are necessary to formalize the computational difficulty of problems.

  • We define the complexity classes P and NP in Sections 21.3 and 21.4.

  • In Section 21.5, we define what it means for a problem to be NP-hard and NP-complete.

  • We conclude, in Section 21.6, with a discussion of how to use polynomial-time reductions to prove that a problem is NP-hard.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.