
# NP, NP completeness, and the Cook-Levin Theorem

- Introduce the class \(\mathbf{NP}\) capturing a great many important computational problems

- \(\mathbf{NP}\)-completeness: evidence that a problem might be intractable.

- The \(\mathbf{P}\) vs \(\mathbf{NP}\) problem.

“In this paper we give theorems that suggest, but do not imply, that these problems, as well as many others, will remain intractable perpetually”, Richard Karp, 1972

“Sad to say, but it will be many more years, if ever, before we really understand the Mystical Power of Twoness… 2-SAT is easy, 3-SAT is hard, 2-dimensional matching is easy, 3-dimensional matching is hard. Why? Oh, why?”, Eugene Lawler

## The class \(\mathbf{NP}\)

So far we have shown that 3SAT is no harder than Quadratic Equations, Independent Set, Maximum Cut, and Longest Path. But to show that these problems are *computationally equivalent* we need to give reductions in the other direction, reducing each one of these problems to 3SAT as well. It turns out we can reduce all of these problems to 3SAT in one fell swoop.

In fact, this result extends far beyond these particular problems. All of the problems we discussed in Chapter 13, and a great many other problems, share the same commonality: they are all *search* problems, where the goal is to decide, given an instance \(x\), whether there exists a *solution* \(y\) that satisfies some condition that can be verified in polynomial time. For example, in 3SAT, the instance is a formula and the solution is an assignment to the variables; in Max-Cut the instance is a graph and the solution is a cut in the graph; and so on and so forth. It turns out that *every* such search problem can be reduced to 3SAT.

To make this precise, we make the following mathematical definition: we define the class \(\mathbf{NP}\) to contain all Boolean functions that correspond to a *search problem* of the form above\(-\) that is, functions that output \(1\) on \(x\) if and only if there exists a solution \(w\) such that the pair \((x,w)\) satisfies some polynomial-time checkable condition. Formally, \(\mathbf{NP}\) is defined as follows:

We say that \(F:\{0,1\}^* \rightarrow \{0,1\}\) is in \(\mathbf{NP}\) if there exist constants \(a,b \in \N\) and \(V:\{0,1\}^* \rightarrow \{0,1\}\) such that \(V\in \mathbf{P}\) and for every \(x\in \{0,1\}^n\), \[ F(x)=1 \Leftrightarrow \exists_{w \in \{0,1\}^{an^b}} \text{ s.t. } V(xw)=1 \;. \;\;(14.1) \]

In other words, for \(F\) to be in \(\mathbf{NP}\), there needs to exist some polynomial-time computable verification function \(V\), such that if \(F(x)=1\) then there must exist \(w\) (of length polynomial in \(|x|\)) such that \(V(xw)=1\), and if \(F(x)=0\) then for *every* such \(w\), \(V(xw)=0\). Since the existence of this string \(w\) certifies that \(F(x)=1\), \(w\) is often referred to as a *certificate*, *witness*, or *proof* that \(F(x)=1\).

See also Figure 14.1 for an illustration of Definition 14.1. The name \(\mathbf{NP}\) stands for “nondeterministic polynomial time” and is used for historical reasons; see the bibliographical notes. The string \(w\) in Equation (14.1) is sometimes known as a *solution*, *certificate*, or *witness* for the instance \(x\).

The definition of \(\mathbf{NP}\) means that for every \(F\in \mathbf{NP}\) and string \(x\in \{0,1\}^*\), \(F(x)=1\) if and only if there is a *short and efficiently verifiable proof* of this fact. That is, we can think of the function \(V\) in Definition 14.1 as a *verifier* algorithm, similar to what we’ve seen in Section 10.1. The verifier checks whether a given string \(w\in \{0,1\}^*\) is a valid proof for the statement “\(F(x)=1\)”. Essentially all proof systems considered in mathematics involve line-by-line checks that can be carried out in polynomial time. Thus the heart of \(\mathbf{NP}\) is asking for statements that have *short* (i.e., polynomial in the size of the statements) proof. Indeed, as we will see in Chapter 15, Kurt Gödel phrased the question of whether \(\mathbf{NP}=\mathbf{P}\) as asking whether “the mental work of a mathematician [in proving theorems] could be completely replaced by a machine”.

Definition 14.1 is *asymmetric* in the sense that there is a difference between an output of \(1\) and an output of \(0\). You should make sure you understand why this definition does *not* guarantee that if \(F \in \mathbf{NP}\) then the function \(1-F\) (i.e., the map \(x \mapsto 1-F(x)\)) is in \(\mathbf{NP}\) as well. In fact, it is believed that there do exist functions \(F\) satisfying \(F\in \mathbf{NP}\) but \(1-F \not\in \mathbf{NP}\).^{1} This is in contrast to the class \(\mathbf{P}\) which (as you should verify) *does* satisfy that if \(F\in \mathbf{P}\) then \(1-F\) is in \(\mathbf{P}\) as well.

### Examples of \(\mathbf{NP}\) functions

\(3\ensuremath{\mathit{SAT}}\) is in \(\mathbf{NP}\) since for every \(\ell\)-variable formula \(\varphi\), \(3\ensuremath{\mathit{SAT}}(\varphi)=1\) if and only if there exists a satisfying assignment \(x \in \{0,1\}^\ell\) such that \(\varphi(x)=1\), and we can check this condition in polynomial time.

The above reasoning explains why \(3\ensuremath{\mathit{SAT}}\) is in \(\mathbf{NP}\), but since this is our first example, we will now belabor the point and expand out in full formality what is the precise representation of the witness \(w\) and the algorithm \(V\) that demonstrate that \(3\ensuremath{\mathit{SAT}}\) is in \(\mathbf{NP}\).

Specifically, we can represent a 3CNF formula \(\varphi\) with \(k\) variables and \(m\) clauses as a string of length \(n=O(m\log k)\), since every one of the \(m\) clauses involves three variables and their negation, and the identity of each variable can be represented using \(\lceil \log_2 k \rceil\) bits. We assume that every variable participates in some clause (as otherwise it can be ignored) and hence that \(m \geq k\), which in particular means that \(n\) is larger than both \(m\) and \(k\).

We can represent an assignment to the \(k\) variables using a \(k\)-length string, which, since \(n > k\), can be “padded” to a string \(w\in \{0,1\}^n\) in some standard way. (For example, if \(y\in \{0,1\}^k\) is the assignment, we can let \(w=y10^{n-k-1}\); given the string \(w\) we can “read off” \(y\), by chopping off all the zeroes at the end of \(w\) until we encounter the first \(1\), which we remove as well.)
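The padding and unpadding maps just described can be written as a short pair of functions. This is a minimal sketch (the function names are our own), representing bit strings as Python strings of `'0'`/`'1'` characters:

```python
def pad(y: str, n: int) -> str:
    """Pad assignment y to length n as y + '1' + '0'*(n - len(y) - 1)."""
    assert len(y) < n
    return y + "1" + "0" * (n - len(y) - 1)

def unpad(w: str) -> str:
    """Recover y from w: chop all trailing zeroes, then drop the '1'."""
    w = w.rstrip("0")  # remove the zeroes at the end of w
    return w[:-1]      # remove the separating '1' as well
```

For example, `unpad(pad("0110", 10))` recovers `"0110"`.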

Now checking whether a given assignment \(y\in \{0,1\}^k\) satisfies a given \(k\)-variable 3CNF \(\varphi\) can be done in polynomial time through the following algorithm \(V\):

Algorithm \(V\):

**Input:**

- 3CNF formula \(\varphi\) with \(k\) variables and \(m\) clauses (encoded as a string of length \(n=O(m\log k)\))

- Assignment \(y\in \{0,1\}^k\) to the variables of \(\varphi\) (encoded using padding as a string \(w \in \{0,1\}^n\))

**Output:** \(1\) if and only if \(y\) satisfies \(\varphi\).

**Operation:**

- For every clause \(C = (\ell_1 \vee \ell_2 \vee \ell_3)\) of \(\varphi\) (where \(\ell_1,\ell_2,\ell_3\) are literals), if all three literals evaluate to *false* under the assignment \(y\), then halt and output \(0\).

- Output \(1\).

Algorithm \(V\) runs in time polynomial in the length \(n\) of \(\varphi\)’s description as a string. Indeed there are \(m\) clauses, and checking the evaluation of a literal of the form \(y_i\) or \(\neg y_j\) can be done by scanning the \(k\)-length string \(y\), and hence the running time of Algorithm \(V\) is at most \(O(mk)=O(n^2)\), as both \(k\) and \(m\) are smaller than \(n\).

By its definition the algorithm outputs \(1\) if and only if the assignment \(y\) satisfies all the clauses of the 3CNF formula \(\varphi\), which means that \(3\ensuremath{\mathit{SAT}}(\varphi)=1\) if and only if there exists some \(w\in \{0,1\}^n\) such that \(V(\varphi w)=1\) which is precisely the condition needed to show that \(3\ensuremath{\mathit{SAT}} \in \mathbf{NP}\) per Definition 14.1.
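To make Algorithm \(V\) concrete, here is a minimal sketch in Python. For readability it assumes the formula is given not in the bit-string encoding above but as a list of clauses, each a list of nonzero integers, where literal \(+v\) denotes variable \(v\) and \(-v\) its negation (the common DIMACS-style convention), and the assignment is a list of bits:

```python
def verify_3sat(clauses, y):
    """Return 1 iff assignment y satisfies every clause.
    clauses: list of clauses, each a list of up to three nonzero ints,
             where literal +v means variable v and -v means its negation.
    y: list of bits; y[v-1] is the value assigned to variable v.
    Runs in O(m*k) time, polynomial in the formula's description length."""
    for clause in clauses:
        # a clause is satisfied iff at least one of its literals is true
        if not any(y[abs(l) - 1] == (1 if l > 0 else 0) for l in clause):
            return 0  # all three literals are false: reject
    return 1
```

For instance, for \(\varphi = (x_1 \vee x_2 \vee \overline{x_3}) \wedge (\overline{x_1} \vee x_3 \vee x_2)\), `verify_3sat([[1, 2, -3], [-1, 3, 2]], [1, 1, 0])` returns `1`.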

The “padding trick” we used in Example 14.3 can always be used to expand a witness of length smaller than \(an^b\) to a witness of exactly that length. Therefore one can think of the condition (14.1) in Definition 14.1 as simply stipulating that the “solution” \(w\) to the problem \(x\) is of length *at most* polynomial in \(|x|\).

Here are some more examples for problems in \(\mathbf{NP}\). For each one of these problems we merely sketch how the witness is represented and why it is efficiently checkable, but working out the details can be a good way to get more comfortable with Definition 14.1:

\(\ensuremath{\mathit{QUADEQ}}\) is in \(\mathbf{NP}\) since for every \(\ell\)-variable instance of quadratic equations \(E\), \(\ensuremath{\mathit{QUADEQ}}(E)=1\) if and only if there exists an assignment \(x\in \{0,1\}^\ell\) that satisfies \(E\). We can check the condition that \(x\) satisfies \(E\) in polynomial time by enumerating over all the equations in \(E\), and for each such equation \(e\), plug in the values of \(x\) and verify that \(e\) is satisfied.

\(\ensuremath{\mathit{ISET}}\) is in \(\mathbf{NP}\) since for every graph \(G\) and integer \(k\), \(\ensuremath{\mathit{ISET}}(G,k)=1\) if and only if there exists a set \(S\) of \(k\) vertices that contains no pair of neighbors in \(G\). We can check the condition that \(S\) is an independent set of size \(\geq k\) in polynomial time by first checking that \(|S| \geq k\) and then enumerating over all edges \(\{u,v \}\) in \(G\), and for each such edge verify that either \(u\not\in S\) or \(v\not\in S\).

\(\ensuremath{\mathit{LONGPATH}}\) is in \(\mathbf{NP}\) since for every graph \(G\) and integer \(k\), \(\ensuremath{\mathit{LONGPATH}}(G,k)=1\) if and only if there exists a simple path \(P\) in \(G\) that is of length at least \(k\). We can check the condition that \(P\) is a simple path of length \(k\) in polynomial time by checking that it has the form \((v_0,v_1,\ldots,v_k)\) where each \(v_i\) is a vertex in \(G\), no \(v_i\) is repeated, and for every \(i \in [k]\), the edge \(\{v_i,v_{i+1}\}\) is present in the graph.

\(\ensuremath{\mathit{MAXCUT}}\) is in \(\mathbf{NP}\) since for every graph \(G\) and integer \(k\), \(\ensuremath{\mathit{MAXCUT}}(G,k)=1\) if and only if there exists a cut \((S,\overline{S})\) in \(G\) that cuts at least \(k\) edges. We can check the condition that \((S,\overline{S})\) is a cut of value at least \(k\) in polynomial time by checking that \(S\) is a subset of \(G\)’s vertices and enumerating over all the edges \(\{u,v\}\) of \(G\), counting those edges such that \(u\in S\) and \(v\not\in S\) or vice versa.
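As one concrete instance, the \(\ensuremath{\mathit{ISET}}\) witness check sketched above can be written out as follows. This is an illustrative sketch under assumed representations (the graph as an edge list, the witness \(S\) as a list of vertices):

```python
def verify_iset(edges, k, S):
    """Return 1 iff S is an independent set of size >= k:
    S contains at least k distinct vertices and no edge of the
    graph has both endpoints in S.  Runs in O(|S| + |edges|) time."""
    S = set(S)
    if len(S) < k:
        return 0
    for (u, v) in edges:
        if u in S and v in S:  # an edge inside S violates independence
            return 0
    return 1
```

On the 4-cycle with edges `[(0,1),(1,2),(2,3),(3,0)]`, the witness `[0, 2]` certifies an independent set of size \(2\), while `[0, 1]` does not.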

### Basic facts about \(\mathbf{NP}\)

The definition of \(\mathbf{NP}\) is one of the most important definitions of this book, and it is worth taking the time to digest and internalize it. The following solved exercises establish some basic properties of this class. As usual, I highly recommend that you try to work out the solutions yourself.

Prove that \(\mathbf{P} \subseteq \mathbf{NP}\).

Suppose that \(F \in \mathbf{P}\). Define the following function \(V\): \(V(x0^n)=1\) iff \(n=|x|\) and \(F(x)=1\). (\(V\) outputs \(0\) on all other inputs.) Since \(F\in \mathbf{P}\) we can clearly compute \(V\) in polynomial time as well.

Let \(x\in \{0,1\}^n\) be some string. If \(F(x)=1\) then \(V(x0^n)=1\). On the other hand, if \(F(x)=0\) then for every \(w\in \{0,1\}^n\), \(V(xw)=0\). Therefore, setting \(a=b=1\), we see that \(V\) satisfies Equation (14.1), and establishes that \(F \in \mathbf{NP}\).

People sometimes think that \(\mathbf{NP}\) stands for “non polynomial time”. As Solved Exercise 14.1 shows, this is far from the truth, and in fact every polynomial-time computable function is in \(\mathbf{NP}\) as well.

If \(F\) is in \(\mathbf{NP}\) it certainly does *not* mean that \(F\) is hard to compute (though it does not, as far as we know, necessarily mean that it’s easy to compute either). Rather, it means that \(F\) is *easy to verify*, in the technical sense of Definition 14.1.

Prove that \(\mathbf{NP} \subseteq \mathbf{EXP}\).

Suppose that \(F\in \mathbf{NP}\) and let \(V\) be the polynomial-time computable function that satisfies Equation (14.1) and \(a,b\) the corresponding constants. Then the following is an exponential-time algorithm \(A\) to compute \(F\):

Algorithm \(A\):

**Input:** \(x \in \{0,1\}^*\); let \(n=|x|\).

**Operation:**

- For every \(w\in \{0,1\}^{an^b}\), if \(V(xw)=1\) then halt and output \(1\).

- Output \(0\).

Since \(V \in \mathbf{P}\), for every \(x\in \{0,1\}^n\), Algorithm \(A\) runs in time \(poly(n)2^{an^b}\). Moreover by Equation (14.1), \(A\) will output \(1\) on \(x\) if and only if \(F(x)=1\).
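Algorithm \(A\) is a direct brute-force search over all candidate witnesses. A sketch in Python, where the verifier is an assumed function taking a bit string and returning \(0\) or \(1\), and `m` stands for the witness length \(an^b\):

```python
from itertools import product

def brute_force(V, x, m):
    """Exponential-time Algorithm A: try all 2**m witness strings w
    and output 1 iff V accepts xw for some w (m plays the role of a*n**b)."""
    for bits in product("01", repeat=m):
        if V(x + "".join(bits)) == 1:
            return 1  # found a witness certifying F(x) = 1
    return 0          # no witness exists, so F(x) = 0
```

For example, with a toy verifier that accepts \(xw\) iff \(w\) is the reverse of \(x\), `brute_force(V, "10", 2)` returns `1` after examining at most \(2^2\) witnesses.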

Solved Exercise 14.1 and Solved Exercise 14.2 together imply that

\[\mathbf{P} \subseteq \mathbf{NP} \subseteq \mathbf{EXP}\;.\]

The time hierarchy theorem (Theorem 12.10) implies that \(\mathbf{P} \subsetneq \mathbf{EXP}\) and hence at least one of the two inclusions \(\mathbf{P} \subseteq \mathbf{NP}\) or \(\mathbf{NP} \subseteq \mathbf{EXP}\) is *strict*. It is believed that both of them are in fact strict inclusions. That is, it is believed that there are functions in \(\mathbf{NP}\) that cannot be computed in polynomial time (this is the \(\mathbf{P} \neq \mathbf{NP}\) conjecture) and that there are functions \(F\) in \(\mathbf{EXP}\) for which we cannot even efficiently *certify* that \(F(x)=1\) for a given input \(x\).^{2}

We have previously informally equated the notion of \(F \leq_p G\) with \(F\) being “no harder than \(G\)” and in particular have seen in Solved Exercise 13.1 that if \(G \in \mathbf{P}\) and \(F \leq_p G\), then \(F \in \mathbf{P}\) as well. The following exercise shows that if \(F \leq_p G\) then it is also “no harder to verify” than \(G\). That is, regardless of whether or not it is in \(\mathbf{P}\), if \(G\) has the property that solutions to it can be efficiently verified, then so does \(F\).

Let \(F,G:\{0,1\}^* \rightarrow \{0,1\}\). Show that if \(F \leq_p G\) and \(G\in \mathbf{NP}\) then \(F \in \mathbf{NP}\).

Suppose that \(G\) is in \(\mathbf{NP}\), and in particular there exist \(a,b\) and \(V \in \mathbf{P}\) such that for every \(y \in \{0,1\}^*\), \(G(y)=1 \Leftrightarrow \exists_{w\in \{0,1\}^{a|y|^b}} V(yw)=1\). Define \(V'(x,w)=1\) iff \(V(R(x)w)=1\), where \(R\) is the polynomial-time reduction demonstrating that \(F \leq_p G\). Then for every \(x\in \{0,1\}^*\),

\[F(x)=1 \Leftrightarrow G(R(x)) =1 \Leftrightarrow \exists_{w \in \{0,1\}^{a|R(x)|^b}} V(R(x)w) = 1 \Leftrightarrow \exists_{w\in \{0,1\}^{a|R(x)|^b}} V'(x,w)=1 \;.\]

Since there are some constants \(a',b'\) such that \(|R(x)| \leq a'|x|^{b'}\) for every \(x\in \{0,1\}^*\), by simple padding we can modify \(V'\) to an algorithm that certifies that \(F \in \mathbf{NP}\).

## From \(\mathbf{NP}\) to 3SAT: The Cook-Levin Theorem

We have seen several examples of problems for which we do not know if their best algorithm is polynomial or exponential, but we can show that they are in \(\mathbf{NP}\). That is, we don’t know if they are easy to *solve*, but we do know that it is easy to *verify* a given solution. There are many, many, *many*, more examples of interesting functions we would like to compute that are easily shown to be in \(\mathbf{NP}\). What is quite amazing is that if we can solve 3SAT then we can solve all of them!

The following is one of the most fundamental theorems in Computer Science:

For every \(F\in \mathbf{NP}\), \(F \leq_p 3\ensuremath{\mathit{SAT}}\).

We will soon show the proof of Theorem 14.6, but note that it immediately implies that \(\ensuremath{\mathit{QUADEQ}}\), \(\ensuremath{\mathit{LONGPATH}}\), and \(\ensuremath{\mathit{MAXCUT}}\) all reduce to \(3\ensuremath{\mathit{SAT}}\). Combining it with the reductions we’ve seen in Chapter 13, it implies that all these problems are *equivalent!* For example, to reduce \(\ensuremath{\mathit{QUADEQ}}\) to \(\ensuremath{\mathit{LONGPATH}}\), we can first reduce \(\ensuremath{\mathit{QUADEQ}}\) to \(3\ensuremath{\mathit{SAT}}\) using Theorem 14.6 and use the reduction we’ve seen in Theorem 13.7 from \(3\ensuremath{\mathit{SAT}}\) to \(\ensuremath{\mathit{LONGPATH}}\). That is, since \(\ensuremath{\mathit{QUADEQ}} \in \mathbf{NP}\), Theorem 14.6 implies that \(\ensuremath{\mathit{QUADEQ}} \leq_p 3\ensuremath{\mathit{SAT}}\), and Theorem 13.7 implies that \(3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{LONGPATH}}\), which by the transitivity of reductions (Lemma 13.2) means that \(\ensuremath{\mathit{QUADEQ}} \leq_p \ensuremath{\mathit{LONGPATH}}\). Similarly, since \(\ensuremath{\mathit{LONGPATH}} \in \mathbf{NP}\), we can use Theorem 14.6 and Theorem 13.4 to show that \(\ensuremath{\mathit{LONGPATH}} \leq_p 3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{QUADEQ}}\), concluding that \(\ensuremath{\mathit{LONGPATH}}\) and \(\ensuremath{\mathit{QUADEQ}}\) are computationally equivalent.

There is of course nothing special about \(\ensuremath{\mathit{QUADEQ}}\) and \(\ensuremath{\mathit{LONGPATH}}\) here: by combining Theorem 14.6 with the reductions we saw, we see that just like \(3\ensuremath{\mathit{SAT}}\), *every* \(F\in \mathbf{NP}\) reduces to \(\ensuremath{\mathit{LONGPATH}}\), and the same is true for \(\ensuremath{\mathit{QUADEQ}}\) and \(\ensuremath{\mathit{MAXCUT}}\). All these problems are in some sense “the hardest in \(\mathbf{NP}\)” since an efficient algorithm for any one of them would imply an efficient algorithm for *all* the problems in \(\mathbf{NP}\). This motivates the following definition:

We say that \(G:\{0,1\}^* \rightarrow \{0,1\}\) is *\(\mathbf{NP}\) hard* if for every \(F\in \mathbf{NP}\), \(F \leq_p G\).

We say that \(G:\{0,1\}^* \rightarrow \{0,1\}\) is *\(\mathbf{NP}\) complete* if \(G\) is \(\mathbf{NP}\) hard and \(G\) is in \(\mathbf{NP}\).

The Cook-Levin Theorem (Theorem 14.6) can be rephrased as saying that \(3\ensuremath{\mathit{SAT}}\) is \(\mathbf{NP}\) hard, and since it is also in \(\mathbf{NP}\), this means that \(3\ensuremath{\mathit{SAT}}\) is \(\mathbf{NP}\) complete. Together with the reductions of Chapter 13, Theorem 14.6 shows that despite their superficial differences, 3SAT, quadratic equations, longest path, independent set, and maximum cut, are all \(\mathbf{NP}\)-complete. Many thousands of additional problems have been shown to be \(\mathbf{NP}\)-complete, arising from all the sciences, mathematics, economics, engineering and many other fields.^{3}

### What does this mean?

As we’ve seen in Solved Exercise 14.1, \(\mathbf{P} \subseteq \mathbf{NP}\). *The* most famous conjecture in Computer Science is that this containment is *strict*. That is, it is widely conjectured that \(\mathbf{P} \neq \mathbf{NP}\). One way to refute this conjecture is to give a polynomial-time algorithm for even a single one of the \(\mathbf{NP}\)-complete problems such as 3SAT, Max Cut, or the thousands of others that have been studied in all fields of human endeavor. The fact that these problems have been studied by so many people, and yet not a single polynomial-time algorithm for any of them has been found, supports the conjecture that indeed \(\mathbf{P} \neq \mathbf{NP}\). In fact, for many of these problems (including all the ones we mentioned above), we don’t even know of a \(2^{o(n)}\)-time algorithm! However, to the frustration of computer scientists, we have not yet been able to prove that \(\mathbf{P}\neq\mathbf{NP}\) or even rule out the existence of an \(O(n)\)-time algorithm for 3SAT. Resolving whether or not \(\mathbf{P}=\mathbf{NP}\) is known as the \(\mathbf{P}\) vs \(\mathbf{NP}\) problem. A million-dollar prize has been offered for the solution of this problem, a popular book has been written about it, and every year a new paper comes out claiming a proof of \(\mathbf{P}=\mathbf{NP}\) or \(\mathbf{P}\neq\mathbf{NP}\), only to wither under scrutiny.^{4} A 120-page survey by Aaronson, as well as chapter 3 in Wigderson’s upcoming book, are excellent sources summarizing what is known about this problem.

One of the mysteries of computation is that people have observed a certain empirical “zero-one law” or “dichotomy” in the computational complexity of natural problems, in the sense that many natural problems are either in \(\mathbf{P}\) (often in \(\ensuremath{\mathit{TIME}}(O(n))\) or \(\ensuremath{\mathit{TIME}}(O(n^2))\)), or they are \(\mathbf{NP}\) hard. This is related to the fact that for most natural problems, the best known algorithm is either exponential or polynomial, with not too many examples where the best running time is some strange intermediate complexity such as \(2^{2^{\sqrt{\log n}}}\). However, it is believed that there exist problems in \(\mathbf{NP}\) that are neither in \(\mathbf{P}\) nor are \(\mathbf{NP}\)-complete, and in fact a result known as “Ladner’s Theorem” shows that if \(\mathbf{P} \neq \mathbf{NP}\) then this is indeed the case (see also Exercise 14.1 and Figure 14.2).


### The Cook-Levin Theorem: Proof outline

We will now prove the Cook-Levin Theorem, which is the underpinning of a great web of reductions from 3SAT to thousands of problems across a great many fields. Some problems that have been shown to be \(\mathbf{NP}\)-complete include: minimum-energy protein folding, minimum surface-area foam configuration, map coloring, optimal Nash equilibrium, quantum state entanglement, minimum supersequence of a genome, minimum codeword problem, shortest vector in a lattice, minimum genus knots, positive Diophantine equations, integer programming, and many many more. The worst-case complexity of all these problems is (up to polynomial factors) equivalent to that of 3SAT, and through the Cook-Levin Theorem, to all problems in \(\mathbf{NP}\).

To prove Theorem 14.6 we need to show that \(F \leq_p 3\ensuremath{\mathit{SAT}}\) for every \(F\in \mathbf{NP}\). We will do so in three stages. We define two intermediate problems: \(\ensuremath{\mathit{NANDSAT}}\) and \(3\ensuremath{\mathit{NAND}}\). We will shortly show the definitions of these two problems, but Theorem 14.6 will follow from combining the following three results:

\(\ensuremath{\mathit{NANDSAT}}\) is \(\mathbf{NP}\) hard (Lemma 14.8).

\(\ensuremath{\mathit{NANDSAT}} \leq_p 3\ensuremath{\mathit{NAND}}\) (Lemma 14.10).

\(3\ensuremath{\mathit{NAND}} \leq_p 3\ensuremath{\mathit{SAT}}\) (Lemma 14.11).

By the transitivity of reductions, it will follow that for every \(F \in \mathbf{NP}\),

\[ F \leq_p \ensuremath{\mathit{NANDSAT}} \leq_p 3\ensuremath{\mathit{NAND}} \leq_p 3\ensuremath{\mathit{SAT}} \]

hence establishing Theorem 14.6.

We will prove these three results (Lemma 14.8, Lemma 14.10, and Lemma 14.11) one by one, providing the requisite definitions as we go along.

## The \(\ensuremath{\mathit{NANDSAT}}\) problem, and why it is \(\mathbf{NP}\) hard

We define the \(\ensuremath{\mathit{NANDSAT}}\) problem as follows. On input a string \(Q\in \{0,1\}^*\), we define \(\ensuremath{\mathit{NANDSAT}}(Q)=1\) if and only if \(Q\) is a valid representation of an \(n\)-input and single-output NAND-CIRC program and there exists some \(w\in \{0,1\}^n\) such that \(Q(w)=1\). While we don’t need this to prove Lemma 14.8, note that \(\ensuremath{\mathit{NANDSAT}}\) is in \(\mathbf{NP}\) since we can verify that \(Q(w)=1\) using the polynomial-time algorithm for evaluating NAND-CIRC programs.^{7} We now prove that \(\ensuremath{\mathit{NANDSAT}}\) is \(\mathbf{NP}\) hard.

\(\ensuremath{\mathit{NANDSAT}}\) is \(\mathbf{NP}\) hard.

To prove Lemma 14.8 we need to show that for every \(F\in \mathbf{NP}\), \(F \leq_p \ensuremath{\mathit{NANDSAT}}\). The high-level idea is that by the definition of \(\mathbf{NP}\), there is some NAND-TM program \(P^*\) and some polynomial \(T(\cdot)\) such that \(F(x)=1\) if and only if there exists some \(w \in \{0,1\}^{a|x|^b}\) such that \(P^*(xw)\) outputs \(1\) within \(T(|x|)\) steps. Now by “unrolling the loop” of the NAND-TM program \(P^*\) we can convert it into an \(O(T(n))\)-line NAND-CIRC program \(Q'\) with \(n + an^b\) inputs and a single output such that for every \(x\in \{0,1\}^n\) and \(w\in \{0,1\}^{an^b}\), \(Q'(xw)=P^*(xw)\). The next step is to *hardwire* the input \(x\) to \(Q'\) to obtain an \(O(T(n))\)-line NAND-CIRC program \(Q\) with \(m=an^b\) inputs such that for every \(w\in \{0,1\}^m\), \(Q(w)=Q'(xw)\). By construction it will be the case that for every \(x\in \{0,1\}^n\), \(F(x)=1\) if and only if there exists \(w\in \{0,1\}^{an^b}\) such that \(Q(w)=1\), and hence this shows that \(F \leq_p \ensuremath{\mathit{NANDSAT}}\).

The proof is a little bit technical but ultimately follows quite directly from the definition of \(\mathbf{NP}\), as well as of NAND-CIRC and NAND-TM programs. If you find it confusing, try to pause here and work out the proof yourself from these definitions, using the idea of “unrolling the loop” of a NAND-TM program. It might also be useful for you to think how you would implement in your favorite programming language the function `unroll`, which on input a NAND-TM program \(P\) and numbers \(T,n\) would output an \(n\)-input NAND-CIRC program \(Q\) of \(O(T)\) lines such that for every input \(z\in \{0,1\}^n\), if \(P\) halts on \(z\) within at most \(T\) steps and outputs \(y\), then \(Q(z)=y\).

We now present the details. Let \(F \in \mathbf{NP}\). To prove Lemma 14.8 we need to give a polynomial-time computable function that will map every \(x^* \in \{0,1\}^*\) to a NAND-CIRC program \(Q\) such that \(F(x^*)=\ensuremath{\mathit{NANDSAT}}(Q)\).

Let \(x^* \in \{0,1\}^*\) be such a string and let \(n=|x^*|\) be its length. By Definition 14.1 there exists \(V \in \mathbf{P}\) and \(a,b \in \N\) such that \(F(x^*)=1\) if and only if there exists \(w\in \{0,1\}^{an^b}\) such that \(V(x^*w)=1\).

Let \(m=an^b\). Since \(V\in \mathbf{P}\) there is some NAND-TM program \(P^*\) that computes \(V\) on inputs of the form \(xw\) with \(x\in \{0,1\}^n\) and \(w\in \{0,1\}^m\) in at most \({(n+m)}^c\) time for some constant \(c\). Using our “unrolling the loop NAND-TM to NAND compiler” of Theorem 12.13, we can obtain a NAND-CIRC program \(Q'\) that has \(n+m\) inputs and at most \(O((n+m)^c)\) lines such that \(Q'(xw)= P^*(xw)\) for every \(x\in \{0,1\}^n\) and \(w \in \{0,1\}^m\).

Now we can use the following simple but useful “hardwiring” technique to obtain a program:

Given a \(T\)-line NAND-CIRC program \(Q'\) of \(n+m\) inputs and \(x^* \in \{0,1\}^n\), we can obtain in polynomial time a program \(Q\) with \(m\) inputs and \(T+3\) lines such that for every \(w\in \{0,1\}^m\), \(Q(w)= Q'(x^*w)\).

To compute \(Q\), we simply do a “search and replace” for all references in \(Q'\) to `X[`\(i\)`]` for \(i \in [n]\), and transform them to either the variable `zero` or `one` depending on whether \(x^*_i\) is equal to \(0\) or \(1\) respectively. By adding three lines to the beginning of \(Q'\), we can ensure that the `zero` and `one` variables will have the correct values. The only thing that then remains is to do another search and replace to transform all references to the variables `X[`\(n\)`]`, \(\ldots\), `X[`\(n+m-1\)`]` to the variables `X[`\(0\)`]`, \(\ldots\), `X[`\(m-1\)`]`, so that the \(m\) inputs to the new program \(Q\) will correspond to the last \(m\) inputs of the original program \(Q'\). See Figure 14.4 for an implementation of this reduction in Python.
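The hardwiring step can be sketched along the following lines. This is an illustrative sketch under assumed representations (a program as a list of one-assignment-per-line strings; the name `hardwire` is ours); it uses the identity \(\ensuremath{\mathit{NAND}}(b,\ensuremath{\mathit{NAND}}(b,b))=1\) for every bit \(b\) to give `one` and `zero` their correct values:

```python
def hardwire(Q_prime, x, m):
    """Given a NAND-CIRC program Q' (a list of lines such as
    "foo = NAND(X[0],bar)") with len(x)+m inputs, return a program Q
    with m inputs such that Q(w) = Q'(xw) for every w in {0,1}^m."""
    n = len(x)
    # three prelude lines give `one` and `zero` their correct values,
    # using NAND(b, NAND(b,b)) = 1 for any bit b
    Q = ["hwtmp = NAND(X[0],X[0])",
         "one = NAND(X[0],hwtmp)",
         "zero = NAND(one,one)"]
    for line in Q_prime:
        for i in range(n):            # replace X[i] by the constant x_i
            line = line.replace(f"X[{i}]", "one" if x[i] == "1" else "zero")
        for j in range(m):            # rename X[n+j] down to X[j]
            line = line.replace(f"X[{n + j}]", f"X[{j}]")
        Q.append(line)
    return Q
```

For instance, hardwiring the input bit \(1\) into the one-line program `Y[0] = NAND(X[0],X[1])` yields the final line `Y[0] = NAND(one,X[0])` after the prelude.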

Using Lemma 14.9, we obtain a program \(Q\) of \(m\) inputs such that \(Q(w)=Q'(x^*w)=P^*(x^*w)\) for every \(w\in \{0,1\}^m\). Since we know that \(F(x^*)=1\) if and only if there exists \(w\in \{0,1\}^m\) such that \(P^*(x^*w)=1\), this means that \(F(x^*)=1\) if and only if \(\ensuremath{\mathit{NANDSAT}}(Q)=1\), which is what we wanted to prove.

## The \(3\ensuremath{\mathit{NAND}}\) problem

The \(3\ensuremath{\mathit{NAND}}\) problem is defined as follows: the input is a logical formula \(\varphi\) on a set of variables \(z_0,\ldots,z_{r-1}\) which is an AND of constraints of the form \(z_i = \ensuremath{\mathit{NAND}}(z_j,z_k)\). For example, the following is a \(3\ensuremath{\mathit{NAND}}\) formula with \(5\) variables and \(3\) constraints:

\[ \left( z_3 = \ensuremath{\mathit{NAND}}(z_0,z_2) \right) \wedge \left( z_1 = \ensuremath{\mathit{NAND}}(z_0,z_2) \right) \wedge \left( z_4 = \ensuremath{\mathit{NAND}}(z_3,z_1) \right) \]

The output of \(3\ensuremath{\mathit{NAND}}\) on input \(\varphi\) is \(1\) if and only if there is an assignment to the variables of \(\varphi\) that makes it evaluate to “true” (that is, there is some assignment \(z \in \{0,1\}^r\) satisfying all of the constraints of \(\varphi\)). As usual, we can represent \(\varphi\) as a string, and so think of \(3\ensuremath{\mathit{NAND}}\) as a function mapping \(\{0,1\}^*\) to \(\{0,1\}\). We now prove that \(3\ensuremath{\mathit{NAND}}\) is \(\mathbf{NP}\) hard:

\(\ensuremath{\mathit{NANDSAT}} \leq_p 3\ensuremath{\mathit{NAND}}\).

To prove Lemma 14.10 we need to give a polynomial-time map from every NAND-CIRC program \(Q\) to a 3NAND formula \(\Psi\) such that there exists \(w\) with \(Q(w)=1\) if and only if there exists \(z\) satisfying \(\Psi\). For every line \(i\) of \(Q\), we define a corresponding variable \(z_i\) of \(\Psi\). If line \(i\) has the form `foo = NAND(bar,blah)` then we will add the clause \(z_i = \ensuremath{\mathit{NAND}}(z_j,z_k)\), where \(j\) and \(k\) are the last lines in which `bar` and `blah` were written to. We will also set variables corresponding to the input variables, as well as add a clause to ensure that the final output is \(1\). The resulting reduction can be implemented in about a dozen lines of Python.

To prove Lemma 14.10 we need to give a reduction from \(\ensuremath{\mathit{NANDSAT}}\) to \(3\ensuremath{\mathit{NAND}}\). Let \(Q\) be a NAND-CIRC program with \(n\) inputs, one output, and \(m\) lines. We can assume without loss of generality that \(Q\) contains the variables `one` and `zero` as usual.

We map \(Q\) to a \(3\ensuremath{\mathit{NAND}}\) formula \(\Psi\) as follows:

\(\Psi\) has \(m+n\) variables \(z_0,\ldots,z_{m+n-1}\).

The first \(n\) variables \(z_0,\ldots,z_{n-1}\) will correspond to the inputs of \(Q\). The next \(m\) variables \(z_n,\ldots,z_{n+m-1}\) will correspond to the \(m\) lines of \(Q\).

For every \(\ell\in \{n,n+1,\ldots,n+m-1 \}\), if the \((\ell-n)\)-th line of the program \(Q\) is `foo = NAND(bar,blah)` then we add to \(\Psi\) the constraint \(z_\ell = \ensuremath{\mathit{NAND}}(z_j,z_k)\), where \(j-n\) and \(k-n\) correspond to the last lines in which the variables `bar` and `blah` (respectively) were written to. If one or both of `bar` and `blah` was not written to before, then we use \(z_{\ell_0}\) instead of the corresponding value \(z_j\) or \(z_k\) in the constraint, where \(\ell_0-n\) is the line in which `zero` is assigned a value. If one or both of `bar` and `blah` is an input variable `X[i]` then we use \(z_i\) in the constraint.

Let \(\ell^*\) be the last line in which the output `y_0` is assigned a value. Then we add the constraint \(z_{\ell^*} = \ensuremath{\mathit{NAND}}(z_{\ell_0},z_{\ell_0})\), where \(\ell_0-n\) is, as above, the last line in which `zero` is assigned a value. Note that this is effectively the constraint \(z_{\ell^*}=\ensuremath{\mathit{NAND}}(0,0)=1\).
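The construction above can indeed be carried out in roughly a dozen lines of Python. The sketch below uses assumed representations (lines as strings with one assignment each, the output variable written `Y[0]`, and constraints returned as triples \((\ell,j,k)\) standing for \(z_\ell = \ensuremath{\mathit{NAND}}(z_j,z_k)\)), not any particular book figure:

```python
def nand_to_3nand(lines, n):
    """Map an n-input NAND-CIRC program (lines like "foo = NAND(bar,X[0])",
    with `zero` assigned before any unwritten variable is read, and with
    output variable Y[0]) to 3NAND constraints (ell, j, k)."""
    written = {}      # variable name -> z-index of its last assignment
    ell0 = None       # z-index of the line assigning `zero`
    constraints = []
    def z(name):      # z-variable standing for a right-hand-side name
        if name.startswith("X["):
            return int(name[2:-1])      # input X[i] corresponds to z_i
        return written.get(name, ell0)  # unwritten variables default to zero
    for i, line in enumerate(lines):
        target, rhs = [s.strip() for s in line.split("=", 1)]
        a, b = rhs[len("NAND("):-1].split(",")
        constraints.append((n + i, z(a.strip()), z(b.strip())))
        written[target] = n + i
        if target == "zero":
            ell0 = n + i
    # force the output to 1: z_{ell*} = NAND(z_{ell0}, z_{ell0}) = NAND(0,0)
    constraints.append((written["Y[0]"], ell0, ell0))
    return constraints
```

Running it on the four-line program `t = NAND(X[0],X[0])`, `one = NAND(X[0],t)`, `zero = NAND(one,one)`, `Y[0] = NAND(X[0],zero)` with \(n=1\) yields the constraints \((1,0,0), (2,0,1), (3,2,2), (4,0,3)\) plus the output constraint \((4,3,3)\).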

To complete the proof we need to show that there exists \(w\in \{0,1\}^n\) s.t. \(Q(w)=1\) if and only if there exists \(z\in \{0,1\}^{n+m}\) that satisfies all constraints in \(\Psi\). We now show both sides of this equivalence.

**Part I: Completeness.** Suppose that there is \(w\in \{0,1\}^n\) s.t. \(Q(w)=1\). Let \(z\in \{0,1\}^{n+m}\) be defined as follows: for \(i\in [n]\), \(z_i=w_i\), and for \(i\in \{n,n+1,\ldots,n+m-1\}\), \(z_i\) equals the value that is assigned in the \((i-n)\)-th line of \(Q\) when executed on \(w\). Then by construction \(z\) satisfies all of the constraints of \(\Psi\) (including the constraint that \(z_{\ell^*}=\ensuremath{\mathit{NAND}}(0,0)=1\), since \(Q(w)=1\)).

**Part II: Soundness.** Suppose that there exists \(z\in \{0,1\}^{n+m}\) satisfying \(\Psi\). Soundness will follow by showing that \(Q(z_0,\ldots,z_{n-1})=1\) (and hence in particular there exists \(w\in \{0,1\}^n\), namely \(w=z_0\cdots z_{n-1}\), such that \(Q(w)=1\)). To do this we will prove the following claim \((*)\): for every \(\ell \in [m]\), \(z_{\ell+n}\) equals the value assigned in the \(\ell\)-th step of the execution of the program \(Q\) on \(z_0,\ldots,z_{n-1}\). Note that because \(z\) satisfies the constraints of \(\Psi\), \((*)\) is sufficient to prove the soundness condition, since these constraints imply that the last value assigned to the variable `y_0` in the execution of \(Q\) on \(z_0\cdots z_{n-1}\) is equal to \(1\). To prove \((*)\), suppose, towards a contradiction, that it is false, and let \(\ell\) be the smallest number such that \(z_{\ell+n}\) is *not* equal to the value assigned in the \(\ell\)-th step of the execution of \(Q\) on \(z_0,\ldots,z_{n-1}\). But since \(z\) satisfies the constraints of \(\Psi\), we get that \(z_{\ell+n}=\ensuremath{\mathit{NAND}}(z_i,z_j)\) where (by the assumption above that \(\ell\) is *smallest* with this property) these values *do* correspond to the values last assigned to the variables on the righthand side of the assignment operator in the \(\ell\)-th line of the program. But this means that the value assigned in the \(\ell\)-th step is indeed simply the NAND of \(z_i\) and \(z_j\), contradicting our assumption on the choice of \(\ell\).

## From \(3\ensuremath{\mathit{NAND}}\) to \(3\ensuremath{\mathit{SAT}}\)

To conclude the proof of Theorem 14.6, we need to prove Lemma 14.11, which states that \(3\ensuremath{\mathit{NAND}} \leq_p 3\ensuremath{\mathit{SAT}}\):

\(3\ensuremath{\mathit{NAND}} \leq_p 3\ensuremath{\mathit{SAT}}\).

To prove Lemma 14.11 we need to map a 3NAND formula \(\varphi\) into a 3SAT formula \(\psi\) such that \(\varphi\) is satisfiable if and only if \(\psi\) is. The idea is that we can transform every NAND constraint of the form \(a=\ensuremath{\mathit{NAND}}(b,c)\) into the AND of ORs involving the variables \(a,b,c\) and their negations, where each of the ORs contains at most three terms. The construction is fairly straightforward, and the details are given below.

It is a good exercise for you to try to find a 3CNF formula \(\xi\) on three variables \(a,b,c\) such that \(\xi(a,b,c)\) is true if and only if \(a = \ensuremath{\mathit{NAND}}(b,c)\). Once you do so, try to see why this implies a reduction from \(3\ensuremath{\mathit{NAND}}\) to \(3\ensuremath{\mathit{SAT}}\), and hence completes the proof of Lemma 14.11.

The constraint \[ z_i = \ensuremath{\mathit{NAND}}(z_j,z_k) \;\;(14.6) \] is satisfied if and only if \(z_i=1\) when \((z_j,z_k) \neq (1,1)\) and \(z_i=0\) when \(z_j=z_k=1\). By going through all cases, we can verify that Equation 14.6 is equivalent to the constraint

\[ (\overline{z_i} \vee \overline{z_j} \vee\overline{z_k} ) \wedge (z_i \vee z_j ) \wedge (z_i \vee z_k) \;\;. \;\;(14.7) \]

Indeed if \(z_j=z_k=1\) then the first constraint of Equation 14.7 is only true if \(z_i=0\). On the other hand, if either of \(z_j\) or \(z_k\) equals \(0\) then unless \(z_i=1\) either the second or third constraints will fail. This means that, given any 3NAND formula \(\varphi\) over \(n\) variables \(z_0,\ldots,z_{n-1}\), we can obtain a 3SAT formula \(\psi\) over the same variables by replacing every \(3\ensuremath{\mathit{NAND}}\) constraint of \(\varphi\) with three \(3\ensuremath{\mathit{OR}}\) constraints as in Equation 14.7.^{8} Because of the equivalence of Equation 14.6 and Equation 14.7, the formula \(\psi\) satisfies that \(\psi(z_0,\ldots,z_{n-1})=\varphi(z_0,\ldots,z_{n-1})\) for every assignment \(z_0,\ldots,z_{n-1} \in \{0,1\}^n\) to the variables. In particular \(\psi\) is satisfiable if and only if \(\varphi\) is, thus completing the proof.
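Going "through all cases" is easy to mechanize: the following minimal Python sketch (the function names are ours) brute-forces all eight assignments and checks that the 3CNF constraint of Equation 14.7 holds exactly when \(z_i = \ensuremath{\mathit{NAND}}(z_j,z_k)\).

```python
from itertools import product

def nand(b, c):
    """NAND on bits represented as 0/1 integers."""
    return 1 - (b & c)

def xi(zi, zj, zk):
    """The 3CNF constraint of Equation 14.7:
    (NOT zi OR NOT zj OR NOT zk) AND (zi OR zj) AND (zi OR zk)."""
    return (((1 - zi) or (1 - zj) or (1 - zk))
            and (zi or zj) and (zi or zk))

# Equations 14.6 and 14.7 agree on all 8 assignments:
for zi, zj, zk in product([0, 1], repeat=3):
    assert bool(xi(zi, zj, zk)) == (zi == nand(zj, zk))
```

Since the check is exhaustive over \(\{0,1\}^3\), it establishes the equivalence of the two constraints for every assignment.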

## Wrapping up

We have shown that for every function \(F\) in \(\mathbf{NP}\), \(F \leq_p \ensuremath{\mathit{NANDSAT}} \leq_p 3\ensuremath{\mathit{NAND}} \leq_p 3\ensuremath{\mathit{SAT}}\), and so \(3\ensuremath{\mathit{SAT}}\) is \(\mathbf{NP}\)-hard. Since in Chapter 13 we saw that \(3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{QUADEQ}}\), \(3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{ISET}}\), \(3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{MAXCUT}}\) and \(3\ensuremath{\mathit{SAT}} \leq_p \ensuremath{\mathit{LONGPATH}}\), all these problems are \(\mathbf{NP}\)-hard as well. Finally, since all the aforementioned problems are in \(\mathbf{NP}\), they are all in fact \(\mathbf{NP}\)-complete and have equivalent complexity. There are thousands of other natural problems that are \(\mathbf{NP}\)-complete as well. Finding a polynomial-time algorithm for any one of them will imply a polynomial-time algorithm for all of them.

- Many of the problems for which we don’t know polynomial-time algorithms are \(\mathbf{NP}\)-complete, which means that finding a polynomial-time algorithm for one of them would imply a polynomial-time algorithm for *all* of them.

- It is conjectured that \(\mathbf{NP}\neq \mathbf{P}\), which means that we believe that polynomial-time algorithms for these problems are not merely *unknown* but are *nonexistent*.

- While an \(\mathbf{NP}\)-hardness result means, for example, that a full-fledged “textbook” solution to a problem such as MAX-CUT that is as clean and general as the algorithm for MIN-CUT probably does not exist, it does not mean that we need to give up whenever we see a MAX-CUT instance. Later in this course we will discuss several strategies to deal with \(\mathbf{NP}\)-hardness, including *average-case complexity* and *approximation algorithms*.

## Exercises

Most of the exercises have been written in the summer of 2018 and haven’t yet been fully debugged. While I would prefer people do not post online solutions to the exercises, I would greatly appreciate if you let me know of any bugs. You can do so by posting a GitHub issue about the exercise, and optionally complement this with an email to me with more details about the attempted solution.

Prove that if there is no \(n^{O(\log^2 n)}\) time algorithm for \(3\ensuremath{\mathit{SAT}}\) then there is some \(F\in \mathbf{NP}\) such that \(F \not\in \mathbf{P}\) and \(F\) is not \(\mathbf{NP}\) complete.^{9}

## Bibliographical notes

^{10}

Eugene Lawler’s quote on the “mystical power of twoness” was taken from the wonderful book “The Nature of Computation” by Moore and Mertens. See also this memorial essay on Lawler by Lenstra.

## Further explorations

Some topics related to this chapter that might be accessible to advanced students include: (to be completed)

## Acknowledgements

For example, as shown below, \(3\ensuremath{\mathit{SAT}} \in \mathbf{NP}\), but the function \(\overline{3\ensuremath{\mathit{SAT}}}\) that on input a 3CNF formula \(\varphi\) outputs \(1\) if and only if \(\varphi\) is *not* satisfiable is not known (nor believed) to be in \(\mathbf{NP}\).

One function \(F\) that is believed to lie outside \(\mathbf{NP}\) is the function \(\overline{3\ensuremath{\mathit{SAT}}}\) defined as \(\overline{3\ensuremath{\mathit{SAT}}}(\varphi)= 1 - 3\ensuremath{\mathit{SAT}}(\varphi)\) for every 3CNF formula \(\varphi\). The conjecture that \(\overline{3\ensuremath{\mathit{SAT}}}\not\in \mathbf{NP}\) is known as the “\(\mathbf{NP} \neq \mathbf{coNP}\)” conjecture. It implies the \(\mathbf{P} \neq \mathbf{NP}\) conjecture (can you see why?).

For some partial lists, see this Wikipedia page and this website.

The following web page keeps a catalog of these failed attempts. At the time of this writing, it lists about 110 papers claiming to resolve the question, of which about 60 claim to prove that \(\mathbf{P}=\mathbf{NP}\) and about 50 claim to prove that \(\mathbf{P} \neq \mathbf{NP}\).

TODO: maybe add examples of NP hard problems as a barrier to understanding - problems from economics, physics, etc.. that prevent having a closed-form solutions

TODO: maybe include knots

\(Q\) is a NAND-CIRC program and not a NAND-TM program, and hence it is only defined on inputs of some particular size \(n\). Evaluating \(Q\) on any input \(w\in \{0,1\}^n\) can be done in time polynomial in the number of lines of \(Q\).

The resulting formula will have some of the ORs involving only two variables. If we wanted to insist on each clause involving three distinct variables, we can always add a “dummy variable” \(z_{n+m}\), include it in all the ORs involving only two variables, and add a constraint requiring this dummy variable to be zero.

**Hint:** Use the function \(F\) that on input a formula \(\varphi\) and a string of the form \(1^t\), outputs \(1\) if and only if \(\varphi\) is satisfiable and \(t=|\varphi|^{\log|\varphi|}\).

TODO: credit surveys of Avi, Madhu


Compiled on 02/15/2019 10:37:00

Copyright 2019, Boaz Barak.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Produced using pandoc and panflute with templates derived from gitbook and bookdown.