

- Get comfortable with syntactic sugar, i.e., automatic translation of higher-level logic into NAND code.
- See more techniques for translating informal or higher-level algorithms into NAND.
- Learn the proof of a major result: every finite function can be computed by some NAND program.
- Start thinking *quantitatively* about the number of lines required for computation.

“[In 1951] I had a running compiler and nobody would touch it because, they carefully told me, computers could only do arithmetic; they could not do programs.”, Grace Murray Hopper, 1986.

“Syntactic sugar causes cancer of the semicolon.”, Alan Perlis, 1982.

The NAND programming language is pretty much as “bare bones” as programming languages come. After all, it only has a single operation. But it turns out we can implement some “added features” on top of it. That is, we can show how we can implement those features using the underlying mechanisms of the language.

Let’s start with a simple example. One of the most basic operations a programming language has is to assign the value of one variable into another. And yet in NAND, we cannot even do that, as we only allow assignments of the result of a NAND operation. Yet it is possible to “pretend” that we have such an assignment operation, by transforming an assignment of the form `foo = bar` into the two valid NAND lines `notbar = NAND(bar,bar)` and `foo = NAND(notbar,notbar)`. The reason is that for every \(a\in \{0,1\}\), \(NAND(a,a)=NOT(a\ AND\ a)=NOT(a)\), and so in these two lines `notbar` is assigned the negation of `bar`, and `foo` is assigned the negation of the negation of `bar`, which is simply the value of `bar`.

Thus in describing NAND programs we can (and will) allow ourselves to
use the variable assignment operation, with the understanding that in
actual programs we will replace every line of the first form with the
two lines of the second form. In programming language parlance this is
known as “syntactic sugar”, since we are not changing the definition of
the language, but merely introducing some convenient notational
shortcuts.
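As a sanity check, here is a small Python simulation of this two-line expansion (the `NAND` helper and the loop are my own, not the chapter's code):

```python
def NAND(a, b):
    """The only primitive operation of the NAND language."""
    return 1 - (a & b)

# Simulate the two-line expansion of a variable assignment:
for bar in [0, 1]:
    notbar = NAND(bar, bar)     # NAND(a, a) = NOT(a)
    foo = NAND(notbar, notbar)  # negating again recovers the original value
    assert foo == bar
```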

In this section, we will list some additional examples of “syntactic sugar” transformations. Going over all these examples can be somewhat tedious, but we do it for two reasons:

- To convince you that despite its seeming simplicity and limitations, the NAND programming language is actually quite powerful and can capture many of the fancy programming constructs, such as `if` statements and function definitions, that exist in more fashionable languages.
- So you can realize how lucky you are to be taking a theory of computation course and not a compilers course… `:)`

We can create variables `zero` and `one` that have the values \(0\) and \(1\) respectively by adding a few lines at the start of the program. Note that since for every \(x\in \{0,1\}\), \(NAND(x,\overline{x})=1\), the variable `one` will get the value \(1\) regardless of the value of \(x_0\), and the variable `zero` will get the value \(NAND(1,1)=0\).
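A Python sketch of this construction (the `NAND` helper and the intermediate variable names are my own):

```python
def NAND(a, b):
    return 1 - (a & b)

# Whatever the input bit x0 is, NAND(x0, NOT(x0)) = 1 and NAND(1, 1) = 0.
for x0 in [0, 1]:
    notx0 = NAND(x0, x0)
    one = NAND(x0, notx0)
    zero = NAND(one, one)
    assert one == 1 and zero == 0
```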

Another staple of almost any programming language is the ability to execute *functions*. However, we can achieve the same effect as (non-recursive) functions using the time-honored technique of “copy and paste”. That is, we can replace a call to a function on arguments `d`, `e`, `f` with a copy `function_code'` of the function’s code, obtained by replacing all occurrences of the parameters `a`, `b`, `c` with `d`, `e`, `f` respectively. When doing so we also need to ensure that the other variables appearing in `function_code'` do not interfere with existing variables; this can be done by replacing every instance of a variable `foo` with `upfoo`, where `up` is some unique prefix.

Function definitions allow us to express NAND programs much more cleanly and succinctly. For example, because we can compute AND, OR, and NOT using NANDs, we can compute the *Majority* function as well.

This is certainly much more pleasant than the full NAND alternative:
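As an illustration, here is a Python simulation of building Majority out of NAND (the helper names `NOT`, `AND`, `OR` are my own):

```python
from itertools import product

def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))

def MAJ(a, b, c):
    """Majority of three bits: 1 iff at least two inputs are 1."""
    return OR(AND(a, b), OR(AND(a, c), AND(b, c)))

# check against the arithmetic definition on all 8 inputs
for a, b, c in product([0, 1], repeat=3):
    assert MAJ(a, b, c) == (1 if a + b + c >= 2 else 0)
```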

Another sorely missing feature in NAND is a conditional statement such as the `if`/`then` constructs found in many programming languages. However, using functions, we can obtain an ersatz if/then construct. First we can compute the function \(IF:\{0,1\}^3 \rightarrow \{0,1\}\) such that \(IF(a,b,c)\) equals \(b\) if \(a=1\) and \(c\) if \(a=0\).

Try to see how you could compute the \(IF\) function using \(NAND\)’s. Once you do that, see how you can use it to emulate `if`/`then` types of constructs.

The \(IF\) function is also known as the *multiplexing* function, since its first input can be thought of as a switch that controls whether the output is connected to the second or the third input. We leave it as mux-ex to verify that this program does indeed compute this function.
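One possible way (a sketch of mine) to compute \(IF\) using four NAND operations, simulated in Python:

```python
from itertools import product

def NAND(a, b): return 1 - (a & b)

def IF(a, b, c):
    """Returns b if a == 1 and c if a == 0, using only NAND."""
    nota = NAND(a, a)
    u = NAND(b, a)     # equals NOT(b) when a = 1, and 1 otherwise
    v = NAND(c, nota)  # equals NOT(c) when a = 0, and 1 otherwise
    return NAND(u, v)

# check on all 8 inputs
for a, b, c in product([0, 1], repeat=3):
    assert IF(a, b, c) == (b if a == 1 else c)
```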

Using the \(IF\) function, we can implement conditionals in NAND: To achieve something like

we can use code of the following form

or even

using an extension of the \(IF\) function to more inputs and outputs.

We can use “copy paste” to implement a bounded variant of *loops*, as long as we only need to repeat the loop a fixed number of times. For example, we can use code such as:

as shorthand for

One can also consider fancier versions, including inner loops and so on.
The crucial point is that (unlike most programming languages) we do not
allow the number of times the loop is executed to depend on the input,
and so it is always possible to “expand out” the loop by simply copying
the code the requisite number of times. We will use standard Python syntax such as `range(n)` for the sets we can range over.

Using the above features, we can write the integer addition function as follows:

where `zero` is the constant zero function, and `MAJ` and `XOR` correspond to the majority and XOR functions respectively. This “sugared” version is certainly easier to read than even the two-bit NAND addition program (obtained by restricting the above to the case \(n=2\)):

Which corresponds to the following circuit:
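A Python simulation of the grade-school algorithm (helper names are my own), with the sum bit computed by XOR and the carry by MAJ, both built from NAND:

```python
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
def MAJ(a, b, c): return OR(AND(a, b), OR(AND(a, c), AND(b, c)))

def ADD(x, xp):
    """Grade-school addition of two little-endian n-bit numbers."""
    n = len(x)
    carry, out = 0, []
    for i in range(n):
        out.append(XOR(XOR(x[i], xp[i]), carry))  # sum bit
        carry = MAJ(x[i], xp[i], carry)           # carry bit
    return out + [carry]                          # n+1 output bits

# 3 + 5 = 8, in little-endian binary with n = 3 bits per summand
assert ADD([1, 1, 0], [1, 0, 1]) == [0, 0, 0, 1]
```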

We can go even beyond this, and add more “syntactic sugar” to NAND. The
key observation is that all of these are *not* extra features to NAND,
but only ways that make it easier for us to write programs.

As stated, the NAND programming language only allows for “one-dimensional arrays”, in the sense that we can use variables such as `Foo[7]` or `Foo[29]` but not `Foo[5][15]`. However, we can easily embed two-dimensional arrays in one-dimensional ones using a one-to-one function \(PAIR:\N^2 \rightarrow \N\). (For example, we can use \(PAIR(x,y)=2^x3^y\), but there are also more efficient embeddings; see embedtuples-ex.) Hence we can replace any variable of the form `Foo[`\(i\)`][`\(j\)`]` with `Foo[`\(PAIR(i,j)\)`]`, and similarly for three-dimensional arrays.
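A quick Python check (my own) that this embedding is one-to-one on a small range:

```python
def PAIR(x, y):
    # the simple (inefficient) one-to-one embedding 2^x * 3^y
    return (2 ** x) * (3 ** y)

# one-to-one on a small range: no two pairs collide
# (this follows from the uniqueness of prime factorization)
seen = {}
for x in range(10):
    for y in range(10):
        assert PAIR(x, y) not in seen
        seen[PAIR(x, y)] = (x, y)
```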

While the basic variables in NAND are Boolean (they only hold \(0\) or \(1\)), we can easily extend this to other objects using encodings. For example, we can encode the alphabet \(\{\)`a`,`b`,`c`,`d`,`e`,`f`\(\}\) using three bits as \(000,001,010,011,100,101\). Hence, given such an encoding, code that assigns a letter to a variable is shorthand for a program that assigns the corresponding three bits. (Here we use the constant functions `zero` and `one`, which we can apply to any variable.) Using our notion of multi-indexed arrays, we can also use code that manipulates arrays of letters as shorthand for code that manipulates the underlying arrays of bits, which can then in turn be mapped to standard NAND code using a one-to-one embedding \(pair: \N \times \N \rightarrow \N\) as above.
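For instance, a hypothetical Python encoder for this alphabet (the helper names are my own):

```python
# map 'a'..'f' to the three-bit strings 000..101
ENCODING = {c: format(i, "03b") for i, c in enumerate("abcdef")}

def encode(s):
    """Encode a string over {a,...,f} as a bit string, 3 bits per letter."""
    return "".join(ENCODING[c] for c in s)

assert ENCODING["a"] == "000" and ENCODING["f"] == "101"
assert encode("fab") == "101000001"
```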

We can also handle non-finite alphabets, such as integers, by using some prefix-free encoding and encoding the integer in an array. For example, to store non-negative integers, we can use the convention that `01` stands for \(0\), `11` stands for \(1\), and `00` is the end marker. To store integers that could potentially be negative we can use the convention that `10` in the first coordinate stands for the negative sign. Code that manipulates integer-valued variables will then be shorthand for code that manipulates the arrays holding their encodings. Using multidimensional arrays, we can also use arrays of integers, and hence replace code operating on them with the equivalent NAND expressions.
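A Python sketch of this convention for non-negative integers (the encoder/decoder names are my own assumptions):

```python
def encode_int(n):
    """Prefix-free encoding of a non-negative integer, using the convention
    that the pair 01 stands for bit 0, 11 for bit 1, and 00 is the end marker."""
    bits = bin(n)[2:]  # binary representation, most significant bit first
    return "".join("11" if b == "1" else "01" for b in bits) + "00"

def decode_int(s):
    bits = ""
    for i in range(0, len(s), 2):
        pair = s[i:i + 2]
        if pair == "00":   # end marker reached
            break
        bits += "1" if pair == "11" else "0"
    return int(bits, 2)

# the encoding round-trips
for n in [0, 1, 5, 12]:
    assert decode_int(encode_int(n)) == n
```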

We have seen in addexample how to use the grade-school algorithm to show that NAND programs can add \(n\)-bit numbers for every \(n\). By following through this example, we can obtain the following result

For every \(n\), let \(ADD_n:\{0,1\}^{2n}\rightarrow \{0,1\}^{n+1}\) be the function that, given \(x,x'\in \{0,1\}^n\) computes the representation of the sum of the numbers that \(x\) and \(x'\) represent. Then there is a NAND program that computes the function \(ADD_n\). Moreover, the number of lines in this program is smaller than \(100n\).

We omit the full formal proof of addition-thm, but it can be obtained by going through the code in addexample and:

- Proving that for every \(n\), this code does indeed compute the addition of two \(n\) bit numbers.
- Proving that for every \(n\), if we expand the code out to its “unsweetened” version (i.e., to a standard NAND program), then the number of lines will be at most \(100n\).

See addnumoflinesfig for a figure illustrating the number of lines our program has as a function of \(n\). It turns out that this implementation of \(ADD_n\) uses about \(13n\) lines.

Once we have addition, we can use the grade-school algorithm to obtain multiplication as well, thus obtaining the following theorem:

For every \(n\), let \(MULT_n:\{0,1\}^{2n}\rightarrow \{0,1\}^{2n}\) be the function that, given \(x,x'\in \{0,1\}^n\) computes the representation of the product of the numbers that \(x\) and \(x'\) represent. Then there is a NAND program that computes the function \(MULT_n\). Moreover, the number of lines in this program is smaller than \(1000n^2\).

We omit the proof, though in multiplication-ex we ask you to supply a “constructive proof” in the form of a program (in your favorite programming language) that on input a number \(n\), outputs the code of a NAND program of at most \(1000n^2\) lines that computes the \(MULT_n\) function. In fact, we can use Karatsuba’s algorithm to show that there is a NAND program of \(O(n^{\log_2 3})\) lines to compute \(MULT_n\) (and one can even get further asymptotic improvements using the newer algorithms).
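We can sketch Karatsuba's idea in Python (this is my own illustration over decimal digits, not the NAND-program construction itself): the savings come from computing three recursive products instead of four.

```python
def karatsuba(x, y):
    """Multiply integers using ~n^{log2 3} single-digit multiplications."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    xh, xl = divmod(x, 10 ** m)
    yh, yl = divmod(y, 10 ** m)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b   # the trick: one product saved
    return a * 10 ** (2 * m) + c * 10 ** m + b

assert karatsuba(1234, 5678) == 1234 * 5678
```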

We have seen that NAND programs can add and multiply numbers. But can they compute other types of functions that have nothing to do with arithmetic? Here is one example:

For every \(k\), the *lookup* function
\(LOOKUP_k: \{0,1\}^{2^k+k}\rightarrow \{0,1\}\) is defined as follows:
For every \(x\in\{0,1\}^{2^k}\) and \(i\in \{0,1\}^k\), \[
LOOKUP_k(x,i)=x_i
\] where \(x_i\) denotes the \(i^{th}\) entry of \(x\), using the binary
representation to identify \(i\) with a number in \(\{0,\ldots,2^k - 1 \}\).

The function \(LOOKUP_1: \{0,1\}^3 \rightarrow \{0,1\}\) maps \((x_0,x_1,i) \in \{0,1\}^3\) to \(x_i\). It is actually the same as the \(IF\)/\(MUX\) function we have seen above, that has a 4 line NAND program. However, can we compute higher levels of \(LOOKUP\)? This turns out to be the case:

For every \(k\), there is a NAND program that computes the function \(LOOKUP_k: \{0,1\}^{2^k+k}\rightarrow \{0,1\}\). Moreover, the number of lines in this program is at most \(4\cdot 2^k\).

We now prove lookup-thm. We will do so by induction. That is, we show how to use a NAND program for computing \(LOOKUP_k\) to compute \(LOOKUP_{k+1}\). Let us first see how we do this for \(LOOKUP_2\). Given input \(x=(x_0,x_1,x_2,x_3)\) and an index \(i=(i_0,i_1)\), if the most significant bit \(i_1\) of the index is \(0\) then \(LOOKUP_2(x,i)\) will equal \(x_0\) if \(i_0=0\) and equal \(x_1\) if \(i_0=1\). Similarly, if the most significant bit \(i_1\) is \(1\) then \(LOOKUP_2(x,i)\) will equal \(x_2\) if \(i_0=0\) and will equal \(x_3\) if \(i_0=1\). Another way to say this is that \[ LOOKUP_2(x_0,x_1,x_2,x_3,i_0,i_1) = LOOKUP_1(LOOKUP_1(x_0,x_1,i_0),LOOKUP_1(x_2,x_3,i_0),i_1) \] That is, we can compute \(LOOKUP_2\) using three invocations of \(LOOKUP_1\). The “pseudocode” for this program will be

(Note that since we call this function with \((x_0,x_1,x_2,x_3,i_0,i_1)\), the inputs `x_4` and `x_5` correspond to \(i_0\) and \(i_1\).) We can obtain an actual “sugar-free” NAND program of at most \(12\) lines by replacing the calls to `LOOKUP_1` with appropriate copies of the program above.
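We can verify this identity by brute force in Python (a sketch with my own helper names):

```python
from itertools import product

def LOOKUP_1(x0, x1, i):
    return x1 if i == 1 else x0

def LOOKUP_2(x0, x1, x2, x3, i0, i1):
    # the identity from the text: select within each half, then between halves
    return LOOKUP_1(LOOKUP_1(x0, x1, i0), LOOKUP_1(x2, x3, i0), i1)

# check against direct indexing on all 64 inputs
for x in product([0, 1], repeat=4):
    for i0, i1 in product([0, 1], repeat=2):
        index = i0 + 2 * i1   # i1 is the most significant bit
        assert LOOKUP_2(*x, i0, i1) == x[index]
```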

We can generalize this to compute \(LOOKUP_3\) using two invocations of \(LOOKUP_2\) and one invocation of \(LOOKUP_1\). That is, given input \(x=(x_0,\ldots,x_7)\) and \(i=(i_0,i_1,i_2)\) for \(LOOKUP_3\), if the most significant bit of the index \(i_2\) is \(0\), then the output of \(LOOKUP_3\) will equal \(LOOKUP_2(x_0,x_1,x_2,x_3,i_0,i_1)\), while if this index \(i_2\) is \(1\) then the output will be \(LOOKUP_2(x_4,x_5,x_6,x_7,i_0,i_1)\), meaning that the following pseudocode can compute \(LOOKUP_3\),

where again we can replace the calls to `LOOKUP_2` and `LOOKUP_1` by invocations of the process above.

Formally, we can prove the following lemma:

For every \(k \geq 2\), \(LOOKUP_k(x_0,\ldots,x_{2^k-1},i_0,\ldots,i_{k-1})\) is equal to \[ LOOKUP_1(LOOKUP_{k-1}(x_0,\ldots,x_{2^{k-1}-1},i_0,\ldots,i_{k-2}), LOOKUP_{k-1}(x_{2^{k-1}},\ldots,x_{2^k-1},i_0,\ldots,i_{k-2}),i_{k-1}) \]

If the most significant bit \(i_{k-1}\) of \(i\) is zero, then the index \(i\) is in \(\{0,\ldots,2^{k-1}-1\}\) and hence we can perform the lookup on the “first half” of \(x\): the result of \(LOOKUP_k(x,i)\) will be the same as \(a=LOOKUP_{k-1}(x_0,\ldots,x_{2^{k-1}-1},i_0,\ldots,i_{k-2})\). On the other hand, if this most significant bit \(i_{k-1}\) is equal to \(1\), then the index is in \(\{2^{k-1},\ldots,2^k-1\}\), in which case the result of \(LOOKUP_k(x,i)\) is the same as \(b=LOOKUP_{k-1}(x_{2^{k-1}},\ldots,x_{2^k-1},i_0,\ldots,i_{k-2})\). Thus we can compute \(LOOKUP_k(x,i)\) by first computing \(a\) and \(b\) and then outputting \(LOOKUP_1(a,b,i_{k-1})\).

lookup-rec-lem directly implies lookup-thm. We prove by induction on \(k\) that there is a NAND program of at most \(4\cdot 2^k\) lines for \(LOOKUP_k\). For \(k=1\) this follows by the four line program for \(LOOKUP_1\) we’ve seen before. For \(k>1\), we use the following pseudocode

In Python, this can be described as follows
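The following is my own sketch of this recursion (the exact input representation, with the least significant index bit first, is an assumption):

```python
def LOOKUP(X, i):
    """Recursive lookup: X is a list of 2^k bits, i is a k-bit index given
    least significant bit first, most significant bit last."""
    k = len(i)
    if k == 1:
        return X[1] if i[0] == 1 else X[0]
    half = 2 ** (k - 1)
    a = LOOKUP(X[:half], i[:-1])   # lookup in the first half
    b = LOOKUP(X[half:], i[:-1])   # lookup in the second half
    return b if i[-1] == 1 else a  # the most significant bit selects the half

# check against direct indexing
X = [0, 1, 0, 0, 1, 1, 1, 1]
for index in range(8):
    i = [(index >> j) & 1 for j in range(3)]  # index bits, lsb first
    assert LOOKUP(X, i) == X[index]
```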

If we let \(L(k)\) be the number of lines required for \(LOOKUP_k\), then the above shows that \[ L(k) \leq 2L(k-1)+4 \;. \label{induction-lookup} \] We will prove by induction that \(L(k) \leq 4(2^k-1)\). This is true for \(k=1\) by our construction. For \(k>1\), using the inductive hypothesis and \eqref{induction-lookup}, we get that \[ L(k) \leq 2\cdot 4 \cdot (2^{k-1}-1)+4= 4\cdot 2^k - 8 + 4 = 4(2^k-1) \] completing the proof of lookup-thm. (See lookuplinesfig for a plot of the actual number of lines in our implementation of \(LOOKUP_k\).)

At this point we know the following facts about NAND programs:

- They can compute at least some non-trivial functions.
- Coming up with NAND programs for various functions is a very tedious task.

Thus I would not blame the reader if they were not particularly looking
forward to a long sequence of examples of functions that can be computed
by NAND programs. However, it turns out we are not going to need this,
as we can show in one fell swoop that NAND programs can compute *every*
finite function:

For every \(n,m\) and function \(F: \{0,1\}^n\rightarrow \{0,1\}^m\), there is a NAND program that computes the function \(F\). Moreover, there is such a program with at most \(O(m 2^n)\) lines.

The implicit constant in the \(O(\cdot)\) notation can be shown to be at most \(10\). We also note that the bound of NAND-univ-thm can be improved to \(O(m 2^n/n)\), see tight-upper-bound.

To prove NAND-univ-thm, we need to give a NAND program for
*every* possible function. We will restrict our attention to the case of
Boolean functions (i.e., \(m=1\)). In mult-bit-ex you will show
how to extend the proof for all values of \(m\). A function
\(F: \{0,1\}^n\rightarrow \{0,1\}\) can be specified by a table of its
values for each one of the \(2^n\) inputs. For example, the table below
describes one particular function
\(G: \{0,1\}^4 \rightarrow \{0,1\}\):

| Input (\(x\)) | Output (\(G(x)\)) |
|---|---|
| \(0000\) | 1 |
| \(1000\) | 1 |
| \(0100\) | 0 |
| \(1100\) | 0 |
| \(0010\) | 1 |
| \(1010\) | 0 |
| \(0110\) | 0 |
| \(1110\) | 1 |
| \(0001\) | 0 |
| \(1001\) | 0 |
| \(0101\) | 0 |
| \(1101\) | 0 |
| \(0011\) | 1 |
| \(1011\) | 1 |
| \(0111\) | 1 |
| \(1111\) | 1 |

We can see that for every \(x\in \{0,1\}^4\), \(G(x)=LOOKUP_4(1100100100001111,x)\). Therefore the following is NAND “pseudocode” to compute \(G\):
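As an illustration (my own Python, not the chapter's pseudocode), we can check that the string `1100100100001111` really tabulates \(G\):

```python
TABLE = "1100100100001111"  # G's outputs in the order of the table above

def G(x0, x1, x2, x3):
    # identify (x0,...,x3) with a number, least significant bit first,
    # matching the row ordering of the truth table in the text
    index = x0 + 2 * x1 + 4 * x2 + 8 * x3
    return int(TABLE[index])

assert G(0, 0, 0, 0) == 1   # first row of the table
assert G(1, 1, 1, 1) == 1   # last row of the table
```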

Recall that we can translate this pseudocode into an actual NAND program by adding three lines to define variables `zero` and `one` that are initialized to \(0\) and \(1\) respectively, and then replacing a statement such as `Gxxx = 0` with `Gxxx = NAND(one,one)` and a statement such as `Gxxx = 1` with `Gxxx = NAND(zero,zero)`. The call to `LOOKUP` will be replaced by the NAND program that computes \(LOOKUP_4\), except that we will replace the variables `X[16]`,\(\ldots\),`X[19]` in this program with `X[0]`,\(\ldots\),`X[3]` and the variables `X[0]`,\(\ldots\),`X[15]` with `G0000`, \(\ldots\), `G1111`.

There was nothing about the above reasoning that was particular to this program. Given any function \(F: \{0,1\}^n \rightarrow \{0,1\}\), we can write a NAND program that does the following:

- Initialize \(2^n\) variables `F00...0` through `F11...1` so that for every \(z\in\{0,1\}^n\), the variable corresponding to \(z\) is assigned the value \(F(z)\).
- Compute \(LOOKUP_n\) on the \(2^n\) variables initialized in the previous step, with the index being the input variables `X[0]`,…,`X[`\(n-1\)`]`. That is, just like in the pseudocode for `G` above, we use `Y[0] = LOOKUP(F00..00,F10...00,...,F11..1,X[0],..,X[`\(n-1\)`])`.

The total number of lines in the program will be \(2^n\) plus the \(4\cdot 2^n\) lines that we pay for computing \(LOOKUP_n\). This completes the proof of NAND-univ-thm.
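As a hedged sketch (the names are mine), the two-step construction can be mirrored in Python: tabulate \(F\), then answer queries with the recursive lookup:

```python
from itertools import product

def compute_via_lookup(F, n):
    """Return a function computing F: {0,1}^n -> {0,1} via the universal
    construction: tabulate F, then LOOKUP into the table."""
    table = []
    for index in range(2 ** n):
        z = [(index >> j) & 1 for j in range(n)]   # input bits, lsb first
        table.append(F(z))

    def lookup(X, i):
        if len(i) == 1:
            return X[i[0]]
        half = len(X) // 2
        # the most significant index bit selects the half of the table
        return lookup(X[half:] if i[-1] else X[:half], i[:-1])

    return lambda x: lookup(table, x)

# check on an arbitrary function, e.g. parity of 3 bits
parity = lambda z: sum(z) % 2
computed = compute_via_lookup(parity, 3)
for x in product([0, 1], repeat=3):
    assert computed(list(x)) == parity(list(x))
```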

The NAND programming language website allows you to construct a NAND program for an arbitrary function.

While NAND-univ-thm seems striking at first, in retrospect, it
is perhaps not that surprising that every finite function can be
computed with a NAND program. After all, a finite function
\(F: \{0,1\}^n \rightarrow \{0,1\}^m\) can be represented by simply the
list of its outputs for each one of the \(2^n\) input values. So it makes
sense that we could write a NAND program of similar size to compute it.
What is more interesting is that *some* functions, such as addition and multiplication, have a much more efficient representation: one that requires only \(O(n^2)\) or even fewer lines.

By being a little more careful, we can improve the bound of NAND-univ-thm and show that every function \(F:\{0,1\}^n \rightarrow \{0,1\}^m\) can be computed by a NAND program of at most \(O(m 2^n/n)\) lines. As before, it is enough to prove the case that \(m=1\). The idea is to use the technique known as *memoization*. Let \(k= \log(n-2\log n)\) (the reasoning behind this choice will become clear later on). For every \(a \in \{0,1\}^{n-k}\) we define \(F_a:\{0,1\}^k \rightarrow \{0,1\}\) to be the function that maps \(w_0,\ldots,w_{k-1}\) to \(F(a_0,\ldots,a_{n-k-1},w_0,\ldots,w_{k-1})\). On input \(x=x_0,\ldots,x_{n-1}\), we can compute \(F(x)\) as follows: First we compute a \(2^{n-k}\) long string \(P\) whose \(a^{th}\) entry (identifying \(\{0,1\}^{n-k}\) with \([2^{n-k}]\)) equals \(F_a(x_{n-k},\ldots,x_{n-1})\). One can verify that \(F(x)=LOOKUP_{n-k}(P,x_0,\ldots,x_{n-k-1})\). Since we can compute \(LOOKUP_{n-k}\) using \(O(2^{n-k})\) lines, if we can compute the string \(P\) (i.e., compute the variables `P_`\(0\), …, `P_`\(2^{n-k}-1\)) using \(T\) lines, then we can compute \(F\) in \(O(2^{n-k})+T\) lines. The trivial way to compute the string \(P\) would be to use \(O(2^k)\) lines to compute for every \(a\) the map \(x_0,\ldots,x_{k-1} \mapsto F_a(x_0,\ldots,x_{k-1})\) as in the proof of NAND-univ-thm. Since there are \(2^{n-k}\) values of \(a\), that would give a total cost of \(O(2^{n-k} \cdot 2^k) = O(2^n)\), which would not improve at all on the bound of NAND-univ-thm. However, a more careful observation shows that we are making some *redundant* computations. After all, there are only \(2^{2^k}\) distinct functions mapping \(k\) bits to one bit. If \(a\) and \(a'\) satisfy \(F_a = F_{a'}\), then we don’t need to spend \(2^k\) lines computing both \(F_a(x)\) and \(F_{a'}(x)\): we can compute the variable `P_`\(a\) just once, and then copy `P_`\(a\) to `P_`\(a'\) using \(O(1)\) lines. Since there are at most \(2^{2^k}\) distinct functions \(F_a\), we can bound the total cost of computing \(P\) by \(O(2^{2^k}2^k)+O(2^{n-k})\). Now it just becomes a matter of calculation. By our choice of \(k\), \(2^k = n-2\log n\) and hence \(2^{2^k}=\tfrac{2^n}{n^2}\). Since \(n/2 \leq 2^k \leq n\), we can bound the total cost of computing \(F(x)\) (including also the additional \(O(2^{n-k})\) cost of computing \(LOOKUP_{n-k}\)) by \(O(\tfrac{2^n}{n^2}\cdot n)+O(2^n/n)\), which is what we wanted to prove.
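The key counting observation, that there are at most \(2^{2^k}\) distinct subfunctions \(F_a\), can be checked in Python (a sketch with my own names and an arbitrary test function):

```python
from itertools import product

def subfunction_table(F, k, a):
    """Truth table of F_a: w -> F(a, w) for the fixed prefix a."""
    return tuple(F(list(a) + list(w)) for w in product([0, 1], repeat=k))

n, k = 6, 2
F = lambda z: (z[0] & z[5]) ^ z[3]   # an arbitrary test function on 6 bits
tables = {subfunction_table(F, k, a) for a in product([0, 1], repeat=n - k)}

# however many prefixes there are (2^(n-k) = 16 here), the number of
# *distinct* subfunctions is at most 2^(2^k) = 16, and often far fewer
assert len(tables) <= 2 ** (2 ** k)
```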

For every \(n,m,T \in \N\), we denote by \(SIZE_{n,m}(T)\), the set of all functions from \(\{0,1\}^n\) to \(\{0,1\}^m\) that can be computed by NAND programs of at most \(T\) lines. NAND-univ-thm shows that \(SIZE_{n,m}(4 m 2^n)\) is the set of all functions from \(\{0,1\}^n\) to \(\{0,1\}^m\). The results we’ve seen before can be phrased as showing that \(ADD_n \in SIZE_{2n,n+1}(100 n)\) and \(MULT_n \in SIZE_{2n,2n}(10000 n^{\log_2 3})\). See sizeclassesfig.

Note that \(SIZE_{n,m}(T)\) does *not* correspond to a set of programs!
Rather, it is a set of *functions*. This distinction between *programs*
and *functions* will be crucial for us in this course. You should always
remember that while a program *computes* a function, it is not *equal*
to a function. In particular, as we’ve seen, there can be more than one
program to compute the same function.

A NAND program \(P\) can only compute a function with a certain number \(n\) of inputs and a certain number \(m\) of outputs. Hence, for example, there is no single NAND program that can compute the increment function \(INC:\{0,1\}^* \rightarrow \{0,1\}^*\) that maps a string \(x\) (which we identify with a number via the binary representation) to the string that represents \(x+1\). Rather, for every \(n>0\), there is a NAND program \(P_n\) that computes the restriction \(INC_n\) of the function \(INC\) to inputs of length \(n\). Since it can be shown that for every \(n>0\) there exists such a program \(P_n\) with at most \(10n\) lines, \(INC_n \in SIZE(10n)\) for every \(n>0\).

If \(T:\N \rightarrow \N\) and \(F:\{0,1\}^* \rightarrow \{0,1\}^*\), we will sometimes slightly abuse notation and write \(F \in SIZE(T(n))\) to indicate that for every \(n\) the restriction \(F_n\) of \(F\) to inputs in \(\{0,1\}^n\) is in \(SIZE(T(n))\). Hence we can write \(INC \in SIZE(10n)\). We will come back to this issue of finite vs infinite functions later in this course.

In this exercise we prove a certain “closure property” of the class \(SIZE(T(n))\). That is, we show that if \(f\) is in this class then (up to some small additive term) so is the complement of \(f\), which is the function \(g(x)=1-f(x)\).

Prove that there is a constant \(c\) such that for every \(f:\{0,1\}^n \rightarrow \{0,1\}\) and \(s\in \N\), if \(f \in SIZE(s)\) then \(1-f \in SIZE(s+c)\).

If \(f\in SIZE(s)\) then there is an \(s\)-line program \(P\) that computes \(f\). We can rename the variable `Y[0]` in \(P\) to a unique variable `unique_temp` and add the line `Y[0] = NAND(unique_temp,unique_temp)` at the very end to obtain a program \(P'\) that computes \(1-f\).

- We can define the notion of computing a function via a simplified “programming language”, where computing a function \(F\) in \(T\) steps would correspond to having a \(T\)-line NAND program that computes \(F\).
- While the NAND programming only has one operation, other operations such as functions and conditional execution can be implemented using it.
- Every function \(F:\{0,1\}^n \rightarrow \{0,1\}^m\) can be computed by a NAND program of at most \(O(m 2^n)\) lines (and in fact at most \(O(m 2^n/n)\) lines).
- Sometimes (or maybe always?) we can translate an *efficient* algorithm to compute \(F\) into a NAND program that computes \(F\) with a number of lines comparable to the number of steps in this algorithm.

Most of the exercises have been written in the summer of 2018 and haven’t yet been fully debugged. While I would prefer people do not post online solutions to the exercises, I would greatly appreciate if you let me know of any bugs. You can do so by posting a GitHub issue about the exercise, and optionally complement this with an email to me with more details about the attempted solution.

- Prove that the map \(F(x,y)=2^x3^y\) is a one-to-one map from \(\N^2\)
to \(\N\).
- Show that there is a one-to-one map \(F:\N^2 \rightarrow \N\) such
that for every \(x,y\), \(F(x,y) \leq 100\cdot \max\{x,y\}^2+100\).
- For every \(k\), show that there is a one-to-one map \(F:\N^k \rightarrow \N\) such that for every \(x_0,\ldots,x_{k-1} \in \N\), \(F(x_0,\ldots,x_{k-1}) \leq 100 \cdot (x_0+x_1+\ldots+x_{k-1}+100k)^k\).

Prove that the NAND program below computes the function \(MUX\) (or \(LOOKUP_1\)) where \(MUX(a,b,c)\) equals \(a\) if \(c=0\) and equals \(b\) if \(c=1\):

Give a NAND program of at most 6 lines to compute \(MAJ:\{0,1\}^3 \rightarrow \{0,1\}\) where \(MAJ(a,b,c) = 1\) iff \(a+b+c \geq 2\).

In this exercise we will show that even though the NAND programming
language does not have an `if .. then .. else ..` statement, we can
still implement it. Suppose that there is an \(s\)-line NAND program to
still implement it. Suppose that there is an \(s\)-line NAND program to
compute \(F:\{0,1\}^n \rightarrow \{0,1\}\) and an \(s'\)-line NAND program
to compute \(F':\{0,1\}^n \rightarrow \{0,1\}\). Prove that there is a
program of at most \(s+s'+10\) lines to compute the function
\(G:\{0,1\}^{n+1} \rightarrow \{0,1\}\) where \(G(x_0,\ldots,x_{n-1},x_n)\)
equals \(F(x_0,\ldots,x_{n-1})\) if \(x_n=0\) and equals
\(F'(x_0,\ldots,x_{n-1})\) otherwise.

Write a program using your favorite programming language that on input an integer \(n\), outputs a NAND program that computes \(ADD_n\). Can you ensure that the program it outputs for \(ADD_n\) has fewer than \(10n\) lines?

Write a program using your favorite programming language that on input an integer \(n\), outputs a NAND program that computes \(MULT_n\). Can you ensure that the program it outputs for \(MULT_n\) has fewer than \(1000\cdot n^2\) lines?

Write a program using your favorite programming language that on input an integer \(n\), outputs a NAND program that computes \(MULT_n\) and has at most \(10000 n^{1.9}\) lines. **Hint:** Use Karatsuba’s algorithm.

Prove that

a. If there is an \(s\)-line NAND program to compute
\(F:\{0,1\}^n \rightarrow \{0,1\}\) and an \(s'\)-line NAND program to
compute \(F':\{0,1\}^n \rightarrow \{0,1\}\) then there is an \(s+s'\)-line
program to compute the function \(G:\{0,1\}^n \rightarrow \{0,1\}^2\) such
that \(G(x)=(F(x),F'(x))\).

b. For every function \(F:\{0,1\}^n \rightarrow \{0,1\}^m\), there is a
NAND program of at most \(10m\cdot 2^n\) lines that computes \(F\).

Some topics related to this chapter that might be accessible to advanced students include:

(to be completed)

Copyright 2018, Boaz Barak.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

HTML version is produced using the Distill template, Copyright 2018, The Distill Template Authors.
