Details Left To The Reader…
The buck stops here—on a blog, that is
Stasys Jukna has written a comprehensive book on Boolean circuit complexity, called Boolean Function Complexity: Advances and Frontiers. It includes a discussion of Mike Fischer’s Theorem on negations, which we recently re-gifted.
Today Ken and I would like to fill in some missing details to Mike’s famous result.
Recall the theorem says:
Theorem 1 Let ${b(n) = \lceil \log_2(n+1) \rceil}$. Then if a Boolean function ${f}$ can be computed by a circuit of size ${s}$ over the basis ${\{\wedge, \vee, \neg\}}$, then it can be computed by a circuit of size ${2s + O(n^2 \log^2 n)}$ over the same basis and only using at most ${b(n)}$ negations.
There are a number of places where the “proof” of this theorem is given. In Fischer’s original paper, in more recent improvements, and in Jukna’s book. All say essentially: “we leave to the reader the remaining details.” In the book the details are sketched and left as an exercise, with a long hint. We have discussed this before—see here.
Proving and Using
There is nothing wrong with leaving out details, but when everyone seems to do that it can cause a problem. I looked at the proof sketches and thought for a bit—okay, more like a byte—that there might even be an error. I was worried there was a mistake in the proof. There is none. The proof is just fine. I was wrong.
But there is some reason to be concerned. For all the beauty of Fischer’s Theorem, it does not seem to have been used to prove something else. I would argue that the real way we get confidence in mathematical results is not by checking their proofs, but by a social process. This process can be improving the main result, which has been done in Fischer’s case. However, these improvements mostly affect other parts of the proof, and still leave some details unexplained.
The best way to avoid being concerned and gain confidence in a theorem is to use it to prove results. Who does not believe that numbers modulo a prime form a field? This has been used over and over. A theorem that is used to prove other theorems is much more likely to be correct. If it has a bug and is wrong, there is a kind of strange-attractor phenomenon that tends to lead to a contradiction. Or if not an out-and-out contradiction, it may at least lead to a result that is so surprising as to make one doubt the original theorem.
An example of a theorem where maybe only a couple dozen people have fully vetted the proof, but the result is used all the time, is the ${O(\log n)}$-depth, ${O(n \log n)}$-size sorting network of comparison gates designed by Miklós Ajtai, János Komlós, and Endre Szemerédi (AKS). A comparison gate has two input values ${x}$ and ${y}$ and gives two output values, ${\min(x,y)}$ and ${\max(x,y)}$. The ${O(\log n)}$ hides a big constant—the network is galactic—but its improvement over Ken Batcher’s ${O(\log^2 n)}$-depth network was a boon to studying low-depth circuit complexity. If it were wrong, we’d expect to have seen some unbelievable circuit results by now.
Let me try and explain the proof and give you all—well most—of the details, and I will try not to leave anything to the reader. As we recently posted, it suffices to invert the input string ${x_1, \dots, x_n}$ so that we have also the sequence ${\neg x_1, \dots, \neg x_n}$ using only ${\lceil \log_2(n+1) \rceil}$ negations. Then we can compute ${f}$ by a monotone circuit of those ${2n}$ values, at worst doubling the size from ${s}$ to ${2s}$, and in a sense given at the end of this post we can even do a little better.
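To see why the size at most doubles, consider the standard dual-rail simulation: carry each wire’s value together with its complement, so that De Morgan’s laws let every gate be handled with monotone gates once the negated inputs are available. Here is a small illustrative sketch in Python (the names and the example function are mine, not from the sources):

```python
# Dual-rail simulation: each wire carries the pair (v, NOT v).
# Given the negated inputs for free, every gate is simulated by
# monotone gates only, at most doubling the gate count.

def AND(a, b):
    # De Morgan: NOT(x AND y) = NOT x OR NOT y
    return (a[0] & b[0], a[1] | b[1])

def OR(a, b):
    # De Morgan: NOT(x OR y) = NOT x AND NOT y
    return (a[0] | b[0], a[1] & b[1])

def NOT(a):
    # Negation just swaps the two rails -- no gate at all
    return (a[1], a[0])

def f(x1, x2, x3):
    # Example: f = NOT(x1 AND x2) OR x3, evaluated dual-rail
    p1, p2, p3 = (x1, 1 - x1), (x2, 1 - x2), (x3, 1 - x3)
    return OR(NOT(AND(p1, p2)), p3)[0]
```

Each original gate becomes at most two monotone gates, which is where the ${s}$-to-${2s}$ doubling comes from.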
The first step is to sort the input bits. This can be done by simulating comparison gates without any negations, since when ${x}$ and ${y}$ are bits, ${\min(x,y)}$ is the AND ${x \wedge y}$ and ${\max(x,y)}$ is the OR ${x \vee y}$. If we care strongly about low depth we can use the AKS network for this, but we could also use Batcher’s. Or whatever.
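As a concrete illustration, here is a toy sorting network in Python built only from such comparators; I use the quadratic-size odd-even transposition network for simplicity (this choice is mine; Batcher’s or AKS would be used for better bounds):

```python
def comparator(x, y):
    """A comparison gate on bits: (max, min) = (x OR y, x AND y)."""
    return x | y, x & y  # monotone gates only, no negations

def sort_bits_descending(bits):
    """Sort 0/1 values into descending order with an odd-even
    transposition network: n rounds of fixed comparators."""
    b = list(bits)
    n = len(b)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            b[i], b[i + 1] = comparator(b[i], b[i + 1])
    return b

print(sort_bits_descending([0, 1, 1, 0, 1]))  # [1, 1, 1, 0, 0]
```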
The second part of the proof, which is critical for us since it uses the negations, is the following: Let ${y_1, \dots, y_n}$ be the inputs sorted so that

${y_1 \geq y_2 \geq \cdots \geq y_n.}$

Just to add the obvious, which is not always obvious to all, the list must look like

${1, 1, \dots, 1, 0, 0, \dots, 0,}$

where the number of leading ${1}$’s equals the number of ${1}$’s among the original inputs. The goal, given these sorted values, is to construct the vector

${(z_1, z_2, \dots, z_n)}$

so that each ${z_i}$ is equal to ${\neg y_i}$.
The third and final part is to work the initial sorting backwards so that the outputs get routed to their correct locations, corresponding to the ‘s they negate. For those who care about reducing the size of the network, this is where much of the research action is. Fischer’s original idea, used also by Jukna, uses a trick, on pain of needing the quadratic size stated above.
The trick’s idea is to re-interpret the sorted bit ${y_k}$ as telling whether ${x = x_1 \cdots x_n}$ has at least ${k}$ 1’s—call this bit’s value ${T_k(x)}$. Now for each ${i}$, ${1 \leq i \leq n}$, repeat the initial sorting step on the string ${x}$ minus bit ${i}$, and say that the results give values ${T_k(x \setminus i)}$. All of these values are obtained using monotone gates. The trick itself is that for all ${i}$,

${\displaystyle \neg x_i \;=\; \bigwedge_{k=1}^{n} \left( \neg T_k(x) \vee T_k(x \setminus i) \right).}$
The point is that if ${x_i = 0}$, then bit ${i}$ never makes a difference to any “threshold” ${T_k}$, so ${T_k(x \setminus i) = T_k(x)}$ for all ${k}$, so one of ${\neg T_k(x)}$ and ${T_k(x \setminus i)}$ is always true, so the big AND gives ${1}$. Whereas if ${x_i = 1}$ then there is some ${k}$, namely the number of 1’s in ${x}$, for which ${T_k(x)}$ is true but ${T_k(x \setminus i)}$ isn’t, and the big AND gives ${0}$.
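The identity is easy to check by brute force. In the sketch below (my own code), ${T_k(x)}$ is ${1}$ exactly when ${x}$ has at least ${k}$ 1’s, and the string with bit ${i}$ removed plays the role of ${x \setminus i}$:

```python
from itertools import product

def T(bits, k):
    """Threshold: 1 iff the string has at least k ones."""
    return int(sum(bits) >= k)

def neg_via_trick(bits, i):
    """Compute NOT bits[i] as the AND over k of
    (NOT T_k(x) OR T_k(x minus bit i))."""
    n = len(bits)
    removed = bits[:i] + bits[i + 1:]
    return int(all((1 - T(bits, k)) | T(removed, k) for k in range(1, n + 1)))

# Exhaustive check of the identity for small n
for n in range(1, 7):
    for x in product([0, 1], repeat=n):
        for i in range(n):
            assert neg_via_trick(list(x), i) == 1 - x[i]
print("identity verified")
```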
Thus the bits ${\neg y_k = \neg T_k(x)}$ are really supplying the values needed for this trick to work. Later authors have had other sorting-based ideas that improve the size, but the second part where the negations arise is the same. With these full details digested, we can focus on this part.
The Second Part
The claim is that for such sorted values it is possible to construct the vector

${(z_1, \dots, z_n)}$

so that each ${z_i}$ is equal to ${\neg y_i}$, by using a polynomial size circuit having only ${\lceil \log_2(n+1) \rceil}$ negations. Let ${N(y_1, \dots, y_n)}$ be this function.
Let’s look at the intuition for why this should be true. It is really just a simple divide-and-conquer recursion: one negation allows us to reduce the problem to one of half the size. This clearly yields a logarithmic bound. As usual we will assume that ${n}$ is a power of ${2}$. Look at the middle bit ${y_m}$ where ${m = n/2}$. There are two cases:
Case ${y_m = 1}$: In this case ${y_1 = \cdots = y_m = 1}$, so ${\neg y_1 = \cdots = \neg y_m = 0}$. Then we get that

${N(y_1, \dots, y_n) = (0, \dots, 0,\, N(y_{m+1}, \dots, y_n)).}$
Case ${y_m = 0}$: In this case ${y_m = \cdots = y_n = 0}$, so ${\neg y_{m+1} = \cdots = \neg y_n = 1}$. Then we get that

${N(y_1, \dots, y_n) = (N(y_1, \dots, y_m),\, 1, \dots, 1).}$
So what is the difficulty? In pseudo-code we are doing the following:

if ${y_m = 1}$ then return ${(0, \dots, 0,\, N(y_{m+1}, \dots, y_n))}$
else if ${y_m = 0}$ then return ${(N(y_1, \dots, y_m),\, 1, \dots, 1)}$
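As a program the recursion really is this easy. A Python rendering of the pseudo-code (my own illustrative code):

```python
def N(y):
    """Negate a sorted (descending) 0/1 vector by divide and conquer.
    Straightforward as a program; the trouble starts only when the
    data-dependent branching must become a circuit."""
    n = len(y)
    if n == 1:
        return [1 - y[0]]              # base case: one explicit negation
    m = n // 2
    if y[m - 1] == 1:                  # middle bit y_m is 1
        return [0] * m + N(y[m:])      # first half negates to all 0's
    else:                              # middle bit y_m is 0
        return N(y[:m]) + [1] * m      # second half negates to all 1's
```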
Well this would be just fine if we were using a programming language, but we are using circuits. They do not allow the full flexible array of constructs—at least not in the obvious way—that programming languages do. So this is the dirty detail that is “left to …”
The issue is that we must use circuits to do two things: (i) the recursive call on two different sets of variables based on the value of ${y_m}$; and (ii) the output of two different return values, again based on the value of ${y_m}$. All this must happen without incurring the cost of an extra negation.
Here is how we do this. Let ${u = y_m}$. We will have access to the values of ${u}$ and ${\neg u}$. We will re-use these values many times, but of course we can do this all with one negation. Once ${u}$ and ${\neg u}$ are computed by the circuit, we can by fan-out use the value ${\neg u}$ as many times as we wish. Of course we want to keep the size of the circuit polynomial, but that will follow.
The first problem is how to do two different calls to the circuit without incurring extra negations. If we naively just had a circuit for each call, we would wind up with a linear number of negations—we must avoid two recursive calls. So define new variables ${w_1, \dots, w_m}$ as follows:

${w_i = (u \wedge y_{m+i}) \vee (\neg u \wedge y_i), \qquad 1 \leq i \leq m.}$
Then we compute

${(z_1, \dots, z_m) = N(w_1, \dots, w_m).}$
Note that ${(w_1, \dots, w_m)}$ is set up so that it is the input to one of the two possible calls, and we use the value of ${u}$ to decide which one. This uses no additional negations, which is crucial: a negation per recursive call would have been terrible.
The second problem is that we need to output different values based on the returned answer and the value of ${u}$. Recall that if ${u = 1}$ we want the output to be

${(0, \dots, 0,\, z_1, \dots, z_m)}$

and if ${u = 0}$ we want it to be

${(z_1, \dots, z_m,\, 1, \dots, 1).}$

The way to do this in a circuit is as follows. For ${1 \leq i \leq m}$, the ${i}$-th bit of the output is

${\neg u \wedge z_i,}$

and the ${(m+i)}$-th bit of the output is

${\neg u \vee z_i.}$
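Putting the selector variables and the output gating together, here is an illustrative Python sketch (the code and names are mine, assuming ${n}$ is a power of ${2}$) that follows the construction literally and tallies the NOT gates it uses:

```python
import math

def N_circuit(y, count):
    """Circuit-style negation of a sorted (descending) 0/1 vector.
    count is a one-element list tallying the NOT gates used."""
    n = len(y)
    if n == 1:
        count[0] += 1
        return [1 - y[0]]
    m = n // 2
    u = y[m - 1]                       # the middle bit y_m
    count[0] += 1                      # the single negation at this level
    not_u = 1 - u
    # Selector: w_i = (u AND y_{m+i}) OR (NOT u AND y_i) -- one recursive call
    w = [(u & y[m + i]) | (not_u & y[i]) for i in range(m)]
    z = N_circuit(w, count)
    # Output gating: first half NOT u AND z_i, second half NOT u OR z_i
    return [not_u & z[i] for i in range(m)] + [not_u | z[i] for i in range(m)]

# Check correctness and the negation count ceil(log2(n+1)) for small n
for k in range(5):
    n = 2 ** k
    for ones in range(n + 1):
        y = [1] * ones + [0] * (n - ones)
        count = [0]
        assert N_circuit(y, count) == [1 - b for b in y]
        assert count[0] == math.ceil(math.log2(n + 1))
print("all checks passed")
```

One negation per halving, plus one at the base case, gives ${\log_2 n + 1 = \lceil \log_2(n+1) \rceil}$ negations in all.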
Ken and I hope this helped you feel comfortable with the proof. I know that I feel better about it now. In a sense this is all “obvious” to strong circuit programmers, just as certain other tasks are “obvious” to strong Python—or whatever your favorite language is—programmers. There is always a set of idioms that they know and use daily in their programming. These idioms are so well encoded into their brains that they do not see any reason to supply details. For those of us who are not strong circuit programmers, and that includes me, spelling out the details helps.
The real open problem is: can we use Fischer’s Theorem to prove some things that are new and interesting? We wonder.