An old theorem re-packaged?

Emily Post was America’s premier writer on etiquette for much of the 20th century. The Emily Post Institute, which she founded in 1946, carries on much of her legacy, directed mostly by family at the “great-” and “great-great-” grandchild level. It features a free online “Etipedia.” Alas, searches for terms related to etiquette in theory research come up empty. This makes us wonder whether a service could be named for Emil Post, the famous co-founder of computability theory. I believe there is no connection between Emil and Emily, except for the obvious one.

Today is Boxing Day in British culture, but in the US, and really everywhere, it is often re-boxing day, as we return gifts that are not quite right. Instead I wish to talk about the alternate practice of re-gifting, which raises issues of relevance to both Emily and Emil.

Post—here I mean Emily—felt that re-gifting is fine, as long as one is careful to do it properly. She said:

You’re certain that the gift is something the recipient would really like to receive. The gift is brand new (no cast-offs allowed) and comes with its original box and instructions. The gift isn’t one that the original giver took great care to select or make.

Other suggestions are:

Jacqueline Whitmore: Ms. Whitmore, a business etiquette expert, gives the go-ahead for regifting, but reminds readers to “consider the taste” of the receiver and to destroy all evidence that the item is being regifted. She wisely suggests that if a gift is expired or really not desirable, just chuck it; don’t regift it.

Louise Fox: Protocol expert Louise Fox advises readers to be very cautious when regifting. She suggests that one should never regift something that is an heirloom or was handcrafted by the original giver. She also suggests that you regift only if the gift is something you truly would have bought the recipient.

Well, we will try to apply some of this advice to re-gifting a theorem, that is, proving a fresh, repackaged version of a known result. I will show this with a theorem about Boolean formulas originally due to Mike Fischer. Let’s look at Mike’s original gift and then regift it.

Mike’s result was itself an improvement on an earlier theorem of the great Andrey Markov, Jr. But I would not say that Mike regifted it. Markov proved in 1958 a remarkable theorem—especially for that time—on the number of negations needed to compute a Boolean function. His paper, titled On the Inversion Complexity of a System of Functions, appeared in the J. ACM way back then.

In order to state Markov’s theorem we need the following function: ${b(n) = \lceil \log(n+1) \rceil}$, where the logarithm is base ${2}$.
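Just to see how slowly this bound grows, here is a trivial sketch of the definition (my own illustration, assuming base-2 logarithms as above):

```python
import math

def b(n):
    # Markov's bound: b(n) = ceil(log2(n + 1)) NOT gates suffice
    # for any Boolean function on n variables.
    return math.ceil(math.log2(n + 1))

for n in [1, 3, 7, 15, 100]:
    print(n, b(n))   # 1->1, 3->2, 7->3, 15->4, 100->7
```

So even a function on a hundred variables needs only seven negations.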

Theorem: Any Boolean function on ${n}$ variables can be computed by a circuit over ${\{\vee,\wedge,\neg\}}$ using at most ${b(n)}$ NOT gates.

This is quite remarkable, in my opinion, and is more remarkable for being one of the earliest non-trivial results about Boolean functions.

Almost twenty years later, in 1974, Mike vastly improved Markov’s existence result and showed that restricting the number of negations to ${b(n)}$ costs at most a polynomial blowup in circuit size. Markov’s result showed that only ${b(n)}$ negations are needed, but he did not worry about the computational cost of enforcing that restriction. Not surprising, since the theorem was proved in the 1950s.

Mike’s theorem is:

Theorem: If a Boolean function on ${n}$ variables can be computed by a circuit over ${\{\vee,\wedge,\neg\}}$ of size ${t}$, then it can be computed by a circuit of size at most ${2t + O(n^{2}\log^{2} n)}$ using at most ${b(n)}$ NOT gates.

This is a surprising theorem: it seems quite unexpected that using so few negations has almost no effect on the size of the circuit.

Recall a monotone circuit is one that only uses ${\{\vee,\wedge\}}$ as gates. Let’s regift Mike’s theorem as follows:

Theorem: There is a fixed family of Boolean functions ${{\mathfrak M}_{n}:\{0,1\}^{n} \rightarrow \{0,1\}^{b(n)}}$ such that for any Boolean function ${f}$ on ${n}$ variables there is a monotone circuit ${A(x,y)}$, of size polynomial in the circuit complexity of ${f}$ over ${\{\vee,\wedge,\neg\}}$, so that for all ${x}$,

$\displaystyle A(x,{\mathfrak M}_{n}(x)) = f(x).$

The cool part of this theorem is that all the negation information is coded up into the function ${{\mathfrak M}_{n}}$. The function is universal: it depends only on the inputs and is independent of the actual function ${f}$. This seems to me to be quite remarkable. I called the functions ${{\mathfrak M}_{n}(x)}$ in honor of Markov, since he started this whole series of results. Perhaps we should use ${{\mathfrak F}_{n}}$ instead. What do you think?

## The Proof

The regifted theorem does not seem to follow directly from Fischer’s Theorem, but it does follow directly from the proof of the theorem. So to regift we had to open the gift—Fischer’s Theorem—and then rewrap. The rewrapping is simple, but the new “gift” seems quite new and cool. Funny how some shiny new wrapping paper can improve an old gift so much.

Let’s turn now to the actual proof. The critical insight is that Fischer’s proof constructs an inverter circuit: a circuit with inputs ${x_{1},\dots,x_{n}}$ and outputs ${x_{1},\dots,x_{n},y_{1},\dots,y_{n}}$ so that each ${y_{i} = \neg x_{i}}$. The internal details of this circuit are unimportant except that it uses only ${b(n)}$ negations; all the other gates are from ${\{\vee,\wedge\}}$. Note that the inverter circuit is also small: its size is at most ${O(n^{2}\log^{2} n)}$.
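For a feel of why so few negations suffice, here is the puzzle-sized case ${n=3}$, where ${b(3)=2}$: all three negations can be computed with only two NOT gates, every other gate being AND or OR. This is my own illustration of the classic construction, not Fischer’s general circuit; it checks all eight inputs exhaustively:

```python
from itertools import product

def invert3(x1, x2, x3):
    # First NOT gate: u = NOT(majority), true iff at most one input is 1.
    u = 1 - ((x1 & x2) | (x1 & x3) | (x2 & x3))
    # Second NOT gate: v = NOT(parity); parity itself is monotone in the x's and u.
    parity = (x1 & x2 & x3) | (u & (x1 | x2 | x3))
    v = 1 - parity
    # Each negation is now a monotone formula in the inputs and u, v.
    y1 = (u & v) | (u & (x2 | x3)) | (v & x2 & x3)
    y2 = (u & v) | (u & (x1 | x3)) | (v & x1 & x3)
    y3 = (u & v) | (u & (x1 | x2)) | (v & x1 & x2)
    return y1, y2, y3

for x in product([0, 1], repeat=3):
    assert invert3(*x) == tuple(1 - xi for xi in x)
print("all 8 inputs inverted using only 2 NOT gates")
```

The first NOT inverts the majority and the second inverts the parity; together the two bits ${(u,v)}$ determine how many of the inputs are 1, which is exactly what the monotone recovery formulas need.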

Let ${I_{n}(x)=(x,y)}$ be this circuit. Imagine removing each of the ${b(n)}$ NOT gates and replacing its output wire by a fresh input. Let the new circuit be ${I^{*}_{n}(x,z) = (x,y)}$, where ${z}$ are the ${b(n)}$, that is logarithmically many, new inputs standing in for the “outputs” of the negations. By construction this is a monotone circuit. Now let ${{\mathfrak M}_{n}(x) = z}$ be the Boolean function that computes the values of the ${z}$‘s. Then,

$\displaystyle I_{n}(x) = I^{*}_{n}(x,{\mathfrak M}_{n}(x)).$

Why does this suffice? Given any circuit of size ${t}$ for ${f}$, push its negations to the inputs by De Morgan’s laws, obtaining a monotone circuit of size at most ${2t}$ in the literals ${x_{1},\dots,x_{n},\neg x_{1},\dots,\neg x_{n}}$. Feed it the outputs of ${I^{*}_{n}(x,z)}$ instead: the result is a monotone circuit ${A(x,z)}$ with ${A(x,{\mathfrak M}_{n}(x)) = f(x)}$. This then shows that the new theorem is true.
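To see the regift concretely in a tiny case (my own sketch; the names `M3` and `A` are just for illustration): cut the two NOT gates out of a ${b(3)=2}$ inverter for ${n=3}$ and feed their would-be outputs in as extra inputs ${z=(u,v)}$. Then ${{\mathfrak M}_{3}(x)=(u,v)}$ is the universal function, and a purely monotone circuit ${A}$ recovers, say, the non-monotone ${f(x)=\neg x_{1}}$:

```python
from itertools import product

def M3(x1, x2, x3):
    # The universal "Markov" function for n = 3: the outputs of the two
    # NOT gates of the inverter, computed here by a general circuit.
    u = 1 - ((x1 & x2) | (x1 & x3) | (x2 & x3))        # NOT(majority)
    v = 1 - ((x1 & x2 & x3) | (u & (x1 | x2 | x3)))    # NOT(parity)
    return u, v

def A(x, z):
    # A monotone circuit (AND/OR gates only) computing f(x) = NOT x1,
    # given the advice bits z = M3(x). It performs no negations itself.
    x1, x2, x3 = x
    u, v = z
    return (u & v) | (u & (x2 | x3)) | (v & x2 & x3)

for x in product([0, 1], repeat=3):
    assert A(x, M3(*x)) == 1 - x[0]   # A(x, M3(x)) = NOT x1 for all x
```

Note that ${M3}$ depends only on the inputs, never on the target function; only the monotone part ${A}$ changes with ${f}$.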

## Open Problems

Can we use this theorem to prove some general Boolean lower bounds? It seems quite related to other results on monotone complexity, but I cannot see any direct statements right now.

1. December 26, 2013 1:14 pm

yep, great/elegant stuff! the theory of negations also ties in a lot with slice functions, which seem undeservedly obscure but which have some remarkable/significant/key? theoretical aspects. see also this neat paper which ties the number of negations to the P=?NP problem: A Superpolynomial Lower Bound for a Circuit Computing the Clique Function with at most (1/6) log log n Negation Gates (1998) by amano/maruoka. I personally conjecture a P!=NP proof may come eventually in the form of proving exponential lower bounds on an NP-complete slice function. benjamin rossman’s neat 2008 result seems to go in that direction.
also have wondered for awhile the connection between sparse functions and slice functions, there seems to be a natural connection that nobody has remarked on, maybe you could blog on it sometime. cf. mahaney’s thm that there are no sparse NP hard sets unless P=NP.

2. December 26, 2013 1:19 pm

One obvious question this brings up. Given a circuit, what is the time complexity of finding a Fischer circuit? If I’m following what you’ve written here, it looks like as long as the inverter circuits can be efficiently computed, then the rest of these circuits should be computable in polynomial time. Is that accurate?

December 26, 2013 11:32 pm

Joshua,

Yes, the M function is computable by a polynomial-size general circuit.

3. December 27, 2013 3:41 am

The result you quote has been improved somewhat:

Robert Beals, Tetsuro Nishino, and Keisuke Tanaka
On the Complexity of Negation-Limited Boolean Networks
SIAM J. Comput., 27(5), 1334–1347, 1998.

shows that size O(n log n) suffices to compute all the negated inputs. The circuit depth is O(log n). If one only has constant-depth unbounded fan-in circuits (even with threshold gates), one needs many more than b(n) negations, as shown in:

Miklos Santha, Christopher B. Wilson
Polynomial Size Constant Depth Circuits with a Limited Number of Negations
STACS 1991: 228-237

December 27, 2013 10:33 am

Paul,

Thanks for the reference

December 27, 2013 1:23 pm

Actually, Markov proved a much stronger statement with ${b(n)}$ replaced by ${b(f)=\lceil\log(m+1)\rceil}$, where ${m}$ is the smallest number such that, along any increasing chain in the binary ${n}$-cube, the value of ${f}$ drops from 1 to 0 at most ${m}$ times. Then ${b(f)}$ negations are enough to compute ${f}$. It is clear that ${b(f)}$ is at most ${b(n)}$ for every ${f}$ of ${n}$ variables. But for some ${f}$’s it may be smaller. Say, ${b(f)=0}$ if ${f}$ is monotone.

It is known that Mike’s “regift” (great, needless to say!) does not hold with ${b(n)}$ replaced by ${b(f)}$, at least when ${f}$ is a multi-output function: there are Boolean ${n}$-to-${n}$ operators ${f}$ in P/poly with ${b(f)=0}$ that require super-polynomial circuits if only ${b(n)-O(\log\log n)}$ NOT gates are allowed. Can the same be shown for some Boolean function (${n}$-to-1 operator)?
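To make the decrease measure in the comment above concrete, here is a small brute-force sketch (my own illustration; `decrease` and `b_of` are made-up names). Since refining a chain never destroys a 1-to-0 drop, it suffices to check maximal chains, i.e. those that set the bits to 1 one at a time:

```python
import math
from itertools import permutations

def decrease(f, n):
    # m(f): the maximum number of 1 -> 0 drops of f along any
    # increasing chain in the n-cube, over all maximal chains.
    best = 0
    for order in permutations(range(n)):
        x = [0] * n
        vals = [f(tuple(x))]
        for i in order:
            x[i] = 1
            vals.append(f(tuple(x)))
        drops = sum(1 for a, c in zip(vals, vals[1:]) if a == 1 and c == 0)
        best = max(best, drops)
    return best

def b_of(f, n):
    # Markov's refined bound: b(f) = ceil(log2(m + 1)) negations suffice.
    return math.ceil(math.log2(decrease(f, n) + 1))

or3     = lambda x: x[0] | x[1] | x[2]          # monotone: b(f) = 0
parity3 = lambda x: (x[0] + x[1] + x[2]) % 2    # b(f) = 1
notpar3 = lambda x: 1 - parity3(x)              # worst case: b(f) = b(3) = 2
print(b_of(or3, 3), b_of(parity3, 3), b_of(notpar3, 3))  # prints: 0 1 2
```

So a monotone function needs no negations at all, while negated parity already forces the full ${b(3)=2}$.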