# Re-gifting An Old Theorem

*An old theorem re-packaged?*

Emily Post was America’s premier writer on etiquette for much of the 20th century. The Emily Post Institute, which she founded in 1946, carries on much of her legacy, directed mostly by family at the “great-” and “great-great-”grandchild level. It features a free online “Etipedia.” Alas, searches for terms that would relate to theory-research etiquette come up empty. This makes us wonder whether a similar service could be named for *Emil* Post, the famous co-founder of computability theory. I believe there is no connection between Emil and Emily, except for the obvious one.

Today, known as Boxing Day in British culture, is often re-boxing day in the US and everywhere else, as we return gifts that are not quite right. But instead I wish to talk about the alternate practice of *re-gifting*, which raises issues of relevance to both Emily and Emil.

Post—here I mean Emily—felt that re-gifting is fine, as long as one is careful to do it properly. She said:

> You’re certain that the gift is something the recipient would really like to receive. The gift is brand new (no cast-offs allowed) and comes with its original box and instructions. The gift isn’t one that the original giver took great care to select or make.

Other suggestions are:

*Jacqueline Whitmore*: Ms. Whitmore, a business etiquette expert, gives the go-ahead for regifting, but reminds readers to “consider the taste” of the receiver and to destroy all evidence that the item is being regifted. She wisely suggests that if a gift is expired or really not desirable, just chuck it; don’t regift it.

*Louise Fox*: Protocol expert Louise Fox advises readers to be very cautious when regifting. She suggests that one should never regift something that is an heirloom or was handcrafted by the original giver. She also suggests that you regift only if the gift is something you truly would have bought the recipient.

Well, we will try to apply some of this advice to re-gifting a *theorem*, that is, proving a fresh, repackaged version of a known result. I will show this with a theorem about Boolean formulas originally due to Mike Fischer. Let’s look at Mike’s original gift and then regift it.

## The Original Gift

Mike’s result was itself an improvement on an earlier theorem of the great Andrey Markov, Jr., though I would not say that Mike regifted it. Markov proved a remarkable theorem—especially for that time—on the number of negations needed to compute a boolean function. His paper, titled *On the Inversion Complexity of a System of Functions*, appeared in the *J.ACM* way back in 1958.

In order to state Markov’s theorem we need the following function: $b(n) = \lceil \log_2(n+1) \rceil$.

**Theorem:** *Any boolean function $f$ on $n$ variables can be computed by a circuit over $\{\wedge, \vee, \neg\}$ using at most $b(n)$ NOT gates.*
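For concreteness, here is a quick Python sketch of the function $b(n)$ (the name `b` simply follows the notation above):

```python
import math

# Markov's bound: the maximum number of NOT gates needed to compute
# any boolean function of n variables.
def b(n):
    return math.ceil(math.log2(n + 1))

# Doubling the number of variables costs only one extra negation.
print([b(n) for n in (1, 3, 7, 15, 1000)])  # → [1, 2, 3, 4, 10]
```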

This is quite remarkable, in my opinion, and all the more remarkable for being one of the earliest non-trivial results about boolean functions.

Almost twenty years later, in 1974, Mike vastly improved Markov’s existence proof: he showed that restricting the number of negations to $b(n)$ requires at most a polynomial blowup in circuit size. Markov’s result showed that only $b(n)$ negations were needed, but he did not worry about the computational cost of making that restriction. This is not surprising, since it was a theorem proved in the 1950s.

Mike’s theorem is:

**Theorem:** *If a boolean function $f$ on $n$ variables can be computed by a circuit over $\{\wedge, \vee, \neg\}$ of size $s$, then it can be computed by a circuit of size at most $2s + O(n^2 \log^2 n)$ using at most $b(n)$ NOT gates.*

This is a surprising theorem: it seems quite unexpected that using so few negations has almost no effect on the size of the circuit.

## The Re-gifted Result

Recall that a **monotone circuit** is one that uses only $\wedge$ and $\vee$ as gates. Let’s regift Mike’s theorem as follows:

**Theorem:** *There is a fixed family of boolean functions $M_n : \{0,1\}^n \rightarrow \{0,1\}^{b(n)}$ with the following property: for any boolean function $f$ on $n$ variables, there is a monotone circuit $C$ whose size is polynomial in the boolean complexity of $f$ over $\{\wedge, \vee, \neg\}$, and for all $x$,*

$$f(x) = C(x, M_n(x)).$$

The cool part of this theorem is that all the negation information is coded up into the function $M_n$. The function $M_n$ is *universal*: it depends only on the inputs and is independent of the actual function $f$. This seems to me to be quite remarkable. I called the functions $M_n$ in honor of Markov, since he started this whole series of results. Perhaps we should use $F_n$ instead. What do you think?

## The Proof

The regifted theorem does not seem to follow directly from Fischer’s Theorem, but it does follow directly from the proof of the theorem. So to regift we had to open the gift—Fischer’s Theorem—and then rewrap. The rewrapping is simple, but the new “gift” seems quite new and cool. Funny how some shiny new wrapping paper can improve an old gift so much.

Let’s turn now to the actual proof. The critical insight is that Fischer’s proof constructs an **inverter circuit** $I$, which computes the function from $n$ inputs $x_1, \dots, x_n$ to $n$ outputs $y_1, \dots, y_n$ so that each $y_i = \neg x_i$. The internal details of this circuit are unimportant, except that it uses only $b(n)$ negations and all its other gates are from $\{\wedge, \vee\}$. Note that the size of this inverter circuit is also small: polynomial in $n$.
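To make this concrete for the smallest interesting case: $b(3) = \lceil \log_2 4 \rceil = 2$, so three inputs can be inverted with just two NOT gates. Here is a toy Python sketch of such an inverter (my own encoding of the standard trick, not Fischer’s actual circuit):

```python
from itertools import product

def inverter3(x1, x2, x3):
    # Monotone threshold functions: "at least k of the inputs are 1".
    t1 = x1 | x2 | x3
    t2 = (x1 & x2) | (x1 & x3) | (x2 & x3)
    t3 = x1 & x2 & x3
    # The only two negations in the whole circuit.
    a = 1 - t2                # true iff at most one input is 1
    b = 1 - (t3 | (a & t1))   # true iff exactly zero or two inputs are 1
    # Each negated input is a monotone combination of the x's, a, and b.
    def neg(xj, xk):
        return (a & b) | (a & (xj | xk)) | (b & (xj & xk))
    return neg(x2, x3), neg(x1, x3), neg(x1, x2)

# Exhaustive check over all 8 input vectors.
assert all(inverter3(*x) == tuple(1 - v for v in x)
           for x in product([0, 1], repeat=3))
```

The two negated bits `a` and `b` encode how many of the inputs are 1, which is exactly the information the monotone gates cannot recover on their own.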

Now apply Fischer’s construction to $f$: it yields a circuit in which all the negations lie inside the inverter $I$. Imagine removing the wires that lead out of each of the negations and feeding in new inputs in their place. Let the new circuit be $C(x, z_1, \dots, z_{b(n)})$, where $z_1, \dots, z_{b(n)}$ are the logarithmically many new inputs that stand for the “outputs” of the negations. By construction $C$ is a monotone circuit. Now let $M_n$ be the boolean function that computes the values of the $z_i$’s; since the negations all sit inside the inverter, these values depend only on $x$ and not on $f$. Then, for all $x$,

$$f(x) = C(x, M_n(x)).$$

This then shows that the new theorem is true.
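The cutting step can be played out in miniature for $n = 3$, where $b(3) = 2$. Below, `M` computes the two bits carried by the negation gates of a two-NOT inverter, and `C` is the monotone circuit left after those gates are cut out; the names `M` and `C` are mine, and the toy encoding is an illustration rather than Fischer’s actual construction:

```python
from itertools import product

def M(x1, x2, x3):
    # The universal "Markov" function: the two bits carried by the
    # negation gates.  It depends only on the inputs x.
    t1 = x1 | x2 | x3
    t2 = (x1 & x2) | (x1 & x3) | (x2 & x3)
    t3 = x1 & x2 & x3
    z1 = 1 - t2                 # at most one input is 1
    z2 = 1 - (t3 | (z1 & t1))   # exactly zero or two inputs are 1
    return z1, z2

def C(x1, x2, x3, z1, z2):
    # Monotone circuit: the inverter with its NOT gates cut out and
    # replaced by the fresh inputs z1, z2.  Only AND/OR gates remain.
    def neg(xj, xk):
        return (z1 & z2) | (z1 & (xj | xk)) | (z2 & (xj & xk))
    return neg(x2, x3), neg(x1, x3), neg(x1, x2)

# C(x, M(x)) recovers all three negated inputs, yet C itself is monotone.
assert all(C(*x, *M(*x)) == tuple(1 - v for v in x)
           for x in product([0, 1], repeat=3))
```

Feeding `C` anything other than `M(x)` just computes some other monotone function of its five inputs; the point is that all the non-monotonicity is concentrated in the two universal bits.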

## Open Problems

Can we use this theorem to prove some general boolean lower bounds? It seems quite related to other results on monotone complexity, but I cannot see any direct statements right now.

yep, great/elegant stuff! the theory of negations also ties in a lot with slice functions, which seem undeservedly obscure but which have some remarkable/significant theoretical aspects. see also this neat paper, which ties the number of negations to the P=?NP problem: “A Superpolynomial Lower Bound for a Circuit Computing the Clique Function with at most (1/6) log log n Negation Gates” (1998) by Amano/Maruoka. I personally conjecture a P!=NP proof may come eventually in the form of proving exponential lower bounds on an NP-complete slice function. Benjamin Rossman’s neat 2008 result seems to go in that direction.

also, I have wondered for a while about the connection between sparse functions and slice functions; there seems to be a natural connection that nobody has remarked on. maybe you could blog on it sometime. cf. Mahaney’s theorem that there are no sparse NP-hard sets unless P=NP.

One obvious question this brings up: given a circuit, what is the time complexity of finding a Fischer circuit? If I’m following what you’ve written here, it looks like as long as the inverter circuits can be efficiently computed, then the rest of these circuits should be computable in polynomial time. Is that accurate?

Joshua,

Yes, the $M$ function is computed by a polynomial-size general circuit.

The result you quote has been improved somewhat:

Robert Beals, Tetsuro Nishino, and Keisuke Tanaka, “On the Complexity of Negation-Limited Boolean Networks,” *SIAM J. Comput.*, 27(5), 1334–1347, 1998

shows that size $O(n \log n)$ is enough to compute all the negated inputs. The circuit depth is $O(\log n)$. If one only has constant-depth unbounded fan-in circuits (even with threshold gates), one needs many more than $b(n)$ negations, as shown in:

Miklos Santha and Christopher B. Wilson, “Polynomial Size Constant Depth Circuits with a Limited Number of Negations,” STACS 1991, 228–237.

Paul,

Thanks for the reference.

Actually, Markov proved a much stronger statement, with $b(n)$ replaced by $b(f) = \log(m+1)$, where $m$ is the smallest number such that, along any increasing chain in the binary $n$-cube, the value of $f$ drops from 1 to 0 at most $m$ times. Then $b(f)$ negations are enough to compute $f$. It is clear that $b(f)$ is at most $b(n)$ for every $f$ of $n$ variables. But for some $f$’s it may be smaller; say, $b(f) = 0$ if $f$ is monotone.

It is known that Mike’s “regift” (great, needless to say!) does not hold with $b(n)$ replaced by $b(f)$, at least when $f$ is a multi-output function: there are boolean $n$-to-$n$ operators $f$ in P/poly with $b(f) = 0$ requiring super-polynomial circuits if only $b(n) - O(\log\log n)$ NOT gates are allowed. Can the same be shown for some boolean function ($n$-to-1 operator)?

This is a very cool theorem, imo!

ooo

Thanks. I am trying to use it to prove something new. Probably will have to be in 2014.