# Who’s Afraid of Natural Proofs?

*Obstacles to circuit lower bounds*

Alexander Razborov is one of the world experts on circuit lower bounds. Perhaps the expert. In 1990 he won the Nevanlinna Prize for his work on lower bounds; more recently, in 2007, he won the Gödel Prize (with Steven Rudich) for their beautiful work on natural proofs (NaP).

His joint work on NaP is the subject of today’s post. Should we be worried about the so-called “Natural proof obstacle” to resolving P=NP? Or is this obstacle not an obstacle? I do not know, but would like to discuss this issue with you today.

Do “impossibility proofs” matter? I mean proofs that show that “you cannot solve problem $X$ by method $Y$”? Sometimes they do: the Gödel Incompleteness Theorem is a perfect example; it essentially destroyed any attempt to complete David Hilbert’s program and prove arithmetic consistent. Yet, Gerhard Gentzen carried on and proved that first-order arithmetic is consistent, although he had to use transfinite induction up to $\epsilon_0$. This is the limit ordinal of the sequence

$$\omega, \; \omega^{\omega}, \; \omega^{\omega^{\omega}}, \; \ldots$$
The great logician Alfred Tarski, on hearing of Gentzen’s Theorem, was asked if he was now more confident in arithmetic’s consistency. He answered, “by an epsilon.” (Gentzen’s work was based on a logic that he called “natural deduction”; a curious coincidence?)

Another example is the work of Paul Cohen on set theory. I met Paul Cohen at a Royal Society workshop on the notion of “mathematical proof”: this has nothing to do with NaP. It was focused on the growing number of computer-aided proofs, and what the math community should think about proofs that are not checkable by humans. There is now a growing list of such proofs: first there was the four color theorem, then the Kepler conjecture, then again the four color theorem, then some results in geometry. I will have a post in the future on this interesting issue.

Cohen is, of course, world famous for creating the notion of forcing, and for using forcing to prove that the Axiom of Choice and the Continuum Hypothesis are independent of ZF set theory. After he proved this great result he was asked if he had been worried about his approach, since it came very close to violating a known impossibility theorem. He, it is claimed, replied: “What result?”

I find this story, true or not, revealing. Perhaps to make progress on hard problems one needs to have a certain world view, keep positive, and believe that a solution is possible. We must not let impossibility theorems scare us into not thinking about problems. We can let them guide us, we can let them show us what we cannot do, and what we can do. But we must move forward.

**The Obstacle**

Suppose that you want to prove that a boolean function $f$ has high complexity, in some sense. Then, you might think that you could use special properties of this particular boolean function. Makes sense. Perhaps the function is the determinant of a square matrix, and your goal is a non-linear lower bound on its complexity. The determinant has many special properties that perhaps you can use in your lower bound proof: for example, the determinant of an invertible matrix is non-zero, or certain determinants count the number of spanning trees of a graph, and so on.

The obstacle is, roughly, that a large class of approaches to circuit lower bounds must prove more. Your lower bound for $f$ must also prove that many other *unrelated* functions have large complexity. Thus, you cannot use any special properties of your function. None. This seems like a major obstacle. Imagine that I ask you to prove that a certain number is prime, but your proof must at the same time prove that many other unrelated numbers are prime. That sounds nuts. When you prove that a number is prime, you prove that *it* is prime: nothing else.

I think of the obstacle as a kind of *drag-along principle*: if you prove $f$ is complex, then you prove much more.

**Obstacle One: Bait and Switch**

There are two concrete theorems that show different versions of the drag-along principle. I like to call the first one “Bait and Switch”: it is close to a question that I posed in the first post of this blog. However, it is a bit different from what I had in mind at the time; I was looking for a stronger result. More on this in the future.

Let $B$ be the set $\{0,1\}$. Then, we plan to study the circuit and formula complexity of boolean functions. We use $\mathcal{B}_n$ to denote the space of all boolean functions with $n$ inputs, i.e. all $f$ so that $f: B^n \rightarrow B$. We also use $\mathcal{M}_n$ to denote the space of all $n$-input monotone functions.

If $f$ is an $n$-input boolean function, i.e. $f \in \mathcal{B}_n$, then we use $C(f)$ to denote the size of the smallest circuit for $f$: in our circuits all the negations are at the inputs and we only allow $\wedge$ and $\vee$ as operations. We use $C_m(f)$ to denote the monotone complexity of $f$ for $f \in \mathcal{M}_n$.

I follow the treatment of Sanjeev Arora and Boaz Barak. The key idea is to look at *complexity measures* on boolean functions. A measure $\mu$ is a *complexity measure* if it assigns a positive value to each boolean function. It is normalized so that it assigns $1$ to variables and their negations. Also it must satisfy the following two simple rules:

$$\mu(f \vee g) \le \mu(f) + \mu(g)$$

and

$$\mu(f \wedge g) \le \mu(f) + \mu(g).$$

As they point out, a lower bound on $\mu(f)$ yields a lower bound on the formula complexity of $f$: this follows by a simple induction. The Bait and Switch Lemma is the following:

Theorem: Suppose $\mu$ is a complexity measure and there exists a function $f \in \mathcal{B}_n$ such that $\mu(f) \ge c$. Then, for at least $1/4$ of all $g$ in $\mathcal{B}_n$, $\mu(g) \ge c/4$.

*Proof:* Let $g$ be any function in $\mathcal{B}_n$. Define $h = f \oplus g$, where $\oplus$ is exclusive or. Then, $\mu(f) \le \mu(g) + \mu(\bar{g}) + \mu(h) + \mu(\bar{h})$: this follows from the definition of exclusive “or”, since

$$f = g \oplus h = (g \wedge \bar{h}) \vee (\bar{g} \wedge h).$$

By way of contradiction assume that $\{g : \mu(g) < c/4\}$ contains more than $3/4$ of all of $\mathcal{B}_n$. If we pick the above function $g$ randomly, then $\bar{g}$, $h$, and $\bar{h}$ are also random functions (though not independent). Using the trivial union bound we have

$$\Pr[\mu(g) < c/4 \text{ and } \mu(\bar{g}) < c/4 \text{ and } \mu(h) < c/4 \text{ and } \mu(\bar{h}) < c/4] > 0.$$

This implies, by the definition of a complexity measure, that $\mu(f) < c$, which is a contradiction.
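To see the bait and switch concretely, here is a small sanity check of the identity behind the proof. The truth-table representation and variable names are my own illustration, not from the post; the point is only that $h = f \oplus g$ recovers $f$ via the exclusive-or decomposition, and that $g$, $\bar g$, $h$, $\bar h$ are each uniformly random truth tables when $g$ is.

```python
# A sanity check of the decomposition used in the proof, assuming boolean
# functions on n inputs are represented by their truth tables (dicts here).
import itertools
import random

n = 4
points = list(itertools.product([0, 1], repeat=n))

def random_function():
    # a uniformly random truth table on n inputs
    return {x: random.randint(0, 1) for x in points}

f = random_function()                      # the "hard" function
g = random_function()                      # plays the role of the random g
h = {x: f[x] ^ g[x] for x in points}       # h = f XOR g

# f = (g AND not-h) OR (not-g AND h): the identity behind the union bound
for x in points:
    assert f[x] == (g[x] & (1 - h[x])) | ((1 - g[x]) & h[x])
print("f = (g ∧ ¬h) ∨ (¬g ∧ h) holds on all", len(points), "inputs")
```

Note that $h$ is itself uniform, since XOR with the fixed table of $f$ is a bijection on truth tables; that is exactly why a measure bound on $f$ "drags along" bounds for random functions.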

The reason this is an interesting theorem is that it shows that any formula lower bound based on a complexity measure would have to prove more. It would have to prove that not only $f$ but many other functions are hard. Thus, it would not be able to use the special structure of the boolean function $f$.

This is a blog. Right. Well, I must point out that about twenty-five years ago a famous complexity theorist circulated a proof using exactly this method to show that SAT had super-polynomial complexity. The proof of course was wrong, but it took “us” a while to find the bug. No, it was not me, and I will not say who.

**Obstacle Two: Natural Proofs**

The second drag-along principle is NaP, and it is much more sophisticated. I will not explain the details of the notion of NaP’s. The main idea is: suppose that we wish to prove that a boolean function $f$ has large complexity. A “natural” idea is to find a predicate $P$ on boolean functions in $\mathcal{B}_n$ so that (i) $P(f)$ is true and (ii) $P(g)$ is true for a large fraction of $g \in \mathcal{B}_n$.

Then, it should follow that $C(f)$ is large. This is reasonable, since the conditions roughly say that $f$ has some key property in common with random boolean functions. Standard counting methods prove that most boolean functions are hard. Making all this work is the brilliance of the NaP paper. Note, they need a stronger condition on the predicate $P$ and other assumptions. See their paper for the details.

They show most (all?) of the standard lower bound proofs follow the above structure. That is the reason that NaP seems to be such a powerful idea. A lower bound is forced to drag along too much.

**Monotone Circuits**

I wonder if the following can avoid the drag-along obstacles: use monotone complexity. The idea is simple: in order to get a lower bound on a boolean function $f$, prove, for the “right” monotone function $g$, that

$$C_m(g) \ge t(n);$$

then

$$C(f) \ge t(n) - 4n.$$

To carry out this program we will study an old idea that defines a *transformation* on boolean functions that has several interesting properties. If $f$ is a boolean function, then $L(f)$ is a monotone boolean function, where $L$ is the transformation. Moreover, its monotone circuit complexity is almost the same as $f$’s non-monotone circuit complexity. More precisely,

Theorem: For any boolean function $f$, $C(f) \le C_m(L(f)) \le C(f) + 4n$.

If you believe what I just said, then feel free to skip to the open problem section. Or you can read on and see the details. Or you can come back later. Or you can do something else. Your choice.

Now let us turn to defining our transformation. We need two simple auxiliary functions $A(x,y)$ and $B(x,y)$, where $x = (x_1,\dots,x_n)$ and $y = (y_1,\dots,y_n)$. Let $A(x,y)$ equal

$$\bigvee_{i=1}^{n} (x_i \wedge y_i)$$

and let $B(x,y)$ equal

$$\bigwedge_{i=1}^{n} (x_i \vee y_i).$$

Let $f \in \mathcal{B}_n$. Then, $L(f)(x,y)$ is equal to

$$A(x,y) \vee (B(x,y) \wedge f(x)).$$

Note that $L(f)$ is a function of $2n$ variables, i.e. $L(f) \in \mathcal{B}_{2n}$. The main properties of our transformation are contained in the next theorems.

Theorem 1: For any boolean function $f$, $L(f)$ is a monotone function.

*Proof:* Suppose that $f$ is an $n$-input boolean function. Let $g$ be equal to $L(f)$. If $g$ is not monotone, there must be $(x,y)$ so that $g(x,y) = 1$ and changing some input from $0$ to $1$ makes $g = 0$. Clearly, $A(x,y) = 0$; otherwise, after the change $g$ would still be $1$. Since $g(x,y) = 1$ it must be the case that $B(x,y) = 1$. But $A(x,y) = 0$ and $B(x,y) = 1$ together force $y = \neg x$, so changing any input from $0$ to $1$ creates an index $i$ with $x_i = y_i = 1$. Then after the change $A$ must be equal to $1$, and $g$ would still be $1$. This is a contradiction.

Theorem 2: For any boolean function $f$, $L(f)(x, \neg x) = f(x)$.

*Proof:* Suppose that $f$ is a boolean function. Let $y$ be equal to $\neg x$. Then, by definition, $A(x,y) = 0$ and $B(x,y) = 1$. Thus, $L(f)(x, \neg x)$ is equal to $f(x)$.

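For small $n$ the two theorems can be checked by brute force. Here is a sketch, assuming the definition $L(f)(x,y) = A(x,y) \vee (B(x,y) \wedge f(x))$ with $A = \bigvee_i (x_i \wedge y_i)$ and $B = \bigwedge_i (x_i \vee y_i)$; the encoding of functions as truth tables is my own.

```python
# An exhaustive sanity check of Theorems 1 and 2 for every boolean
# function on n = 3 inputs (there are 2^(2^3) = 256 of them).
import itertools

n = 3
points = list(itertools.product([0, 1], repeat=n))

def L(f):
    def g(x, y):
        A = any(xi & yi for xi, yi in zip(x, y))   # OR_i (x_i AND y_i)
        B = all(xi | yi for xi, yi in zip(x, y))   # AND_i (x_i OR y_i)
        return int(A or (B and f(x)))
    return g

def is_monotone(g):
    # monotone iff no single 0 -> 1 flip of an input decreases the output
    for p in itertools.product([0, 1], repeat=2 * n):
        for j in range(2 * n):
            if p[j] == 0:
                q = p[:j] + (1,) + p[j + 1:]
                if g(p[:n], p[n:]) > g(q[:n], q[n:]):
                    return False
    return True

checked = 0
for table in itertools.product([0, 1], repeat=2 ** n):
    f = lambda x, t=dict(zip(points, table)): t[x]
    g = L(f)
    assert is_monotone(g)                                # Theorem 1
    assert all(g(x, tuple(1 - xi for xi in x)) == f(x)   # Theorem 2
               for x in points)
    checked += 1
print(f"Theorems 1 and 2 verified for all {checked} functions on {n} inputs")
```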
Theorem 3: For any boolean function $f$, $C(f) \le C_m(L(f)) \le C(f) + 4n$.

*Proof:* Suppose that $f$ is an $n$-input boolean function. That $C(f) \le C_m(L(f))$ follows from Theorem 2: substituting $y = \neg x$ into a monotone circuit for $L(f)$ yields a circuit for $f$ of the same size, with all negations at the inputs. Now suppose that $f$ has a circuit of length $s$: let the circuit for $f$ be $g_1, \dots, g_s$. We will construct a monotone circuit for $L(f)$. Use $x_1, \dots, x_n$ and $y_1, \dots, y_n$ as the inputs. To start let the circuit be $\hat{g}_1, \dots, \hat{g}_s$, where $\hat{g}_k$ is constructed as follows:

- if $g_k$ is equal to the input $x_i$, then $\hat{g}_k$ is the same input $x_i$;
- if $g_k$ is equal to the negated input $\bar{x}_i$, then $\hat{g}_k$ is the input $y_i$;
- if $g_k = g_l \wedge g_m$, then $\hat{g}_k = \hat{g}_l \wedge \hat{g}_m$;
- if $g_k = g_l \vee g_m$, then $\hat{g}_k = \hat{g}_l \vee \hat{g}_m$.

Next compute:

$$A(x,y) \vee (B(x,y) \wedge \hat{g}_s(x,y)).$$

This takes at most $4n$ extra steps. So the total computation is now $s + 4n$ long.

The key is the following claim: $A(x,y) \vee (B(x,y) \wedge \hat{g}_s(x,y)) = L(f)(x,y)$ for all inputs. Suppose that the left-hand side is different from $L(f)(x,y)$ for some values of the inputs $x$ and $y$. Then, clearly, $A(x,y) = 0$; otherwise, they would agree. Also $B(x,y)$ must equal $1$; again, if not, the two values could not disagree. We now claim that for each $i$, $y_i = \bar{x}_i$. Suppose that this was false. Then, $x_i = y_i$ for some $i$. Clearly, the common value cannot be $1$ since $A(x,y) = 0$. Also the common value cannot be $0$ since $B(x,y) = 1$. This proves that for each $i$, $y_i = \bar{x}_i$.

Finally, a simple induction proves that $\hat{g}_k(x, \neg x) = g_k(x)$ for $k = 1, \dots, s$. This is clear since the only difference in the two computations is that $y_i = \bar{x}_i$ takes the place of $\bar{x}_i$. This shows that the two circuits agree everywhere and that $C_m(L(f)) \le C(f) + 4n$.
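The construction in the proof is easy to test mechanically. Here is a sketch, assuming the definition $L(f) = A \vee (B \wedge f(x))$ from above; the straight-line-program encoding and the tiny example circuit are my own illustration, not from the post.

```python
# A check that the "hat" circuit, with negated inputs replaced by y_i and
# capped by A ∨ (B ∧ ·), computes L(f) on every input pair (x, y).
import itertools

n = 3
# a tiny example circuit for f(x) = (x0 ∧ ¬x1) ∨ x2; gates reference
# earlier gate positions in the list
circuit = [('x', 0), ('notx', 1), ('and', 0, 1), ('x', 2), ('or', 2, 3)]

def eval_circuit(gates, x, y=None):
    # with y given, 'notx' gates read y_i instead of ¬x_i (the hat-construction)
    vals = []
    for g in gates:
        if g[0] == 'x':
            vals.append(x[g[1]])
        elif g[0] == 'notx':
            vals.append(y[g[1]] if y is not None else 1 - x[g[1]])
        elif g[0] == 'and':
            vals.append(vals[g[1]] & vals[g[2]])
        else:
            vals.append(vals[g[1]] | vals[g[2]])
    return vals[-1]

pairs = 0
for x in itertools.product([0, 1], repeat=n):
    for y in itertools.product([0, 1], repeat=n):
        A = any(xi & yi for xi, yi in zip(x, y))
        B = all(xi | yi for xi, yi in zip(x, y))
        f = eval_circuit(circuit, x)             # the original circuit
        ghat = eval_circuit(circuit, x, y)       # negations replaced by y_i
        Lf = int(A or (B and f))                 # L(f) from its definition
        assert int(A or (B and ghat)) == Lf      # the monotone circuit agrees
        pairs += 1
print("A ∨ (B ∧ ĝ_s) computes L(f) on all", pairs, "input pairs")
```

The assertion is exactly the claim in the proof: the two sides can only disagree when $A = 0$ and $B = 1$, which forces $y = \neg x$, and there $\hat{g}_s$ agrees with $f$.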

All the above can be generalized to handle formula complexity instead of circuit complexity. There is essentially no difference in the proofs.

**Open Problems**

My question is: does the following violate either the Bait and Switch Lemma or the NaP result? Suppose that $f$ is a boolean function that you wish to show has complexity at least $t(n)$. Prove that the monotone complexity of $L(f)$ is at least $t(n)$. Then, use Theorem 3 to conclude that the general complexity of $f$ is at least $t(n) - 4n$. Note, the above is interesting even for linear $t(n)$, provided $t(n)$ is sufficiently larger than $4n$.

**Bait and Switch Lemma (BSL):** This does not seem to obviously apply. The problem is that we get a lower bound on $L(f)$, which is a monotone function. There does not appear to be a BSL for monotone functions. The problem is: how can we “add” something to a monotone function and then remove it later on?

**Natural Proofs:** This also does not seem to directly apply. The problem now is that $L(f)$ is highly non-random. It is a monotone function, but even more: it is almost always $1$. As a matter of fact, it will be equal to $0$ with probability at most $(3/4)^n$. How can a function that is essentially a constant look like a random one?
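A quick computation backs up this estimate: $L(f)(x,y)$ can be $0$ only when $A(x,y) = 0$, i.e. no index has $x_i = y_i = 1$, which happens with probability $(3/4)^n$ over a uniform pair $(x,y)$. The following sketch (my own, assuming the definition of $L$ above) checks this exhaustively for one random $f$.

```python
# Exact count of the zeros of L(f) over all pairs (x, y), compared against
# the bound Pr[L(f) = 0] <= (3/4)^n.
import itertools
import random

n = 6
points = list(itertools.product([0, 1], repeat=n))
f = {x: random.randint(0, 1) for x in points}   # a random boolean function

zeros = 0
total = 0
for x in points:
    for y in points:
        A = any(xi & yi for xi, yi in zip(x, y))
        B = all(xi | yi for xi, yi in zip(x, y))
        total += 1
        if not (A or (B and f[x])):             # L(f)(x, y) = 0
            zeros += 1

assert zeros / total <= (3 / 4) ** n            # Pr[L(f) = 0] <= (3/4)^n
print(f"Pr[L(f)=0] = {zeros/total:.4f} <= (3/4)^{n} = {(3/4)**n:.4f}")
```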

I think these questions may be easy to resolve, and that the two obstacles are indeed obstacles. But I am confused. For Arora’s view on NaP see his comments here.

*By the way, this is a milestone post for this blog. As Knuth pointed out when he reached a similar milestone, such events are important to computer scientists. So thanks for your support and interest, and I hope that together we can get to many more posts. Thanks again….dick lipton.*


While Sasha Razborov is certainly a leading complexity theorist, calling him an “expert on circuit lower bounds” strikes me as going back to 1600 and looking for the world expert in quantum mechanics 🙂

I am not sure I see where the obstacle is in the first theorem you stated (from Arora–Barak), at least with respect to proving circuit lower bounds. After all, we already know that most functions have exponential circuit complexity. So the conclusion should not deter us from proving a lower bound for a specific function. Right?

The issue is that we know that most functions are hard, but we want to prove that *this* function is hard. That is the issue. It is like knowing that a large fraction of the numbers with 1000 bits are prime. But we may wish to show that this particular number is prime. Does that help?

Thanks for the 100 excellent articles thus far! This is really, really good content — have you considered writing a book, even if it was just a collection of musings such as these? I was reading Rabin’s and Karp’s Turing Award lectures yesterday after class, and thinking it’d be great to have a literate “tour of complexity” book, even if it was just annotations of results à la *Proofs from the BOOK*.

BTW, the Dyson reactor was the Triga. Elisha Otis was at the New York World’s Fair in 1853; I was wrong about the Chicago Fair being 1896 (it was 1893).

ps Hehehe, wearing a Knuth shirt to 7530 Thursday was not based on your postscript! I’d just got my first Knuth check the day previous: http://www-cs-faculty.stanford.edu/~knuth/boss.html

I have to second Anon’s comment. We wish to show that a particular function $f$ is hard. The BSL then means that a lot of other functions are hard. But we already knew that lots of other functions are hard.

Dick, your response seems to support the idea that the BSL doesn’t hinder progress. If we wish to show that a particular number is prime, then knowing that lots of other numbers are prime certainly doesn’t hurt. It may not help, but it certainly doesn’t hurt.

I’m with Anon and Jeremy. To use your example, suppose we wanted to test if some number n is composite. Unfortunately, suppose we have a BSL-type Lemma: If n is composite, then most of the numbers between 10n and 100n are as well.

But we already knew the conclusion, so I see no reason to be worried at all. We could just go and prove that n is composite.


I guess what that result says is that one can prove a function is complex, but using any special property of that specific function is unlikely to help.

In my view, the drag-along principle should not be viewed as some kind of “absolute” barrier, but more of an observation about the kinds of lower-bound proofs we have been able to come up with.

There are several known ways to get around the drag-along principle: (1) monotone circuits, as you observe here and as noted in Razborov and Rudich’s paper; (2) diagonalization; (3) detailed combinatorial analysis, as in the best known linear lower bounds on circuit sizes of explicit functions (N. Blum, Lachish-Raz). The problem isn’t so much that we don’t know how to escape the drag-along principle at all; the problem is that for the explicit functions of greatest interest, these known ways of escaping it don’t seem very promising. The most obvious attempts to separate NP from P by diagonalization run into Baker-Gill-Solovay, and so far nobody seems to have any idea how to exploit the special combinatorial structure of SAT to prove a circuit lower bound. As for monotone circuits, there is again no definite barrier here, but what is lacking is an idea of how to proceed once (say) the P vs. NP problem is recast in this framework.

To put it another way, if you have some idea about how to prove interesting general circuit lower bounds by means of monotone circuits, then I think you should definitely pursue it, rather than assuming that there must be some barrier here that you’re missing.

I agree. But I still do not see exactly how the current barriers work against certain other approaches, such as monotone or others. I think there is a lot more to say about this issue, and I thank you for your thoughts.

If you’re interested in pushing the naturalization barrier further, here’s something to think about. Razborov and Rudich stated their result as a barrier against proving P/poly circuit lower bounds. Avi Wigderson suggested that perhaps their result could be strengthened to a barrier against proving superlinear circuit lower bounds. I can’t see how to do this, but it is true that Razborov and Rudich’s construction is easily adapted to show that if SIZE() contains -hard pseudorandom number generators, then there is no natural proof that is useful against SIZE() for any . For details see Theorem 1 of my paper on almost-natural proofs. It would be interesting to strengthen this to a barrier against superlinear circuit lower bounds.

First of all, I found this discussion of Dick’s, as well as the *new* results appearing in it, a “hammer”. Usually people think that monotone circuits and lower bounds for them are useless. Dick shows this is not the case.

About the “natural proofs barrier”. Here I have very mixed feelings. (I myself am fighting with lower bounds.) Sasha and Rudich say: to prove a lower bound you have to find a criterion that is (i) large (fulfilled by a random function), (ii) constructive (can be easily verified starting from the truth-table), and (iii) useful (satisfying it means being out, having large circuit complexity). Though (iii) is a trivial requirement, both (i) and (ii) seem to me quite artificial. Say, a random graph is not $K_4$-free, nor is $K_4$-freeness easy to test (in the log of the number of vertices). So, why should we stick to properties of random functions and/or properties that are constructive?

I have worked on lower bounds for years, and “natural proofs” haven’t frightened me at all! I hang on what Avi Wigderson once said: prove *any* lower bound!

Go ahead! Prove lower bounds! Don’t look at any “barriers”!

P.S. I just realized that the meaning of the expression “to be a hammer” is not so common. This actually is a German substitute for “amazing”. I often heard Germans saying “das ist ein Hammer” (“this is a hammer”) when seeing something more than cool. Neither English nor German is my native language, but I like this expression …

When saying “a hammer” I meant Dick’s very nice trick to make *any* boolean function monotone, that is, to reduce the lower bounds problem to the monotone case. This is different from the standard trick of taking a slice of a function $f$, that is, letting $f'(x)$ be equal to $f(x)$ for all vectors $x$ with $k$ ones, and letting $f'$ be equal to $0$ (resp. $1$) for all vectors with fewer (resp. more) ones.

We already have several arguments to prove high lower bounds for monotone circuits and formulas. Unfortunately, as they are, they work neither for slice functions nor for functions of the form $L(f)$. Still, this “not working for some functions as yet” is a much better (at least less depressing) situation than “no argument for any function”, as in the case of non-monotone circuits.

[Also, I use *word* to emphasize the word, not “word”.]

Why do the lower bounds NOT work for these slice and $L(f)$ functions?

A small typo: proving Cm(L(f)) >= t(n) should imply C(f) >= t(n) – 4n, instead you wrote Cm(L(f)) >= t(n) – 4n should imply C(f) >= t(n) .

What a nifty construction! I have always felt that negations do not play a significant role in circuit complexity, but I never thought of making this that explicit.

About the open problems: note that instead of resorting to monotone complexity, you could resort to complexity in the model of L(-)-style circuits, that is, monotone circuits with input pairs (x_i,y_i) that always return 0 when x_i and y_i are both 0 for some i and always return 1 if x_i and y_i are both 1 for some i. Obviously, bounds on monotone circuits also prove lower bounds on this restricted model. However, in the restricted model we can apply the same methods as in the general circuit model, because all the interesting stuff happens when we have inputs with their negated versions (which again shows the restriction is not essential, although hopefully fruitful).

The typo in your post reminded me of my very first class in linear algebra. We were taught by one of the major experts in operator theory, but hardly any of us was aware of that. Before turning to the blackboard for the first time, he warned us:

“Note that I will be making some mistakes in my lessons. This is intentional; I want to make you pay attention.”

He indeed included a lot of mistakes, mostly exchanging one symbol for another and obvious to fix. A few lessons later however, he did something sneaky. Unfortunately I took a second class of his covering the same topic in more depth for mathematics students simultaneously and so I have no notes on this event, so I cannot be fully accurate. In essence, in a lengthy sequence of equivalence-preserving equation transformations he injected a multiplication with the term “x-y”. This seemed to simplify the terms a lot. Then in the very last line of the proof, he set x=y and obtained 0=0, which he interpreted as a proof of the equation. While setting x=y was perfectly o.k. in the general context, it of course nulled the “proof”. As first semester students, we did not dare to question his proof, but I did want to know how to fix it. I asked my neighbour about this, just loud enough that only the professor and a few students nearby could hear it. Next lesson, he presented a similar proof (for a similar claim) where he carefully avoided setting x=y, using continuousness and limit arguments. In essence, this also showed how to fix the previous proof. Without it, we would not have understood what was actually going on. So he indeed forced us to pay attention.