
Bait and Switch: Why Are Lower Bounds so Hard?

February 12, 2009


A potential obstacle to lower bounds


Steven Rudich is the 2007 recipient of the famous Gödel Prize for his work on “natural proofs”. I will cover that work in another post. In addition, Steven has done some beautiful work in the foundations of computational complexity. Lately, he has turned his considerable talents to the education of students. He is a wonderful explainer of ideas, and I wish I could be half as good a lecturer as he is.

Steven once explained to me why he thought that lower bounds are so hard to prove. His idea is really quite simple. Imagine that you have a friend who is interested in computing some boolean function f(x_{1},\dots,x_{n}). Your job is to try to convince your friend that, no matter how they proceed, the computation of f will take many steps. Steven’s point is that one way you might argue to your friend is: “Look, any computation that computes f must make slow progress toward f.” Each computational step can only get you a little bit closer to the final goal, so the computation will take many steps. This sounds like a plausible method.

Here is a more precise way to state this method. Let us define some measure of how close a boolean function g is to f. Then, we would show that each variable x_{i} is very far from f. The key step would be to show that if g and h are far from f, then g \vee h cannot be much closer. (And the same for the other boolean operations: negation, and, and exclusive-or.) The fundamental idea is that, as one computes, the best one can do is move slowly closer to f. The rub so far has been the difficulty of finding the right way to measure the distance between boolean functions and f.
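To see the shape such an argument would take, here is a minimal sketch of my own; the measure \mu, the bound M, and the constant c are hypothetical placeholders, not anything from a specific paper. Suppose we could find a measure \mu on boolean functions with three properties:

\mu(x_{i}) \le 1 for every variable x_{i};
\mu(g \vee h) \le \max(\mu(g),\mu(h)) + c, and similarly for negation, and, and exclusive-or;
\mu(f) \ge M for the target function f.

Then any circuit computing f must contain at least (M-1)/c gates, since each gate can raise the measure by at most c. The whole difficulty, of course, lies in finding such a \mu.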

This method makes a lot of sense to me. Any computation starts with the lowly variables, which must be far away from the goal of f. Then, as each step of the computation is made, the computation gets a little closer to the goal of f. This has a nice geometric flavor: we can think of the computation as taking a “walk” from the variables toward the goal. Since f, the goal, is far away, this process takes many steps, and we have a lower bound method. Sounds great. The trouble is that it cannot work.

Bait and Switch

Steven’s brilliant insight is what I call a “bait and switch” trick. “Bait and switch” is my term, so do not blame Steven. The trick is only a heuristic at this time. However, it wipes out most of the approaches that researchers have tried to use to prove lower bounds. I think that it is a very powerful tool for seeing that an approach to proving lower bounds is doomed to fail.

Recall that a bait and switch is a form of fraud. The idea is to lure customers into a store by advertising a product at a ridiculously low price, and then attempt to get them to buy a different product. The original product may not have even existed.

We can use the same method of fraud to “shoot down” lower bound attempts. Imagine again that we want to show that f is hard to compute, say that it takes at least n^{2} steps to compute. We pick a random boolean function r that requires n^{2} steps to compute. There are lots of these. Now we compute f as follows:

First, we compute a=f \oplus r;
Then, we compute b=r;
Finally, we compute a \oplus b.

Clearly, the final answer is f since a \oplus b = (f \oplus r) \oplus r = f \oplus (r \oplus r) = f.
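As a sanity check, here is a small Python sketch of this three-step computation; the truth tables f and r below are arbitrary illustrations of my own, with r playing the role of the random masking function.

import random

n = 3
inputs = range(2 ** n)  # each integer encodes an assignment to x_1, ..., x_n

# Truth table of an arbitrary target function f (a stand-in for the "hard" function).
f = [random.randint(0, 1) for _ in inputs]
# Truth table of a random masking function r of the same arity.
r = [random.randint(0, 1) for _ in inputs]

a = [f[x] ^ r[x] for x in inputs]       # step 1: a = f XOR r, which looks unrelated to f
b = list(r)                             # step 2: b = r
result = [a[x] ^ b[x] for x in inputs]  # step 3: the "switch", a XOR b

assert result == f  # (f XOR r) XOR r = f on every input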

I think of this as a bait and switch trick: you first compute f \oplus r, which has nothing to do with f. The same goes for r. Neither, by itself, reveals anything about f. They are the bait. Then, at the very last step of the computation, you put a and b together to form f. This is the switch step.

The reason this destroys lower bound methods is that computing a and b by themselves has nothing to do with f. Nothing at all. So, intuitively, how can any measure of progress make sense? How can we say that computing a=f \oplus r is making any progress toward f? The function a is just some random boolean function of about the same complexity as f. So how can we measure slow progress toward f? Then, the switch of computing a \oplus b magically reveals the “fraud”. We have actually been computing f all along, but this is only apparent at the last step of the whole computation.

I find Steven’s idea very compelling. I think that he has hit on the core of why lower bounds are so elusive. About two years ago I was on a program committee that received a paper claiming a major lower bound result. The paper was hard to read, and while we doubted the result, we could not find any flaw. Outside experts who had been working on the paper for a while, such as Steve Cook, had no idea whether the paper was correct or not. The committee was in a bind: we could reject a potentially famous paper, or we could accept an incorrect paper. I applied the “bait and switch” rule to the paper. The author had used a type of progress measure. He argued that, as the computation proceeded, by time t the algorithm must have tested at least d(t) cases of a certain kind. Clearly, this would not pass the bait and switch trick. I felt confident in voting against the paper based on Steven’s insight: I did not see how the author’s method could avoid the bait and switch trick. We rejected the paper. Since then the author has withdrawn his claims, so we were correct.

Open Problems

The obvious open problem is to try to build Steven’s insight into a theorem. Can we prove that there is no measure approach to showing lower bounds? This seems possible to me, but I cannot yet see how to make it into a formal theorem. Note also that these comments about bait and switch apply to arithmetic computations too: we just replace f \oplus r by f - r, and all the same comments apply. For example, the work on trying to prove lower bounds on the permanent would fall to this same trick.
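Concretely, the arithmetic version of the three steps is just (my restatement of the remark above): a = f - r, then b = r, and finally a + b = (f - r) + r = f, where r is a random function (or polynomial) of about the same complexity as f, so once again neither a nor b by itself makes any visible progress toward f.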
