Old Ideas And Lower Bounds On SAT
Known lower bound methods for SAT
Fred Hennie was a professor of Electrical Engineering at MIT for many years, who did seminal work in the theory of computation. He is famous for a number of beautiful results on lower bounds, as well as work on algorithms and even discrete mathematics.
Today I want to talk about attempts over the years to get lower bounds on SAT. Of course if $P \neq NP$ is ever proved, then the following weak results will all become uninteresting. However, until that happens these are the best results we have for proving lower bounds on Turing-type machines.
The lower bounds on SAT use connections between old ideas and new ideas—between problems once studied for their own sake and problems that now are considered central.
The models we are interested in are all single-worktape Turing Machines (TM’s). Well, that is a lie: the models we are really interested in are multiple-tape Turing Machines, but no one seems to have a clue on how to get even a nonlinear lower bound on such powerful models, for SAT or any “natural” NP-complete problem. So we do what we can. Even one-tape machines are non-trivial.
Here are the three models of Turing Machines that we will discuss:

- The basic model: a single worktape, which initially holds the input.
- The basic model plus a one-way read-only input tape.
- The basic model plus a two-way read-only input tape.
Sometimes the basic model was called “off-line,” and other times the last model was called this too. This has confused me—so be careful when reading any papers in this area. For some reason the names shifted years ago, and this shift can cause lots of confusion. I wonder if the confusion might have been driven by the interest years ago in real-time computations.
A computation that computes a function in real-time must output a bit at every step of the computation. This type of computation was interesting because it is possible to actually prove lower bounds on it. For example, it is still impossible to prove non-linear lower bounds for multi-tape TM’s for natural problems that are in polynomial time. Fifty years. No progress. None. However, Mike Paterson, Michael Fischer, and Albert Meyer were able in 1974 to prove strong lower bounds, on an even more inclusive model of computation, for integer multiplication. Their proof used a method called the overlap method, created by Steve Cook and Stål Aanderaa in 1968. I think that this concept and their work is one of the great results in complexity theory—I will have to discuss it in detail in the future.
I believe that Hennie proved the first quadratic lower bound on the basic model. Note that his result also shows the bound is tight.
Theorem: Palindrome recognition requires time $\Omega(n^2)$ in the basic model.
Recall a string $x = x_1 x_2 \cdots x_n$ is a palindrome if it satisfies $x = x^R$, where $x^R$ is the reverse of the string: $x^R$ is $x_n x_{n-1} \cdots x_1$. In English the rules are relaxed a bit to allow spaces and punctuation to be ignored. Thus, the following is a “palindrome.”
“No, it never propagates if I set a gap or prevention.”
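The relaxed English convention is easy to state as code. Here is a small Python check (the function name is my own), which confirms the quoted sentence:

```python
import re

def is_relaxed_palindrome(s: str) -> bool:
    """Palindrome test under the relaxed English rules:
    ignore case, spaces, and punctuation."""
    t = re.sub(r"[^a-z]", "", s.lower())  # keep letters only
    return t == t[::-1]

print(is_relaxed_palindrome("No, it never propagates if I set a gap or prevention."))
```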
More interesting than the result itself was the proof method. Hennie used what is called a crossing sequence argument. Roughly, he showed that in testing whether a string is a palindrome, the tape head would have to move across the middle part of the tape order-$n$ times. This yields the lower bound of order $n^2$.
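Here is a sketch of the counting behind the argument, in my own notation ($C_i(w)$ for the crossing sequence at tape cell $i$, $Q$ for the state set); this is the standard version of the argument, not necessarily Hennie's exact proof:

```latex
% Hard inputs: palindromes with a padded middle.
\[
  x_w = w \, 0^{\,n} \, w^{R}, \qquad w \in \{0,1\}^{n}.
\]
% If distinct w and w' gave the same crossing sequence at a middle
% cell i, splicing the two runs would make the machine accept the
% non-palindrome w 0^n (w')^R.  So w -> C_i(w) is injective.
% There are at most (|Q|+1)^j crossing sequences of length <= j, so
\[
  |C_i(w)| \;\ge\; \frac{n}{2\log_2(|Q|+1)} \;=\; \Omega(n)
  \quad \text{for all but a vanishing fraction of the } w.
\]
% The running time is at least the total length of all crossing
% sequences, so summing over the n middle cells gives some w with
\[
  T(x_w) \;\ge\; \sum_{i=n+1}^{2n} |C_i(w)| \;=\; \Omega(n^{2}).
\]
```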
There have been many results since Hennie’s pioneering paper. Many of these papers use crossing sequence arguments, sometimes variations of the original method. I will just summarize some of the main results—I apologize if I have left out your favorite results.
- Wolfgang Maass proved a similar lower bound on the basic model with a one-way input tape.
- Dieter van Melkebeek and Ran Raz proved a pretty result on a model that is even stronger than the basic model plus two-way input. They allowed random access to the input and still proved a lower bound of order $n^c$, for some constant $c > 1$.
- Ryan Williams in this paper has recently proved related and even stronger results.
The results by van Melkebeek and Raz extend to Turing machines with one $d$-dimensional worktape, with a weaker exponent depending on $d$. But as soon as you add a second worktape, even a one-dimensional one, all bets are off. Why is that? An explanation is given by another great result of Hennie, with Richard Stearns: a Turing machine with any number of worktapes can be simulated by a two-worktape machine with only a log-factor overhead in time. Thus a lower bound like theirs, of order $n^c$ with $c > 1$, for a two-worktape machine would imply a superlinear lower bound for TM’s with any number of tapes.
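The arithmetic behind that last implication can be spelled out (a sketch, constants suppressed):

```latex
% Hennie–Stearns: any k-tape TM running in time t(n) can be
% simulated by a 2-worktape TM in time O(t(n) log t(n)):
\[
  \mathrm{DTIME}_k\big(t(n)\big) \subseteq
  \mathrm{DTIME}_2\big(O(t(n)\log t(n))\big).
\]
% So an n^c lower bound, c > 1, for two-worktape machines forces
% any k-tape machine for the same problem to run in time t(n) with
\[
  t(n)\log t(n) = \Omega(n^{c})
  \quad\Longrightarrow\quad
  t(n) = \Omega\big(n^{c}/\log n\big),
\]
% which is still superlinear: exactly the kind of bound that has
% been open for multi-tape machines for fifty years.
```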
Toward Even Stronger Bounds
The stronger results by Ryan Williams are based on the fact that SAT is NP-complete, which is something that Hennie did not have available to him back in the 1960’s. Also at the time it was interesting that relatively simple languages could be used to get lower bounds. Today we are more interested in SAT than in any artificial problem.
Ryan in his paper also gives evidence that getting much stronger lower bounds on SAT via these methods will require new ideas. Perhaps they are already out there, perhaps not.
One question I believe is open is: what happens here for probabilistic one-tape machines? I am especially interested in these since they are related to getting lower bounds on practical SAT algorithms such as GSAT. Recall that algorithms like GSAT operate as follows: they select a random assignment for the variables, and then look for an unsatisfied clause. If there are none, then they have found a satisfying assignment. If there is an unsatisfied clause, then they randomly pick such a clause and flip one of its variables.
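A minimal Python sketch of that loop, the flip-a-random-variable variant just described (the function name and the DIMACS-like literal encoding, positive/negative integers for literals, are my own):

```python
import random

def flip_sat(clauses, n_vars, max_flips=10000, seed=0):
    """Random-walk local search, as described above: start from a
    random assignment; while some clause is unsatisfied, pick such
    a clause at random and flip one of its variables.
    Literals are nonzero ints: +v means x_v, -v means NOT x_v."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:]          # satisfying assignment found
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None                        # gave up after max_flips

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = flip_sat(clauses, 3)
```

The number of flips is exactly the quantity a lower bound on this model would constrain.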
The key point is this: any algorithm of this type can be implemented efficiently on the basic model plus a random-access input. Thus, suppose van Melkebeek and Raz’s theorem could be improved to work for probabilistic TM’s. Then it would follow that any algorithm like GSAT would take at least $n^c$ flips, for some constant $c > 1$. This is a pretty small number of steps, but at least it would be a bound.
One problem is to check the details that GSAT-style algorithms can be implemented as claimed. The much harder problem is to show that the lower bounds extend to randomized TM’s.