We Believe A Lot, But Can Prove Little
The gap between what we believe and what we can prove
Stan Eisenstat is a numerical analyst who works on various aspects of real computing, and is one of the top experts in solving sparse linear systems. He is a faculty member at Yale in Computer Science now, and was there when I was at Yale in the 1970’s.
Today I want to discuss what we believe and what we know about complexity classes.
This is something I have commented on before: beliefs vs knowing. But I feel that it is worth discussing further, and I hope that you agree.
I had the pleasure, when I was at Yale, to work with Stan on a variety of projects. Stan combines a deep understanding of the mathematics of algorithms with a great feel for what is practical and what is important. He is also a master programmer—a great deal of his early work was on actual implementation of algorithms for numerical computing. He is a wizard at solving recurrences; I believe this ability is related to his understanding of differential equations.
Stan used to tell me, more than once, two statements:
- The function $\log n$ is bounded.
- The function $\log\log n$ is $1$.
He would tell me these when I ran into his office excited that I had improved some algorithm by some small amount—perhaps by a log. If the new bound was $n\log n$, improved from $n\log^2 n$, Stan's comment would be: So it went from linear to linear.
Both his statements are technically false, but both have a grain of truth to them, especially back in the 70's when Stan told me his insights. They are related to our notion of galactic algorithms that Ken Regan and I discussed before here. Even today $\log n$ is not too large, and certainly $\log\log n$ is very small. A related quote is:
$\log\log\log n$ goes to infinity with great dignity.
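To make the point concrete, here is a small Python check (my illustration, not from the discussion above) of how slowly these functions grow, even at astronomically large inputs:

```python
import math

# Iterated logarithm log* n: the number of times log2 must be applied
# before the value drops to at most 1.
def log_star(x: float) -> int:
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

# Roughly the number of atoms in the observable universe.
n = 10.0 ** 80
print(math.log2(n))             # about 266: log n is "not too large"
print(math.log2(math.log2(n)))  # about 8: log log n is "very small"
print(log_star(n))              # 5: log* n is, for all practical inputs, a constant
```

For every input that could ever arise in practice, $\log\log n$ stays in single digits and $\log^* n$ never exceeds $5$, which is exactly the grain of truth in Stan's maxims.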
Please keep these insights in mind as we examine what we know about complexity classes and what we believe.
Time vs Space
One of the oldest, one of the best, and one of the most beautiful results we know concerns the relationship between time and space. The best theorem to date is the classic theorem due to John Hopcroft, Wolfgang Paul, and Leslie Valiant:
Theorem: For any function $t(n)$,
$$\mathsf{DTIME}(t(n)) \subseteq \mathsf{DSPACE}\left(\frac{t(n)}{\log t(n)}\right).$$
This is a wonderful result—one of my all time favorite theorems in complexity theory. Most of us believe more should be true, but it is unlikely that we can prove much more soon—their result is over 30 years old. There is a technical reason to believe that the $\log t(n)$ term could be slightly improved, without a huge breakthrough, but I will not go into the details.
The proof is very clever, and if you do not know it, take a look at the famous paper. I will not go into any details of the proof, but will discuss the pebble game that the proof rests on, among other tricks and ideas. The pebble game is a solitaire game that is played on an acyclic graph $G$. The rules of the game are quite simple:
- A pebble can be placed on any vertex $v$ provided there are pebbles on all the vertices with incoming arcs to $v$.
- A pebble can be removed from any vertex at any time.
- The goal of the game is to get a pebble on a designated target vertex.
It should be clear that the game can always be won: just place pebbles on all the sources, then on all vertices whose incoming vertices are all pebbled, and so on. What makes the game interesting is that the number of pebbles is limited, so pebbles must be placed and removed in a strategic manner. The key result is that any bounded degree acyclic graph on $n$ vertices can be pebbled with
$$O\left(\frac{n}{\log n}\right)$$
pebbles: the constant is only a function of the degree of the graph.
The rationale behind the game is that you should think of placing a pebble on a vertex as a step of the computation. A step is only possible if all the inputs to the vertex are known, corresponding to those which are already pebbled. The removal rule allows computations to be forgotten. Essentially this rule corresponds to the ability of memory to be reused, since once a pebble is removed it can be used elsewhere on the graph.
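The rules above can be sketched in a few lines of Python. This is only a toy simulator of the game itself, not the Hopcroft–Paul–Valiant construction; the graph representation (a map from each vertex to its predecessors) is my own choice:

```python
# Minimal sketch of the pebble game on a DAG. Rules: a pebble may be placed
# on v only if every predecessor of v is pebbled; a pebble may be removed at
# any time. We track the maximum number of pebbles ever in use, which is the
# "space" cost of a strategy.

class PebbleGame:
    def __init__(self, preds):
        self.preds = preds          # dict: vertex -> list of predecessor vertices
        self.pebbled = set()
        self.max_used = 0

    def place(self, v):
        assert all(u in self.pebbled for u in self.preds[v]), "predecessors not pebbled"
        self.pebbled.add(v)
        self.max_used = max(self.max_used, len(self.pebbled))

    def remove(self, v):
        self.pebbled.discard(v)

# A chain a -> b -> c can be pebbled with only 2 pebbles by removing as we go.
game = PebbleGame({"a": [], "b": ["a"], "c": ["b"]})
game.place("a")
game.place("b")
game.remove("a")   # the removal rule: this pebble (memory cell) is now reusable
game.place("c")
print(game.max_used)  # 2
```

The removal step is the whole point: a chain of length $n$ needs only $2$ pebbles, whereas a strategy that never removes would use $n$.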
Bob Tarjan and Thomas Lengauer in 1982 proved lower bounds on the number of pebbles needed to win on a bounded degree graph. Their paper is titled, “Asymptotically tight bounds on time-space trade-offs in a pebble game.”
Stan had an early practical result around the time the time-space separation theorem was proved. He had an algorithm that “threw away” information it had computed, and re-computed that information later when it was needed again. His algorithm, which was joint with Martin Schultz, essentially played the pebbling game.
I recall talking to them about their result and telling them it was related to a recent major theory result. The reason they needed to throw away information was that the limiting resource for their algorithm was space not time. They discovered that the algorithm ran faster if rather than writing to disk and storing the information, they simply threw it away and re-computed as needed.
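The throw-away-and-recompute idea can be illustrated with a toy checkpointing scheme. This is my own sketch, not the Eisenstat–Schultz algorithm: we need values $x_0,\dots,x_n$ of an expensive recurrence at a few query points, and instead of storing all $n$ values we keep only $\sqrt{n}$-spaced checkpoints and recompute forward from the nearest one on demand:

```python
import math

def step(x):
    # Stand-in for one expensive update of the recurrence.
    return (3 * x + 1) % (10**9 + 7)

def build_checkpoints(x0, n, gap):
    # Store only every gap-th value: O(n/gap) space instead of O(n).
    cps = {0: x0}
    x = x0
    for i in range(1, n + 1):
        x = step(x)
        if i % gap == 0:
            cps[i] = x
    return cps

def query(cps, gap, i):
    # Recompute at most gap steps from the nearest stored checkpoint.
    base = (i // gap) * gap
    x = cps[base]
    for _ in range(base, i):
        x = step(x)
    return x

n = 10_000
gap = math.isqrt(n)                 # sqrt(n) space, sqrt(n) recompute per query
cps = build_checkpoints(7, n, gap)
val = query(cps, gap, 1234)         # redoes only 34 steps instead of storing 10,000 values
```

The trade is exactly the one Stan and Martin made: a little extra time spent recomputing buys a large reduction in space, and when the stored copy would have to go to disk, recomputing can even be faster.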
Deterministic vs Nondeterministic
The belief that $\mathsf{P}$ is not equal to $\mathsf{NP}$ shows that most believe that nondeterministic time is not equal to deterministic time for any time bound $t(n) \ge n$. This is a strong belief, one that may be true, but the best we know is vastly far from this.
The best theorem to date is the classic theorem due to Wolfgang Paul, Nick Pippenger, Endre Szemerédi and William Trotter:

Theorem: $\mathsf{NTIME}(n) \neq \mathsf{DTIME}(n)$.
Rahul Santhanam, more recently, showed an improvement to their classic result:

Theorem: $\mathsf{NTIME}(n\sqrt{\log^* n}) \neq \mathsf{DTIME}(n\sqrt{\log^* n})$.
Recall Stan’s comments about the growth of functions. Forget the square root; the function $\log^* n$ by itself grows to infinity very, very slowly. I am not sure what Stan would say about $\log^* n$: is it exactly $5$, forever? From his comments one might pooh-pooh the result as being just fuss about notions of “linear,” but this only strengthens the technical appreciation that one must have for it. That it took years to be proved also shows how deep the mysteries of nondeterminism are.
One of the strange properties of both these theorems is that they do not extend to arbitrary running times. One of the fundamental tricks in complexity theory is the notion of padding, which often allows a result of the form
$$\mathsf{C}(T(n)) \neq \mathsf{D}(T(n)) \implies \mathsf{C}(t(n)) \neq \mathsf{D}(t(n)),$$
for time bounds $t(n) \le T(n)$, for example. Here $\mathsf{C}$ and $\mathsf{D}$ are complexity classes. That padding and all known tricks do not work here is an indication of how deep the above two results are, and in particular of the importance of Santhanam’s contribution.
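For readers who have not seen a padding argument, here is the standard textbook shape of one (a generic sketch, not specific to the theorems above):

```latex
\textbf{Padding sketch.} Suppose $\mathsf{NTIME}(n) \subseteq \mathsf{DTIME}(n)$.
Given $L \in \mathsf{NTIME}(n^2)$, define the padded language
\[
  \mathrm{pad}(L) \;=\; \{\, x\,0^{|x|^2 - |x|} \;:\; x \in L \,\}.
\]
On inputs of length $m = |x|^2$ the padded language can be decided
nondeterministically in time $O(m)$, so $\mathrm{pad}(L) \in \mathsf{NTIME}(n)$;
by hypothesis $\mathrm{pad}(L) \in \mathsf{DTIME}(n)$, and stripping the padding
gives $L \in \mathsf{DTIME}(n^2)$. Hence
\[
  \mathsf{NTIME}(n) \subseteq \mathsf{DTIME}(n)
  \;\Longrightarrow\;
  \mathsf{NTIME}(n^2) \subseteq \mathsf{DTIME}(n^2).
\]
```

Note the direction: the contrapositive translates a separation at $n^2$ *down* to one at $n$, not the other way. This is why a separation proved at essentially linear time does not automatically lift to larger time bounds.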
Quantum vs Nondeterministic
Atri Rudra, Ken, and I have been looking at what is provable about quantum complexity, mainly the class $\mathsf{BQP}$. This is the complexity class of sets accepted by polynomial size quantum circuits, with two-sided bounded error. The class is the quantum analogue of $\mathsf{BPP}$.
Most believe that quantum is much stronger than classical computation. There is ample evidence of its power; perhaps the main one is Peter Shor’s famous result that factoring is in $\mathsf{BQP}$. However, the best concrete result that we have been able to come up with, the best where there is a proof, is the following theorem:
Theorem: At least one of the following is true:
- The class $\mathsf{NEXP}$ is not contained in $\mathsf{BQP}$.
- The class $\mathsf{BQP}$ is not contained in circuits of polynomial size.
It is weak because it is a disjunction of two possible results—both are probably true. But who knows. A long-time friend, Forbes Lewis, called this type of result: “truth or consequences.” The point is that we can also view this theorem as saying: If $\mathsf{NEXP}$ is in $\mathsf{BQP}$, then the class $\mathsf{BQP}$ is not contained in circuits of polynomial size.
There are several interesting open questions. Of course the fundamental one is to improve any of these or related results about complexity classes.
Time vs Space: Can the $t(n)/\log t(n)$ bound be improved? The graphs that Bob and Thomas construct show that the $O(n/\log n)$ pebbling bound cannot be improved. However, I believe that there is a small loophole. The worst case graphs that they construct may not arise from Turing machine computations, and this would allow an improvement to be possible.
Deterministic vs Nondeterministic: Can this bound be improved? I find this one of those mathematical embarrassments I have talked about before here. How can we know the separation only for $n$ and a slightly larger function, and yet not be able to prove that the separation occurs for any larger time bound?
Quantum vs Nondeterministic: Is the stated theorem the best known? We may have missed something—if this is the case we apologize in advance. If it is the best known, can it be improved?