Savitch’s Theorem: Opportunities Missed and Found

Walter Savitch has done seminal work in computational complexity theory and has made many important contributions. For all his wonderful work, he is undoubtedly best known for his beautiful result on the relationship between deterministic space and non-deterministic space. Probably everyone knows his famous theorem:

Theorem: For any ${s(n) \ge \log n}$,

$\displaystyle \mathsf{NSPACE}(s(n)) \subseteq \mathsf{DSPACE}(s(n)^{2}).$

There are countless descriptions of the proof of his famous theorem: in books, on the net, and on blogs. So if you somehow do not know the proof, check one of them out. My goal today is to talk about a missed opportunity, about seeing the opportunity, and about the future.
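The heart of the proof is a midpoint recursion: a machine can go from configuration ${u}$ to configuration ${v}$ in at most ${2^{k}}$ steps if and only if some midpoint configuration ${w}$ splits the path into two halves of at most ${2^{k-1}}$ steps each, and trying all midpoints deterministically reuses the same space at every recursion level. Here is a minimal sketch of that idea in Python, run on an explicit directed graph rather than on a configuration graph; the graph, the names, and the choice of step bound are illustrative assumptions, not Savitch's actual machine model:

```python
import math

def reach(nodes, adj, u, v, k):
    """Can u reach v in at most 2**k steps of the directed graph adj?"""
    # Base case: a path of length at most 1.
    if k == 0:
        return u == v or v in adj.get(u, set())
    # Midpoint trick: u reaches v in <= 2^k steps iff some w splits the
    # path into two halves of <= 2^(k-1) steps each.  Each recursion level
    # remembers only one candidate midpoint, so the space used is roughly
    # (recursion depth k) x (bits to name one node) -- the log^2 bound.
    return any(reach(nodes, adj, u, w, k - 1) and
               reach(nodes, adj, w, v, k - 1)
               for w in nodes)

# Tiny usage example on a 4-node path graph a -> b -> c -> d.
nodes = {"a", "b", "c", "d"}
adj = {"a": {"b"}, "b": {"c"}, "c": {"d"}}
k = math.ceil(math.log2(len(nodes)))  # simple paths take at most |nodes| steps
print(reach(nodes, adj, "a", "d", k))  # True
print(reach(nodes, adj, "d", "a", k))  # False
```

Of course this toy version spends time, not space, trying midpoints; the point of the real proof is that, on a configuration graph of size ${2^{O(s(n))}}$, the recursion stack is the only storage needed.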

Language Theory

My understanding of how Savitch came to prove his theorem is based on talking with the main players over the years, and looking carefully at the literature. I could have some details wrong, but I think the story is too good to not be true.

Savitch was a graduate student working for Steve Cook when he proved his theorem; actually, the theorem was his thesis. Where did the idea for his theorem come from? Did he create it out of thin air? Looking back, the argument is so simple, so elegant, so clearly from “the book” that it seems hard to believe that it was not always known. But it was not known before 1970, when he proved it.

He did not create it out of thin air. In the 1960’s, one of the main focuses of people working on what we call complexity theory today was language theory. This was the study of regular languages, context-free languages, and context-sensitive languages. These correspond, of course, to finite automata, to non-deterministic pushdown automata, and to non-deterministic linear bounded machines. The rationale for this interest was driven by Noam Chomsky and Marcel-Paul Schützenberger, especially Chomsky, who was interested in a theory of “natural languages”. At the same time, programming languages, such as Algol 60, were beginning to appear that needed a theory to support compiler construction.

Language theory was an important source of questions for theorists. One, the so-called LBA problem, which I have already discussed in an earlier post, was raised in the 1960’s and took almost thirty years to get solved. Many other problems from language theory played an important role in shaping theory’s direction at the time.

In 1965, at the IFIP Congress in New York City, an important paper was presented: Memory bounds for recognition of context-free and context-sensitive languages by Philip Lewis, Juris Hartmanis, and Richard Stearns (LHS).

IFIP stands for “International Federation for Information Processing” and once was one of the top international conferences. Today there are so many other international conferences that it is probably fair to say that it is less important than it once was. In my post on Karp I pointed out that I first met Dick at the 1974 IFIP Congress, so for me IFIP will always bring back good memories. When Lewis, Hartmanis, and Stearns presented their paper, IFIP was one of the top conferences.

They sketched the proof of a number of theorems in their paper, but the one that we are concerned with was the following:

Theorem: If ${L}$ is a context-free language, then

$\displaystyle L \in \mathsf{DSPACE}( \log^{2} n).$

This is a hard theorem; I knew the proof once, but could not explain it today without looking it up. The “miracle” is that they must show how to simulate a pushdown that can hold as many as ${n}$ symbols using only ${\log^{2} n}$ space. This is not easy.

Cook Makes a Phone Call

Savitch realized that the LHS paper contained a fundamental idea, an idea that was very clever, yet an idea that even the authors did not completely understand. In proving that every context-free language lies in ${\mathsf{DSPACE}( \log^{2} n)}$, LHS had used a clever method for keeping track of the pushdown states. Essentially, Savitch saw that, when separated from context-free languages, what they were doing was exactly what he needed to prove his theorem. Exactly. The LHS paper had the key idea, but by applying it only to the narrower question of the space complexity of context-free languages they missed a huge opportunity. They came very close in 1965 to proving Savitch’s theorem. We should not feel too bad: two of them, Hartmanis and Stearns, went on to win the Turing Award for other work. But they were close.

Apparently, once Savitch and his advisor Cook realized that they could use the method of the LHS paper–not the theorem itself–they phoned Hartmanis. They told Juris that they thought his 1965 result was terrific–Juris later told me that he was always happy to hear that someone liked one of his papers. Who does not? They then asked some technical questions, and then rang off. In a short while Hartmanis got to see their paper. He then realized what they were doing, and realized the great result that he almost got. Oh well. We cannot get them all.

A Review

In issue 3 of the 1972 volume of the Journal of Symbolic Logic there is a telling review of the LHS paper by Walter Savitch. In the same issue Alonzo Church also has a review of another paper; I point this out to show how long ago this was. Here is the review (it is available via JSTOR if your library has access):

This paper summarizes the known results on the tape complexity of context-free and context-sensitive languages. It also presents a number of new results in the area. The notions of tape complexity used are those introduced in the article reviewed above. The principal new results deal with context-free languages. A number of specific context-free languages are presented and their exact location in the tape complexity hierarchy is determined. It is shown that all context-free languages are ${(\log n)^{2}}$-tape recognizable. The proof of the ${(\log n)^{2}}$-tape bound is quite intricate, and this article gives only a sketch of the proof. The proof is, therefore, hard to read; however, the techniques are interesting and useful. The reader with perseverance will be rewarded for his effort. Walter Savitch

I love the end of the review: The proof is, therefore, hard to read; however, the techniques are interesting and useful. The reader with perseverance will be rewarded for his effort.

Savitch certainly was rewarded for his effort: a thesis, and a great theorem.

Open Problems

There are two points to make. First, sometimes the proof methods of a theorem are more important than the theorem. That means that we must read the proof itself to see if the method of proof can be used elsewhere. Many times the method of proof is standard and the actual theorem is the key advance. However, as Savitch’s theorem shows, sometimes the method is what we must understand if we are to make further progress.

Second, one of the biggest embarrassments of complexity theory, in my view, is the fact that Savitch’s theorem has not been improved in almost ${40}$ years. Nor has anyone proved that it is “tight”. This is one of the great open questions of complexity theory.

I have thought about how to improve his theorem. I find it hard to believe that it is the best result possible. But I have no good ideas to suggest; I have no approach; I am lost.

6 Comments
April 5, 2009 5:14 pm

Why is it embarrassing to complexity theory that Savitch’s theorem has not been improved? It is not that people have not tried hard. Reingold’s breakthrough was progress, no?

April 5, 2009 5:47 pm

I plan to talk about the undirected case, i.e. Reingold’s great result. But no improvement in almost 40 years? Even before Reingold there were many partial steps. But for the directed case there seem to be no ideas. None.

2. April 8, 2009 9:28 pm

Thanks so much for providing these fascinating glimpses into the early days of Complexity theory. These stories deserve to be told, on both the mathematical and human levels.

On that note, something you might explore more in further posts is paradigm shifts. In particular, I’m interested in how (it seems to me) American TCS came overall to shift away from a language/grammar/automaton focus to the resources and questions that we now consider of primary interest to Complexity.

Clearly there are some important papers we can all identify; but there are probably others many of us will miss (not to mention some subtle connections as described in this post); and even a detailed genealogy of papers doesn’t fully explain what it was like to be a researcher navigating one’s career amidst these developments. Your discussion of some of the early ‘buzz’ around randomized algorithms was a nice portrait of one such turning point.

3. November 26, 2018 5:09 pm

Thanks a lot for the interesting read. There are some results for special cases like mangrove graphs from Allender and Lange 1998 (https://link.springer.com/article/10.1007/s002240000102), which use only ${\log^{2} n/\log\log n}$ space.