Space Is Easier Than Time
We really understand the complexity theory of space
Anil Ananthaswamy is the author of the book The Edge of Physics. He also consults for Britain’s New Scientist weekly magazine, and wrote an interesting article on space and time entitled, “Space vs time: One has to go—but which?” He started the article with this:
TO ISAAC NEWTON, space was the “sensorium of God,” the organ through which the deity surveyed His creation. It was absolute, unchanging, infinite. Flowing through it “equably without regard to anything external,” as Newton wrote in his great physical treatise, was another similarly absolute heavenly creation: time.
Today I wish to talk about what we know about space, not space as viewed by Newton, but space as used in complexity theory. For us complexity theorists, space is a measure of the cost of a computation of a Turing Machine (TM). It is the number of squares that a TM uses during a computation. It is not an “organ,” nor is it absolute and unchanging; it is simply the number of different cells written to by the computation. Meanwhile time is the number of steps that a TM takes during a computation.
I am currently teaching complexity theory at Tech, and realized that there is a huge gap between our understanding of space and time. I am starting to wonder if there should be a textbook that makes this viewpoint clearer, but that is another whole project.
Let’s just look at the contrast between what we know about space and time.
Models: For space the exact TM model used is relatively unimportant. The key is to have an input tape that is read-only and one working tape of length $S(n)$ to handle inputs of length $n$. In the remaining comments we will not worry about the function $S(n)$. It needs to grow at least as fast as $\log n$ to avoid some technical issues, and in some cases one needs $S(n)$ to be "constructible," meaning that a TM on length-$n$ inputs can mark the $S(n)$-th cell as a boundary using $O(S(n))$ space. But just think of $S(n)$ as some reasonable function.
With time, however, it is necessary to have more than one working tape, and whether more tapes beyond two help save you time is unknown. Worse yet, it is possible that having two-dimensional planes or higher-dimensional spaces, or giving the tape a tree structure, or having some “random access” mechanism, allows you to compute more functions in less time, but nobody knows. For space complexity alone, all of these models are equivalent; 3D is no more powerful than 2D or 1D. In this sense computational space is already “holographic.”
Simulations: For space we have simple simulations of one model by another. For example, a space bounded TM with a hundred tapes can be simulated in the same space by another TM with just one tape. The proof of this is quite simple.
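The idea behind the one-tape simulation is simply to run the tapes in parallel as "tracks": each cell of the single tape holds a tuple with one symbol per original tape, plus a flag recording where each head sits. Here is a minimal Python sketch of that packing (the function name and representation are my own, purely for illustration):

```python
# Sketch: encode k work tapes as one tape whose cells are k-tuples of
# (symbol, head_here) pairs. The packed tape is no longer than the
# longest original tape, so the space bound is unchanged.

def pack_tapes(tapes, heads):
    """Combine k tapes (lists of symbols) and k head positions into one
    tape; '_' stands for a blank cell on a shorter tape."""
    length = max(len(t) for t in tapes)
    packed = []
    for i in range(length):
        cell = tuple(
            (t[i] if i < len(t) else '_', heads[j] == i)
            for j, t in enumerate(tapes)
        )
        packed.append(cell)
    return packed
```

The simulating machine works over the larger tuple alphabet, which costs nothing in space; only the alphabet and state set grow.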
For deterministic time we have the beautiful amortized method of Frederick Hennie and Richard Stearns, which shows that any number of tapes can be simulated by two tapes, with the time going from $T(n)$ to $O(T(n)\log T(n))$. This is one of the more complex simulations in complexity theory, and it uses the notion of amortized cost years before it was invented in the data-structures community. The latter was done by Bob Tarjan; see here and here.
Diagonalization: For space we know that more space yields new languages, even if the growth is tiny.
Deterministic space $S(n)$ is more powerful than deterministic space $s(n)$ provided that $s(n) = o(S(n))$.
For deterministic time, and even for nondeterministic time, there are no separation theorems that are even close to this. For example: we cannot prove that deterministic time $O(n \log n)$ is more powerful than deterministic time $O(n)$.
Determinism vs Nondeterminism: For space we know the following beautiful theorem due to Walter Savitch:
Nondeterminism helps at most a polynomial for space bounds.
Of course, he actually proved that any nondeterministic TM that uses space $S(n)$ can be simulated by a deterministic one that uses space $O(S(n)^2)$. This amazing result shows that "guessing" for space is not very powerful. Indeed, consider:
The corresponding theorem for time would be that $\mathsf{P} = \mathsf{NP}$.
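The heart of Savitch's proof is a recursive reachability test on the configuration graph: the machine can get from configuration $u$ to $v$ in at most $2^k$ steps exactly when some midpoint $w$ splits the trip into two halves of at most $2^{k-1}$ steps each. A hedged Python sketch of that recursion, run on an ordinary directed graph as a stand-in for the configuration graph:

```python
# The midpoint recursion behind Savitch's theorem. Recursion depth is k,
# and each stack frame remembers only a few vertex names, which is the
# source of the O(S(n)^2) deterministic space bound.

def reach(adj, u, v, k):
    """True iff v is reachable from u in at most 2**k steps, where adj
    maps each vertex to its set of out-neighbors."""
    if u == v:
        return True
    if k == 0:
        return v in adj[u]
    # Guess-and-verify a midpoint w deterministically, by trying them all.
    return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
               for w in adj)
```

Trying all midpoints makes the deterministic version slow (the time is roughly $n^{O(\log n)}$ for graph reachability), but the point is that it is frugal with space.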
It is even worse. For space it is still open whether nondeterminism helps at all. That is, it might be that a nondeterministic TM with space $S(n)$ could be simulated by a deterministic one with the same space.
Boolean Operations: For space we know the following theorem that was found independently by Neil Immerman and Róbert Szelepcsényi:
Nondeterministic space is closed under all boolean operations.
It is easy to show this for union and intersection, but the deep result is that it also works for complement.
This terrific result shows that for space, guessing a path in a maze requires the same amount of space as proving there is no path. I discussed this ages ago in detail here. Whereas:
The corresponding theorem for time would be that $\mathsf{NP} = \mathsf{coNP}$.
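The engine of the Immerman–Szelepcsényi proof is inductive counting: if you know the exact number $c_{i-1}$ of vertices reachable within $i-1$ steps, a nondeterministic machine can certify both reachability and non-reachability at distance $i$. The deterministic Python sketch below only illustrates the counting recurrence itself; the actual proof replaces the explicit sets with guessed paths, so that only the counts need to be stored in logarithmic space:

```python
# Inductive counting: compute c[i] = |{v : v reachable from s in <= i
# steps}| stage by stage. A vertex is within i steps iff it was within
# i-1 steps or has an in-neighbor that was.

def reachable_counts(adj, s):
    """Return the list c with c[i] = number of vertices reachable from s
    in at most i steps, for i = 0 .. n-1."""
    n = len(adj)
    reached = {s}
    counts = [1]
    for _ in range(1, n):
        reached = reached | {v for v in adj
                             if any(v in adj[u] for u in reached)}
        counts.append(len(reached))
    return counts
```

Knowing the exact count $c_{i-1}$ is the trick: a nondeterministic verifier can enumerate all vertices, guess a short path to each one it claims is reachable, and if exactly $c_{i-1}$ claims check out, it knows every unclaimed vertex is genuinely unreachable.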
Why is space so much “easier” than time? Will we ever understand time as well as space? I could go on and add even more examples, but will stop with the above.
Finally, do you know that Frederick Hennie was the PhD advisor of the famous computer architect who (with James Rottsolk) co-founded what is now Cray, Inc.? No, that person was not Seymour Cray.