
Space Is Easier Than Time

September 9, 2013


We really understand the complexity theory of space


Anil Ananthaswamy is the author of the book The Edge of Physics. He also consults for Britain’s New Scientist weekly magazine, and wrote an interesting article on space and time entitled, “Space vs time: One has to go—but which?” He started the article with this:

TO ISAAC NEWTON, space was the “sensorium of God,” the organ through which the deity surveyed His creation. It was absolute, unchanging, infinite. Flowing through it “equably without regard to anything external,” as Newton wrote in his great physical treatise, was another similarly absolute heavenly creation: time.

Today I wish to talk about what we know about space, not space as viewed by Newton, but space as used in complexity theory. For us complexity theorists, space is a measure of the cost of a computation of a Turing Machine (TM). It is the number of squares that a TM uses during a computation. It is not an “organ,” nor is it absolute and unchanging; it is simply the number of different cells written to by the computation. Meanwhile time is the number of steps that a TM takes during a computation.
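To make the two cost measures concrete, here is a toy Python sketch; it is only an illustration of the definitions, and the little machine and its transition table are invented for the example. The machine sweeps to the right end of its input and back, so it revisits cells: its time (the step count) exceeds its space (the count of cells written).

```python
# Toy sketch of the two cost measures for a one-tape machine:
#   time  = number of steps taken,
#   space = number of distinct cells the head writes during the run.

def run_tm(delta, input_str, start, halting, blank="_"):
    """delta[(state, symbol)] -> (state', symbol', move), with move in {-1, +1}."""
    tape = dict(enumerate(input_str))        # sparse tape, blank elsewhere
    state, head = start, 0
    steps, written = 0, set()
    while state not in halting:
        state, sym, move = delta[(state, tape.get(head, blank))]
        tape[head] = sym                     # every step writes a symbol (maybe unchanged)
        written.add(head)
        head += move
        steps += 1
    return steps, len(written)               # (time, space)

# Invented example machine: sweep right to the first blank, then sweep back left.
delta = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("left",  "_", -1),
    ("left",  "0"): ("left",  "0", -1),
    ("left",  "1"): ("left",  "1", -1),
    ("left",  "_"): ("halt",  "_", -1),
}
print(run_tm(delta, "10110", "right", {"halt"}))   # (12, 7): 12 steps, 7 cells written
```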

I am currently teaching complexity theory at Tech, and realized that there is a huge gap between our understanding of space and time. I am starting to wonder if there should be a textbook that makes this viewpoint clearer, but that is another whole project.

Let’s just look at the contrast between what we know about space and time.

The Differences

Models: For space the exact TM model used is relatively unimportant. The key is to have an input tape that is read-only and one working tape of length {S(n)} to handle inputs of length {n}. In the remaining comments we will not worry about the function {S(n)}. It needs to grow at least as fast as {\log n} to avoid some technical issues, and in some cases one needs {S(n)} to be “constructible,” meaning that a TM on length-{n} inputs can mark the {S(n)}-th cell as a boundary using {S(n)} space. But just think of {S(n)} as some reasonable function. A small sketch of why the read-only input tape matters appears right after this item.

With time, however, it is necessary to have more than one working tape, and whether tapes beyond two save any time is unknown. Worse yet, it is possible that two-dimensional or higher-dimensional tapes, tree-structured tapes, or some “random access” mechanism let you solve more problems within a given time bound, but nobody knows. For space complexity alone, all of these models are equivalent; 3D is no more powerful than 2D or 1D. In this sense computational space is already “holographic.”
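As promised, here is a toy Python sketch of why the read-only input tape matters; the routine and its names are mine, purely for illustration. It checks whether a binary string has equally many 0s and 1s while writing only to two binary counters, so the “work tape” holds about {2\log n} cells for an input of {n} cells. Such sublinear space bounds only make sense because the input tape is read-only and is not charged to the space.

```python
# Toy illustration of sublinear space in the read-only-input model:
# the input is only read, and all writing happens on two small binary counters.

def increment(bits):
    """In-place binary increment of a little-endian bit list (a work-tape track)."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1

def balanced(input_tape):
    """Does the input have equally many 0s and 1s?  Returns (answer, work cells used)."""
    zeros, ones = [], []                     # the two work-tape counters
    for ch in input_tape:                    # the input tape is never written
        increment(zeros if ch == "0" else ones)
    return zeros == ones, len(zeros) + len(ones)

print(balanced("0110100101"))                # (True, 6): 10 input cells, 6 work cells
```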


Simulations: For space we have simple simulations of one model by another. For example, a space-bounded TM with a hundred tapes can be simulated in the same space by another TM with just one working tape. The proof of this is quite simple; a sketch of the idea appears after this item.

For deterministic time we have the beautiful amortized method of Frederick Hennie and Richard Stearns, which shows that any number of tapes can be simulated by two tapes while the time goes from {T(n)} to {O(T(n)\log T(n))}. This is one of the more complex simulations in complexity theory, and it used the notion of amortized cost years before that notion was formally introduced in the data-structure community. The latter was done by Bob Tarjan—see here and here.
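Here is the promised sketch of the one-tape space simulation; it is my own toy rendering of the standard “tracks” idea, not a formal proof. The hundred work tapes become a hundred tracks of a single tape: each cell of the single tape holds a tuple of symbols, one per original tape (in the real construction, head-marker bits ride along in extra tracks as well). The number of cells used is then the maximum extent over the tapes rather than the sum, so an {S(n)}-space machine stays in {O(S(n))} cells; the larger alphabet only affects the constant.

```python
# Toy "tracks" encoding: k work tapes stored as one tape whose cells hold k-tuples.

class TrackedTapes:
    def __init__(self, k, blank="_"):
        self.k, self.blank = k, blank
        self.cells = []            # one tape: cells[j] is a list of k track symbols
        self.heads = [0] * k       # in the real TM these markers live in extra tracks

    def _ensure(self, j):
        while len(self.cells) <= j:
            self.cells.append([self.blank] * self.k)

    def read(self, i):
        self._ensure(self.heads[i])
        return self.cells[self.heads[i]][i]

    def write(self, i, sym, move):
        self._ensure(self.heads[i])
        self.cells[self.heads[i]][i] = sym
        self.heads[i] = max(0, self.heads[i] + move)

    def space(self):
        return len(self.cells)     # = max extent over the k simulated tapes

# Three simulated tapes that use 8, 5, and 2 cells respectively.
t = TrackedTapes(3)
for _ in range(8): t.write(0, "a", +1)
for _ in range(5): t.write(1, "b", +1)
for _ in range(2): t.write(2, "c", +1)
print(t.space())                   # 8, the maximum, not 8 + 5 + 2
```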


Diagonalization: For space we know that more space yields new languages, even if the growth is tiny.

Theorem:
Deterministic space {S_{1}(n)} is more powerful than {S_{2}(n)} provided that

\displaystyle  \lim_{n \rightarrow \infty} \frac{S_{2}(n)}{S_{1}(n)} = 0.

For deterministic time, and even for nondeterministic time, there are no separation theorems that are even close to this. For example: we cannot prove that deterministic time {n\log\log n} is better than {n}.
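To make the contrast concrete, take {S_{2}(n) = \log n} and {S_{1}(n) = \log n \cdot \log\log n}. Then

\displaystyle  \lim_{n \rightarrow \infty} \frac{S_{2}(n)}{S_{1}(n)} = \lim_{n \rightarrow \infty} \frac{1}{\log\log n} = 0,

so by the theorem there is a language decidable in deterministic space {\log n \cdot \log\log n} but not in space {\log n}. The analogous statement for time, that time {n\log\log n} decides something that time {n} cannot, is exactly the kind of separation we do not know how to prove.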


Determinism vs Nondeterminism: For space we know the following beautiful theorem due to Walter Savitch:

Theorem:
Nondeterminism helps at most a polynomial for space bounds.

Of course, he actually proved that any nondeterministic TM that uses {S(n)} space can be simulated by a deterministic one that uses space {O(S(n)^{2})}. This amazing result shows that “guessing” for space is not very powerful. Indeed, consider:

The corresponding theorem for time would be that {\mathsf{P}=\mathsf{NP}}.

It is even worse. For space it is still open whether nondeterminism helps at all. That is, it might be that a nondeterministic TM with space {S(n)} could be simulated by a deterministic one with the same space.
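The engine behind Savitch's simulation is a recursive “middle-first” reachability test on the configuration graph of the nondeterministic machine. Here is a toy Python sketch of that recursion over an explicit graph standing in for the configuration graph; it is only an illustration of the idea, not the actual machinery. Whether {u} can reach {v} in at most {2^{k}} steps reduces to asking, for some midpoint {w}, whether {u} reaches {w} and {w} reaches {v} each in at most {2^{k-1}} steps. Each level of the recursion needs to remember only the current triple, of {O(S(n))} bits, and the depth is {O(S(n))}, which is where the {O(S(n)^{2})} space bound comes from.

```python
# Sketch of Savitch's recursion on an explicit graph (a stand-in for the
# configuration graph of an S(n)-space nondeterministic machine).

def reach(graph, u, v, k):
    """Is there a path of length at most 2**k from u to v?"""
    if u == v:
        return True
    if k == 0:
        return v in graph[u]                     # a single step
    # Try every vertex as the midpoint of a path of length <= 2**k.
    return any(reach(graph, u, w, k - 1) and reach(graph, w, v, k - 1)
               for w in graph)

# Toy graph: adjacency sets.
G = {1: {2}, 2: {3}, 3: {4}, 4: set(), 5: {1}}
k = (len(G) - 1).bit_length()                    # 2**k >= n - 1, enough for any simple path
print(reach(G, 1, 4, k))                         # True  (1 -> 2 -> 3 -> 4)
print(reach(G, 4, 1, k))                         # False (no path back)
```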


Boolean Operations: For space we know the following theorem that was found independently by Neil Immerman and Róbert Szelepcsényi:

Theorem:
Nondeterministic space {S(n)} is closed under all boolean operations.

It is easy to show this for union and intersection, but the deep result is that it also works for complement.
This terrific result shows that for space, guessing a path in a maze requires the same amount of space as proving there is no path. I discussed this ages ago in detail here. Whereas:

The corresponding theorem for time would be that {\mathsf{NP}=\mathsf{co}\text{-}\mathsf{NP}}.
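To give a feel for the Immerman–Szelepcsényi argument, here is a toy Python sketch of its central trick: a count serving as a certificate. This is only my illustration of the structure, not the log-space algorithm itself. If a verifier is handed the true number {c} of vertices reachable from {s}, then non-reachability of {t} becomes checkable: re-discover {c} distinct reachable vertices and observe that {t} is not among them. In the real theorem both the re-discovery of reachable vertices (by guessing paths) and the computation of {c} itself (by inductive counting) are done nondeterministically in {O(\log n)} space; the sketch finds paths by brute force, so it shows only the certificate structure, not the space bound.

```python
# Counting-as-certificate sketch for non-reachability.

def has_path(graph, s, v, max_len):
    """Brute-force stand-in for the NL machine's guessed path of length <= max_len."""
    frontier = {s}
    for _ in range(max_len):
        if v in frontier:
            return True
        frontier |= {w for u in frontier for w in graph[u]}
    return v in frontier

def certify_unreachable(graph, s, t, c):
    """Given the true count c = |{v reachable from s}|, verify that t is unreachable."""
    n = len(graph)
    found = 0
    for v in graph:
        if has_path(graph, s, v, n - 1):    # the "guess a path to v" step
            if v == t:
                return False                # t turned out to be reachable after all
            found += 1
    return found == c                       # every reachable vertex seen, t not among them

G = {1: {2}, 2: {3}, 3: set(), 4: {1}}
print(certify_unreachable(G, 1, 4, c=3))    # True: {1, 2, 3} are reachable, 4 is not
```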


Open Problems

Why is space so much “easier” than time? Will we ever understand time as well as space? I could go on and add even more examples, but will stop with the above.

Finally, do you know that Frederick Hennie was the PhD advisor of the famous computer architect who (with James Rottsolk) co-founded what is now Cray, Inc.? No, that person was not Seymour Cray.

16 Comments
  1. September 10, 2013 2:48 am

    The link to the proof that a space-bounded TM with a hundred tapes can be simulated in the same space by a one-tape TM seems to be broken. Nice post. :-)

  2. September 10, 2013 3:03 am

    With this definition of computational space, it is no wonder that it is “holographic”, because it has to do not with spatial properties, but with counting discrete places, or bits.

  3. September 10, 2013 6:34 am

    Definition  “Quantum systems engineering” is the restriction of quantum physics to the question: “What are the most efficient (classical) Turing Machines (TMs) that predict the output of quantum simulations?”

    By this definition, it is computationally natural to step outside the realm of absolute space (e.g., via greyhole dynamics) while continuing to respect the notion of absolute time (in its local geometric instantiation as symplectomorphic flow).

    Quantum systems engineering (QSE) thereby escapes the realm of absolute space in two respects:

    (1)  QSE escapes the too-rigid macroscopic Euclidean geometry of Cartesian state-space, by replacing it with the more-flexible Riemannian geometry of real manifolds.

    (2)  QSE escapes the too-rigid microscopic unitary metric of Hilbert state-spaces, by replacing it with the lower-dimension Kähler metric of tensor-product manifolds.

    These concrete steps are nowadays associated to considerable — and still-accelerating; even Moore-redoubling — gains in mathematical naturality, computational efficiency, and practical engineering scope, needless to say.

    Aside from replacing global definitions of “absolute time” with local definitions of “symplectomorphic flow”, how else might 21st century STEM enterprises seek to escape the “tether of time” (as it might be called)?

    Here I would like to commend some striking remarks of Neel Krishnaswami, in which he suggests that “canonicity criteria save true theorems from ex falso quodlibet.”

    Uhhh … heck … it’s not clear to me what Neel is talking about, but if perhaps it means (as I read it) that 21st century HOTT-like methods are acting to “untether” complexity theorists from generations of frustrating proof-search, by clarifying the boundary between decidable and undecidable propositions regarding complexity theory classes, then this would make a terrific column for Gödel’s Lost Letter and P=NP. So perhaps there might someday be a Neel Krishnaswami GLL guest-column?

    In any event, Oliver Twist is hungry for more, please!

    And finally, Dick Lipton and Ken Regan, please accept the continuing appreciation and thanks (from me and many) for your outstanding sustainment of the enlightened and enlightening forum that is GLL.

  4. anon permalink
    September 10, 2013 10:53 am

    While we may understand the complexity better, we have almost no tools for proving space lower & upper bounds. In fact, do we know any natural decision problems with space complexity \Omega(n^2)?

  5. Bhupinder Singh Anand permalink
    September 10, 2013 11:57 am

    Since one of the more useful functions of a Turing Machine is to determine the values of number-theoretic functions (which is why Turing’s focus in his paper was on identifying computable numbers and not merely on calculating numbers) perhaps we should evaluate the space and time characteristics of a Turing machine in functional terms based on finitarily defined arithmetical properties (i.e., as opposed to non-finitarily defined set-theoretical properties).

    For instance, if F(x) is an algorithmically computable arithmetical function, and ALGSPACE(F, n) is the size of the smallest algorithm that computes the first n values F(1), F(2), …, F(n) of F(x), then ALGSPACE(F, n) is bounded as n -> infinity.

    So F(x) can be said to be well-behaved as to algorithmic space.

    However if G(x) is an algorithmically verifiable but not algorithmically computable arithmetical function, and ALGSPACE(G, n) is the size of the smallest algorithm that computes the first n values G(1), G(2), …, G(n) of G(x), then ALGSPACE(G, n) -> infinity as n -> infinity.

    So G(x) cannot be said to be well-behaved as to algorithmic space.

    What about time?

    If F(x) is an algorithmically computable arithmetical function, and ALGTIME(F, n) is the time taken by the smallest algorithm to compute the first n values F(1), F(2), …, F(n) of F(x), then ALGTIME(F, n) is also an algorithmically computable arithmetical function.

    So F(x) could be said to be also well-behaved as to algorithmic time.

    However if G(x) is an algorithmically verifiable but not an algorithmically computable arithmetical function, and ALGTIME(G, n) is the time taken by the smallest algorithm to compute the first n values G(1), G(2), …, G(n) of G(x), then ALGTIME(G, n) is also not definable by an algorithmically computable arithmetical function.

    So G(x) cannot be said to be well-behaved as to algorithmic time.

    That there are algorithmically verifiable arithmetical functions which are not algorithmically computable is a consequence of Cantor’s diagonal argument and the instantiational properties of Goedel’s Beta-function.

    See: http://foundationalperspectives.wordpress.com/2013/09/05/the-answer-that-hilbert-never-envisaged-for-the-first-of-his-twenty-three-problems/

    Regards

  6. September 11, 2013 11:23 am

    I've been pondering this dichotomy of space & time in TMs quite a bit over the years, possibly spurred in part by one of your earlier blogs on the subject. It is fascinating that TMs have a time/space tradeoff & continuum something like in real physics.

    Yes, a total review/rewrite of TM theory from the perspective of space/time tradeoffs would be a fantastic reference, and maybe a futuristic one, as theory seems to be moving/developing in that direction. Few people could write such a reference; you are probably one of the few. Would *defn* buy that one!

    Something about space/time seems to be tightly coupled to the issue of reversibility. If there is large time and low space on a computation, it also implies heavy rewriting of squares, i.e. squares that are overwritten. TM reversibility seems to be the "hidden order" linking space and time [shades of Bohm's "implicate order"]. It hasn't been studied a lot in isolation; there are many famous theorems about space/time but very few about reversibility.

    Another very interesting place this manifests is in "time/space tradeoffs for SAT" pioneered by van Melkebeek & Williams.

    Yes, I agree with you very much that some basic theorems linking them are unknown and remain to be discovered. Also, in a deep sense, P=?NP is probably tightly linked to the area.

    A sample problem I have used to look into the time/space tradeoff & interconnection/coupling is this one, which looks at TM "run sequences". It seems to have a deep link, but I haven't written up the idea yet, just looking for some motivation/excuse.

    Chee Yap has a great chapter on TM reversibility in his online reference; it's one of the most comprehensive surveys I've seen. Also Williams has an interesting paper on TM reversibility. There are also many isolated tcs.se questions on the subject. Might try to collect all these refs someday in a blog, but they are scattered.

    Overall, from a future point of view, it might seem our study of complexity theory in terms of time or space alone is confusingly/unnecessarily specialized, compartmentalized, and reductionistic [but that is an overall problem with current science/math].

  7. John Sidles permalink
    September 12, 2013 4:55 am

    By what concrete methods do engineers seek to loosen the tethers of space, time, and dimensionality?

    To obtain a well-posed answer, let us define quantum systems engineering (QSE) as follows:

    Definition  “Quantum systems engineering” is the restriction of quantum physics to the question: “What are the most efficient (classical) Turing Machines (TMs) that predict the output of quantum simulations?”

    By this definition, it is computationally natural to step outside the realm of absolute space — via the lattice models of greyhole dynamics, for example — while continuing to respect the notion of absolute time (in its geometric instantiation as symplectomorphic flow). QSE thereby loosens the tethers of space and dimensionality as follows:

    (1)  QSE loosens the tethers of too-rigid macroscopic Euclidean state-space by replacing its Cartesian metric with the more-flexible Riemannian geometry of real manifolds.

    (2)  QSE loosens the tethers of too-large microscopic quantum state-space dimensionality, replacing its Hilbert metric with the lower-dimension Kähler metric of tensor-product manifolds.

    These concrete space-loosening and dimension-loosening steps are associated to gains in mathematical naturality, computational efficiency, and practical engineering scope that are considerable and still-accelerating, needless to say.

    Aside from the already-mentioned tactic of replacing global definitions of absolute time with local definitions of symplectomorphic flow, how else might 21st century STEM enterprises seek to loosen the tethers of time?

    Dick’s (terrific!) essay closes by asking “Why is space so much ‘easier’ than time?” Yes, the tethers of time seem somehow more hard-to-loosen and even hard-to-describe than the tethers of space and dimensionality … perhaps they are mathematically more fundamental?

    Here I would like to commend some striking remarks of Neel Krishnaswami, in which he suggests that “canonicity criteria save true theorems from ex falso quodlibet.”

    Uhhh … heck … it’s not clear to me what Neel is talking about, but if perhaps it means (as I read it) that 21st century HOTT-like methods are acting to untether complexity theorists from generations of wasted effort, by clarifying the boundary between decidable and undecidable propositions regarding complexity theory classes, then this would make a terrific column for Gödel’s Lost Letter and P=NP. Perhaps there might be a Neel Krishnaswami GLL guest-column?

    Oliver Twist would like to hear some more from Pip, please!

    And finally, Dick Lipton and Ken Regan, please accept the continuing appreciation and thanks (from me and many) for your outstanding sustainment of the enlightened and enlightening forum that is GLL.

    • John Sidles permalink
      September 12, 2013 5:05 am

      Perhaps I had better mention too, that the study of quantum greyhole dynamics seeks to clarify those aspects of quantum black hole dynamics that can be concretely reduced to Turing Machine simulations of symplectomorphic flows.

    • Bhupinder Singh Anand permalink
      September 12, 2013 8:43 am

      “Here I would like to commend some striking remarks of Neel Krishnaswami, in which he suggests that “canonicity criteria save true theorems from ex falso quodlibet.” Uhhh … heck … it’s not clear to me what Neel is talking about, but if perhaps it means (as I read it) that 21st century HOTT-like methods are acting to untether complexity theorists from generations of wasted effort, by clarifying the boundary between decidable and undecidable propositions regarding complexity theory classes…”

      The problem with deciding whether ‘canonicity criteria save true theorems from ex falso quodlibet’ is that you need to decide which definition of ‘truth’ you are using in your theorems.

      Contrary to our current paradigm, we actually use two—apparently contradictory, but perhaps complementary—definitions of ‘truth’ in our mathematical reasoning, without clearly distinguishing between them (and possibly sometimes interchangeably even within the same argument)!

      See my paper ‘Evidence-Based Interpretations of PA’ in the Proceedings of AISB/IACAP Turing 2012, Birmingham at:

      The first is a non-finitary, algorithmically verifiable, ‘truth’, which is reflected in the standard (so far accepted as canonical) model of PA.

      Gentzen has given a non-finitary proof of consistency for PA that indirectly justifies the Axiom Schema of Finite Induction in this model.

      It is a model beloved of Set Theorists, since it allows ‘undecidable’ arithmetical propositions and seemingly assures—as Goedel noted—that the ”formation of higher and higher types can be continued into the transfinite”.

      The second is a finitary, algorithmically computable, ‘truth’, which is reflected in the algorithmic (contender for canonicity) model of PA defined in the Birmingham paper.

      It is a model that Computationalists should love, since it yields a finitary proof of consistency for PA by giving a computationalist justification for the Axiom Schema of Finite Induction.

      It does not, moreover, admit troublesome ‘undecidable’ arithmetical propositions (of course, at the cost of not being able to comfortably conclude the existence of a contrafactual from a negation)!

      Deciding which ‘truth’ is to be preferred as ‘canonical’ therefore depends upon your perspective and intent.

      A human intelligence has the choice of declaring algorithmically verifiable ‘truth’ as ‘canonical’ when wearing the non-finitary hat and reasoning set-theoretically; and declaring algorithmically computable ‘truth’ as ‘canonical’ when wearing the finitary hat and reasoning arithmetically.

      A mechanical intelligence may have no choice but to declare only algorithmically computable ‘truth’ as ‘canonical’!

      Whether there is an intelligence that could view a different ‘truth’ as ‘canonical’ is an intriguing question.

      Regards.

  8. hillseverspent permalink
    September 12, 2013 4:51 pm

    Reblogged this on hillseverspent.

  9. Serge permalink
    September 12, 2013 5:38 pm

    Traveling far away always takes a long time, whereas traveling for a long time needn't take much space. Space may be reused, which makes it a much easier resource than time.

  10. Martin S permalink
    September 18, 2013 4:56 am

    Those of you who give credit to Immerman for the NL = co-NL theorem should read these:

    https://rjlipton.wordpress.com/2009/03/02/barrington-gets-simple/#comment-711

    http://rjlipton.wordpress.com/2009/09/27/surprises-in-mathematics-and-theory/#comment-1523

    The real credit should go to Jenner, Kirsig, and Lange.

    • September 18, 2013 1:04 pm

      Someone else I know (an American) also proved the collapse at the second level, a year or so earlier, but thought it was possibly a bug in a proof of something else he and his co-authors were after. A third person had an inkling at level 3. Anyway, I think all this credit stays at the second level.

  11. October 5, 2013 11:52 pm

    “….for space guessing a path in a maze requires the same amount of space as proving there is no path”

    What is a maze with path in time?

    Is it possible that NP=CoNP? (does not seem to affect the overall picture except pulling PH to NP)

    So is it feasible?

  12. Serge permalink
    October 28, 2013 6:02 am

    Hard disks tend to be filled up… but the processor is rarely used to its full capacity.
    Cities tend to be crowded… but too many people are left unemployed.
    Maybe that’s because space is in one, two, or three dimensions… but the ways of using time are of infinite dimension.

Trackbacks

  1. Space Is Easier Than Time | Rocketboom
