
How Good Is Your I-magination?

March 15, 2012


A possible physics approach to complexity questions

Paul Tripp played Mr. I. Magination. His character was the star of an early TV show for kids that ran live on CBS from 1949 to 1952. I recall—yes, I watched it—that on the show Tripp would play a “magic” slide flute, then he and some children would board a train and travel to Imagination Land. The show was sometimes directed by Yul Brynner, and guests included Richard Boone, Walter Matthau, Simon Oakland, Ted Tiller, and others.

Today Ken and I want you to let us play the magic flute and take you to an imaginary place. A place where some basic theorems of computational complexity are unsolved—even though they were proved decades ago.

In particular, a place where the class of polynomial time is not yet known to be different from primitive recursive time. Of course, these classes are vastly different, but that fact was not instantly obvious. It required the brilliant work of Juris Hartmanis and Richard Stearns to prove much stronger statements. Thanks to them we know that {\mathsf{P}} is not equal to {\mathsf{E}}, where {\mathsf{E}} denotes {2^{O(n)}} time.
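For reference, the time hierarchy theorem in its modern form says that for any time-constructible bound {g(n)},

\displaystyle  \mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n)) \quad \text{whenever } f(n)\log f(n) = o(g(n)).

For instance, taking {f(n) = 2^n} and {g(n) = 2^{2n}} exhibits a language in {\mathsf{E}} that lies outside {\mathsf{DTIME}(2^n)}, and the latter class contains all of {\mathsf{P}}.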

But listen to the magic sound of the flute, listen to Mr. I. Magination’s pleasant voice, close your eyes, and imagine that we do not know this fact: we do not know that polynomial time is weaker than primitive recursive time. Let’s agree to call the opposite—that they are equal—the {\Gamma} Hypothesis. Not a super name, but it will do for now. Let’s see what this means and how it is related to physics—into the magic tunnel we go.

Is This Playing According to Hoyle?

If you prefer not to think of magic, you can imagine a civilization in which physics and the information theory of communication have been developed far ahead of recursion theory and computational complexity.

Even as we are, another civilization might think our knowledge of computation is hopelessly primitive compared to that of physics. Our question is whether we can leverage the latter to help with the former—and how we could possibly do this.

For an example of what we seek, here’s a different take on the famous instance of Sir Fred Hoyle’s prediction in 1953 of a certain preferred basis for resonance of carbon-12 at an energy level of about 7.65 MeV. This has been retro-formulated as an example of the “anthropic principle,” on grounds that if the predicted eigenstate didn’t exist, then carbon could not have been formed in stars in amounts sufficient to allow life as we know it. However, we agree with a historical review by Helge Kragh, who quotes Hoyle himself to this effect:

It is not so much that the Universe must be consistent with us as that we must be consistent with the Universe. The anthropic principle has the problem inverted, in my opinion.

Hence we pose the question this way: Can knowledge in 1953 be said to have favored a particular mathematical model of carbon to the extent that observation of nature indicated the truth of a theorem about its linear algebraic structure—a theorem that was subsequently proved?

Of course we want to know whether observations may say something about {\mathsf{P} = \mathsf{NP}}, but before we get to whether that is inconsistent with nature, let’s ask whether the idea is effective on things like {\Gamma} that we already know are inconsistent with mathematics.

Through The Tunnel

So we have gone through the magic tunnel, and we do not know whether {\Gamma} is true or false. Let’s look at what it says about the physical world. I want to know if this statement can be used to prove that some physics law is false. Even assuming this could be done, what would it mean?

The reason for studying this is: perhaps there is a possible physics proof that resolves {\Gamma}. Some possible outcomes:

  1. The statement {\Gamma} implies a physics law is false.

  2. The statement {\Gamma} implies a physical property that is very unlikely to hold.

  3. The statement {\Gamma} implies a physical property that is new and interesting.

Assume that {\Gamma} is true; then it would seem that many physical results might follow. Examples that come to mind: perhaps we can say something about black holes? Quantum foam? Chaotic system simulation? The many-body problem?

Let’s talk about two of these.

Chaotic Systems

Consider a system

\displaystyle  z \leftarrow f(z),

where {f(z)} is a polynomial. Start the iterations off at {z_0} and run the system for {T} time-steps. If {f(z)} is a chaotic map, then it is assumed, but not a theorem, that it cannot be simulated efficiently. Given our hypothesis {\Gamma}, we can exactly calculate in polynomial time where {z_0} has moved after even {T = 2^n} steps. This seems surprising, but unfortunately it does not contradict any physical law that we know.

What we do have is that proving the inability to simulate chaotic systems rapidly would be enough to separate complexity classes. It is too bad that we cannot make this into a full proof. Note that the reason we can simulate the system even though it is chaotic is simple: we compute each of the values

\displaystyle  f(z_0), f(f(z_0)), \dots

exactly. If the polynomial {f} is quadratic, then the number of bits required for this exact computation roughly doubles at each step and so grows exponentially fast in {T}, but that is fine: the exact simulation is certainly primitive recursive, and under {\Gamma} it would then run in polynomial time.
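To make the blow-up concrete, here is a minimal sketch in Python; the map {f(z) = z^2 - 1} and the starting point {1/3} are our illustrative choices, not anything specific from above. It iterates the map in exact rational arithmetic and reports how the representation grows:

    from fractions import Fraction

    def iterate_exact(z0, steps):
        """Iterate f(z) = z*z - 1 with no rounding error; track sizes in bits."""
        z = Fraction(z0)
        sizes = []
        for _ in range(steps):
            z = z * z - 1  # exact rational arithmetic, never approximated
            sizes.append(z.numerator.bit_length() + z.denominator.bit_length())
        return z, sizes

    _, sizes = iterate_exact(Fraction(1, 3), 12)
    print(sizes)  # the bit counts roughly double from one step to the next

The doubling means that {T} steps need on the order of {2^T} bits, which is hopeless in practice but perfectly legal for a primitive recursive computation.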

Many-Body Problems

The {n}-body problem is the study of the motion of {n} bodies under a physical force such as gravity. The usual statement, by non-experts, is that there is no “solution” for these systems. But that is not true. There are globally convergent series solutions, due to Karl Sundman for {n = 3} and to Qiudong Wang for {n > 3}.

So why are they usually called unsolved? The answer is that these series converge globally, but very, very slowly. In the real world they must be replaced by a variety of approximation methods, such as the integrator sketched below. But with {\Gamma} we can assert that we can simulate in polynomial time where the {n} bodies will be in the far future, to within an exponentially small region.
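For flavor, here is a minimal sketch of such an approximation method, a leapfrog integrator in Python; the two-dimensional setup with unit masses and {G = 1} is our simplification, and it has nothing to do with Sundman’s or Wang’s series:

    # Minimal leapfrog (kick-drift-kick) integrator for gravitational n bodies.
    # Positions and velocities are lists of (x, y) pairs; G and all masses are 1.
    def accelerations(pos):
        acc = []
        for i, (xi, yi) in enumerate(pos):
            ax = ay = 0.0
            for j, (xj, yj) in enumerate(pos):
                if i == j:
                    continue
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += dx / r3
                ay += dy / r3
            acc.append((ax, ay))
        return acc

    def leapfrog(pos, vel, dt, steps):
        acc = accelerations(pos)
        for _ in range(steps):
            vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
                   for (vx, vy), (ax, ay) in zip(vel, acc)]
            pos = [(x + dt * vx, y + dt * vy)
                   for (x, y), (vx, vy) in zip(pos, vel)]
            acc = accelerations(pos)
            vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
                   for (vx, vy), (ax, ay) in zip(vel, acc)]
        return pos, vel

Unlike the exact series, each step here incurs truncation error, and over long horizons the errors compound; that is exactly the gap {\Gamma} would close.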

NP-Hard Problems

Okay, {\Gamma} is false. But the rationale for these ideas lies in weaker statements that are still open. For example, we still cannot resolve whether

\displaystyle  \mathsf{P} = \mathsf{PSPACE}.

Nor have we refuted the assertion that {\mathsf{E^{NP}}} can be solved with linear-sized Boolean circuits, where {\mathsf{E^{NP}}} means {2^{O(n)}} exponential time allowing queries to {\mathsf{SAT}} that themselves can be exponentially long. Assuming that either or both of these statements are true, would that entail some fantastical physics results of the type we have just discussed?

Scott Aaronson covered much of this ground in his excellent survey paper, “NP-Complete Problems and Physical Reality.” He surveyed physical theories whose correctness would imply devices capable of feasibly solving NP-hard problems, and concluded that belief in hardness is an argument against these theories. In particular, it argues that “there are no non-linear corrections to Schrödinger’s equation” in Nature, a subject which has been mentioned in a recent comment thread of our quantum-computing debate.

This is to say, from what we believe about complexity, there are some things we can disbelieve about nature. We are emphasizing a partial converse: from what we know about nature, what can we effectively rule out about complexity? These two aims are not symmetrical, for in complexity we want to prove things, but any knowledge is helpful.

Scott’s paper offers concrete forms of hardness, such as {\mathsf{CLIQUE}} (not) being solvable for graphs with 100 million vertices in under {10^{80}} seconds, and notes that Kurt Gödel’s famous 1956 letter to John von Neumann asks about linear or quadratic time solvability of {\mathsf{SAT}}. We note that the current frontiers of lower bounds lie far below polynomial time or circuit size, and put our question even more concretely as follows:

Is the solvability of {m}-clause {\mathsf{3SAT}}, with input size {N = 3m\log_2 m}, by Boolean circuits of size {cN} and depth {c + \log_2 N} for some small fixed constant {c}, inconsistent with current knowledge of physics and observations of our neighborhood?
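For a feel of the numbers (our own arithmetic, purely for illustration), take {m = 2^{20}} clauses. Then the input size is

\displaystyle  N = 3m\log_2 m = 3 \cdot 2^{20} \cdot 20 \approx 6.3 \times 10^7,

so the question asks whether such instances admit circuits of size {cN \approx 6.3c \times 10^7} and depth about {c + 26}.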

We can pose similar questions for other network computing models. We have tried with the carbon, chaos, and many-body examples to address observational physics, but our question naturally transits to the development of information structures and life on Earth. At least Hoyle called the latter “the largest problem of which we are aware” in an earlier part of the quotation used by Kragh, and Stuart Kauffman among others has applied Boolean networks to it.

Open Problems

Do concrete tabulations of combinatorial objects with certain properties, such as kinds of graphs or knots, enlighten us about the existence of concretely small circuits for substantial tasks? Is there a guided way through the depressingly large search spaces outlined in the slides that Scott Aaronson linked here, and more recently in item 5 here?

Staying with Scott, is there any relation to his “complexodynamics” framework here? We congratulate him and Robert Wood on winning the Alan T. Waterman Award. See also this recent post on complexity and logical depth by Charles Bennett on the Quantum Pontiff blog, and their congratulations to Scott.

Congratulations also to Judea Pearl for winning the ACM Turing Award. Note that he is also president of the Daniel Pearl Foundation.

Comments
  1. MrY
    March 15, 2012 12:43 pm

    If we speak about the anthropic principle, then we have a perfect computational model that outperforms computers in many respects, specifically in image processing. And, jumping ahead, what is the complexity of finding theorems, laws of physics, etc.? We have an experimental model capable of doing that, and (in my feeling) doing it efficiently.

  2. John Sidles
    March 15, 2012 2:10 pm

    It’s mighty tough to argue against Steven Weinberg when (after lifelong study) his working hypothesis is that:

    “Quantum mechanics is the way it is because any small change in quantum mechanics would lead to absurdities.”

    Moreover, it appears that Weinberg transmitted this conviction to at least one of his graduate students … John Preskill! … who in turn transmitted it to *his* graduate student … Daniel Gottesman! 🙂   😀   😆

    Supposing that we have the chutzpah to doubt the trinitarian doctrine of The Larger Hilbert Space, then a well-posed mathematical starting point for developing our skepticism — which can be undertaken as a purely numerical experiment — is to compute (in various nonlinear QM models) the Lie derivatives \mathcal{L}_X (\omega) and \mathcal{L}_X (g), where X is a quantum dynamical flow, \omega is the state-space’s symplectic form, and g is the state-space’s Riemannian metric.

    The physical point is that \mathcal{L}_X(\omega) \ne 0 is known to imply thermodynamic extravagance. Moreover, Abrams and Lloyd have argued in Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and #P problems (1998) that \mathcal{L}_X(g) \ne 0 can under some circumstances — but which circumstances exactly? — imply computational extravagance and/or causal extravagance.

    As it turns out — as Weinberg himself noted in “Weinberg replies”, PRL 63(10), p1115, 1989 — it is straightforward to construct a large class of nonlinear QM models that rigorously satisfy \mathcal{L}_X (\omega) = 0 (such flows are called symplectomorphisms). Indeed, verifying that a computed dynamical flow X is a symplectomorphism provides a stringent validation criterion for numerical quantum simulations.

    It is harder to know a priori which (if any) nonlinear QM models satisfy \mathcal{L}_X (g) = 0 (such flows are called isometries), and in the event that the isometry condition is violated, to determine whether the computational and/or causal consequences are extravagant.

    As Gil Kalai noted in a post earlier this week, this belongs to a class of questions for which numerical experiments are (relatively) easy and (absolutely) fun!
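    As a minimal sketch of such a numerical experiment (our illustration, checking only the linear, unitary case, where both Lie derivatives vanish and so both forms are conserved exactly):

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        n = 8

        # A random Hermitian H generates a linear (unitary) quantum flow;
        # for a linear flow the tangent map is the unitary U itself.
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H = (A + A.conj().T) / 2
        U = expm(-1j * H)

        # Symplectic form and metric from the Hermitian inner product:
        # omega(u, v) = Im<u, v> and g(u, v) = Re<u, v>.
        omega = lambda a, b: np.imag(np.vdot(a, b))
        g = lambda a, b: np.real(np.vdot(a, b))

        u = rng.normal(size=n) + 1j * rng.normal(size=n)
        v = rng.normal(size=n) + 1j * rng.normal(size=n)

        # Both invariants survive the flow to machine precision (~1e-15);
        # a nonlinear model would need finite differences of its tangent map here.
        print(abs(omega(U @ u, U @ v) - omega(u, v)))
        print(abs(g(U @ u, U @ v) - g(u, v)))

    A nonlinear flow that kept the first quantity near zero but not the second would be exactly a symplectomorphism that fails to be an isometry.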

Trackbacks

  1. Theorems From Physics? | Gödel's Lost Letter and P=NP
