How Good Is Your I-magination?
A possible physics approach to complexity questions
Paul Tripp played Mr. I. Magination. His character was the star of an early TV show for kids that ran live on CBS from 1949 to 1952. I recall—yes, I watched it—that on the show Tripp would play a “magic” slide flute, then he and some children would board a train and travel to Imagination Land. The show was at times directed by Yul Brynner, and guests included Richard Boone, Walter Matthau, Simon Oakland, Ted Tiller, and others.
Today Ken and I want you to let us play the magic flute and take us to an imaginary place. A place where some basic theorems from computational complexity are unsolved—even though they were proved decades ago.
In particular, a place where the class of polynomial time is not yet known to be different from primitive recursive time. Of course, these classes are vastly different, but that fact was not obvious instantly. It required the brilliant work of Juris Hartmanis and Richard Stearns to prove much stronger statements. Thanks to them we know that $\mathsf{P}$ is not equal to $\mathsf{PR}$, where $\mathsf{PR}$ means primitive recursive time.
But listen to the magic sound of the flute, listen to Mr. I. Magination’s pleasant voice, close your eyes, and imagine that we do not know this fact: we do not know that polynomial time is weaker than primitive recursive time. Let’s agree to call the opposite—that they are equal—the Hypothesis. Not a super name, but it will do for now. Let’s see what this means and how it is related to physics—into the magic tunnel we go.
Is This Playing According to Hoyle?
If you prefer not to think of magic, you can imagine a civilization in which physics and the information theory of communication have been developed far ahead of recursion theory and computational complexity.
Even as we are, another civilization might think our knowledge of computation is hopelessly primitive compared to that of physics. Our question is whether we can leverage the latter to help with the former—and how could we possibly do this?
For an example of what we seek, here’s a different take on the famous instance of Sir Fred Hoyle’s prediction in 1953 of a certain preferred basis for resonance of carbon-12 at an energy level of about 7.65 MeV. This has been retro-formulated as an example of the “anthropic principle,” on grounds that if the predicted eigenstate didn’t exist, then carbon could not have been formed in stars in amounts sufficient to allow life as we know it. However, we agree with a historical review by Helge Kragh, who quotes Hoyle himself to this effect:
It is not so much that the Universe must be consistent with us as that we must be consistent with the Universe. The anthropic principle has the problem inverted, in my opinion.
Hence we pose the question this way: Can knowledge in 1953 be said to have favored a particular mathematical model of carbon to the extent that observation of nature indicated the truth of a theorem about its linear algebraic structure—a theorem that was subsequently proved?
Of course we want to know whether observations may say something about $\mathsf{P} = \mathsf{NP}$, but before we get to whether that is inconsistent with nature, let’s ask whether the idea is effective on statements, like the Hypothesis, that we already know are inconsistent with mathematics.
Through The Tunnel
So we have gone through the magic tunnel, and we do not know whether the Hypothesis is true or false. Let’s look at what it says about the physical world. I want to know if this statement can be used to prove that some physics law is false. Even assuming this could be done, what would it mean?
The reason for studying this is that perhaps there is a possible physics proof that resolves the Hypothesis. Some possible outcomes:
- The Hypothesis implies a physics law is false.
- The Hypothesis implies a physical property that is very unlikely to hold.
- The Hypothesis implies a physical property that is new and interesting.
Assume that the Hypothesis is true; then it would seem that many physical results might follow. Examples that come to mind are: perhaps we can say something about black holes? Quantum foam? Chaotic-system simulation? The many-body problem?
Let’s talk about two of these.
Consider a system

$$x_{n+1} = p(x_n),$$

where $p$ is a polynomial. Start the iterations off at $x_0$ and run the system for $t$ time-steps. If $p$ is a chaotic map, then it is assumed, but not a theorem, that it cannot be simulated efficiently. Given our Hypothesis we can exactly calculate in polynomial time where the system is after even exponentially many steps. This seems surprising, but it is unfortunately not a contradiction with any physical law that we know.
What we do have is that the inability to simulate chaotic systems rapidly is enough to separate complexity classes. It is too bad that we cannot make this into a full proof. Note, the reason we can simulate the system even given that it is chaotic is simple: we compute each of the values

$$x_1, x_2, \ldots, x_t$$

exactly. If the polynomial $p$ is quadratic, the number of bits required for this precise computation grows exponentially fast, but under the Hypothesis that cost is fine.
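To make the bit-growth concrete, here is a small Python sketch. The particular quadratic map $p(x) = x^2 - 2$, which is chaotic on the interval $[-2, 2]$, and the starting point $1/3$ are our own illustrative assumptions, not tied to any specific physical system:

```python
from fractions import Fraction

def iterate_exact(x0, steps):
    """Iterate the quadratic map x -> x^2 - 2 with exact rational arithmetic."""
    x = Fraction(x0)
    sizes = []
    for _ in range(steps):
        x = x * x - 2
        # Track how many bits the exact representation needs.
        sizes.append(x.denominator.bit_length())
    return x, sizes

x, sizes = iterate_exact(Fraction(1, 3), 10)
print(sizes)  # the bit counts roughly double at every step
```

Each squaring roughly doubles the number of digits, so after $t$ steps the exact state needs about $2^t$ bits; any efficient simulation of such a map must somehow avoid paying for that precision.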
The $n$-body problem is the study of the motion of $n$ bodies under a physical force such as gravity. The usual statement, by non-experts, is that there is no “solution” for these systems. But that is not true. There are globally convergent series solutions for these systems, by Karl Sundman for $n = 3$ and by Qiudong Wang for $n > 3$.
So why are they usually called unsolved? The answer is that these series converge globally, but very, very slowly. In the real world they must be replaced by a variety of approximation methods. But with the Hypothesis we can assert that we can simulate in polynomial time where the $n$ bodies will be in the far future, to within an exponentially small region.
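In practice the “variety of approximation methods” means step-by-step numerical integration. As a minimal sketch, assuming a toy one-body-around-a-unit-mass setup in units where $G = M = 1$ (our choice, purely for illustration), here is the standard kick-drift-kick leapfrog scheme:

```python
import math

def accel(x, y):
    """Gravitational acceleration toward a unit mass at the origin (G = 1)."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so the orbit stays stable."""
    for _ in range(steps):
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

# A circular orbit: radius 1, speed 1, period 2*pi.
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=1000)
radius = math.hypot(x, y)
print(radius)  # stays close to 1.0
```

Such integrators are excellent over short horizons, but each step carries a small local error, which is why a claim of exponentially precise long-range prediction in polynomial time would be so striking.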
Okay, the Hypothesis is false. But the rationale for these ideas is that weaker statements are still open. For example we still cannot resolve whether

$$\mathsf{P} = \mathsf{PSPACE}.$$
Nor have we refuted the assertion that $\mathsf{E}^{\mathsf{NP}}$ can be solved with linear-sized Boolean circuits, where $\mathsf{E}^{\mathsf{NP}}$ means exponential time allowing queries to $\mathsf{NP}$ that themselves can be exponentially long. Assuming that either or both of these statements are true, would that entail some fantastical physics results of the type we have just discussed?
Scott Aaronson covered much of this ground in his excellent survey paper, “NP-Complete Problems and Physical Reality” (already linked above). He surveyed physical theories whose correctness implies devices capable of feasibly solving NP-hard problems, and concluded that belief in hardness is an argument against these theories. In particular, it argues that “there are no non-linear corrections to Schrödinger’s equation” in Nature, a subject which has been mentioned in a recent comment thread of our quantum-computing debate.
This is to say, from what we believe about complexity, there are some things we can disbelieve about nature. We are emphasizing a partial converse: from what we know about nature, what can we effectively rule out about complexity? These two aims are not symmetrical, for in complexity we want to prove things, but any knowledge is helpful.
Scott’s paper offers concrete forms of hardness, such as an NP-hard problem (not) being solvable for graphs with 100 million vertices in under some fixed number of seconds, and notes that Kurt Gödel’s famous 1956 letter to John von Neumann asks about linear or quadratic time solvability of $\mathsf{SAT}$. We note that current lower-bound frontiers are way below such polynomial time or circuit size bounds, and we put our question even more concretely as follows:
Is the solvability of $m$-clause $\mathsf{SAT}$, with input size $n$, by Boolean circuits of size $cn$ and depth $c \log n$ for some small fixed constant $c$, inconsistent with current knowledge of physics and observations of our neighborhood?
We can pose similar questions for other network computing models. We have tried with the carbon, chaos, and many-body examples to address observational physics, but our question naturally carries over to the development of information structures and life on Earth. At least Hoyle called the latter “the largest problem of which we are aware” in an earlier part of the quotation used by Kragh, and Stuart Kauffman among others has applied Boolean networks to it.
Do concrete tabulations of combinatorial objects with certain properties, such as kinds of graphs or knots, enlighten us about the existence of concretely small circuits for substantial tasks? Is there a guided way through the depressingly large search spaces outlined in the slides that Scott Aaronson linked here, and more recently in item 5 here?
Staying with Scott, is there any relation to his “complexodynamics” framework here? We congratulate him and Robert Wood on winning the Alan T. Waterman Award. See also this recent post on complexity and logical depth by Charles Bennett on the Quantum Pontiff blog, and their congratulations to Scott.