# Theorems From Physics?

*Can information physics extract proofs from reality?*


Rolf Landauer was a physicist and computer engineer who spent most of his career at IBM north of New York City, becoming an IBM Fellow in 1969. According to his longtime colleague Charles Bennett, Landauer “did more than anyone else to establish the physics of information processing as a serious subject for scientific inquiry.” One way was his discovery and formulation of a principle connecting non-reversible computation steps and thermodynamic entropy, which according to Wikipedia’s article is widely accepted as a physical law.

Today Ken and I want to talk about the possible role of physical laws in generating proofs of complexity assertions, even P ≠ NP itself.

Recently we have heard of interest connecting P vs. NP, plus the related topic of one-way functions, to Landauer’s principle itself. According to Bennett again, Landauer’s principle states:

> [A]ny logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information bearing degrees of freedom of the information processing apparatus or its environment.

Bennett has defended this assertion against various objections. Last year a team from the École Normale Supérieure de Lyon, the University of Augsburg, and the University of Kaiserslautern gave empirical support by measuring the tiny amount of energy released as heat when an individual bit is erased. See their article in *Nature*. Real computers today are said to operate within three orders of magnitude of the energy-efficiency limit which Landauer’s bound imposes. So we can envision a new kind of impossible physical machine: one that can erase bits more coolly. The question is whether any complexity assertion has a side—true or false—that would enable a violation of this bound, thus ‘proving’ the other side.
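For scale, the bound itself is just k·T·ln 2 per erased bit. A quick back-of-envelope sketch (the 300 K room temperature is our illustrative choice, not a figure from the post):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact since the 2019 SI redefinition)
T = 300.0           # room temperature in kelvin (illustrative assumption)

# Landauer's principle: erasing one bit must dissipate at least k_B * T * ln(2).
E_min = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_min:.2e} J per bit")  # ~2.87e-21 J
```

So a machine erasing a billion bits per second at room temperature must dissipate at least a few picowatts for the erasures alone; as noted above, real chips sit roughly three orders of magnitude above this floor.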

## Physics and Math

The philosopher in us recoils dogmatically at the notion of such a “physical proof.” Complexity theory is part of mathematics, and mathematical theorems are not supposed to be *contingent*. This is a fancy philosophical term for propositions that are “true in some possible worlds and false in others.” In particular, the truth of a mathematical proposition is not supposed to depend on any empirical fact about our particular world.

Of course physical observations can sometimes aid in discovering proofs. They can help one guess which side—true or false—to try to prove. What we mean is something stronger: whether appeals to physical observations or laws can constitute a proof by themselves—or part of a proof, or some kind of certificate of a proof.

Our openness begins by coupling something Landauer himself was noted for saying,

> Information is inevitably physical.

—with the following translation of what John Wheeler meant by “it from bit”:

> Physics is ultimately informational.

Information is what we “do”—with theorems and proofs—and mathematical theorems underlie the most fundamental physical models. If information and physics are as tightly bound as they say, we ought to expect some “flow” in the other direction. The question remains, how?

We will not try here to evaluate particular papers we have seen or heard about, such as this or this by Alexandre de Castro of Brazil, or this on energy complexity by Feng Pan, Heng-liang Zhang, and Jie Qi of China. This paper by Yuriy Zayko of Russia reaches the opposite conclusion from Landauer’s work, so they can’t all be right, while this by Armando Matos also treats Landauer’s principle and factoring. We invite comments from better-versed readers. Rather, we wish to examine the larger issue: can physics be used to prove mathematical theorems? Indeed.

## Physical Proofs?

Imagine that someone shows the following: If P = NP, then some physical principle is violated. Most likely this would be in the form of a *Gedankenexperiment*, but nevertheless it would be quite interesting. Yet I am at a loss to say what it would mean. Indeed the question is: “Is this a proof or not?”

Let’s call an argument that shows that *If some mathematical statement is true, then some physical principle is incorrect* a **Physics Proof**—say a PP for short. And let’s call a usual math proof an MP. Can we prove something about the relationship between PPs and MPs? Can we, for example, prove statements in math via PPs? Even statements that we already know are correct? Can there exist a PP that shows that set theory is consistent? Does this violate the famous Incompleteness Theorem of Kurt Gödel?

We have discussed this in an earlier post, in which we also referenced papers by Scott Aaronson and Bennett himself. All this has left us still quite interested in the possibility that PPs could exist, and we will try to give some new illustrations of the possibilities.

## A Trivial Example

Let’s look at a PP that shows that multiplication of natural numbers is commutative. Suppose that a and b are natural numbers greater than zero. Consider an a-by-b box of unit squares, shown here with a = 3 and b = 4:

```
*  *  *  *
*  *  *  *
*  *  *  *
```

Then its area is clearly a × b. Now rotate the box:

```
*  *  *
*  *  *
*  *  *
*  *  *
```

The area is invariant under rotation—this is the physical principle that we are using. But now the area is b × a. So we conclude that a × b = b × a.

Wow, what a surprise—if we were doing standup comedy we would not expect much more than tomatoes from this. But an interesting post with this example by Peter Cameron goes on to show that the formal proofs in Peano arithmetic and set theory also have their downsides. Our understanding is that this method is used in some grade schools as a way to help students understand multiplication.
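As a toy illustration of our own (not from the post), the rotation argument can even be mimicked in a few lines of Python: build an a-by-b grid of unit cells, “rotate” it by turning rows into columns, and check that the count of unit squares is unchanged.

```python
def grid(rows, cols):
    # An a-by-b box of unit squares, one entry per square.
    return [[1 for _ in range(cols)] for _ in range(rows)]

def rotate(box):
    # Rotate the box a quarter turn: rows become columns.
    return [list(r) for r in zip(*box)]

def area(box):
    # "Physical" area: count the unit squares.
    return sum(sum(row) for row in box)

a, b = 3, 4
box = grid(a, b)
assert area(box) == area(rotate(box))  # area is invariant under rotation
# Hence a * b == b * a, at least for these particular a and b.
```

Of course this checks only one instance at a time, which is exactly the gap between a physical demonstration and a mathematical proof.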

## Relationship to Relativity

We have previously told the history of the “no-cloning” theorem in quantum mechanics. A paper purporting to demonstrate faster-than-light (FTL) communication was found to rely on the assumption that an arbitrary pure binary quantum state can be duplicated by a quantum process. If one takes FTL communication to be a violation of physical law, this could be said to constitute a proof of the negation of the assumption. To be sure, a proof was soon found by several people using elementary linear algebra. Hence the no-cloning *theorem* itself is just a piece of mathematics. The question is whether it could have been said to be “already proved” by physics before the simple proof was found.
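For completeness, here is the elementary linear-algebra argument in its now-standard textbook form (a sketch; the historical presentations differ in detail). Suppose a single unitary $U$ cloned two arbitrary pure states $|\psi\rangle$ and $|\varphi\rangle$:

```latex
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle,
\qquad
U\bigl(|\varphi\rangle \otimes |0\rangle\bigr) = |\varphi\rangle \otimes |\varphi\rangle.
```

Taking the inner product of the two equations and using unitarity ($U^\dagger U = I$) gives $\langle\varphi|\psi\rangle = \langle\varphi|\psi\rangle^2$, so $\langle\varphi|\psi\rangle$ is $0$ or $1$: the two states must be orthogonal or identical. Hence no single quantum process clones arbitrary non-orthogonal states.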

In quantum communication theory, it seems to be legitimate to argue a theorem based on its negation implying FTL communication. We keep intending to post in greater detail about a paper by Ulvi Yurtsever which we read as showing that if one could gain a nontrivial probabilistic prediction advantage against an unbiased quantum random source, then local causality would be violated—in particular, FTL communication would be possible. There are other potential examples on Philip Gibbs’ FTL page, currently maintained by physicist John Baez at UC Riverside.

## The Information Ratchet

Ken has thought about this recently upon reading some reviews of the book *Life’s Ratchet* by Peter Hoffman of Wayne State University. According to a review in *Nature*:

> [The book] engagingly tells the story of how science has begun to realize the potential for matter to spontaneously construct complex processes, such as those inherent to living systems.

Our question is,

> Can we possibly judge the differential impact on the speed of this ratchet between the truth and the falsity of P vs. NP, or of more-concrete algorithmic assertions?

If so, that is if we can quantify the impact, then by observing the speed of generative life processes in the lab, coupled with mapping out the early history of life on Earth, we might ascertain the (un)availability of certain concretely feasible approximative or exact algorithms empirically, in advance of possibly proving it.

## Relationship To Time Travel

For an even more speculative notion, perhaps an unfair one, suppose that we want to factor a large number. Assume it would take a thousand years on a laptop. Here is what we could do: We create a computer that can run for a thousand years. This would no doubt require some clever engineering: the machine probably would need to do self-repair and also use a renewable source of power. While it would be a difficult piece of engineering, it does not seem to violate any physical principles. Start it running on the factoring problem. Then jump into your time machine, go into the future, get the answer, and return. This means we could do a huge computation in seconds. Does this mean that we can “prove”:

> If time-travel is possible, then many concrete “hard” factoring instances are easy?

I am very confused. I hope you are too, even before looking up literature on “closed timelike curves” and algorithms such as in section 8 of Scott’s survey. Note, taking the contrapositive yields that if you believe certain concrete instances of factoring are yea-hard in reality, then you’re saying time travel is impossible.

## Open Problems

What do you think? If someone found a PP of P ≠ NP, would that prove they are not equal? Would this win the Clay Prize? What would it really mean?

We note that Landauer’s principle did inspire theorems about reversible computation by Bennett and others; this and other stories are told in a lovely memoir by Bennett with Alan Fowler written in 2009 for the National Academy of Sciences. The ETOPIM Association awards an annual medal in Landauer’s honor.


Whoops—when lots is going on, sometimes it’s post-first-read-comments-later… Fabio Russ on Sunday posted a comment update with links to papers by Bennett and de Castro and Mlodinow-Brun and Hemaspaandra-Pasanen-Rothe.

… also, with a very interesting view on the dual nature of one-way functions which I would enjoy seeing discussed here by some more skilled physicists as me.

more skilled *than* me. Pardon my English. 🙂

Professors Lipton and Regan, congratulations on an exceptional text.

Every mathematical problem involving time is necessarily thermodynamic. It is now well known that the P vs. NP question asks whether a function that is non-invertible in polynomial time exists. To know this, we must know whether there is some inversion that violates the second law of thermodynamics, because time, in our world, is ruled by the entropy increase (known as the “thermodynamic arrow of time”). If there is an inversion that violates the thermodynamic arrow of time, then a one-wayness condition arises.

Or rather: every mathematical problem involving time asymmetry is necessarily thermodynamic.

Very interesting post! I would like to comment on the section “Relationship To Time Travel.” Would this work the other way around, that is, proving mathematically that time travel is impossible in the physical world? Here is a “proof.” Take a hard computational problem, such as one that provably takes exponential time (they are known to exist by the Time Hierarchy Theorem). Start running it on a computer, with a large enough input so that it must provably take 100 years to finish the computation on that computer. Then jump to the distant future, read the result, and jump back to the present. This way you can solve the problem in a few minutes, assuming that time travel is possible. Since the task _provably_ takes at least 100 years, we proved that time travel is impossible.

Time complexity is expressed as a number of execution steps, regardless of any physical notion of time. The proportionality of the number of time units to this artificial measure isn’t guaranteed by the Time Hierarchy Theorem.

If we run the algorithm on a _fixed_ computer, where we know how much time each step takes, then the number of execution steps can be translated into actual running time. Therefore, we can make a claim that a particular task cannot be solved on this fixed computer faster than, say, 100 years. If time travel still allows us to solve it in minutes on the _same_ computer, then, apparently, we proved that time travel is impossible, as we obtained a contradiction.

OK, thank you Andras. Now I know why I don’t believe in time travel – at least to the future. Regarding the past, a good memory can help… 🙂

I don’t think that makes sense. As Serge said, “time complexity is expressed as a number of execution steps, regardless of any physical notion of time”. If you translate this into actual running time, first you need to answer: what is time? And what is time travel? To show how vague definitions are problematic, suppose there are two people now, one stays by the side of the computer watching it run, and the other goes to the future. Obviously, for the first person there is no violation whatsoever. But for the second one the problem was solved under minutes – doesn’t that mean instead that the time “outside” the person time travelling just accelerates to that person, so what would take 100 years takes minutes because the computer is much faster now?

There is another point I would like addressed. Can one think of a physical proof that wouldn’t provide a corresponding mathematical proof?

Invoking physical law of this kind seems to imply that we have models in place to interpret experiments and therefore sufficient mathematical description to make a corresponding mathematical proof.

In the case of the unit squares, we have a sufficient understanding of rotation in space to make the conjecture.

It should have something to do with one’s reductionism stance. Personally, I don’t think we understand fundamental processes in nature without some sort of reductionism. Although there are some tasks that a human being can perform better than a computer can, for instance, we don’t think this is a fundamental aspect of nature.

Consider this problem: My friend is a savant and when I list incredibly long semi-primes, he can quickly factor them in his head. He can’t (or won’t) tell me how he accomplishes this, however. This would strongly suggest (but not prove) that there is a classical factoring algorithm. However, I would not accept this as a physical proof until I understood how my friend’s brain functioned. Once I did understand how he was able to accomplish this feat, I would quickly be able to find a corresponding mathematical proof.

I think you’re allowed to accept this as a physical proof. Of course you’ve got to understand how your friend does the factoring for it to become a mathematical proof. But if he succeeds in this act of communication, he’ll have thus proved the theorem to you.

If you believe certain concrete instances of any problem are yea-hard in reality, then you’re simply saying any actual DTM needs more than, for example, polynomial number of steps in order to solve them.

Hence, you are saying nothing about time travel, orbiting black holes, or traveling near light speed – physical situations where time can behave strangely – but without changing the fundamental fact that computational complexity counts the number of steps, since time here is not the physical one; it’s only an abstraction, of course.

Mathematics is free tautological imagination, ruled by axioms, and axioms about axioms, and so on, endlessly. But Physics is constrained reality, whatever reality is: thereby, physical proofs can only be about “the” Universe, whereas mathematical proofs can be about “any” Universe.

Mathematical proofs don’t work in a universe where there are little pixies constantly erasing and adding stuff to proofs that people write in order to confuse us. In fact, how can we be sure we are not living in such a universe?

somewhat related to this subj, do you have any blogs on the transition point of SAT & its deep connections to thermodynamics? it amazes me how many theorists don’t really seem much interested in this research whereas it seems quite significant…. hey its worth at least a single post eh? one researcher in particular stands out, Mezard….

but this post did just inspire me to collect a bunch of TCSphysics links I have hanging around (including a ref to this blog, the brilliant kalai/harrow debate on qm noise) and do a blog on it someday.

another key area to ponder in the very intimate yet still-mysterious connection between physics/TCS: entropy!

also, information and physics are tightly connected/intertwined in quantum mechanics, Wheeler is one of the proponents of that (“It from Bit”), and also… Bohm! but its verging on more metaphysical at times….. some various cutting-edge quantum experiments do attempt to delve into the nature of this “information”…..

There can’t be such a thing as a physical proof of a mathematics theorem. For one reason: how could you prove that Nature has respected only your axioms, but no additional principle?

For instance, Einstein never proved that Euclid’s Fifth Postulate was undecidable, but some great mathematicians – Gauss, Lobachevsky and Riemann – had already accomplished this task before him. Thus, all the required mathematical apparatus preexisted his discovery of General Relativity.

Even though no physics experiment will ever settle the PvsNP question, they could become a strong incentive to replace our good old Turing machines by a new, more realistic computing model. At best, all that could hopefully result in a formal proof of the undecidability of P=NP.

You might enjoy the 50th anniversary special of Dr. Who that aired this weekend. A key plot device involves the time travelling doctor doing exactly as you describe: setting a difficult computation in motion, and recovering the solution from a version of the computer hundreds of years in the future.

”Information is physical”. Always. Of course. However, we don’t know what physical is. We don’t know, what physics is, for sure. Some roll out the Quantum, and say:”here is physics: it from bit”. However, we are not certain of what the Quantum is (= we don’t know whether quantum theory is “complete” or not; ultimately it’s a Physical Problem, experimentally determined; Von Neumann thought he had a “formal” proof, but he was wrong).

Are there Physical Problems that are not Mathematical Problems? Or Physics Proofs that have no Mathematical Proofs? Well, at this point, there are. Take general fluid flow. Be it water inside a tube, or a meteor going hypersonic, these Physical Problems exist and have solutions, of which the physical objects themselves are Physical Proofs. It is not clear that they have Mathematical Solutions, let alone Mathematical Proofs.

“mathematical theorems are not supposed to be contingent. This is a fancy philosophical term for propositions that are “true in some possible worlds and false in others.” In particular, the truth of a mathematical proposition is not supposed to depend on any empirical fact about our particular world.”

With all due respect, that’s theology. Conventional theology, but still theology. I can show that the proof that square root of two is irrational contains assumptions made on an empirical basis (along the lines of mn = nm, actually; similarly, the choice between Presburger arithmetic and Robinson, or Peano arithmetics can be viewed as empirically driven.)
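For reference, the familiar proof being alluded to runs as follows (the standard version; where empirical assumptions enter is, of course, the commenter’s claim to argue):

```latex
\sqrt{2} = \frac{p}{q} \ \text{in lowest terms}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p = 2r \ \text{for some } r
\;\Rightarrow\; 4r^2 = 2q^2
\;\Rightarrow\; q^2 = 2r^2
\;\Rightarrow\; q \ \text{is even},
```

contradicting that $p/q$ was in lowest terms. The step from “$p^2$ is even” to “$p$ is even” rests on facts about multiplication such as $mn = nm$ and odd × odd = odd, which is where the comment locates the empirical content.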

However, what is an achieved mathematical proof? Just a neural arrangement. Similar neural arrangements in the minds of noble primates called mathematicians. Thus, a mathematical proof is a physical object constructed similarly in the minds of many. So a mathematical Proof is a Physical Proof, just as the fluid in a tube is a Proof of a Physical Problem, the flow problem. And similar tubes have similar “proofs”, once similar fluids similarly flow.

So any Mathematical Proof is a Physical Proof.

Well, not exactly. It’s rather the duplication of those neural arrangements from one brain to another that deserves to be called a proof. This is usually achieved by means of a text written in some human-readable language. But whenever you replace the neurons with transistors, you get what’s usually called a program. I suspect the difficulty of P?=NP to reside in what programmers call a deadlock, that is two concurrent processes which prevent each other from running. The novelty being here that the deadlock occurs in the head of the computer scientist!

1. Mathematical/Physical intuition is a positive example of time travel.

2. You are assuming here that time travel requires resources polynomial in the time difference.

2.1 A photon travels in zero self-time from birth to death, although on Earth an arbitrary amount of time can pass.

2.2 Any massive particle requires an exponential amount of energy to accelerate as it approaches the speed of light, and therefore, even if you can reverse time, … to be useful one needs exponential resources, according to the current physical model.

3. In most calculations the simple approaches to computing something diverge; therefore, it seems that physical reality is working close to a singular point, so potentially time travel is possible even with polynomial resources.

Conclusion. Our current knowledge is very crude description of reality.

PS. The current funding strategy (and the implemented social model) will keep this state for a long time, just as the development of the Roman empire stalled the advances of the natural sciences for hundreds of years. However, the achieved technological level allows for a completely different social model in the near future. We again have a competition between rational social psychology and dominatory scarce-resource collection, with the second approach winning.

My initial reaction was that if one could follow your suggestion and come up with a physics proof that took the form of a thought experiment rather than an actual physical experiment, then it would effectively be a mathematical proof. Then it occurred to me that one would need to cast the thought experiment in such a way that one could be confident that the general set-up was consistent: if one set up a toy universe with physical laws that were contradictory, then any theorems one could prove in that universe would be of little use. So finally I come round to the following view. It seems to be very hard to set up a theoretical model of the actual physical universe and prove that it is consistent. However, we do have a model of the physical universe, namely … the physical universe itself. So experimental evidence gives us a kind of guarantee of consistency (I realize that a philosopher of science would regard that remark as extremely naive, as it wrongly suggests that experiments give us some kind of direct contact with reality — I don’t know whether a more sophisticated rephrasing would be possible). In that way, it seems at least possible that there could be a physical proof of a result that is very hard to prove mathematically.

There is one remark of yours that I disagree with, though it is not clear from the way you write that you agree with it either, since you are describing a common reaction that may not be your reaction. It is the idea that a theorem proved physically would be contingent. If one regards physical evidence as demonstrating the consistency of a model, that consistency is not contingent. What is contingent is the fact that we are lucky enough to inhabit a model that demonstrates its consistency.

Hi Tim,

I agree that the physical universe can prove its own consistency, but that seems to be all it can do from a mathematical viewpoint, doesn’t it? Because, as long as the universe hasn’t delivered the totality of its axioms to us poor humans, no physical proof will be rigorously convertible into a convincing mathematical proof…

Serge: The universe has been perceived to present naturally with some axioms. However, that was with yesterday’s physics, I claim there are new, stronger axioms. Just ignoring them does not mean they can be ignored while preserving veracity.

For example, one is free to do mathematics with (A and Non A) viewed as true. One can deduce lots of things from that. Actually anything whatsoever. But it does not make those deductions true.

Thank you for commenting, Serge. I don’t see any difference between what you said, and what I said. But maybe I miss something, and you can instruct me.

It’s true that the duplication is achieved by text, that is, by neuronal, or more exactly axonal, communications. And one can in turn duplicate this with transistors. So far, so good.

However, one should pay attention to this: there is much more to the brain than axons. There are dendrites, but also astrocytes. This introduces a TOPOLOGICAL character to thinking absent in transistor networks. That’s also why there is more to human mathematics than just programming in a computer.

More to say, but I have to travel.

PA

Patrice,

There are certainly a few other things more to the brain! Some people even conjecture that the conscious part of the mind is a quantum computer… I find this hypothesis quite seductive, since it would imply that classical computers can only have a “subconscious” way of thinking. I don’t know where it leads, but I like it.

The difference with what you said in the first place lies in the necessary distinction between processes and programs. Neural arrangements aren’t proofs (programs), they’re thoughts (processes). Making this distinction is vital to understanding what’s expected of a proof: it must be efficient as a communicating device.

My theory of mental deadlock seems likely – at first sight – to explain a lot of impossibilities in complexity theory, but I have to make sure it does work.

Serge: A lot of the meta-characteristics of the quantum and of consciousness are similar. And, moreover, the quantum makes biology possible (non-locality itself is used for computing/finding the lowest-energy solutions; see chlorophyll).

What I was pointing at was much more prosaic: there is a whole topological computation going on in the brain… on top of, but entangled with, the axonal system. Without going quantum, that makes human brains completely different from transistor networks.

You tell me a proof is “expected” to be “efficient as a communicating device”. Well, efficient long range communication is precisely what axons do in the brain.

It is true that brain regions can shut down other brain regions (even without epilepsy; it actually happens routinely, everyday.)

If all brains are similarly limited, one obviously constrains what one can talk about. Including “proofs”.

Gowers: The best experiments give us the best kind of direct contact with reality that we have. It would be naïve to think otherwise (a naivety some philosophers have been culprit of). Such experiments are precisely designed by trying to eliminate all what could go wrong and all what could act as a screen between us and reality.

If a philosopher of science thinks otherwise, she/he ought to come up with a better thought experiment. To start with. Otherwise she/he would be just striking a pose.

The following natural sequence leads to a prospective theorem-from-physics:

**Rigorous theorem:** Computing permanents exactly is #P-complete (Valiant).

**Permanent-of-Gaussians Conjecture:** It is #P-hard to approximate the permanent of a matrix A of independent N(0, 1) Gaussian entries (Aaronson and Arkhipov).

**Permanent-of-Gaussians Sorting Conjecture:** Approximately sorting a list of Gaussian-entry matrices by permanent is computationally easier than approximately sorting that list by determinant.

**Note 1:** The Permanent-of-Gaussians Sorting Conjecture follows from Gil Kalai’s postulate that all physical processes (including BosonSampling experiments) can be simulated with classical resources.

**Note 2:** It is consistent with present mathematical knowledge that the Permanent-of-Gaussians Conjecture and the Permanent-of-Gaussians Sorting Conjecture are both correct.

**Note 3:** Since computing an n×n matrix determinant by the Coppersmith–Winograd algorithm requires O(n^2.3736) operations, the Permanent-of-Gaussians Sorting Conjecture requires that (approximate) sorting-by-permanent must be *very* fast.

**Question:** How many Gödel’s Lost Letter readers are endowed with sufficient faith by the GLL theorems-from-physics principle as to regard the Permanent-of-Gaussians Sorting Conjecture as plausible … sufficiently plausible as to test numerically?

**Answer:** That number is greater than zero!

What does “sorting-by-permanent” mean?

*Sorting-by-permanent* means “given a list of matrices whose elements are i.i.d. zero-mean Gaussian random samples, sort that list in order of ascending |permanent|^2.” A variety of non-parametric statistical measures assess the quality of an approximate sort: Spearman’s rho and Kendall’s tau are two of the most popular such measures.
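As a concrete toy version of this experiment (our own sketch, not the commenter’s actual code; the helper names `permanent`, `determinant`, and `spearman_rho` are ours), one can generate small Gaussian matrices, rank them by |permanent|^2 and by |determinant|^2, and compute Spearman’s rho between the two orderings. Ryser’s formula keeps the permanent exact, if exponential:

```python
import itertools
import random

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):                      # non-empty column subsets
        for cols in itertools.combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def determinant(a):
    """Determinant via Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] for row in a]
    det = 1.0
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(m[i][k]))
        if m[piv][k] == 0.0:
            return 0.0
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            det = -det
        det *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return det

def spearman_rho(x, y):
    """Spearman rank correlation; assumes no ties (safe for float data)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

random.seed(2013)
mats = [[[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(4)]
        for _ in range(100)]
by_perm = [abs(permanent(m)) ** 2 for m in mats]
by_det = [abs(determinant(m)) ** 2 for m in mats]
print("Spearman rho of |det|^2 vs |perm|^2 ordering:", spearman_rho(by_perm, by_det))
```

For such tiny matrices one typically sees some positive correlation, since both quantities track the overall scale of the entries; the Sorting Conjecture concerns far stronger PTIME predictors at scales where exact permanents are out of reach.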

The color-coding on this on-line figure shows such an approximate sorting, for simulated Scattershot BosonSampling of 15 heralded photons emitted into 200 coherent modes, sorted by an entropic measure. The Spearman rho of the O(nPhoton^2) (easy-to-compute) entropic sorting relative to the (hard-to-compute) |permanent|^2 sorting is ~0.14. This is visible as the tendency of red (high-entropy rank) data points to appear in the upper-right (large permanent) quadrant of the data plot.

A question to be asked later this week on MathOverflow — one stimulated in large measure by this GLL column and by Gil Kalai’s visit to Seattle — is whether more sophisticated PTIME measures might yield even larger values of Spearman rho relative to |permanent|^2.

The GLL-style physical “proof” that such sorting methods must exist is:

• Gaussian-matrix permanents are resistant to noise, and

• Permanents are physically measurable, and

• Noisy physical systems are efficiently simulable, and therefore

• PTIME algorithms for sorting-by-permanent must exist.

Needless to say, this “proof” provides little more than an incentive to seek efficient ranking algorithms. Still it is well to keep in mind a passage from Borges’ *Averröes’s Search*. Perhaps understanding of quantum simulation algorithms is similarly nearby-yet-mysterious to us, as the notions of “comedy” and “tragedy” were nearby-yet-mysterious to Borges’ imagined Averröes. For younger researchers especially, this is a happy thought.

Best wishes for a happy holiday season are extended to all of *Gödel’s Lost Letter and P=NP*. And thank you, Dick Lipton and Ken Regan, for sustaining another wonderful year of GLL posts.

“PTIME algorithms for sorting-by-permanent must exist.” If you think you found a proof, just go ahead and publish it.

That should read: if you think you found a proof of the statement “PTIME algorithms for sorting-by-permanent must exist,” then go ahead and send it to STOC or FOCS. It may be that you don’t have the PTIME algorithm, but proving such algorithms exist would be important.

The historical development of PTIME quantum simulation algorithms richly illustrates Richard Feynman’s maxim that “a very great deal more truth can become known than can be proven.”

A nice discussion of PTIME scaling in high-accuracy large-scale quantum simulation is given by Bochevarov *et al.* in their recent survey *Jaguar: A high-performance quantum chemistry software program with strengths in life and materials sciences* (2013), in the section “Traditional methods” (p. 2115ff), which draws heavily on work (spanning decades) by luminaries that include the Nobelists Linus Pauling, Lars Onsager, Paul Dirac, Walter Kohn, Lu Jeu Sham, John Pople, Martin Karplus, Michael Levitt, and Arieh Warshel.

It is striking that even after many decades of work by top-rank scientists, scarcely a single result in the Bochevarov *et al.* computational chemistry survey can be rigorously proven, and yet (inexplicably) computational quantum simulation has for many decades sustained a Moore’s-Law pace of exponential improvements in capability.

**An open question:** What sustains this surprising (and growing) gap between Feynman-style “truth that has become known” versus “truth that is proved”? No one knows.

A particularly valuable aspect (as it seems to me) of the Kalai/Harrow quantum computing debate in general — and of BosonSampling experiments and the PTIME simulation of those experiments in particular — is that these wonderful, beautiful, fundamental questions have (as Scott Aaronson has rightly said) “a grand total of zero known practical applications.”

In thought-provoking contrast, the pace of development of simulation tools in chemistry and materials science has been chronically slowed by trade secrets, publication restrictions, national security considerations, priority disputes, and patent litigation. In quantum computing research proper, the luminaries of early decades routinely filed patents prior to publishing in the open literature (John von Neumann, Charles Bennett, David DiVincenzo, Peter Shor, Daniel Gottesman, Ralph Devoe, Isaac Chuang, and Bruce Kane are examples), and even in the present decade, D-Wave has just announced its 100th awarded US patent.

**Conclusion:** The Kalai/Harrow debate and Aaronson/Arkhipov BosonSampling provide the quantum computing community with vital “protected habitat” for the nurturing of young researchers and vigorously open scientific/mathematical discourse. We can all be grateful for this outstanding contribution to mathematical and scientific progress.

Regarding which openness, this Holiday Season appreciation and thanks is extended to Gil Kalai, Aram Harrow, Scott Aaronson, and Alex Arkhipov (and to Dick and Ken at GLL).

Again you are throwing out some open questions. Since you are claiming something, send it to STOC or FOCS and let them decide on your approach.

Your “STOC or FOCS” suggestion is welcome, J … though this simulation framework will almost surely appear first in the applied physics/engineering journals.

Formally speaking, the entropic permanent-comparator derives from the pullback of (exponentially large-dimension) Hilbert-space dynamics onto (polynomially small-dimension) unit-rank secant varieties of Segre varieties of Veronese varieties. As the secant rank *r* of the varietal state-space is increased, higher-order quantum effects are modeled with increasing fidelity, such that exact Hilbert dynamics is recovered in the limit of exponentially large secant rank.

It is natural to ask, “What simulation accuracies can be achieved with *polynomial* state-space rank?” That is a hard question. With present mathematical tools rather little can be proved rigorously, and yet practical experience suggests that low-rank varietal dynamics can be a wonderfully good approximation to large-dimension Hilbert dynamics.

Definitely I (and many folks) survey each year’s STOC and FOCS presentations for proof technologies that afford a handle on these tough questions, accepting that progress is slow and cumulative. As the sages say: “Though the messiah tarries, we are patient and believe in his coming with a full heart.” Folks who await a rigorous understanding of quantum simulation efficiency must be especially patient.

When it comes to complexity theory, I don’t know what I am talking about, but I find intriguing the connection between P!=NP and phase transitions and the renormalization group. I believe that Susan Coppersmith from the U of Wisconsin (is she related to Don Coppersmith?) has written some papers about this.

Regrettably, I think we can confidently say that no proof of P != NP will come out of that approach. The issue is that random problems can have all the statistical-physics hallmarks of exponential hardness, and yet there can be a polynomial-time algorithm for them. This is what happens with random XORSAT, which has many of the same statistical properties as random 3-SAT, and yet Gaussian elimination solves it in polynomial time. This came up in this blog in discussions of Deolalikar’s claimed proof a few years ago.
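The XORSAT point above can be made concrete in a few lines. Here is a minimal sketch (the instances are illustrative): each XOR clause is a linear equation over GF(2), so plain Gaussian elimination decides satisfiability in polynomial time, with no landscape search at all.

```python
import random

def gf2_solvable(clauses, parities):
    """Decide satisfiability of an XORSAT system (each clause
    x_i ^ x_j ^ ... = b) by incremental Gaussian elimination over
    GF(2).  Each row is stored as a bitmask over the variables."""
    pivots = {}  # leading bit position -> (row mask, row parity)
    for vars_, b in zip(clauses, parities):
        mask = 0
        for v in vars_:
            mask ^= 1 << v  # XOR, so repeated variables cancel
        while mask:
            lead = mask.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (mask, b)  # new pivot row
                break
            pmask, pb = pivots[lead]
            mask ^= pmask  # eliminate the leading variable
            b ^= pb
        else:
            if b:          # row reduced to 0 = 1: unsatisfiable
                return False
    return True

# x0^x1 = 0, x1^x2 = 0, x0^x2 = 1 sums to 0 = 1, hence UNSAT.
print(gf2_solvable([[0, 1], [1, 2], [0, 2]], [0, 0, 1]))  # False

# A random 3-XORSAT instance; elimination decides it regardless of
# how "glassy" the corresponding energy landscape looks.
random.seed(0)
n, m = 40, 30
clauses = [random.sample(range(n), 3) for _ in range(m)]
parities = [random.randrange(2) for _ in range(m)]
print(gf2_solvable(clauses, parities))
```

The point of the sketch is that the solver never “moves around” in the solution landscape; it globally re-expresses the variables, exactly as described in the comments below.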

So you don’t think there is any connection between the “P to NP transition” and the transition of a fluid from laminar to turbulent behavior, or water from liquid to gas or ice, or a metallic alloy to its many phases?

There would be a connection, if we somehow knew that the only way to solve SAT is through algorithms like backtracking or local search. Then hardness would be due to the bumpiness of the “landscape” as in spin glasses, exponentially many local optima with tall barriers between them, etc. The problem is there might be some miraculous algorithm that has nothing to do with local search. Gaussian elimination doesn’t move around in a landscape; instead, it globally redefines variables in a way that simplifies the problem.

k-XORSAT itself is very special, as it is a random linear system, which is why Gaussian elimination works. Add any nonlinearity and that goes out the window. There are several papers showing that nonlinearly perturbed XORSAT behaves like regular SAT. Once a problem is nonlinear, one expects that only nonlinear solvers will work to find solutions. An iterative nonlinear system, viewed as a dynamical system, typically generates entropy and thus loses information. Ultimately, entropy production (much like Boltzmann’s law) is, I believe, the reason for P!=NP; in this sense P!=NP is essentially the second law. Assume you had a *deterministic* solver in the form of a system of equations that evolves a trajectory in the phase space of the problem to the solution, and that this trajectory is short (its length scales polynomially with problem size). For hard problems this trajectory would be chaotic, the dynamical system would have positive topological entropy, and computing such trajectories would become exponentially costly. For more details see our papers in Nature Phys. 7, 966–970 (2011) and Scientific Reports 2, 755 (2012).

Hi Zoltan, of course I agree with the intuition “one expects that…” in your post, and I like your paper. But nonlinear problems can be linear ones in disguise, and moreover linearity is not the only reason a problem can be in P. While studying the dynamics of particular algorithms can give us a lot of intuition, proving that _no_ efficient algorithm can work is a much more distant goal.

One of my favorite examples is Max Flow. If we define the “neighborhood” of a solution as the flows we can get to by adding additional positive flows, it can be full of local optima, and the greedy algorithm gets stuck. But as soon as we allow a little bit of backtracking, adding negative flows to cancel out previous flows, the landscape becomes simpler, with a greedy algorithm leading easily to the optimum. This ability to reorganize the landscape should give us pause about problems like SAT; how do we really know that there isn’t a way to redefine the problem that makes the landscape enormously simpler?
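The Max Flow example can be sketched concretely. Below is a standard Edmonds-Karp implementation (the example network is illustrative); the reverse residual edges are exactly the “negative flows” that let later paths cancel earlier choices and so reshape the landscape:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest path in the
    residual graph.  Reverse residual edges allow later augmenting
    paths to cancel earlier flow, so greedy augmentation always
    reaches the global optimum."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest s-t path with spare residual capacity.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path remains
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push  # reverse edge: future cancellation
        flow += push

# Unit-capacity diamond s->a->t, s->b->t, plus a cross edge a->b.
# A greedy that first routed s->a->b->t would be stuck at flow 1;
# the reverse edge b->a lets a second path undo that choice.
cap = [
    [0, 1, 1, 0],  # s
    [0, 0, 1, 1],  # a
    [0, 0, 0, 1],  # b
    [0, 0, 0, 0],  # t
]
print(max_flow(cap, 0, 3))  # 2
```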

It is critically important to distinguish between the objective landscape, the boolean functions as mathematical objects, and the syntactic landscape, the particular formal language that we are using as a propositional calculus to denote and compute with those objects. If we do hill-climbing, we must keep our feet on the objective territory, however much we must use syntactic maps to narrate the travelogue. (You should be thinking of manifolds here.) The object domain has a fixed structure but the conceptual clarity and computational efficiency of propositional calculi can very likely be improved indefinitely.

Hi Cris! I wasn’t implying that any nonlinearity will cause a problem to be hard; that is clearly not the case. Instead, my comments were about a possible mechanism by which hardness appears, namely entropy production (for which nonlinearity is only a _necessary_ ingredient). That is, given any deterministic solver/algorithm/dynamical system, there will be a class of problem instances on which the given solver takes exponential cost to find solutions, due to information loss/entropy production. If P!=NP is a fundamental law, then by studying how a deterministic algorithm fails when moving onto the class of aforementioned hard instances we might learn something about the mechanism by which hardness appears. One tends to talk about problems in P or NP, but I think we can learn more from studying the behavior of an algorithm when moving from easy instance classes to hard instance classes (for that algorithm). For if there is a fundamental mechanism behind P!=NP, it should show itself in the algorithm’s behavior. Thus, the behavior of the solver/algorithm that we introduced in the papers quoted above was used to help build an intuition about such a possible mechanism, and, at least to us, entropy production seems to play a key role in it. Although the second paper quoted above (Sci. Rep.) is mainly about Sudoku, it describes this view a bit better (sorry for the plug). I think before any proof comes along for P vs NP, one needs to have an intuition about what is going on. Take Boltzmann’s H-theorem (which I mentioned in my previous comment): although the molecular motion is Hamiltonian and time-reversible, when aggregated to the level of the ensemble, a quantity now called information entropy cannot decrease. This was known since Boltzmann, but it took over 100 years before Cédric Villani produced a rigorous proof (2010 Fields Medal), connecting the entropy production to so-called molecular chaos as a mechanism for information loss (essentially billiard-chaos generated by the nonlinearity of molecular collisions). I think it is more plausible to show that if P=NP then the second law would be violated (by some gedanken experiment, as Ken describes) than to provide a direct proof of P vs NP.

The comment should be: If you think you found a proof for the statement “PTIME algorithms for sorting-by-permanent must exist,” then go ahead and send it to STOC or FOCS. It may be that you don’t have the PTIME algorithm, but proving such algorithms exist would be important.

That information is physical may be debatable, but that computation is physical may be less controversial. When you compute, you change something to show that computation. Even a No Operation instruction does something in order to change nothing. So in a universe that does computation, a true no-op is never feasible, and hence you have entropy and all the rest that follows. Even a universe with no entropy gives rise to the concept of a multiverse, which changes something. So does my BS have any meaning?

Here is what I gather from some correspondence with David Deutsch on these things: When analyzing the complexity of any computation involving time travel, one must take the cost of building the time machine into account as well. Therefore such a “quick” solution would likely not be “cheap” at all.

The biggest problem I have with computation with closed timelike curves is that it requires a (possibly unrealistic) 100% guarantee that nothing will go wrong with the “boring” part of the computation (I mean the centuries upon centuries during which the job is actually done, before sending a message back to the present). See our paper http://link.springer.com/article/10.1007/s11047-012-9337-6 for details.

Dear Drs. Lipton and Regan,

thank you for quoting my papers on this important topic, and congratulations on your singular article.

To elucidate some points of my work, I yesterday wrote “A note on ‘Theorems From Physics?’ by Richard Lipton and Kenneth Regan,” which is available at

vixra.org/abs/1312.0003

What kind of information process is scientific inquiry?

What kinds of information process are involved in the various types of inference (abductive, deductive, inductive) that go to make up scientific inquiry?

What kinds of information process are computation and proof?

Many types of deductive inference, including many kinds of computation and proof, don’t really change our state of information so much as increase the clarity of that information. Do we have any way of quantifying clarity in the way we give a measure to information?

In fact, P!=NP looks much more like a law of physics than one of mathematics. The absence of a proof, the impossibility of building a model of P=NP, the absence of mathematical consequences outside complexity theory: all these facts point in the same direction.

Thus, we might have P!=NP in physics while P=NP still holds in mathematics. Indeed, no mathematical fact seems to go against the existence of a poly-time Turing machine that solves SAT, even though physics won’t allow us to exhibit any such machine.

When theory and practice disagree so strongly, that means it’s high time to change the theory.

The theoretical change could be quite straightforward: “It is forbidden to speak of the existence of an algorithm. Instead, each algorithm is endowed with a probability of being found.”

This is quite similar to what once happened in set theory, when the set of all the things having some property was declared meaningless, thus ruling out the contradictory “set of all sets”. Instead, it was only allowed to speak of the subset – of an already-known set – of all the things having some property.

Likewise, you can allow one to speak only of the probability of finding, say, a polynomial algorithm for factoring (which is close to zero) or for sorting (which is close to one). This being said, you no longer have to wonder whether a polynomial algorithm for SAT exists or not, since the question is meaningless. Of course, one may wonder whether this probability is zero or not… but such a matter can be decided by physics only.

The GLL/PvsNP theme “Theorems from Physics” is developed as a fictional narrative in the *Shtetl Optimized* comment “Trapdoor BosonSampling: Twenty years after Scattershot BosonSampling.”

The “Trapdoor BosonSampling” narrative is consciously modeled upon post-WWII fictional narratives incorporating theorems-from-physics that include Wernher von Braun’s *The Mars Project* (1953), a *Time Magazine* theme issue by Simon Ramo and Dean Wooldridge titled “Electronics: the New Age” (1957), Norbert Wiener’s *The Tempter* (1959), and especially Leo Szilard’s techno-fable *The Voice of the Dolphins* (1961).

The narrative themes of von Braun, Wiener, and Szilard (space travel, control theory, networked information, and globalization) all became 20th century realities with aggregate transformational effect.

Similarly, the combined narrative themes of “Trapdoor BosonSampling” (scalable quantum technologies and efficient dynamical simulation of those technologies, united by rigorous theorems in complexity theory) are all destined to become 21st century realities with comparable transformational effect.

In the spirit of Borges’ cognitive-science story *Averroës’s Search*, whose central teaching is “what we seek is never very far away,” the “Trapdoor BosonSampling” narrative is constructed to be as optimistically enlightened as feasible and yet entirely consonant with established physics and rigorous mathematics.

In the sustained hope that what we seek is never very far away, best wishes for a Happy Holiday season are extended to all GLL readers.

— Robert Musil, *The Man Without Qualities*

I agree with this. However, whenever there’s the slightest suspicion that a problem might be settled by hardcore physics, then at least you shouldn’t expect to solve it within arithmetic alone. Most likely, some additional axioms will be required.

But is complexity theory, in its totality, a genuine part of mathematics after all? This is the question! I agree for all that concerns provable complexity, but I’m much less sure about the undecidable statements – I’d rather situate them inside physics. So much so that I ventured to suggest that, maybe in another universe, the hardest theorems aren’t the same as in ours.

And of course, the above point of view is possible only because computer science is the only science that can be viewed both as part of mathematics and as part of physics. Unlike the other scientific fields, there’s no theory between the facts and their modeling – computer science is its own mathematical theory! That’s why it needs such developments as quantum computing or thermodynamics: they’re eventually providing it with a theory that isn’t purely mathematical.

Pip says: “Let’s call an argument that shows that if some mathematical statement X is true, then some physical principle Y is incorrect…” Does this violate the famous Incompleteness Theorem of Kurt Gödel?

Should one be able to prove paraconsistency, Gödel’s theorem about not being able to prove the consistency of a theory could lose its force.

Just some additional thoughts about the Landauer Limit

http://thefurloff.com/2013/12/10/boltzmanns-constant-and-undercounting/
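For readers following the Landauer-limit thread, the bound itself is a one-line calculation; here is a small sketch (the 300 K room temperature and the gigabyte comparison below are illustrative assumptions):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K (exact in SI since 2019)

def landauer_bound(temperature_k):
    """Minimum heat in joules dissipated per irreversibly erased bit."""
    return K_B * temperature_k * math.log(2)

e_bit = landauer_bound(300.0)           # assumed room temperature
print(f"{e_bit:.2e} J per erased bit")  # 2.87e-21 J per erased bit
# Erasing a full gigabyte (8e9 bits) at the limit costs ~2.3e-11 J,
# so even hardware dissipating ~1000x the bound per erasure remains
# minuscule on everyday energy scales.
```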

A follow-up on the previous post… the point being that we shouldn’t place too much value on claims about the specific relationship between energy, entropy, and information without a complete physical theory that includes gravity.

http://thefurloff.com/2013/12/14/follow-up-notes-testing-for-extra-dimensions/

One last post on this topic since it is related to the question about proofs from physics.

http://thefurloff.com/2013/12/15/philosophy-reduces-to-statistical-mechanics/

In a way, the relation between “physics space” and “information space” is one of the topics I address in my work on inquiry driven systems. Here is a pertinent place in medias res, from which point one may happily sample both backwards and forwards:

• Prospects for Inquiry Driven Systems • Unabridgements

The 2013 gravity comprehension/definition is the greatest science feat since the early 1920s.

Learn what natural gravity is scientifically:

Think of the consequences re classical science of this comprehension of gravity…

How the universe was created from nothing

Origin And Nature of the Universe, the greatest science feat since the early 1920s.

New Science 2013 versus classical science

Classical Science Is Anticipated/Replaced By The 2013 Gravity Comprehension !!!

http://universe-life.com/2014/02/24/gravity/

Attn classical science hierarchy, including Darwin and Einstein…

“I hope that now you understand what gravity is and why it is the monotheism of the universe…DH”

=================================

Gravity is the natural selection of self-attraction by the elementary particles of an evolving system on their cyclic course towards the self-replication of the system. Period

( Gravitons are the elementary particles of the universe. RNA nucleotides genes and serotonin are the elementary particles of Earth life)

Gravity

Gravity is the natural selection of mutual attraction by the elementary particles of a developing system in the course of its cyclic development toward its self-replication. Period.

(The graviton is the elementary particle of the universe. Genes, the nucleotides of ribonucleic acid, and serotonin are the elementary particles of Earth life.)

Dov Henis(comments from 22nd century)

http://universe-life.com/2013/11/14/subverting-organized-religious-science/

http://universe-life.com/2013/09/03/the-shortest-grand-unified-theory/

PS: Note, again:

– Classical Science Is Anticipated/Replaced By The 2013 Gravity Comprehension !!!

– Think of the consequences re classical science of this comprehension of gravity…

DH

P.S. Understanding the essence of gravity provides a simplified/predictive/corrective logical basis for all sectors and components of classical science.

There is here a failure to exploit an opportunity for Israel to elegantly and civilly displace the USA from its position in the world as the leader/fixer of the “scientists/science” herd through the American religious professional association, and to seize the position of conducting/developing/leading the new, upgrading science of 2013, turning American science into a satellite of Israeli science. This failure to act is a stupid/pathetic/shameful omission…

DH