# Halting Is Poly-Time Quantum Provable

*How does this intersect David Deutsch’s thoughts in 1985?*

Composite crop from homepages

Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen (JNVWY) have just posted a paper titled "MIP* = RE." The title means that multiple provers sharing quantum entanglement, given any Turing machine $M$ and string $x$ accepted by $M$, can convince a polynomial-time bounded verifier with high probability that $M$ accepts $x$. The time is polynomial in the lengths of $M$ and $x$ regardless of how long $M$ takes to halt on $x$.
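To see why this puts halting on the table at all, recall that acceptance is the canonical recursively enumerable ($RE$) problem: run the machine with ever-larger step budgets and report success if any run accepts. Here is a minimal Python sketch of that semi-decision procedure; the `collatz_reaches_one` "machine" is our own toy illustration, not anything from the paper.

```python
def accepts_within(program, x, budget):
    """Simulate `program` on input x for at most `budget` steps.

    `program(x)` is a generator that yields "running" each step and
    "accept" when it accepts.  Returns True iff it accepts in time.
    """
    run = program(x)
    for _ in range(budget):
        try:
            if next(run) == "accept":
                return True
        except StopIteration:
            return False
    return False

def semi_decide(program, x):
    """Halts (returning True) iff `program` accepts x -- the defining
    property of a recursively enumerable set.  If `program` never
    accepts x, this loops forever, as any RE procedure may."""
    budget = 1
    while True:
        if accepts_within(program, x, budget):
            return True
        budget *= 2  # dovetail over ever-larger step budgets

def collatz_reaches_one(x):
    """Toy 'machine': accepts x iff its Collatz sequence reaches 1."""
    n = x
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield "running"
    yield "accept"

print(semi_decide(collatz_reaches_one, 27))  # True
```

The point of the theorem is that the verifier escapes this unbounded dovetailing: the provers do the unbounded work, while the verifier's questions and checks stay polynomial in the size of $M$ and $x$.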

Today we applaud this work and try to convey some short way of apprehending it.

Yoking a classic undecidable problem to a polynomial-time task is not the only surprise. The proof refutes a conjecture in classical functional analysis that had apparently been widely believed. Thus this story continues the theme of surprises and possibly working the wrong way on conjectures, as we also just mentioned in our previous post. The new work subsumes a major paper last year showing that MIP* contains nondeterministic doubly exponential time (NEEXP), which proves it different from its classical counterpart MIP, which László Babai, Lance Fortnow, and Carsten Lund proved equal to NEXP.

The developments have been covered by Scott Aaronson here, Lance here, Boaz Barak here, and in a personal way by Vidick here. The new paper weighs in at 165 pages. We will give our own snap-summary and try to add a little from the side.

## A Halting Attempt to Explain

The refuted conjecture was made by the Fields Medalist Alain Connes in a context having no overt involvement of quantum mechanics. In these 2013 eight-lecture course notes on the conjecture, the word “quantum” appears only once, to say on page 2 of lecture 1:

> Other very recent discoveries include the fact that Connes’ embedding conjecture is related to an important problem in Quantum Information Theory, the so-called Tsirelson’s problem…

The problem of Boris Tsirelson ultimately harks back to the theorem of John Bell about correlations that are physically realizable using quantum entanglement but not by any classical physical system. In the CHSH game form of Bell’s theorem, our old friends Alice and Bob can win the game over 85% of the time using quantum strategies, but only 75% of the time classically. They can get this with just one pair of entangled qubits per trial. Tsirelson proved that the 85% (to wit, $\cos^2(\pi/8) \approx 0.854$) is optimal. In extensions of these games to larger-size cases, the question becomes: what are the gaps between quantum and classical?
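The classical 75% can be verified by brute force: shared randomness cannot beat the best deterministic strategy, and there are only sixteen of those. A short self-contained Python check (our own sketch):

```python
import itertools
import math

def chsh_classical_value():
    """Best winning probability over deterministic classical strategies.

    The referee sends uniform random bits (s, t); Alice answers
    a = alice[s], Bob answers b = bob[t]; they win iff a XOR b == s AND t.
    """
    best = 0.0
    for alice in itertools.product([0, 1], repeat=2):
        for bob in itertools.product([0, 1], repeat=2):
            wins = sum((alice[s] ^ bob[t]) == (s & t)
                       for s in (0, 1) for t in (0, 1))
            best = max(best, wins / 4)
    return best

print(chsh_classical_value())        # 0.75
print(math.cos(math.pi / 8) ** 2)    # ~0.8536, Tsirelson's quantum optimum
```

Every deterministic strategy satisfies at most three of the four question pairs, hence the 3/4; the quantum value $\cos^2(\pi/8)$ strictly exceeds it.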

Whether there is a gap of more than a fixed $\epsilon$ then feeds into interactive protocols. We can have parties trying to prove these gaps using their own entanglement. Where it gets tricky is when you allow Alice and Bob to use larger and larger quantum states and ask, can they achieve the gap with some large enough state? The limiting behavior of the gaps is complex. What JNVWY proved is that this becomes like a halting problem. Not just a halting problem, *the* Halting Problem. Yet two quantum provers, working for a given gap that is achievable, can prove this to a polynomial-time classical verifier. This is the magic of the theorem.
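For CHSH itself the optimal quantum strategy needs only one EPR pair: Alice measures the Pauli observables $Z$ and $X$, Bob measures $(Z + X)/\sqrt{2}$ and $(Z - X)/\sqrt{2}$. A dependency-free sketch that evaluates the resulting winning probability (the hand-rolled `kron` and `matvec` helpers are ours, kept minimal to avoid external libraries):

```python
import math

def kron(A, B):
    """Kronecker (tensor) product of two square matrices given as lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def expect(obs_a, obs_b, state):
    """<state| obs_a (tensor) obs_b |state> for a real state vector."""
    w = matvec(kron(obs_a, obs_b), state)
    return sum(si * wi for si, wi in zip(state, w))

Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]
s = 1 / math.sqrt(2)
B0 = [[s * (Z[i][j] + X[i][j]) for j in range(2)] for i in range(2)]
B1 = [[s * (Z[i][j] - X[i][j]) for j in range(2)] for i in range(2)]

epr = [s, 0, 0, s]  # the EPR pair (|00> + |11>) / sqrt(2)

# CHSH correlation sum reaches 2*sqrt(2); win probability is (4 + CHSH) / 8
chsh = (expect(Z, B0, epr) + expect(Z, B1, epr)
        + expect(X, B0, epr) - expect(X, B1, epr))
print((4 + chsh) / 8)  # ~0.8536
```

The hard question is not this fixed small game but the limit: as the games grow, deciding whether entangled provers can beat a threshold by $\epsilon$ becomes, per JNVWY, exactly as hard as the Halting Problem.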

The reduction from halting to the problem about limits and gaps comes before introducing two-prover systems, as is reflected by JNVWY and also in the wonderful introduction of a 2017 paper by William Slofstra which they reference. In advance of saying more about it, we’ll remark that the new work may provide a new dictionary for translating between (i) issues of finite/infinite precision and other continuous matters, and (ii) possible evolutions of a system of finite size in discrete steps of time and size, where both are unbounded but (in positive cases) finite.

## A Flashback?

The result strikes Dick and me as shedding new light on a principle stated by David Deutsch in a 1985 paper:

> Every finitely realisable physical system can be perfectly simulated by a universal model computing machine operating by finite means.

I was a student at Oxford alongside Deutsch in 1984–1985, and I remember more going on than the searchable record seems to reflect. Deutsch believed that his model of a quantum computer could solve the Halting problem in finite time. He gave at least one talk at the Oxford Mathematical Institute on that claim. As far as I know the claim stayed local to Oxford and generated intense discussion led by Robin Gandy, Roger Penrose, and (if I recall) Angus Macintyre and at least one other person who was versed in random physical processes.

My recollection is that the nub of the technical argument turned on a property of infinite random sequences that, when hashed out, made some associated predicates decidable, so that Deutsch’s functions were classically total computable after all. Thus the hypercomputation claim was withdrawn.

Now, however, I wonder whether the two-prover system constitutes the kind of “machine” that Deutsch was *intuitively* thinking of. As I recall, his claim was not deemed wrong from first principles but from how theorems about random sequences interacted with machine model definitions. The theory of interactive provers as computational systems was then in its infancy. Could Deutsch have had some inkling of it?

## Open Problems

Again we congratulate JNVWY on this achievement of a long-term research goal. Looking at the past, does it relate to the discussion of hypercomputation stemming from the 1980s? We mean a stronger connection than treated here or in this 2018 paper. Is it much different from cases where “the mystery … vanishes when the level of precision is explicitly taken into account” (quoting this)? Looking ahead, are there any connections to the physical issues of infinity in finite time that we recently discussed here?

**Updates 1/17:** Gil Kalai has a post with background on further conjectures impacted by (the failure of) Connes’s conjecture and on quantum prover systems, plus a plethora of links.

A new article in *Nature* includes the following quotations:

- Alain Connes, after saying he was not (yet) able to digest all the material involved in the proof: “It is amazing that the [embedding] problem went so deep and I never foresaw that!”
- Toby Cubitt, University College London: “What’s amazing is that quantum complexity theory has been the key to the proof.”
- Joseph Fitzsimons, Horizon Quantum Computing: “I thought it would turn out to be one of those complexity-theory questions that might take 100 years to answer.”

The article and comments on Scott’s blog include interpretations that seem to oppose rather than support Deutsch’s principle on the finiteness of nature. The tandem of two unlimited provers may not qualify as a “finite machine.”

There are comments below querying whether the theorem is in first-order arithmetic or how strong a choice axiom it may need.


### Trackbacks

- Shtetl-Optimized » Blog Archive » MIP*=RE
- Amazing: Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP* = RE and thus disproved Connes 1976 Embedding Conjecture, and provided a negative answer to Tsirelson’s problem. | Combinatorics and more
- Interaction + Entanglement = Efficient Proofs of Halting | Quantum Frontiers
- Cleverer Automata Exist | Gödel's Lost Letter and P=NP

Does it depend on AC or is it in HA?

I don’t think the results invoke choice, unless full AC is needed for the topological parts (but I don’t think so); I trust HA = Heyting Arithmetic?

How many quantifiers does the statement $RE\subseteq MIP^*$ have? Is it in $\Sigma^1_2$? I am not sure whether the proof qualifies as constructive.

Seriously, would it qualify as a proof? Has any logician or model theorist looked at this? I am pretty sure it will pass CS eyes.

Why does the comment not appear?

If the verifier has to use non-constructive axioms, the proof does not have merit, I would think. Does anyone know anything about this? Did GLL take a look at this?

Why is GLL quiet on commenting?

Thanks for your continued interest. We may do a followup post that could include this among various aspects. These things take time. You are welcome to give your own estimations.

This is formalized theological mathematics.

This is indeed theological mathematics, proving that at most two Gods suffice to convince you of divine intervention.


Since this is a QC blog, this may be of only limited interest, but I point out that in a Koopman approach to Classical Mechanics, there is a natural extension of CM (let us call it CM+, in which the Poisson bracket is used to construct derivations that act on phase space) that with the introduction of Gibbs equilibrium states has an equivalent measurement theory to that of QM. In logic terms CM+ is essentially just the algebra of Hilbert space projections. The extensions required are no more than the mathematics of integral and other transforms that is routinely used in classical signal analysis. The elevator pitch for this is that by uncovering what have hitherto been “hidden observables”, thereby making the step from CM to CM+, we can unify the classical and quantum pictures of measurement. Moreover, by looking at this unification from the classical side, we find that we can unify “collapse” and “no-collapse” interpretations of QM. I am very much working in an algebraic QM/QFT approach, so this might be only for anyone more on the physics side, but arXiv:1901.00526, “An Algebraic Approach to Koopman Classical Mechanics”, has just been accepted by Annals of Physics (in the substantially rewritten v5). For an earlier, less developed discussion that mostly adopts a QFT approach, see “Classical states, quantum field measurement”, arXiv:1709.06711 (DOI in Physica Scripta there). However, an algebraic unification of this kind somewhat flies in the face of the persistent claim that quantum computation is different from what can be achieved classically, which the MIP*=RE result perpetuates.

Interesting comment, thanks. To judge your implications for quantum versus classical, an account of Shor’s Theorem should have priority over MIP* = RE, IMHO.

Side note: this is an instance where the ambiguous usage of “blog” to mean “a post on a blog” creates confusion. “Blog” is short for “weblog” which means only the container. We are not a “QC weblog” and so this comment is highly germane. The usage may come down to whether old ship captains would have called their entries in the ship’s log “logs”.

“An Algebraic Approach to Koopman Classical Mechanics” is more structured as an argument that CM+ and QM are equally capable of modeling experimental raw data, because both are Hilbert space formalisms, with noncommutative measurement algebras. There *is* one significant difference, that QM can be understood to be an analytic form of CM+, because the evolution operator of QM (the Hamiltonian) has a spectrum that is bounded below, whereas the evolution operator of CM+ (the Liouvillian) has an unbounded spectrum: as far as I have so far become aware, however, models of actual measurements and their results can be made independent of this difference, which I think of as a matter of coordinatization.

One of the aims I most had in mind is to avoid having to address every possible case (because there are very many, as many attempts to interpret QM in terms of just CM have discovered to their cost: but CM+ is not CM as it has mostly been understood, so that we can show that the measurement algebras of CM+ include those of QM as subalgebras): I’d rather not have to discuss the details of Shor’s and of every other algorithm! This unification has no impact on what experimental realizations we can or cannot construct for QC algorithms (so the QC industry is safe!), but it does give us the possibility of discussing such realizations in CM+ terms. I’d like to emphasize, also, that I think the unification of “collapse” and “no-collapse” interpretations of QM may be more significant than the unification of QM and CM+, because there is a lot to be said in favor of working with the analytic QM formalism.

I am not a mathematician. So my question may sound naive and I beg your pardon for that.

Does this result have any implications for cryptography-based algorithms for digital signatures and hashing? Generally, if we encrypt a message with a public key, it is said to be computationally infeasible, that is, not possible in polynomial time, to break it without knowing the corresponding private key.

In other words, would the recent blockchain methodology not be as secure as it is believed to be?