Quantum Refutations and Reproofs
One of Gil Kalai’s conjectures refuted but refurbished
Niels Henrik Abel is famous for proving the impossibility of solving the general quintic equation by radicals, in 1824. Finding roots of polynomials had occupied mathematicians for centuries, but unsolvability had attracted scant effort and few tools until the late 1700s. Abel developed tools of algebra, supplied a step overlooked by Paolo Ruffini (whose voluminous work he did not know), and focused his proof into a mere six journal pages.
Today our guest poster Gil Kalai leads us in congratulating Endre Szemerédi, who on May 22 will officially receive the 2012 prize named for Abel. He then revisits his “Conjecture C” from his first post in this series, in response to a draft paper by our other guest poster Aram Harrow with Steve Flammia.
Szemerédi’s prize is great news for Discrete Mathematics and Theoretical Computer Science, areas for which he is best known, and this blog has featured his terrific work here and here. The award rivals the Nobel Peace Prize in funds and brings the same handshake from the King of Norway.
Gil offers the analogy that Abel’s theorem showed why a particular old technology, namely solution by radicals, could not scale upward beyond the case of degree 4. The group-theoretic technology that superseded it, particularly as formulated by Évariste Galois, changed the face of mathematics. Indeed Abelian groups are at the heart of Peter Shor’s quantum algorithm. Not only did the work of Abel and Galois predate the impossibility proofs for trisecting angles, duplicating cubes, and squaring circles, it made them possible.
Refutations and Revisions
Gil’s analogy is not perfect, because quantum computing is hardly an “old” technology, and because currently there is no compelling new positive theory to supersede it. Working toward such a theory is difficult, and there are places where it risks tilting, Quixote-like, against the power stations of quantum mechanics itself. In this regard, Aram and Steve’s paper provides a concrete counterexample to a logical extension of Gil’s conjecture to a larger class of quantum systems, in a way that casts doubt on the original.
The refutation and revision of conjectures is a big part of the process described by Imre Lakatos in his book Proofs and Refutations, which was previously discussed in this blog. Here, the conjectures are physics conjectures, related to technological capability, and the “proof” and “reproof” process refers to confronting formal mathematical models with (counter-)examples and various checks by observations of Nature.
After two sections by Aram and Steve explaining their paper and its significance, Gil assesses the effect on his original Conjecture C and re-assesses its motivation. The latter is reinforced by a line of research begun in 1980 with the following question by Sir Anthony Leggett, who won the Nobel Prize in Physics in 2003:
How far do experiments on so-called “macroscopic quantum systems” such as superfluids and superconductors test the hypothesis that the linear Schrödinger equation may be extrapolated to arbitrarily complex systems?
Leggett’s “disconnectivity measure” in his 1980 paper, “Macroscopic Quantum Systems and the Quantum Theory of Measurement,” was an early attempt to define rigorously a parameter that distinguishes complicated quantum states.
In this light, Gil formulates two revisions of his conjecture that stay true to his original intents while avoiding the refutation. Then I (Ken) review lively comments that continue to further the debate in previous posts in our series.
Aram Harrow, with Steve Flammia
Recall that Gil defined an entanglement measure (written here as $\mathrm{Ent}(\rho)$) on a quantum state $\rho$ in a particular standard manner, where $\rho$ signifies a possibly-mixed state. The statement of Conjecture C then reads,
There is a fixed constant $b$, possibly quite small, such that for states $\rho$ produced by feasible $n$-qubit quantum computers, $\mathrm{Ent}(\rho) \le b$.
Here the technical meaning of “feasible” depends on which models of noisy quantum computers reflect the true state and capability of technology, and is hard for both sides to pin down. We can, however, still refute the conjecture by finding states that by consensus ought to be feasible—or at least to which the barriers stated by Kalai do not apply—for which $\mathrm{Ent}(\rho)$ is large.
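For readers who want something concrete to hold on to, the following numpy sketch computes one standard ingredient from which measures like $\mathrm{Ent}$ are built: the entanglement entropy of a pure state across a fixed cut. The choice of the GHZ state, the cut, and the function names here are ours for illustration; they are not the definitions in Gil’s post.

```python
import numpy as np

def ghz_state(n):
    """The n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2) as a vector."""
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def cut_entropy(psi, n, k):
    """Von Neumann entropy (in bits) of the reduced state on the first k qubits."""
    # The squared singular values of the (2^k) x (2^(n-k)) reshaping of psi
    # are exactly the eigenvalues of the reduced density matrix.
    sv = np.linalg.svd(psi.reshape(2**k, 2**(n - k)), compute_uv=False)
    p = sv[sv > 1e-12] ** 2
    return -np.sum(p * np.log2(p))

n = 6
print(cut_entropy(ghz_state(n), n, k=3))  # exactly 1.0 bit across any cut
```

The GHZ state yields just one bit across any cut, which already hints at why a single crude observable has trouble separating “light bulb” states from “quantum computer” states.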
Our point of attack is that there is nothing in the definition of $\mathrm{Ent}$ or in the motivation expressed for the conjecture that requires the state to be an $n$-fold aggregate of binary systems. Quantum systems that represent bits, such as up/down or left/right spin, are most commonly treated, but are not exclusive to Nature. One can equally well define basic ternary systems, or 4-fold or 5-fold or $d$-fold, not even mandating that $d$ be prime. Ternary systems are called qutrits, while those for general $d$ are called qudits. The definition of $n$-qudit mixed states allows $\mathrm{Ent}$ to be defined the same way, and we get the same conjecture statement.
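In symbols, and with notation of our own choosing, the underlying formalism invoked here is the standard one:

```latex
% A single qudit lives in C^d; n qudits live in the n-fold tensor power.
\mathcal{H}_{n,d} \;=\; (\mathbb{C}^d)^{\otimes n},
\qquad
\text{mixed states: } \rho \succeq 0 \text{ on } \mathcal{H}_{n,d},
\quad \operatorname{tr}\rho = 1 .
```

Nothing in this definition singles out $d = 2$, which is the point of the attack.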
Call that Conjecture C’. As Gil agrees, our note shows unconditionally that Conjecture C’ is false, for any $d$ as low as $d = 3$.
Theorem 1 There exist intuitively feasible $n$-qudit states $\rho$ on a 2-dimensional grid for which $\mathrm{Ent}(\rho)$ grows without bound as $n$ increases.
It is important to note that for composite $d$ such as $d = 4$ we cannot simply declare that we have a system on $2n$ qubits, because we cannot assume a decomposition of a qudit state via tensor products of qubit states. Indeed when the construction in our note is attempted with qubits, the resulting states have only bounded $\mathrm{Ent}(\rho)$. However, our construction speaks against both the ingredients and the purpose of the original Conjecture C.
What the Conjecture is Driving At
Conjectures of this kind, as Steve and I see it, are attempts at what Scott Aaronson calls a “Sure/Shor separator.” By his definition, such a separator would distinguish states we’ve definitely already seen how to produce from the sort of states one would require in any quantum computer achieving a (believed) exponential speedup over classical methods. It represents an admirable attempt to formulate QC skepticism in a rigorous and testable way.
However, we believe that our counterexamples are significant not especially because they refute Conjecture C, but because they do so while side-stepping Gil’s main points about quantum error correction failing. More generally, we think that it’s telling that it’s so hard to come up with a sensible version of Conjecture C. In our view, this is because quantum computers harness phenomena, such as entanglement and interference, that are already ubiquitous. Nature makes them relatively hard to control, but it is also hard to focus sensibly on what about the control itself is difficult. The formulations of Conjecture C and related obstacles instead find themselves asserting the difficulty of creating rather than controlling. Of course they are trying to get at the difficulty of creating the kinds of states needed for controlling, but the formulations still wind up trying to block the creation of phenomena that “just come naturally.”
In our view, the situation is similar to ones in classical computing. A modern data center exists in a state of matter radically unlike anything ever seen in pre-industrial times. But if you have to quantify this with a crude observable, then it’s hard to come up with anything that wasn’t already seen in much simpler technology, like light bulbs. Our note can be thought of as showing that Conjecture C refers to a correlation measure that is high not only for full-scale quantum computers, but even for the quantum equivalent of light bulbs—technology that is non-trivial, but by no means complex.
Gil Again: Revising Conjecture C
One of the difficult aspects of my project is to supply mathematical engines for the conjectures, which were initially expressed in informal English and physical intuition. For example, in Conjecture 4 we need to define “highly entangled qubits” and “error-synchronization” formally. This crucial technical part of the project is the most time-consuming and has witnessed much backtracking. Initial formulations of Conjecture 4 failed when extended from qubits to qudits, which was indeed a reason for me to dismiss them and look for more robust ones, and the same test has guided me with other conjectures.
Aram and Steve’s example is reason enough to look for another formal way to express the idea behind Conjecture C. While rooted in quantum computer skepticism, Conjecture C expresses a common aim: to find a dividing line between physical quantum states in the pre- and post-universal-quantum-computer eras. When Aram’s grandchildren ask him, “Grandpa, how was the world before quantum computers?” he could reply: “I hardly remember, but thanks to Gil we had some conjectures regarding the old days”—and the grandchildren will burst into laughter about the old days of difficult entanglements.
Conjecture C expresses the idea that “complicated pure states” cannot be approached by noisy quantum computers. More specifically, the conjecture asserts that quantum states that can be realistically created by quantum computers are “$k$-local” where $k$ is bounded (and perhaps even quite small). But to formally define $k$-locality is a tricky business. (Joe Fitzsimons’ 2-locality suggestions in comments beginning here and extending a long way down are related to this issue.)
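To illustrate why the definition is tricky, here is one naive candidate, ours for illustration only and not necessarily what Gil intends:

```latex
% Expand an n-qubit state in the Pauli basis, with P ranging over
% n-qubit Pauli words and |P| denoting the weight of P (the number
% of qubits on which P acts non-trivially).
\rho \;=\; \frac{1}{2^n} \sum_{P} \alpha_P\, P,
\qquad
\text{``$k$-local'': } \ \alpha_P \approx 0 \ \text{ whenever } \ |P| > k .
```

Already one can argue about whether this candidate draws the line in the right place (for instance, where cluster states land), which is exactly the sort of question the comment threads below wrestle with.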
We can be guided by the motivation stated on the first page of the paper by Anthony Leggett mentioned above, for his “disconnectivity measure,” which is intended to distinguish two kinds of quantum states:
Familiar “macroscopic quantum phenomena” such as flux quantization and the Josephson effect [correspond to states having very low] disconnectivity, while the states important to a discussion of the quantum theory of measurement have a very high value of this property.
Leggett has stayed active with this line of work in the past decade, and it may be informative to develop further the relation to his problems of quantum measurement and problems in quantum computation. In this general regard, let me discuss possible new mathematical engines for the censorship conjecture.
Conjecture C For Codes
Error-correcting codes are wonderful mathematical objects, and thinking about codes is always great. Quantum error-correcting codes will play a prominent role either in building universal quantum computers or in explaining why universal quantum computers cannot be built, whichever comes first. The map I try to draw is especially clear for codes:
Conjecture C for codes: For some (small) constant $t$, pure states representing quantum error-correcting codes capable of correcting $t$-many errors cannot be feasibly approximated by noisy quantum computers.
As in the original version of Conjecture C, our notion of approximation is based on qubit errors. Conjecture 1 in the original post asserts that for every quantum error-correcting code we can only achieve a cloud of states, rather than essentially a Dirac delta function, even if we use many qubits for encoding. The expected rate of qubit errors of the noisy state compared to the intended state can still be a small constant. Conjecture C for codes asserts that when the code corrects many errors, then this cloud will not even concentrate near a single code word. Here “many” may well be three or even two.
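To make the “cloud” picture concrete, here is a toy numpy experiment, ours for illustration only: take a GHZ state as a stand-in codeword, hit every qubit with independent depolarizing noise at a constant rate, and watch the fidelity with the intended state fall as $n$ grows. The conjecture concerns whether fault-tolerant encodings can evade this effect, which this naive, unencoded sketch does not attempt.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_qubit(op, i, n):
    """Embed a single-qubit operator on qubit i of an n-qubit system."""
    mats = [I2] * n
    mats[i] = op
    return reduce(np.kron, mats)

def depolarize(rho, i, n, p):
    """Single-qubit depolarizing noise with rate p on qubit i."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        K = op_on_qubit(P, i, n)
        out += (p / 3) * K @ rho @ K.conj().T
    return out

n, p = 6, 0.05
psi = np.zeros(2**n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)   # ideal stand-in "codeword": the GHZ state
rho = np.outer(psi, psi.conj())
for i in range(n):                  # constant per-qubit noise...
    rho = depolarize(rho, i, n, p)
print(np.real(psi.conj() @ rho @ psi))  # ...drags global fidelity down with n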
Conjecture D For Depth
Conjecture C for codes deals only with special types of quantum states. What criterion can describe, for general pure states, which ones cannot be approximated?
Conjecture D: For some (small) constant $c$, pure states on $n$ qubits that can be approximated by noisy quantum computers can be approximated by depth-$c$ quantum circuits.
Here we adopt the ordinary description of quantum circuits, where in each round gates are performed on disjoint sets of one or two qubits. Unlike the old Conjecture C, which did not exclude cluster states and thus could not serve as a Sure/Shor separator in Scott Aaronson’s strict sense, the new Conjecture D may well represent such a separator in the strict sense that it does not allow efficient factoring. It deviates from the direction of earlier versions of Conjecture C in that it is phrased in computational-complexity terms.
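For definiteness, here is a minimal numpy sketch, with our own illustrative choices of gate distribution and layout, of the circuit model just described: a depth-$c$ “brick-wall” circuit in which each round applies independent two-qubit gates to disjoint adjacent pairs. Conjecture D says that the feasibly creatable states are, up to approximation, those reachable this way for small $c$.

```python
import numpy as np

def random_two_qubit_gate(rng):
    """Haar-random 4x4 unitary via QR decomposition of a complex Gaussian."""
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the phases of R

def apply_gate(psi, gate, i, n):
    """Apply a two-qubit gate to adjacent qubits (i, i+1) of an n-qubit state."""
    t = psi.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ab,ibj->iaj', gate, t).reshape(-1)

def random_depth_c_state(n, c, seed=0):
    """One round = two-qubit gates on disjoint adjacent pairs; c rounds total."""
    rng = np.random.default_rng(seed)
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                      # start from |0...0>
    for layer in range(c):
        start = layer % 2             # brick-wall pattern keeps pairs disjoint
        for i in range(start, n - 1, 2):
            psi = apply_gate(psi, random_two_qubit_gate(rng), i, n)
    return psi

print(np.linalg.norm(random_depth_c_state(n=8, c=3)))  # still a unit vector
```

Brick-wall layering is just one convention; any round of gates on disjoint one- or two-qubit sets fits the description above.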
The new Conjecture D does poetic justice to bounded-depth circuits. In classical computation, bounded-depth circuits of polynomial size give a mathematically fascinating yet pathetically weak computational class: famously, such circuits cannot even compute the parity of their input bits. In quantum computation this may be a viable borderline between reality and dream.
In the Comments
The comments section of the “Quantum Super-PAC” post has seen an extremely lively discussion, for which we profusely thank all those taking part. We regret that currently we can give only the barest enumeration of some highlights—we envision a later summary of what has been learned.
Discussion of a possible refutation of Gil’s conjectures via 2-local properties started in earnest with this comment by Joe Fitzsimons. See Gil’s replies here and here, and further exchanges beginning next.
John Sidles outlined a mathematical approach to the conjectures beginning here. Hal Swyers moved to clarify the physics involved in the discussions. Then John Preskill reviewed the goings-on, including 2-locality and the subject of Lindblad evolution as used by Gil and discussed extensively above, and continued here to head a new thread. Swyers picked up on questions about the size of controllable systems here and in a second part here. Gil outlined a reply recently here.
Meanwhile, Gil rejoined a previous post’s discussion of the rate of error with a comment in the “Super-PAC” post here. Alexander Vlasov re-opened the question of whether the conjectures already violate the linearity of quantum mechanics. Sidles raised a concrete example related to earlier comments by Mikhail Katkov here. Then Gil related offline discussions with David Deutsch here.
Gil has recently reviewed the debate on his own blog. He and Jim Blair also mentioned some new papers and articles beginning here. On the technological side, Steve Flammia noted on the Quantum Pontiff blog that ion-trap technology has taken a big leap upward in scale for processes that seem hard to simulate classically, though the processes would need to be controlled more to support universal quantum computation.
Open Problems
Propose a version of Conjecture C or D, or explain why such a conjecture is misguided to start with.