The Quantum Super-PAC
What unlimited contributions might buy you in power
John Preskill is no stranger to high finance in physics. In 1997 he made a famous bet against Stephen Hawking and Kip Thorne relating to the black hole information paradox. Hawking conceded the bet in 2004, but Thorne has yet to agree. The prize was an encyclopedia of the winner’s choice, which for Preskill was Total Baseball: The Ultimate Baseball Encyclopedia.
Today we present the third of three installments of Aram Harrow’s initial response to Gil Kalai’s argument against the feasibility of building large-scale quantum computers. We also survey reactions to a short paper that Preskill wrote in response to the arguments earlier this month, and note a writeup by Harrow and Steve Flammia about Kalai’s “Conjecture C” and related matters.
Preskill was joined by several other physicists, including Leonard Susskind, who wrote a popular book, The Black Hole War, about the episode and its science. Hawking’s own theory that black holes eventually evaporate is now generally accepted, but it sharpened the perception that information falling into the hole is lost. Preskill’s side argued that unitarity in quantum mechanics entails that all information is preserved even in this case. In a lecture posted on his website in 2010, Hawking gave “what I think is the resolution”:
Information is not lost, but it is not returned in a useful way. It is like burning an encyclopedia. Information is not lost, but it is very hard to read… I gave John a baseball encyclopedia, [but] maybe I should have just given him the ashes.
That might have been appropriate in cricket, where The Ashes series are contested between England and Australia for a trophy that really holds ashes. We may ask, what is the computational complexity of decoding information from a black hole? The complexity doesn’t stop the hero of a children’s book written by Hawking with his daughter Lucy from being resurrected out of a black hole.
Susskind subtitled his book as a battle “to make the world safe for quantum mechanics.” Some feel that quantum computers are in exactly the same battle, as reflected even in comments on the original post and responses one and two. In this third response, Aram Harrow tests the principles by allowing unlimited resources to build QC’s in outer space with all due shielding, in ways at least one Republican presidential candidate would entertain. Since probably approximately correct (PAC) is often good enough for QC’s, we can call them Super-PACs.
Aram Harrow Part 3: Throwing (Imaginary) Money at the Problem
For this final part, suppose we approach quantum computing with an attitude of extreme pessimism. Suppose that anything non-classical that can be correlated, probably is, and in the way that we’d least like. Might Gil Kalai’s conjectures hold in this case, and prevent the construction of a large-scale quantum computer? I will argue that even in this case, by using unlimited resources we could build a quantum computer that would address even the most dire pessimism.
One approach would be to put the quantum computer in a place where there is very little noise other than what we introduce ourselves, like interstellar space. Then the individual components could be placed far apart, so that we are very unlikely to introduce multi-qubit errors by our inadvertent actions on shadow qubits.
For example, we might construct a linear optics quantum computer with each gate a kilometer apart. Correlations between noise processes on different qubits should be essentially zero, except when qubits are interacting. The threshold theorem does allow for correlated errors among interacting qubits—this is a standard assumption.
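For reference, the statement being invoked is roughly the following (a standard informal form; the exact overhead and the value of the threshold vary between constructions):

$$ p \;<\; p_{\mathrm{th}} \;\;\Longrightarrow\;\; \text{an ideal circuit of size } L \text{ can be simulated to accuracy } \epsilon \text{ with } O\!\big(L\cdot\mathrm{polylog}(L/\epsilon)\big) \text{ noisy gates,} $$

where $p$ is the per-gate error rate, and the errors on the qubits touched by a common gate may be arbitrarily correlated.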
This approach would not eliminate the usual sources of noise. Each gate would be subject to control errors, and thus two-qubit gates would introduce correlated errors on the pair of qubits they act on. Presumably there would also be storage errors, as photons get absorbed by interstellar dust, as well as timing and other types of errors.
So what does this achieve? Let us recall how the crux of the second response involved “shadow qubits” as the main agents possibly interacting with the “computation qubits” and amplifying errors as the computation progresses.
Keeping Shadow Qubits in the Dark
One thing that I can be confident about suppressing here is the possibility of shadow qubits creating maliciously correlated decoherence. Shadow qubits can live on the individual circuit elements, such as beam splitters and photon sources. But they can’t really follow the photons from gate to gate, because even an infinitesimal change in direction is likely to make them miss the target.
But what about beam splitters that maliciously remember every instruction given to them, and choose their errors to foil the computation? Let’s allow them to do whatever they want to our qubits, as long as the per-gate error rate stays below the current threshold allowance. This includes a beam splitter biding its time for many steps and striking only when it judges that the time is right. But when is the time right?
To defeat a QECC, the beam splitter needs to save its powder until its confederates all introduce errors at the same time. But it can’t really communicate with them. If our qubits lived on some superconductor, then we’d have photons and phonons flying all around that the different circuit elements could conceivably use to communicate. In space, though, there’s no real mechanism that circuit elements can use to coordinate.
Of course, there is something flying across the void between far-flung optical elements: the photons we use for the computation. But they can’t carry too much extra information, because adding information to them would cause errors, and would violate the constraint that the single-qubit error rate stays below the threshold. We are protected, again, by the linearity of quantum mechanics.
A similar situation is found in blind quantum computation (see Joe Fitzsimons’ comments for more on this), or this scheme for using randomness to code against adversarial channel noise. In each case, we can hide the computation, or the message, from the environment. For blind computation, this enables something like homomorphic encryption, while for the channel coding scenario, this means that even though there exists an adversarially chosen low-weight error that could corrupt the data, an adversary without knowledge of the random seed is unlikely to guess it.
Let Us Ignore Extreme Models
Of course, even these approaches can’t rule out extreme pessimism. If our universe began at a single point, then there are no truly causally independent points. This has been proposed as a loophole to Bell’s theorem: if the two parties are entangled from birth, then even if they believe they are choosing their measurements independently at random, they might nevertheless be unconsciously guided by a cosmic conspiracy. One variant of this idea is called super-determinism. Another possibility is that we are living in a simulated universe, whose rules naturally are subject to change at any time; e.g. our simulators could themselves trip over the power cord. Such levels of conspiracy could also derail arguments for classical computing and all of science, and I don’t think Gil intends to go there.
Let’s turn to a non-extreme area where Gil and I are closer to agreeing.
One fascinating open problem is to determine the power of quantum computers that do not perform fault-tolerant gates. On the one hand, this is an important practical problem, since experimentalists are currently building 5-10-qubit quantum computers, and perhaps soon we can imagine quantum computers with 100 or so qubits. These computers are too small or hard-to-control for FTQC using the constructions proposed so far, but still we’d like to do something interesting with them.
One fascinating, and under-theorized, example of this comes from Markus Greiner’s lab, which has put tens of thousands of atoms into optical lattices, where they can be used to simulate quantum systems such as antiferromagnetic spin chains. These sizes are far beyond what we know how to simulate classically, but on the other hand, the qubits are too noisy, and insufficiently addressable, to allow for general-purpose quantum computing. How can we meaningfully define a universal quantum computer without giving it the ability to correct errors?
Pretty much all attempts to do so have resulted in models equivalent to log-depth quantum computing, or in classically-simulatable models. But these results don’t give a full answer to the theoretical question of how to define non-fault-tolerant QC, nor to the practical question of what we can do with 100 qubits that experience a noise rate of 5% per gate.
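To make the 100-qubit question concrete, here is a back-of-envelope estimate using only the 5% figure above (no particular hardware model intended). A depth-$d$ circuit that touches all 100 qubits in each layer applies roughly $100d$ gates, so the expected number of faults is about $0.05 \times 100d = 5d$, and the probability of a completely fault-free run is roughly

$$ (1-0.05)^{100\,d} \;\approx\; e^{-5d}, $$

which is already below one percent at depth $1$. Whatever we do at these sizes has to tolerate faults rather than avoid them.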
This relates to some of the open questions that Gil mentions. He refers to noisy cluster states and noisy anyons. But we can also consider noisy adiabatic evolution, noisy linear optics, noisy IQP, or the NMR-inspired DQC1 model. Each is a fascinating candidate for classes intermediate between BPP and BQP.
Finally, we are starting to discover a few settings in which quantum error-correction is known not to apply. For example, suppose we do a Grover search using an oracle that fails (by doing nothing) with some constant probability. Against this type of error, quantum error-correcting codes are worthless, and the Grover speedup vanishes. More generally, techniques from quantum information often yield quadratic improvements in estimating physical parameters, but in many cases these disappear in the presence of a constant rate of noise.
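To give one concrete version of the parameter-estimation remark (a rough summary, assuming uncorrelated Markovian dephasing on the probes): with $N$ entangled probes and no noise, phase estimation can reach the Heisenberg scaling, while a constant dephasing rate pulls the achievable precision back to the standard quantum limit, up to a constant factor:

$$ \delta\phi \;\sim\; \frac{1}{N} \;\;(\text{noiseless}) \qquad \text{versus} \qquad \delta\phi \;\sim\; \frac{1}{\sqrt{N}} \;\;(\text{constant noise rate}). $$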
Quantum information processing is about more than just building better computers, and as we move forward, it’s going to be important to understand just how much of a problem noise is. Especially while much of our field is in a theoretical phase, we need more people like Gil to play “Eve’s advocate” and push forward increasingly sophisticated theories of how noise can foil us.
Gil’s Third Rejoinder: Imaginary Computers and Woodstock
A quiet intergalactic setting will not remain quiet once we put our device there.
More specifically, Aram wrote: “One approach would be to put the quantum computer in a place where there is very little noise other than what we introduce ourselves.” Right! This is an important distinction: the noise I have been talking about all along is the noise that we (or the quantum device) introduce, not some harmless background noise.
Aram’s suggestion to choose a quiet intergalactic place reminds me of Woodstock. I have a “Woodstock number” of 2, since Jeff Kahn, with whom I wrote some of my best papers, was actually at Woodstock. Woodstock was a place with very little noise, but when you run a concert for a million people in this noiseless place, it will be noisy (in principle!) no matter how much money you throw at it.
Generally speaking, I do not think that imaginary, unrealistic quantum computers can be used to “fight” my conjectures. After all, we agree that QM allows one to express the idea of a universal quantum computer. If you give me an unrealistic QC that violates, say, the strong principle of noise, then to me this says that the strong principle of noise is a principle that explains why your computer is indeed unrealistic.
Moving to the other parts of Aram’s third post, I am glad that he is closer to agreeing with me regarding intermediate models, so let me try to check the boundaries of our agreement. Of course, the readers are reminded that in places we agree, and even in places we disagree, we can both be wrong. Aram raised the theoretical question of how to define non-fault-tolerant QC. My conjectures aim to do precisely this, to define boundaries for behavior of QC’s in settings that do not allow FT. Aram asserted that 100 qubits are too few to apply current FT schemes, which raises the interesting practical question of how quantum computers with 100 or fewer qubits will behave.
Here again let me suggest the same conjectures: The errors (i.e., information leaks) for every pair of entangled qubits will be substantially positively correlated, and, for every error correcting code one might try for partial FT, encoded qubits will look like a mixture of a cloud of code words—in addition to standard noise. Thus, errors will systematically depend on the evolution and final state. For such QC’s, trying to create complicated states will lead to error-synchronization, and both correlation and error rate will scale up. Small QC’s demonstrate that the issue is not really scalable quantum computers but universal quantum computers. My conjectures allow quantum computers to scale but predict that their behavior with many qubits will be similar to their behavior with few qubits.
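Schematically, and only schematically (the conjectures make this quantitative), the correlation claim is that for any pair of substantially entangled qubits $i$ and $j$,

$$ \Pr[\text{error on } i \text{ and } j] \;\gg\; \Pr[\text{error on } i]\cdot\Pr[\text{error on } j], $$

so joint errors occur far more often than independence would predict.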
I will express my thanks to Aram for his wonderful partnership in our debate at a later occasion, but let me now thank him especially for his concluding sentence above—about the need for “Eve’s advocates” to push forward challenging theories of noise while the field is still in a young theoretical stage.
Intermezzo by Aram, Before Preskill’s Act
An even more “imaginary” quantum computer sets up a view of the real world that connects to Preskill’s paper, and provides another thought experiment that refutes Gil’s conjectures. Imagine a large quantum computer with a high rate of noise. The experimenter attempts to create entanglement between qubits, and can indeed apply the entangling operations, but this entanglement almost immediately disappears because of noise from the environment. So far uncontroversial.
But what does it mean that the entanglement disappears because of noise? The Schrödinger equation says that the state of the entire universe changes unitarily. How can entanglement disappear in such a model? This is an old problem in the interpretation of quantum mechanics, similar to the measurement problem, but somewhat easier to resolve. The key is a principle commonly called “going to the church of the larger Hilbert space”:
Any noisy quantum process can be modeled as a unitary interaction with the environment, followed by discarding [formally, tracing-out] the qubits of the environment.
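In formulas (a minimal rendering; the environment state and the dilating unitary are not unique), a noisy channel $\mathcal{E}$ acting on a system state $\rho$ takes the form

$$ \mathcal{E}(\rho) \;=\; \mathrm{Tr}_E\!\big[\, U\,(\rho \otimes |0\rangle\langle 0|_E)\, U^\dagger \,\big], $$

where $U$ is a unitary on the system together with its environment and $\mathrm{Tr}_E$ discards the environment.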
That is to say, unitary evolution only appears noisy because it involves systems, such as photons heading away at the speed of light, that are out of our control. This picture offers a way to resolve the problem of noise in quantum computers, albeit one that won’t yield any practical computational speedups. We simply redefine what we call the “computer” to include all of the qubits in the environment that interact with it. This gives us a quantum computer that is definitely in a pure, highly entangled state, performing calculations that we have no idea how to simulate in sub-exponential time.
What does this have to do with Gil’s conjectures? It shows that for them to hold at all, they will need to distinguish between the qubits we care about in our quantum computer and the qubits in the environment that we don’t care about, can’t control very precisely, and may not even have a good model for. These are important practical distinctions, but they are not physical ones. I don’t think that anyone would believe in a model of decoherence that applies to qubits we care about but not to qubits we don’t care about. Yet this is what Gil’s conjectures need.
Preskill’s Paper and Further Exchanges
John Preskill’s initial comment on the first item in this series picked up the Hilbert-space principle:
Gil says that while he is skeptical of quantum computing he is not skeptical of quantum mechanics. Therefore, I presume he takes for granted that a quantum computer (the “system”) and its environment (the “bath”) can be described by some Hamiltonian, and that the joint evolution of the system and bath is determined by solving the time-dependent Schroedinger equation. If Gil’s conjectures are true, then, no matter how we engineer the system, the Hamiltonian and/or the initial quantum state of the joint system must impose noise correlations that overwhelm fault-tolerant quantum protocols as the system scales up.
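In symbols, the presumption is only that there is some (possibly time-dependent) joint Hamiltonian $H = H_S + H_B + H_{SB}$, written here in generic notation for the system, bath, and coupling terms, under which the joint state evolves by the Schrödinger equation:

$$ i\hbar\,\frac{d}{dt}\,|\psi_{SB}(t)\rangle \;=\; H\,|\psi_{SB}(t)\rangle . $$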
Preskill’s paper proves a theorem that establishes general conditions under which such a Hamiltonian allows fault-tolerant quantum computing. It extends to a more general setting a result proved in a 2006 paper of his with Dorit Aharonov and Alexei Kitaev. Preskill noted in the comment that his conditions are not necessary for FTQC, and followup opinion is that their failure would not constitute tangible evidence against FTQC in general.
The same comment already prefigured Kalai’s reaction that such Hamiltonian noise models are not physically realizable, saying “that could well be, though I would like to understand better his reasons for holding that opinion.” In a followup, Preskill noted that his conditions place no burden on the environment except that its qubits can be cooled. While conceding that this may not be realistic, he continued, “It would be helpful if Robert [Alicki] or Gil could exhibit families of Hamiltonians which allow fault-tolerant classical computing but disallow fault-tolerant quantum computing. Such examples would help to clarify where the disagreement lies.” This harks back to the classical/quantum aspect of Aram’s first response.
One point of disagreement later in that thread turns out to be the “freshness” of additional ancilla qubits required by FTQC during its operation. Comments by John Sidles and Aram further down portrayed this and related issues as practical engineering obstacles rather than fundamental physical limitations, and Sidles ended the thread with the point that assessing the latter requires tighter modeling and greater mathematical rigor on both sides, along lines that Preskill’s paper itself exemplifies.
Gil created a long agenda of 25 issues raised for further pursuit, including several related to Preskill’s paper. He later added a 26th issue (thermodynamics) which led to a detailed comment by Dave Bacon, and further discussions.
Gil and Aram began further discussion of the second response here, where they were joined by Klas Markström and commenter “matt”, and this may be refined in a further post. The comments by Joe Fitzsimons referenced by Aram above followed. Gil affirmed the Hamiltonian-Schrödinger part of Preskill’s comment, and began replying to the rest, as the discussion continued in that thread.
Getting back to Preskill’s paper, Sidles began considering concrete realizations of FTQC here, where he tied the question of whether Preskill’s conditions can apply in the real world to other open issues in quantum field theories and relations to relativity, and later here with his own skeptical conjecture about ohmic noise. The latter drew a direct reply last week from Preskill, who argued that the conditions in his paper did not entail those in the other John’s conjecture. A ringside commenter asked for a general description to understand why “ohm” should be a mantra here.
Finally, Gil opened a reply to Joe Fitzsimons’ attempt to refute his conjectures by considering “blind computation,” and after discussion on its feasibility and general matters, this drew a response from Fitzsimons affirming its nature and citing an experiment. Gil then responded four days ago to queries from Boaz Barak, and that thread is ongoing.
Censorship and Qudits?
Steve Flammia and Aram have also written a note that approaches Gil Kalai’s Conjecture C from an extra-dimensional angle. They reason that the logic behind the conjecture should extend to generalizations of qubits with $d$-many basis states for $d > 2$, called qudits. Then they demonstrate that such an extended conjecture is violated for suitable $d > 2$ by some states that reasonably comply with the conjecture’s condition that the states be plausibly creatable by experiments. Finally they do so for qubits, with $d = 2$, arguably remaining within the realm of plausibility. The note has further discussion of these points.
Aram also contributes a remark that “the pessimistic scenario of my previous post actually goes too far. In practice, when errors are correlated, they become easier to correct with techniques like composite pulses, spin echo and decoherence-free subspaces.” In what theoretical scenarios do these methods apply?
What might distinguish an engineering obstacle that cannot be overcome from a theoretical impossibility?
[fixed conflation of commenter "matt" and Matt Leifer; they are !=]