Perpetual Motion of The 21st Century?
Are quantum errors incorrigible? Discussion between Gil Kalai and Aram Harrow
Gil Kalai and Aram Harrow are world experts on mathematical frameworks for quantum computation. They hold opposing opinions on whether or not quantum computers are possible.
Today and in at least one succeeding post, Gil and Aram will discuss the possibility of building large-scale quantum computers.
Quantum computers provide a 21st Century field for the kind of debate first led by Albert Einstein about the reach of quantum theory. One thought experiment by which Einstein tried to contravene the Uncertainty Principle amounted to asserting that quantum theory implies the possibility of perpetual motion machines, which are impossible machines. In a later attempt, after initial puzzlement, Niels Bohr pointed out that Einstein had neglected to correct for gravity’s effect on time in his own theory of general relativity.
Perpetual motion machines were the dream of many inventors over the centuries—and why not? Having a machine that could create useful work while consuming no fuel would change the world. Alas, advances in our understanding of physics have ruled them out: there is indeed no free lunch. The designs at right look like birds of a feather, but the rightmost was designed in 1960 by Hermann Bondi to illustrate Bohr’s correction above.
The guest discussions here between Gil Kalai and Aram Harrow address the fundamental question:
Are quantum computers feasible? Or are their underlying models defeated by some fundamental physical laws?
Those who, like Royal Society co-founder John Wilkins in 1670, wrote of perpetual motion machines did not know of the second law of thermodynamics. We, Dick and Ken, would like to think that if blogs like GLL had been around centuries ago, there might have been a more penetrating discussion than even the Royal Society could foster. We are here now, and we are very honored that Gil and Aram wish to use GLL as the venue for discussing this interesting, important, and wonderful question. We believe in the win-win: either we will have wonderful quantum computers, or we will learn some new laws of nature, particularly about information.
For a roadmap, Gil and Aram will alternate thesis-response in these posts, talking about quantum error-correction and fault tolerance. However, we also invite you, the reader, to take part in the debate sparked by Gil’s paper, “How quantum computers fail: Quantum codes, correlations in physical systems, and noise accumulation.” Perhaps they and we will react to comments. We thank them greatly, and have worked to make the issues even more accessible.
Guest Post: Gil Kalai
The discovery by Peter Shor of the famous quantum algorithm for fast integer factoring gave a strong reason to be skeptical about quantum computers (QC’s), along with an even stronger reason for wanting to build them. Shor is also the pioneer of quantum error-correction and quantum fault-tolerance, which give good reasons to believe that QC’s can be built. Other researchers have focused on this very issue, and the physics community is filled with work on many approaches to building practical QC’s.
In my (Gil’s) part of the world, Michael Ben-Or is a world leader in theoretical computer science with major contributions in cryptography, complexity, randomization, distributed algorithms, and quantum computation. Among the famous notions associated with Michael’s work before he turned quantum are non-cryptographic fault-tolerance, multi-prover interactive proofs, and algebraic decision trees. Dorit Aharonov is one of the great quantum computation researchers in the world and she has studied, among other things, fault tolerance, adiabatic computation, lattice problems, computation of Jones polynomials, and quantum Hamiltonian complexity.
Aharonov and Ben-Or proved in the mid-1990s (along with other groups) the threshold theorem which allows fault tolerant quantum computation (FTQC) at least in theory. The following photo shows them on the road in Jerusalem in 2005 with me at left, and on the right Robert Alicki, a famous quantum physicist from Gdansk, Poland, known for work on quantum dynamical systems.
Alicki is perhaps the only physicist engaged in long-term research skeptical towards quantum computers and error-correction. Over the years he has produced several papers and critiques under this program, coming from several different directions: some based on thermodynamics, others based on various issues in modeling noisy quantum evolutions.
Conjectures on noisy QC’s and error-correction
I suppose readers here are familiar with the basic concepts of quantum computers: qubits, basis states as members of $\{0,1\}^n$, superposition, entanglement, interference. My comments in the first round of discussions are based on several (related) papers of mine, mainly the one linked above (alternate link). A more technical paper is “When noise accumulates.” Here are slides from a related lecture at Caltech’s Institute for Quantum Information, and an earlier, more-detailed, survey. The feasibility of building quantum computers that can out-perform digital computers is one of the most fascinating and clear-cut scientific problems of our time. The main concern is that quantum systems are inherently noisy. Roughly, what this means for QC’s is that the internal states of quantum registers may vary unpredictably outside the range that allows the algorithm to continue.
First consider a single classical bit with some probability $p < 1/2$ of being flipped when read. For any $\epsilon > 0$ we can improve the odds of correct reading above $1 - \epsilon$ by making and sending enough separate copies $b_1, b_2, \ldots, b_k$. In case of any flips the reader will take the majority value, and this works provided the error events on the different bits are independent. For strings of bits there are error-correcting codes that achieve the same guarantee more efficiently than making copies, and that can also cope with limited kinds of correlated errors such as “burst noise” which affects consecutive bits.
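The repetition scheme above is easy to check numerically. The following sketch (function names and parameters are my own, purely for illustration) estimates the majority-vote failure rate under independent bit flips:

```python
import random

def send_with_repetition(bit, k, p, rng):
    """Send k independent noisy copies of `bit`, each flipped with
    probability p, and decode by majority vote (k should be odd)."""
    flips = sum(1 for _ in range(k) if rng.random() < p)
    received_ones = flips if bit == 0 else k - flips  # copies that read as 1
    return 1 if received_ones > k // 2 else 0

def failure_rate(k, p, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability the majority vote is wrong."""
    rng = random.Random(seed)
    errors = sum(send_with_repetition(0, k, p, rng) != 0 for _ in range(trials))
    return errors / trials

# With p = 0.1, the failure rate drops rapidly as k grows:
for k in (1, 3, 5, 9):
    print(k, failure_rate(k, p=0.1))
```

For $p = 0.1$ the estimated failure rate falls from about $0.1$ at $k=1$ to below $10^{-3}$ at $k=9$, exactly the exponential suppression the independence assumption buys.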
For quantum systems there are special obstacles, such as the inability to make exact copies of general quantum states. Nevertheless, much of the theory of error-correction has been carried over, and the famous threshold theorem shows that fault-tolerant quantum computation (FTQC) is possible if certain conditions are met. The most-emphasized condition sets a threshold for the absolute rate of error, one still orders of magnitude more stringent than what current technology achieves, but approachable. One issue raised here, however, is whether the errors are sufficiently independent, or their correlations sufficiently limited, for these schemes to work. I will now go on to describe my conjectures regarding how noisy quantum computers really behave.
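To make the quantum side concrete, here is a minimal sketch (my own example, not from the papers under discussion) of the simplest quantum code, the three-qubit bit-flip code, simulated with plain state vectors. It handles only a single X error, but it shows the key trick: the syndrome operators reveal where an error occurred without revealing the encoded amplitudes $(a, b)$.

```python
import numpy as np

# Single-qubit Pauli operators and identity.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(a, b):
    """Encode a|0> + b|1>  ->  a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = a
    state[0b111] = b
    return state

# Syndrome operators: the parities Z x Z x I and I x Z x Z. Their
# eigenvalues (+1 or -1) locate a single bit flip without measuring (a, b).
S1 = kron(Z, Z, I2)
S2 = kron(I2, Z, Z)
X_ops = [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

def correct(state):
    """Recover from at most one X error on an encoded state."""
    # Such a state is an eigenvector of S1 and S2; read off the eigenvalues.
    s1 = np.sign(np.real(state.conj() @ (S1 @ state)))
    s2 = np.sign(np.real(state.conj() @ (S2 @ state)))
    syndrome_to_qubit = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}
    q = syndrome_to_qubit.get((s1, s2))
    return X_ops[q] @ state if q is not None else state

a, b = 0.6, 0.8
noisy = X_ops[1] @ encode(a, b)        # bit flip on the middle qubit
recovered = correct(noisy)
print(np.allclose(recovered, encode(a, b)))  # True
```

The same pattern, measuring only collective parities rather than the data itself, is what full quantum error-correcting codes generalize to arbitrary errors.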
Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some $\epsilon > 0$, independently of the number of qubits used for encoding.
Conjecture 1 does not obstruct classical error-correction as described above. The rationale behind Conjecture 1 is that when you implement the encoding from a single qubit into $n$ qubits, noise in the input amounts to having a mixture of the intended codeword with undesired codewords. The conjecture asserts that, for a realistic implementation of quantum error-correction, there is no way around this. Conjecture 1 reflects a strong conjectural interpretation of the principle that quantum systems are inherently noisy:
Conjecture 2 (The strong principle of noise): Quantum systems are inherently noisy with respect to every Hilbert space used in their description.
Conjecture 3: A noisy quantum computer is subject to noise in which error events for two substantially entangled qubits have a substantial positive correlation.
Conjecture 4: In any quantum computer in a highly entangled state there will be a strong effect of error synchronization.
Standard circuit or machine models of QC’s divide the computation into discrete cycles, between which one can identify “fresh noise” apart from the accumulated effect of previous noise. The threshold theorem entails that, when the noise rate is below the threshold, FTQC can fail only if these conjectures hold for the fresh noise. A QC model in which the fresh noise shows these effects differs sharply from the assumptions underlying the standard models. I proved that a strong form of Conjecture 3, where “entanglement” is replaced by a certain notion of “emergent entanglement,” implies Conjecture 4.
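To see why synchronized errors are so damaging, here is a toy classical comparison (all parameters hypothetical): two error models with the same per-bit rate $p$, one independent and one perfectly synchronized, fed to a majority-vote decoder. Redundancy helps only in the first model.

```python
import random

def majority_fails(flips, k):
    """True if more than half of the k bits were flipped."""
    return sum(flips) > k // 2

def independent_model(k, p, rng):
    # Each bit flips independently with probability p.
    return [rng.random() < p for _ in range(k)]

def synchronized_model(k, p, rng):
    # Same per-bit error rate p, but errors are perfectly correlated:
    # with probability p every bit flips at once.
    flip_all = rng.random() < p
    return [flip_all] * k

def failure_rate(model, k, p, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(majority_fails(model(k, p, rng), k) for _ in range(trials)) / trials

k, p = 9, 0.1
print("independent :", failure_rate(independent_model, k, p))   # about 1e-3
print("synchronized:", failure_rate(synchronized_model, k, p))  # about 0.1
```

With synchronized noise the decoder fails at the raw rate $p$ no matter how large $k$ is, which is the classical shadow of what error synchronization would do to quantum fault-tolerance.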
Conjectured Limit on Entanglement
The papers present a few other conjectures regarding how noisy quantum computers behave. One describes noisy quantum evolutions that do not enact quantum fault-tolerance, which we skip here. The most quantitative one is called Conjecture C in the technical paper on noise, C for censorship, because it concerns which types of (highly entangled) quantum states cannot be reached at all by such noisy QC’s.
Consider a QC with a set $Q$ of $n$ qubits. Given a subset $A$ of the qubits, consider the convex hull $\mathcal{S}_A$ of all states that factor into a tensor product of a state on the qubits in $A$ and a state on the other qubits. For a state $\rho$ on $Q$, define $D_A(\rho)$ as the trace distance between $\rho$ and $\mathcal{S}_A$. For a state $\rho$ of all the qubits, define $D(\rho) = \sum_{A \subseteq Q} D_A(\rho)$.
Conjecture C: There is a polynomial $P$ (perhaps even a quadratic polynomial) such that for any noisy QC on $n$ qubits, which describes a state $\rho$ (which need not be pure), $D(\rho) \le P(n)$.
Here the QC can be regarded as a quantum circuit given the initial state $|0\rangle^{\otimes n}$.
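Computing $D_A$ exactly means minimizing over a convex hull of product states, which is hard in general, but the trace distance between two given density matrices is straightforward. A small sketch (the reference states are my own choices, for illustration only) evaluates it for a Bell state against two separable comparison states:

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance (1/2)||rho - sigma||_1: half the sum of the absolute
    eigenvalues of the Hermitian difference rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

# Bell state (|00> + |11>)/sqrt(2) as a density matrix.
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
bell = np.outer(psi, psi)

# Two separable reference states: the product state |00><00|, and the
# dephased mixture (|00><00| + |11><11|)/2, which keeps the Bell state's
# classical correlations but none of its coherence.
prod = np.zeros((4, 4)); prod[0, 0] = 1.0
mix = np.zeros((4, 4)); mix[0, 0] = mix[3, 3] = 0.5

print(trace_distance(bell, prod))  # 1/sqrt(2), about 0.707
print(trace_distance(bell, mix))   # 0.5
```

The dephased mixture is much closer than any pure product state, illustrating why the distance to the full convex hull, rather than to individual product states, is the right quantity in the definition above.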
Interpreting and Testing the Conjectures
The strong interpretation is that the conjectures hold globally, for any quantum dynamical system on which a QC can be based. The medium interpretation says they hold for processes currently observed in nature, but human artifice can create systems in which they are false, thus allowing computationally superior QC’s to be built via FTQC. The weak interpretation is that they only make a sharp distinction between two kinds of QC models, one supporting FTQC and the other not, and that the former kind can be built artificially and also does represent some quantum processes that occur naturally.
I tend to believe in the strong interpretation, namely, that the conjectures are always true. The weaker interpretations can be used to discuss (as we do below) specific proposals for implementing quantum computation. There are quite a few suggestions on how to build quantum computers based on qubits and gates, and also some suggestions based on computationally equivalent but physically quite different methods.
Nevertheless, I do not expect a common physical reason why my conjectures should apply for each proposed realization of a QC. Hence the conjectures should be examined, either based on detailed modeling, or based on experimentation, on a case-by-case basis. Note that they are not about some mysterious breakdown that occurs when you try to scale quantum computers to a large number of qubits. Conjecture 3 is about the two-qubit behavior of a quantum computer with any number of qubits, and it can be checked (as can the other conjectures) on quantum computers with a rather small number of qubits.
One prominent proposal under which the conjectures can be tested is measurement-based QC employing cluster states. Cluster states can be regarded as code words in a certain quantum error-correcting code. Once you prepare such states, universal quantum-computing can be achieved by a certain measurement of the state. Conjecture 1 asserts that noisy quantum states created in the laboratory will involve a mixture of the intended state with other cluster states.
Question 1: Will such noisy cluster states still support universal quantum-computing?
A second proposal is topological quantum computing. Non-abelian anyons that can support universal quantum-computing can also be regarded as codewords in a certain quantum error-correcting code. Similar to before, the conjecture asserts that when we create such states in the laboratory (in a process that does not apply quantum fault-tolerance) we achieve a mixture of intended codewords with unintended codewords.
Question 2: Will such noisy anyons be useful for universal quantum-computing?
For these two proposals the special physical gadgets are supposed to be constructible by “ordinary” experimental quantum physics that does not involve quantum fault-tolerance, so they are an especially appealing testbed for my conjectures where all three interpretations can apply.
Why I Believe My Conjectures
Let me explain why I think that my conjectures are correct—also mindful of this nice post by Shmuel Weinberger on what “a conjecture” means for a mathematician. I regard it as implausible (see below) that universal quantum computers are realistic, and I think that the issue of noise is indeed the main issue. The strong principle of noise underlying Conjecture 1 strikes me as the right way to approach noise in quantum systems to begin with. The two-qubit conjecture proposes the simplest dividing line that I can think of between noise that allows fault tolerance and noise that does not. The conjecture regarding error-synchronization also captures, in my opinion, a very basic obstacle to quantum fault-tolerance. There is an argument from first principles that since error-correction is possible classically and Nature is really quantum, then error-correction must be possible quantumly. But it strikes me as conflating the settings after-the-fact. In any case, my conjectures allow classical error-correction and fault tolerance. And, finally, as far as I can see, my conjectures on the behavior of noise do not violate any principle of quantum mechanics.
As an aside, let me briefly say why I tend to regard universal quantum computers as unrealistic. An explanation for why they are unrealistic may require some change in the physical theory of quantum decoherence. On the other hand, universal quantum computers would be physical devices able to simulate arbitrary quantum evolutions. Here the word “simulate” is understood in the strong sense that the computer actually creates a quantum state identical to the state created by the evolution it simulates, and the word “arbitrary” is understood in the strong sense that it applies to every quantum evolution we can imagine, as long as it obeys the rules of quantum mechanics. As such, quantum computers would represent a major change in physical reality.
Aram Harrow: A Short Response
Although Peter Shor has already been featured on this blog for his famous factoring algorithm, I want to mention an arguably deeper contribution of his to quantum information. After demonstrating that $n$-bit numbers could be factored in polynomial time, Shor pointed out that this was possible even with noisy gates, as long as each gate’s noise was at most $1/\mathrm{poly}(n)$. (This observation is not totally obvious, and rests on the fact that quantum computers, unlike analog computers, cannot magnify small errors in their amplitudes.) Shor made this point to argue that factoring can be achieved with resources that are genuinely only polynomial, even when counting time, number of processors, energy, and precision. When proposing new models of computation, it’s important not to fall into the trap of analog computing, where seemingly innocuous assumptions dramatically change the power of the model.
While requiring noise to scale as $1/\mathrm{poly}(n)$ might be theoretically reasonable, it’s not very encouraging if we hope ever to build a large-scale quantum computer. In the mid-1990s, many disbelieved that quantum decoherence could ever be significantly reduced. Shor (and others) responded to this by developing the theory of quantum error-correcting codes (QECC), which protect data in a manner analogous to classical codes. This required overcoming several difficulties, such as the no-cloning theorem (which prevents redundant encodings), the fact that measurements cause disturbance, and the continuous range of possible errors.
Later, Shor (and Aharonov and Ben-Or, and others) extended QECCs to protect dynamic computations, so that fault-tolerant quantum computing (FTQC) could be achieved in the presence of a sufficiently low, but constant, rate of errors. To be sure, this makes assumptions such as independence that Gil is questioning.
QECC and FTQC are more than an answer to a technical objection; together they describe a potentially new phase of matter. In my opinion, they represent the deepest discovery in quantum mechanics since Bell’s Theorem. And we have in part the criticism of the quantum computing skeptics to thank for these breakthroughs! I hope the conversation between Gil’s skepticism and the optimism of people like me will also lead to useful results.
In a later post, I’ll respond in detail about why I believe that the emperor is fully dressed, and large-scale FTQC is possible, not only in theory, but realistically in the not-too-distant future. But by way of preview, I’ll outline my arguments briefly here.
Response Road Map
- Any argument that FTQC is impossible must also deal with the fact that classical computing is evidently possible. Just as we know that any $P$ vs $NP$ proof must avoid working relative to every oracle, we can argue that any proof of quantum computing’s impossibility must somehow distinguish quantum computers from classical computers. This rules out most models of maliciously correlated errors.
- The key assumption of FTQC is (approximately) independent errors. Conversely, Gil’s skepticism is based on error models that may have low single-qubit error rates, but are highly correlated even across large distances. While this possibility can’t be definitively ruled out until we build a working large-scale quantum computer, I’ll give both theoretical and experimental evidence that such error models don’t occur in nature.
- Current routes to building quantum computers, such as ion traps and superconductors, nevertheless suffer from correlated errors. I think these correlations aren’t too bad, but they definitely exist. However, I’ll propose a thought-experiment implementation of a quantum computer, which is not meant to be practical, but where correlated errors are highly implausible.
What are your thoughts on this matter? Please try to be as clear as possible, and if you refer to specific issues raised here this will be especially good. Also, solve Questions 1 and 2.
[fixed intro's conflation of two Einstein-Bohr interchanges]