Flying Machines of the 21st Century?
First of three responses by Aram Harrow
Dave Bacon began the blog The Quantum Pontiff in September 2003. Thus he was among the earliest voices promoting the theory of quantum computation, and explaining it brilliantly in ways non-experts can understand. He now works at Google in the Seattle area, while his blog is staffed by “A College of Quantum Cardinals”: Charlie Bennett, Steve Flammia, and our second debate participant, Aram Harrow.
Today Aram begins a three-part rebuttal to Gil Kalai’s post with conjectures about entangled noise as an impediment to building quantum computers.
Why is classical computation possible?
To quote Bacon in an earlier post: “When we dig down into the bow[e]ls of our computers we find that in their most basic form, these computers are made up of parts which are noisy or have uncertainties arising from quantum theory.” Thus if Gil’s concern about entangled errors is well founded, why do those errors not defeat these parts?
Cristopher Moore was the first of several to say that a better analogy than perpetual motion is heavier-than-air flying machines, which were also thought by some to be infeasible on first principles. This all goes to say that if you take a laptop computer running Red Hat onto a jet with quantum-level navigation components, nihil obstat: no trenchant problem needs to be overcome.
This part discusses differences between quantum and classical error correction. One key difference is that qubits can suffer “de-phasing” noise which effectively turns them into classical bits. Bacon answered Robert Alicki, the “patron” for Gil’s post, on this point before, but last year noted his own “de-phasing” by leaving quantum computing research to work in software engineering. He still occasionally posts as “Pontifex Praetorium” on his old blog, but more often can be found at his new blog, Pontiff++. Let us give the floor again to our debaters.
First Response by Aram Harrow
There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling large numbers of interacting qubits. We may find it technically possible to do this, but too costly to justify given the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer.
My defense of quantum computing, unlike Gil’s, is based not so much on original thinking as on the consensus picture from the field of quantum information. Other quantum bloggers, such as Scott Aaronson and Dave Bacon, have argued similar points (and see also the excellent comment discussion on this series’ opening post), but I am not explicitly speaking for anyone else. My main contribution is in specifically responding to Gil’s conjectures.
For background, I like Daniel Gottesman’s explications of QECC (quantum error correcting codes) and FTQC (fault-tolerant quantum computing). Gil has advanced conjectures to argue that FTQC will fail in real systems, either because errors follow the computation and become maliciously correlated with the error correction, or because they correlate for other intrinsic reasons. By all means, see his original post, his paper, or his more technical paper.
My response is in three parts—only the first part appears in this post:
- The classical fault-tolerance test.
- Nature is subtle, but not malicious.
- Throwing (imaginary) money at the problem.
The Classical Fault-Tolerance Test
If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing.
At first glance, this seems absurd. Classical computing seems so much easier! The conventional wisdom on classical fault-tolerance is that von Neumann and others invented the theory back when classical computers used relays, vacuum tubes and other unreliable components, but now the bit error rates are so low that we don’t need to use their awkward circuits for iterated majority voting.
In fact, every bi-stable flip-flop in a classical computer is implementing a repetition code of length $n$, where $n$ is comparable to the number of electrons flowing through a single wire. Viewed through this lens, a classical computer is already implementing all of the principles of fault-tolerant computing: it keeps data encoded at all times, never fully decoding (which would correspond to storing a bit in a single electron), and performs gates directly on encoded data. It is also constantly error-correcting, by using the non-linearity of the transistors to perform de-facto majority voting, and doing so at a rate significantly faster than the GHz clock speed (in other words, classical computers spend most of their “time” correcting errors).
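Here is a back-of-the-envelope sketch of this point, assuming independent flips at an illustrative 1% per-copy rate (the numbers are ours, chosen only for concreteness): the failure probability of majority voting over $n$ noisy copies can be computed directly.

```python
from math import comb

def repetition_failure_rate(p, n):
    """Probability that majority vote over n independent copies of a bit,
    each flipped with probability p, returns the wrong value."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A 1% per-copy error rate becomes astronomically small once a bit is
# carried by many electrons (n plays the role of the electron count).
for n in (1, 3, 9, 101, 1001):
    print(n, repetition_failure_rate(0.01, n))
```

With $n$ in the billions, as in a real wire, the logical failure rate is negligible, which is why everyday classical hardware never seems to need explicit error correction at all.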
Fault-tolerant quantum computing follows all of these principles as well, with one big exception: there is no quantum analogue of the repetition code. (Why not? The natural candidate, the cloning map $|\psi\rangle \mapsto |\psi\rangle \otimes |\psi\rangle \otimes |\psi\rangle$, not only suffers the flaws inherent to analog computing, but also requires nonlinear encoding and decoding, violating a fundamental principle of quantum mechanics.) Instead, quantum errors come in two types:
- X, or bit-flip, errors that exchange $|0\rangle$ and $|1\rangle$; and
- Z, or phase, errors that send $|1\rangle$ to $-|1\rangle$, while leaving $|0\rangle$ unchanged.
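To make the two error types concrete, here is a tiny numpy sketch of the standard Pauli X and Z actions (illustrative only, not part of Aram's argument); it also shows why a computational-basis readout, the only kind a classical bit supports, cannot see a Z error.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>

X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip: exchanges |0> and |1>
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase-flip: |1> -> -|1>

print(np.allclose(X @ ket0, ket1))   # True: X sends |0> to |1>
print(np.allclose(Z @ ket0, ket0))   # True: Z leaves |0> unchanged
print(np.allclose(Z @ ket1, -ket1))  # True: Z sends |1> to -|1>

# A Z error is invisible to measurement in the computational basis:
# the outcome probabilities for |+> = (|0>+|1>)/sqrt(2) are unchanged,
# even though the state itself has been rotated.
plus = (ket0 + ket1) / np.sqrt(2)
print(np.abs(plus) ** 2, np.abs(Z @ plus) ** 2)  # both [0.5, 0.5]
```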
Classical computers of course have only X errors. In classical computers repetition codes are used everywhere, and the codes self-correct much more quickly than the gate time for logical bits. To improve reliability in a classical computer, it’s sufficient just to make the wires and insulators fatter, or the voltages higher. In quantum computers the lack of a repetition code is a big deal. It not only complicates achieving reliability, but puts more onus on active error correction: we need to enlarge the error-correcting code, which is not just a homogeneous collection of qubits, but a highly structured object requiring active correction.
How a QECC Copes
The natural quantum version of a repetition code is the linear map sending $\alpha|0\rangle + \beta|1\rangle$ to $\alpha|000\rangle + \beta|111\rangle$. This would protect against X errors (the error rate $p$ would become $O(p^2)$) while increasing the vulnerability to Z errors (the error rate would become roughly $3p$). A dual version would protect against Z errors while increasing vulnerability to X errors. Only by combining these do we obtain codes such as the 9-qubit Shor code that can correct any single-qubit error, thus transforming error rate $p$ on the physical qubits to error rate $cp^2$ (for some constant $c$) on the encoded qubit.
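Under the usual independent-error assumption, the trade-off for the 3-qubit code can be checked by direct counting (the 1% rate below is just an illustrative value): a logical bit-flip requires X errors on a majority of the three qubits, while a single Z error on any one qubit already flips the encoded phase.

```python
from math import comb

def logical_X_rate(p, n=3):
    """A logical bit-flip needs X errors on a majority of the n qubits."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def logical_Z_rate(p, n=3):
    """Any odd number of single-qubit Z errors flips the encoded phase."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(1, n + 1, 2))

p = 0.01
print(logical_X_rate(p))  # ~3e-4, about 3p^2: bit-flip protection improves
print(logical_Z_rate(p))  # ~3e-2, about 3p:  phase vulnerability worsens
```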
By encoding each of these 9 qubits in 9 more, we get an 81-qubit code whose error rate is $c(cp^2)^2$. And if $cp$ is a small enough constant (like 0.3), then by iterating, we can get error rate $\epsilon$ with poly$(\log(1/\epsilon))$ qubits, albeit with some ugly constants. If our decoding is also subject to error, as in FTQC, then we need $p$ to be smaller, typically somewhere between $10^{-4}$ and $10^{-2}$, depending on the model. But the same threshold phenomenon occurs, assuming independent errors.
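The threshold phenomenon itself can be seen by iterating the recursion $p_{k+1} = c\,p_k^2$ for the error rate after $k$ levels of concatenation; the value $c = 10$ below is an arbitrary illustrative constant, not the constant of any particular code.

```python
def concatenated_error_rates(p, c=10.0, levels=6):
    """Iterate p_{k+1} = c * p_k**2.  Writing q_k = c * p_k gives
    q_{k+1} = q_k**2, so the logical error rate falls doubly
    exponentially whenever c*p < 1, and the bound blows up otherwise."""
    rates = [p]
    for _ in range(levels):
        p = c * p * p
        rates.append(p)
    return rates

print(concatenated_error_rates(0.05))  # c*p = 0.5 < 1: rates plunge toward zero
print(concatenated_error_rates(0.15))  # c*p = 1.5 > 1: the bound grows instead
```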
(An aside about passive error-correction. We don’t know how to do it in fewer than 4 dimensions. See arXiv:1103.1885 for an argument that passive self-correction in 3-d cannot use a scale-invariant code (such as something repetition-like); on the other hand, at least one 3-d system is known with some moderate self-correcting properties. Known no-go theorems in 2-d are stronger, and in 1-d even passive classical memory is ruled out, although examples such as Toom’s rule almost qualify.)
On the other hand, the lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers, but rather explain the enormous difference in the practical difficulty of constructing quantum and classical computers. In other words, the differences between classical and quantum computing mean a large difference in practical difficulty, but could not lead to one being possible and the other fundamentally impossible.
Where is the Objection Strictly Quantum?
The first reason I’m skeptical about Gil’s conjectures is that they don’t appear to make use of these two differences between classical and quantum computing. Rather Gil questions the independence assumption of errors. But if highly correlated errors routinely struck computers, then they would also be a problem for classical computers.
Quantum mechanics describes everything, including classical computers. If quantum computers suffer massively correlated errors with non-negligible probability, then so must classical computers, be they Pentiums or abacuses (or DNA, or human memory).
If electrons hop from wire to wire not one-by-one, but a trillion at a time, then those correlated bit-flip errors would defeat a classical repetition code, just like they’d defeat various quantum coding schemes. To distinguish these cases, you would have to argue that Z errors (the ones that only quantum computers care about) are highly correlated, while X errors are not.
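A quick simulation (with made-up rates, purely for illustration) shows how such correlated hopping would erase the benefit of repetition: with independent flips a 15-copy majority vote essentially never fails, but if all copies flip together, the code gains nothing over a single bit.

```python
import random

def logical_failure(p, n=15, correlated=False, trials=200_000):
    """Failure rate of an n-bit majority-vote repetition code.
    correlated=False: each copy flips independently with probability p.
    correlated=True:  with probability p all copies flip at once
    (a toy model of 'a trillion electrons hopping together')."""
    failures = 0
    for _ in range(trials):
        if correlated:
            flips = n if random.random() < p else 0
        else:
            flips = sum(random.random() < p for _ in range(n))
        failures += flips > n // 2
    return failures / trials

p = 0.01
print(logical_failure(p, correlated=False))  # ~0: needs 8 simultaneous flips
print(logical_failure(p, correlated=True))   # ~p: repetition buys nothing
```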
It is important to separate the issue of correlation from the issue of the single-qubit error rate. If Z errors occur at rate $p_Z$ and X errors occur at rate $p_X$, then the threshold theorem relies on the probability of $k$ simultaneous Z (resp. X) errors decaying like $p_Z^k$ (resp. $p_X^k$). It’s not crucial for fault-tolerance that $p_Z \approx p_X$ or $p_Z \le p_X$, only that both are small. In experiments, we observe that $p_X$ is often much smaller than $p_Z$, and that errors are close to independent (up to our ability to measure this). The former fact means that superficially it seems harder to protect quantum data, since the first thing we notice is that nasty high rate $p_Z$. But $p_Z$ can with effort be lowered, and what ultimately matters is that $p_X$ and $p_Z$ are both sufficiently small.
To reconcile Gil’s conjectures with classical computing as well as experiment, we would need that $p_X$, $p_Z$, and the rate of correlated X errors are all small, but the rate of correlated Z errors is inevitably large. This is like theories of aether, in which dynamics have a preferred direction. How can Gil make this work?
One possibility is a dramatic rewrite of the rules of quantum mechanics, which is not his intent. Another is to argue that in any physical system, however X and Z are defined (because their definitions are as arbitrary as the definitions of the x,y,z coordinate axes), at least one of X or Z will have highly correlated errors. Basically, Nature would have to maliciously stick her fingers into every computation in unpredictable ways that foil every attempt at quantum computing, but have not yet been detected by exquisitely precise tests of quantum mechanics. I find this unlikely.
Short Reply by Gil Kalai
The classical error-correction test is indeed at the heart of matters, and understanding the enormous difference in constructing classical and quantum memory and computing is indeed a fundamental question. I am glad that this is also Aram’s view.
Aram’s objections to my conjectures rest on intuitive rather than formal grounds, as my conjectures have no consequences for classical error correction.
In a crucial passage above, Aram wrote: “The lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers… [they] explain the enormous difference in the practical difficulty of constructing quantum and classical computers…, but could not lead to one being possible and the other fundamentally impossible.”
Aram is correct that the repetition code is important. But my conjectures are required for drawing the picture of why it is important. What is it that distinguishes the repetition code from other, more sophisticated and more efficient error-correcting codes? Why is it that in nature and in classical computers, repetition codes are used everywhere? The answer that I propose is that when you consider the repetition code as a quantum code that corrects X-errors, there is no difference between the noise given by the standard noise models and the noise described by Conjecture 1. Therefore, the repetition code allows classical memory from noisy quantum systems even under Conjecture 1.
Conjecture 1 does not involve any preferred direction or symmetry breaking. The conjecture excludes the possibility of quantum error correction codes but allows the possibility of classical error correction via the repetition code.
Incidentally, the lack of a quantum analog of the repetition code and the unique properties of the majority function were the jumping-off points in my first paper on quantum computers in 2005.
Does maintaining a physical difference between the classical model with repetition/copy and the quantum model entail positing an artificial scale limit on the applicability of quantum mechanics itself?
Can one give a physical theory compatible with such inherently-correlated errors? Critical systems experience fluctuations with equal probability at all length scales. Could one of these produce the right correlations in Z errors, while having exponentially small probability of experiencing highly correlated X errors?