Aram Harrow and Gil Kalai debate “Conjecture 1”
William Wootters and Wojciech Zurek were office-mates in 1979 as graduate students at U.T. Austin in John Wheeler’s group. A paper by Nick Herbert of the Fundamental Fysiks Group crossed their desks with an idea toward harnessing quantum mechanics for faster-than-light communication. Fascinated though skeptical, they realized the scheme made an implicit assumption: the ability to make a perfect copy of an arbitrary quantum state, i.e., a qubit—though that term was coined only in 1995 by Benjamin Schumacher, who credited it to a conversation with Wootters. Independently of Dennis Dieks in 1982, they proved that only special—not general—qubit states can be cloned. The name “No-Cloning Theorem” was supplied by Wheeler.
Today we continue the debate between Gil Kalai and Aram Harrow on scalable quantum computing, focusing on how ideas about repetition impact Gil’s “Conjecture 1” from the series’ initial post.
The debate was expected to conclude last spring, but ideas and material to sustain it have developed in preprints with refutations, new queries, copious comments sometimes preceded by private exchanges, and further discussions on the participants’ own fora. We note especially that John Preskill has posted a full paper, “Sufficient condition on noise correlations for scalable quantum computing,” extending work we previously referenced. Today’s post was to be the conclusion, but as it passed 20 LaTeX pages we declared this part of it a fourth “round” instead. We apologize that some of it is necessarily “entangled” with the yet-to-appear conclusion, which will follow shortly.
In his first response post, Aram described a classical error-correction test, and then began an approach against Gil’s Conjecture 1 using repetition codes. In this post they discuss these two matters further. We first present three different ways of repeating things, the last a thought experiment by Gil that set the initial terms of Aram’s analysis. Then comes Aram’s detailed modeling and critique of Conjecture 1, followed by a response by Gil. Finally we present Gil’s summation on the issue of why classical computers are possible.
Rep-A, Rep-B, and Rep-C
Given an object $x$, the simplest fault-tolerance idea is to make $n$ independent copies: $x, x, \ldots, x$. That way if fewer than $n/2$ of them are corrupted, the majority vote of the others will preserve the original. Let’s call this building-block idea Rep-A.
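Rep-A under independent bit-flip noise is easy to simulate. The following Python sketch is our own illustration, not anything from the debate itself; the parameters (101 copies, flip probability 0.1, 2000 trials) are arbitrary choices:

```python
import random

def rep_a_encode(x, n):
    """Rep-A: store n independent copies of the bit x."""
    return [x] * n

def rep_a_decode(copies):
    """Majority vote over the copies."""
    return int(sum(copies) > len(copies) / 2)

def noisy_roundtrip(x, n, q, rng):
    """Flip each copy independently with probability q, then take the majority."""
    copies = [c ^ int(rng.random() < q) for c in rep_a_encode(x, n)]
    return rep_a_decode(copies)

rng = random.Random(0)
n, q, trials = 101, 0.1, 2000
failures = sum(noisy_roundtrip(1, n, q, rng) != 1 for _ in range(trials))
print(failures)  # 0: corrupting n/2 of the copies is overwhelmingly unlikely
```

With $q$ well below $1/2$, the probability that a majority of the 101 copies flips is astronomically small, which is exactly the point of Rep-A.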
When the object represents a probability distribution (classical or quantum) then things become more involved. If $x$ is a classical biased coin or a qubit, what does it mean to repeat $x$? What is the precise object we are repeating? Here, the no-cloning theorem (which also has a classical counterpart) is of some relevance.
One can do Rep-B: Suppose $x$ is a qubit with unknown value $a|0\rangle + b|1\rangle$. We can use entanglement to create

$$a|0\rangle^{\otimes n} + b|1\rangle^{\otimes n}.$$

Nature allows this as a general quantum operation. This idea extends to more complicated classical and quantum error-correcting codes.
In Gil’s thought experiment $x$ is a classical biased coin $(1-p, p)$. The Rep-C idea is to represent this by $n$ independent samples of the coin. The “decoding” of a state of $n$ bits gives to ‘1’ its expected weight over the $n$ bits. Aram will subsequently discuss the quantum version.
We will see how these ideas play out in arguments over Gil’s Conjecture 1, which was originally stated as follows:
Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some $\epsilon > 0$, independently of the number of qubits used for encoding.
Gil: a classical analog of a quantum computer
Aram’s analysis of Conjecture 1 builds on this thought experiment offered by Gil. This is an example to show that had we lived in a classical noisy world where the classical analog of Conjecture 1 holds at the noisy microscopic scale, noiseless classical memory and computation would nevertheless have been possible.
The classical analog of Conjecture 1 (let’s call it Conjecture 1′) asserts that when we use Rep-C to encode a classical random bit we get, with a substantial probability that is bounded away from 0, states that decode into codewords of the form $(1-p', p')$ where $p'$ is distributed in a neighborhood of $p$. (Conjecture 1′ is not true in general, but what we will see in a minute is that it is not harmful to the repetition code.)
The point is that Rep-C, even under a noise that satisfies Conjecture 1′, allows noiseless classical information and classical computation. For that you represent ‘0’ by $(1-p, p)$ when $p$ is greater than $1/2$ and ‘1’ by $(1-p, p)$ when $p$ is smaller than $1/2$ (with $p$ bounded away from $1/2$ in both cases).
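To make this concrete, here is a small Python sketch (our illustration; the intended biases 0.9 and 0.1 and the width of the “Conjecture 1′ cloud” are made-up parameters). Even when the bias of the encoding coin wanders in a neighborhood of its intended value, thresholding the observed fraction of 1s at $1/2$ decodes the classical bit essentially without error:

```python
import random

def rep_c_encode(p, n, rng):
    """Rep-C: n independent samples from the biased coin (1-p, p)."""
    return [int(rng.random() < p) for _ in range(n)]

def rep_c_decode(bits):
    """Decode '0' if the fraction of 1s exceeds 1/2, else '1'."""
    return '0' if sum(bits) / len(bits) > 0.5 else '1'

rng = random.Random(1)
n, trials = 1000, 500
errors = 0
for _ in range(trials):
    # Conjecture 1'-style noise: the bias itself fluctuates in a
    # neighborhood of its intended value before we draw the samples.
    p0 = 0.9 + rng.uniform(-0.05, 0.05)  # encodes '0' (p above 1/2)
    p1 = 0.1 + rng.uniform(-0.05, 0.05)  # encodes '1' (p below 1/2)
    errors += rep_c_decode(rep_c_encode(p0, n, rng)) != '0'
    errors += rep_c_decode(rep_c_encode(p1, n, rng)) != '1'
print(errors)  # 0: the two clouds never cross the 1/2 threshold
```

The point of the design is that the decoder only needs the two clouds of biases to stay on opposite sides of $1/2$; it does not need the bias to be exact.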
Aram: Conjecture 1 and the repetition code
A central part of Gil’s skepticism is his Conjecture 1, which essentially asserts that quantum error-correcting codes cannot work as advertised. Here is my restatement of that conjecture.
There exists a universal constant $\epsilon > 0$ with the following properties. Let $E$ be an encoding map (specifically, a quantum operation) that maps one qubit to $n$ qubits. Let $|\psi\rangle$ be a pure single-qubit state (the “intended state”) and $P$ a procedure that, in the absence of noise, would prepare the state $E(|\psi\rangle\langle\psi|)$. Let $\tilde{P}$ be the effect of $P$ in the presence of physically realistic noise. Then there exists a distribution $\mu$ over single-qubit states satisfying

$$\mathop{\mathbb{E}}_{\rho\sim\mu} D(\rho, |\psi\rangle\langle\psi|) \geq \epsilon$$

(where $D$ is the trace distance), such that $\tilde{P}$ instead prepares a state close to

$$\mathcal{N}\Big(\mathop{\mathbb{E}}_{\rho\sim\mu} E(\rho)\Big).$$
There are a number of questions about this conjecture that one might raise:
- Why the noise process $\mathcal{N}$? I added this to make the conjecture more realistic. As we will see in the example, it may be that $E$ outputs only states that are extremely unlikely to be physically realized (even without any extra skeptical assumptions), so without $\mathcal{N}$, the conjecture would not have a chance.
- What is $\mathcal{N}$? If $\mathcal{N}$ is completely general (and can depend arbitrarily on $E$ and $P$) then the statement of the conjecture becomes vacuously true. What I have in mind is that $\mathcal{N}$ will be independent de-polarizing noise; that is, each qubit is replaced with the maximally mixed state with probability 1/100 and left alone with probability 99/100. This roughly corresponds to the sort of noise model that optimists (like me) posit, and that standard proofs of the FT Threshold Theorem assume.
- Why is $E$ a quantum operation? Such codes are the only ones used in proofs of the Threshold Theorem, and it is hard to imagine FTQC without them (although see below for some discussion of relaxed versions of this). Still, this restriction should only make the conjecture easier to prove/harder to refute.
- What right do I have to put words into Gil’s mouth? This is a fair point. :) Gil might prefer to formulate the details of his conjecture differently. Of course, if he does, I’ll respond to that, and round and round we’ll go.
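For concreteness, the independent de-polarizing noise described above acts on a single-qubit density matrix as $\rho \mapsto (1-p)\rho + p\,I/2$. A minimal numpy sketch (our illustration, not part of the debate):

```python
import numpy as np

def depolarize(rho, p):
    """De-polarizing channel: with probability p replace the qubit by the
    maximally mixed state I/2, otherwise leave it alone."""
    return (1 - p) * rho + p * np.eye(2) / 2

ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # the density matrix |0><0|
rho = depolarize(ket0, 1 / 100)            # Aram's strength 1/100
print(np.diag(rho).real)                   # [0.995, 0.005]
```

Note that the channel is trace-preserving and leaves $|0\rangle$ with a small admixture of $|1\rangle$; on $n$ qubits, $\mathcal{N}$ is just this map applied independently to each qubit.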
One consequence of $E$ being a quantum operation is that it commutes with taking the expectation over $\rho$. If we then define the map $N_1$ (which itself could plausibly be a quantum operation, such as de-polarizing noise with strength $\epsilon$) by

$$N_1(|\psi\rangle\langle\psi|) = \mathop{\mathbb{E}}_{\rho\sim\mu}[\rho],$$

then the state asserted in Conjecture 1 can be rewritten more simply as

$$\mathcal{N}\big(E(N_1(|\psi\rangle\langle\psi|))\big).$$
By contrast, the conventional/optimistic view is that states of the form $\mathcal{N}(E(|\psi\rangle\langle\psi|))$ can be prepared, without $N_1$. Thus Conjecture 1 is tantamount to asserting that noise will inevitably strike before a state is safely encoded. Of course this would be true if we followed the naive strategy of preparing $|\psi\rangle$ outside of an error-correcting code, and then performing the encoding map; since we cannot perform this encoding arbitrarily quickly, there will always be a constant amount of noise that occurs before the state is protected. (Note that FT computing, whether classical or quantum, does not use this strategy, which is a point that is overlooked by Robert Alicki’s paper quant-ph/0411008.) But Conjecture 1 asserts that the same effect will plague any protocol $P$.
Two repetition codes
Let us attempt to apply Conjecture 1 to the simplest code: the repetition code. We have to be careful because in fact there are at least two ways to define the repetition code as a quantum code. The one that I believe makes the most sense is to add $n-1$ qubits in the $|0\rangle$ state, and perform controlled-NOT gates from the input qubit onto each of the new qubits. This will map

$$a|0\rangle + b|1\rangle \mapsto a|0^n\rangle + b|1^n\rangle,$$

where $|0^n\rangle = |0\rangle^{\otimes n}$ and $|1^n\rangle = |1\rangle^{\otimes n}$. Extending this to a quantum operation, we obtain

$$E_B(\rho) = \sum_{i,j\in\{0,1\}} \langle i|\rho|j\rangle \, |i^n\rangle\langle j^n|.$$
This “Rep-B” idea is only one interpretation of the repetition code. It can correct up to $\lfloor (n-1)/2 \rfloor$ bit-flip errors, but is vulnerable to even a single phase error. (A standard theorem of quantum error correction states that it suffices to consider only these two types of errors.)
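The CNOT picture of Rep-B is easy to check on a statevector simulator. The following numpy sketch is our own illustration (the `cnot` helper and bit-ordering convention are ours); it fans an input qubit $a|0\rangle + b|1\rangle$ out to $a|0^n\rangle + b|1^n\rangle$:

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply a CNOT on an n-qubit statevector (qubit 0 is the leftmost bit):
    for basis states whose control bit is 1, flip the target bit."""
    new = state.copy()
    for i in range(len(state)):
        if (i >> (n - 1 - control)) & 1:
            new[i ^ (1 << (n - 1 - target))] = state[i]
    return new

def rep_b_encode(a, b, n):
    """Prepare (a|0> + b|1>)|0...0>, then CNOT from qubit 0 onto each
    fresh qubit, producing a|0^n> + b|1^n>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = a             # amplitude on |00...0>
    state[1 << (n - 1)] = b  # amplitude on |10...0>
    for target in range(1, n):
        state = cnot(state, 0, target, n)
    return state

psi = rep_b_encode(0.6, 0.8, 3)
print(abs(psi[0]), abs(psi[7]))  # 0.6 on |000>, 0.8 on |111>
```

Only the two extreme basis states carry amplitude after encoding, which is exactly why a single phase error on any one qubit hits the encoded phase.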
The other idea, cloning, is simply to copy the input state $n$ times, viz.:

$$E_C(|\psi\rangle\langle\psi|) = (|\psi\rangle\langle\psi|)^{\otimes n}.$$
This cloning map is not used in any FT scheme, classical or quantum, probably because it does not lend itself well to error-correction. For any error-correcting code, we would like a procedure to project a state (i.e. density matrix) onto the linear subspace of valid code states. Operationally this corresponds to measuring whether a (detectable) error has occurred. For Rep-B, this corresponds to checking whether our state is in the span of $|0^n\rangle$ and $|1^n\rangle$.
For Rep-C, this operation is trivialized: since $E_C(I/2) = (I/2)^{\otimes n}$ is the maximally mixed state on $n$ qubits, any state is compatible with this. Thus, we can never look at an $n$-qubit state and say that it is not a valid encoding of some state under Rep-C. This weakness stems in part from the fact that Rep-C is a nonlinear map, while decoding or error-detecting is necessarily linear. As a result, I believe that (contrary to Gil’s arguments) Rep-C does not make sense even for classical FT, unless it is redefined until it essentially becomes Rep-B (e.g. the domain is restricted to the states $|0\rangle$ and $|1\rangle$ and it is extended linearly to all states).
Classical error correction
It seems worth reviewing here how classical FT computing is workable using Rep-B.
- Each logical bit is represented by $n$ physical bits which are ideally in the state $0^n$ or $1^n$, but in practice will merely be close (in typical Hamming distance) to these states.
- Fresh logical bits are introduced to the computation by initializing blocks of $n$ physical bits to the state $0^n$.
- If the computation is randomized, then instead we can initialize a block of $n$ physical bits to $0^n$ or $1^n$ according to a random coin flip.
- Gates are performed transversally, to minimize the propagation of errors. That is, to store the AND of logical bits $x$ and $y$ in a logical bit $z$, we assume that these bits are stored in physical bits $x_1,\ldots,x_n$, $y_1,\ldots,y_n$, $z_1,\ldots,z_n$, and we then set $z_i = x_i \wedge y_i$ for each $i$. In this way, errors will be magnified by at most a constant multiple.
- In-between each gate, we will do error correction to attempt to restore each bit to a valid code state. Physically, this is achieved by a bistable flip-flop (in RAM) or a ferromagnetic material (in a hard drive). The former is “active” in that it requires a continuous supply of power, while the latter is “passive” in that it will correct errors simply by being in contact with a room-temperature thermal bath (and samples of magnetite from the sea floor have demonstrated that this sort of memory can last for at least millions of years).
(For Rep-C, steps 1, 2 and 4 are the same, but steps 3 and 5 fail.) One not-so-obvious point is that the encoding map $E$ is never performed. Rather, we develop schemes to directly prepare states such as $0^n$.
FT quantum computing works in essentially the same way as the above strategy, except that the repetition code is replaced by a code that can correct a large number of both bit-flip and phase errors.
Applying Conjecture 1 to the repetition code
What happens if we apply Conjecture 1 to Rep-B? First, observe that after encoding, the states become extremely sensitive to phase errors, since a single phase error on any qubit will affect the phase of the encoded qubit. Thus, outputs of $\tilde{P}$ can effectively be assumed to be classical mixtures of $|0^n\rangle$ and $|1^n\rangle$. Indeed, applying $\mathcal{N}$ to the output will also ensure this.
But what if we have “pre-encoding” noise that is, say, a de-polarizing channel of strength $\epsilon$? Then attempting to prepare $|0^n\rangle$ will inevitably result in a state of the form

$$(1-\epsilon/2)\,|0^n\rangle\langle 0^n| + (\epsilon/2)\,|1^n\rangle\langle 1^n|.$$
This means that if we attempt to initialize a bit in a RAM or a hard drive to a ‘0’ state then there will be a probability $\epsilon/2$ that it instead yields ‘1’. The situation is even more absurd when one considers other versions of a repetition code. Consider an abacus with beads each consisting of a macroscopic number of atoms, each of which can be either slid to the left or the right. Conjecture 1 would imply that if we attempt to slide a bead on the abacus to the left using our best possible technology, then we will inevitably accidentally place it on the right with probability $\epsilon/2$.
This should be enough to dispose of Conjecture 1. After all, it is stated as applying to all codes. If an exception is built in for Rep-B (which is the key code behind classical computing), then it starts to look rather camel-shaped.
Indeed, the conjecture cannot even be repaired by saying that “for all $E$, there exists a $\psi$ for which $E(|\psi\rangle\langle\psi|)$ is hard to prepare.” Because for the repetition code, preparing the encoded state $\mathbb{E}_{\rho\sim\mu}\,E_B(\rho)$ is simply a matter of flipping a coin with a very precise bias, and then preparing many qubits in a way controlled by this coin flip. Again this is something that could have already been achieved with an abacus.
What about applying Conjecture 1 to Rep-C? This is the situation that Gil discusses when he says that Conjecture 1 is compatible with classical computing. Indeed for Rep-C, noise before encoding behaves pretty much the same way as noise after encoding. So, Conjecture 1 is compatible with Rep-C. Can we say that even if Conjecture 1 doesn’t hold for every code, it at least holds for some code? Perhaps, but I don’t view Rep-C as particularly representative of error-correcting codes, because if we use any code other than the repetition code, then only the “-B” framework makes sense. Moreover, given that Rep-C cannot distinguish between pre-encoding noise and i.i.d. post-encoding noise, it seems hard to use it as evidence in favor of Conjecture 1.
I believe that Conjecture 1 is likely to fail for most interesting maps $E$. In general, if a system has a two-dimensional topologically protected subspace, then mixing within this subspace is exponentially suppressed. Like Conjecture C, I believe it is too broad. But there may well be more specific classes of states that noise effectively prevents us from creating.
Gil: Conjecture 1 and Aram’s classical error-correction test
A review of Conjecture 1
Conjecture 1 asserts that encoded qubits are noisy. What do we mean by a noiseless encoded qubit? If $T(\psi)$ is an encoding of the single-qubit state $\psi$ with $n$ qubits, we require that $\psi$ can be determined from $T(\psi)$ up to an error that we cannot distinguish from zero, with probability that we cannot distinguish from 1. Quantum error correction achieves this (for every $\psi$) when we let the number of qubits used in the encoding grow. (For this formulation you may need to rely also on the Solovay-Kitaev theorem.) Conjecture 1 asserts that this is not possible.
The case of the repetition code
In the context of quantum fault tolerance and the threshold theorem the only repetition code which is relevant is Rep-B. I always regarded Conjecture 1 for the repetition code as obviously true. Indeed, it is correct that when $\psi = |0\rangle$ or $\psi = |1\rangle$ the encoding determines $\psi$ uniquely, but for any other value of $\psi$ there is a whole (one-dimensional) “cloud” of states that we cannot distinguish from $\psi$. So the repetition code supports having a stable ground state and a stable excited state but does not support a noiseless superposition of these states. This example indeed shows (and I thank Aram for clarifying it) that when I say “the probability” in Conjecture 1 you have either to consider the worst case or to take $\psi$ also at random in computing the probability. Aram claims that Conjecture 1 is violated by Rep-B for every $\psi$. Let’s look at that.
Aram’s formalization of Conjecture 1
A large part of my work, with many backtracks and difficulties, was to translate conjectures from English to mathematics. This is not a simple matter at all. (This is an opportunity to mention Greg Kuperberg, who found various loopholes in some of my early attempts.) So it feels nice to see Aram trying to formalize Conjecture 1, and overall, Aram’s mathematical formulation of Conjecture 1 is reasonable, except that there is one term which is left in English:
“… instead prepares a state close to $\mathcal{N}\big(\mathop{\mathbb{E}}_{\rho\sim\mu} E(\rho)\big)$.”
What does “prepare instead” mean? The way I would formalize this English part, which agrees with what Conjecture 1 is all about, is that with probability larger than $\epsilon$ the encoding of $\psi$ after the noise and the encoding of some distinct state $\psi'$ after the noise cannot be distinguished. This is not Aram’s interpretation. Aram’s interpretation of Conjecture 1 is simply incorrect. If I understand correctly what Aram says, he would regard even the trivial code which encodes every qubit $\psi$ by $\psi$ itself as a counterexample to Conjecture 1. Again, this is incorrect.
Let me respond to Aram’s challenge and try myself to formulate Conjecture 1 (it can either stand alone or complement the missing part in Aram’s formulation).
Conjecture 1 (no quantum error-correction): Denote by $D(\cdot,\cdot)$ the trace-distance. There exists $\epsilon > 0$ with the following property. Consider a realistic device which encodes a single qubit with $n$ qubits, and suppose that $T(\rho)$ denotes the encoded state of the single-qubit state $\rho$. Then there are two states $\rho_1$ and $\rho_2$ so that $D(\rho_1, \rho_2) \geq 1/2$ and

$$D\big(T(\rho_1), T(\rho_2)\big) \leq 1 - \epsilon.$$
The strong principle of noise (Conjecture 2), which is more general than Conjecture 1, asserts that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. Both of these conjectures require some formal description but are interesting and lead to fruitful insights even without such a description. Let me mention Aram’s second thought experiment, which was a very original attempt at a counterexample to Conjecture 2. Rather than trying to hide a noiseless evolution inside a large noisy one, Aram suggested enlarging a noisy evolution into a larger noiseless system. Looking carefully at this suggestion, the conclusion (see this comment by Aram concluding a long discussion) was that it cannot work.
Conjectures 1 and 2 do not describe noise models but can be seen as requirements on a yet-hypothetical realistic noise model. In the next post, I will describe a Hamiltonian noise model which is meant to express formally the idea of “no quantum FT” and to support Conjectures 1–4. We will also mention a suggestion regarding the rate of noise which is relevant to both Conjectures 1 and 2. When we talk about “what is a qubit” we assume that we can manipulate the state of the qubit in a fairly general way, and we need to relate the rate of noise to the qubit’s evolution.
Aram’s classical error-correction test
Why do we have (essentially) noiseless classical memory? Why are classical computers possible? This is indeed the heart of the matter. Here is my point of view.
We start with the strong principle of noise, which says that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. A special case of this general principle is Conjecture 1, which asserts that qubits must be noisy: for any encoded qubit we have a substantial mixture of the intended codeword with a “cloud” of unintended codewords.
How does Conjecture 1 allow for noiseless bits? We now need the following repetition principle:
Let $\mathcal{D}$ and $\mathcal{E}$ be two distinct probability distributions. Represent ‘0’ by a sequence of $n$ independent samples from $\mathcal{D}$ and ‘1’ by a sequence of $n$ independent draws from $\mathcal{E}$. If $n$ is large enough then ‘0’ and ‘1’ are noiseless bits. (This is true even in the presence of additional noise of various kinds and even if the independence assumption on the samples is relaxed in various ways.)
It follows that a device able to create large samples from two distinct probability distributions enables noiseless bits, and when we (or nature) control such a device then noiseless classical computation is possible.
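The repetition principle is essentially the law of large numbers in disguise. A quick Python sketch (our illustration, with two arbitrary overlapping Gaussians standing in for the two distinct distributions) shows the decoding error collapsing as the sample size grows:

```python
import random

def decode(samples, mean0, mean1):
    """Decide '0' or '1' by which distribution's mean is closer to the
    sample average (a likelihood-ratio test would do even better)."""
    avg = sum(samples) / len(samples)
    return '0' if abs(avg - mean0) < abs(avg - mean1) else '1'

rng = random.Random(7)
rates = {}
for n in (1, 10, 100):
    # '0' is encoded by n draws from N(0, 1); '1' would use N(0.5, 1).
    errs = sum(decode([rng.gauss(0.0, 1.0) for _ in range(n)], 0.0, 0.5) != '0'
               for _ in range(400))
    rates[n] = errs / 400
print(rates)  # the error rate falls rapidly as n grows
```

With a single sample the two distributions are badly confusable, but by $n = 100$ the sample mean concentrates well inside the correct half, which is exactly the sense in which ‘0’ and ‘1’ become noiseless bits.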
Aram’s analogy with 2-SAT and 3-SAT
Aram started his three-post reply last winter with the following analogy: “If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing.”
This is correct. Proving that 3-SAT needs exponential time requires an argument that does not apply to 2-SAT and XOR-SAT. Today, after many years, we still do not have such an argument. But this does not mean that we should believe that 3-SAT behaves like 2-SAT. Indeed, while a proof is still far off, we have strong reasons to believe that 3-SAT is hard; a world where 3-SAT is easy would present a very different reality from ours. Similarly, proving that fault-tolerant quantum computing is impossible will require more experimental effort and theoretical progress. But the fundamental differences, as described in the previous debate post, between a world with fault-tolerant quantum computing and our present reality give fair reasons to believe, and certainly to explore the possibility, that quantum fault tolerance is not possible.
The distinction between classical and quantum error-correction is a major issue whether or not you are skeptical about the feasibility of universal quantum computers. In my opinion, the repetition principle in a reality that obeys Conjectures 1 and 2 draws an appealing picture of the distinction between classical error-correction and quantum error-correction. Aram proposed a different explanation, which relies both on the repetition code and on the difficulty of finding quantum forms of passive memory in 3D, and some commentators offered other explanations. I find these explanations less satisfactory.
How to model the memory of digital computers?
Is Rep-B or Rep-C more appropriate?
Here is an interesting question that came up as a by-product of my discussion above with Aram on the repetition code. Using Aram’s terminology the question is, “Is the microscopic structure of digital memory better described by Rep-B or by something like Rep-C?” When we think about the memory of a digital computer as a vast array of electrons with up spins and down spins, where ‘0’ is represented by (say) a 0.01 fraction of up spins and ‘1’ is represented by a 0.99 fraction of up spins, then the Rep-B description suggests a very tight fluctuation for the number of up spins. The Rep-C description is more relaxed and allows the fraction to fluctuate in an entire interval like [0.009, 0.012]. Of course, the digital memory implements a repetition of a specific state and not of a general state, so the question is whether we should consider the state of the digital computer representing ‘0’ as a fixed state subject to small independent noise, or as the repetition of a state $(1-p', p')$ where $p'$ is close to 0.01 and is itself subject to noise.
There is a more general issue here. I expect that in various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. Of course, such a concentration is not required for quantum error-correction. I tend to regard this feature as unrealistic, and therefore such models, in my opinion, leave out a crucial aspect of noise which may or may not allow fault tolerance. See this comment. It will be interesting to understand whether Preskill’s recent assumptions on correlations imply concentration for the number of errors.
Is the microscopic structure of digital memory better described by Rep-B or by Rep-C? Do the fluctuations in the number of up spins in digital computer memories behave like the square root of the number of spins? Does the no-cloning theorem have any bearing on these questions?
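These questions can at least be made quantitative. Under a Rep-B-style picture of independent per-spin noise, the number of up spins is binomial, so its fluctuations scale like $\sqrt{n}$; a simulation sketch (ours, with arbitrary parameters) confirms this and contrasts it with the larger Rep-C-style interval:

```python
import random
import statistics

rng = random.Random(3)
n, p, trials = 10_000, 0.01, 300

# Independent per-spin noise: the number of up spins is Binomial(n, p),
# so its standard deviation is sqrt(n p (1-p)) ~ 10 for these parameters.
counts = [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]
sd = statistics.pstdev(counts)
print(sd, (n * p * (1 - p)) ** 0.5)

# A Rep-C-style picture, in which the fraction itself drifts across an
# interval like [0.009, 0.012], would instead give fluctuations of order
# 0.003 * n = 30 here, noticeably larger than the sqrt(n) scale above.
```

So the two descriptions are empirically distinguishable in principle: the fluctuation scale of the up-spin count is $\Theta(\sqrt{n})$ in one picture and $\Theta(n)$ in the other.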