Nature Does Not Conspire
Second response by Aram Harrow
Albert Einstein was a great observer of science, as well as doer of science. Most of his quotations as well as theories have proved their staying power over the past century. We, Dick and Ken, feel that some of them are better when understood narrowly within science rather than taken broadly as they usually are.
Today our guest poster Aram Harrow opens his second response to Gil Kalai’s conjectures of impediments to quantum computation by applying one of Einstein’s quotations in such a focused manner.
The quotation, and Einstein’s own clarification of it, are conveyed in this article on him and the mathematician Oswald Veblen, regarding the latter’s request in 1930 to inscribe the quotation on a plaque in the new Princeton mathematics building:
Einstein consented to the inscription of the phrase: “God is subtle, but he is not malicious,” even though he wrote to Veblen that he had meant to say “Nature conceals her mystery by means of her essential grandeur not by her cunning.”
The narrow meaning is that in physics and other natural sciences, solutions may be hard to find, but they will not evade theory in perverse ways. Even in cases like the n-body problem, where closed-form solutions are unavailable and behavior may become unstable, the underlying elements are smooth enough that one's expectations will not be wantonly deceived. This goes especially for a classical geometric theory like relativity; whether Einstein thought this of quantum mechanics we don't know.
By contrast, those who work in number theory must cope with having expectations deceived all the time. For instance, the Mertens conjecture is true for the first zillions of cases but known to be false, though a concrete counterexample is yet to be constructed. (See also this about expectations.) Nastiness may be as Platonic and existential as niceness—in math. If Max Tegmark is right that “all structures that exist mathematically also exist physically,” then Nature must get arbitrarily nasty somewhere. So Einstein and Tegmark cannot both be right, though we could live in a nice “pocket” of Tegmark’s cosmos.
Anyway Gil and Aram are talking about our world, and we give the matter back to Aram. This is Part 2 of his three-part response, and addresses Gil’s contention that quantum errors are subject to correlations that defeat the independence assumptions of fault-tolerant quantum computation (FTQC). There will be a Part 3 by Aram and then (at least) a longer rebuttal by Gil.
Second Response by Aram Harrow: Nature is Subtle, But Not Malicious
The possibility of a high rate of correlated errors seems reasonable in many ways. Claire Mathieu points out that independence of random events is the sort of thing that often gets blithely assumed, when actually it’s a major assumption. Independence is attractive because it lets us extrapolate from easily observable probabilities to infinitesimally small probabilities, but for this reason it is also dangerous.
Analyzing FTQC is more complicated than analyzing error correction, but the big picture is similar. If an error-correcting code on n qubits corrects any single error, then an independent per-qubit error rate p will be transformed into an effective error rate that looks like cp² for some constant c. If this is an improvement (that is, if cp² < p), then we can iterate this process via concatenation and drive the effective error rate down. In all cases, our goal is to apply Chernoff-type bounds that guarantee an overall low error rate.
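To make the concatenation arithmetic concrete, here is a small numerical sketch. The constant c, the starting rate, and the number of levels are illustrative choices, not numbers from the discussion:

```python
# Sketch: effective error rate under code concatenation.
# If one level of encoding maps an independent per-qubit error
# rate p to roughly c * p**2, then k levels of concatenation
# give doubly exponential suppression, provided c * p < 1
# (i.e. p is below the "threshold" 1/c).

def concatenated_rate(p, c=100.0, levels=4):
    """Iterate p -> c * p**2 for the given number of levels."""
    for _ in range(levels):
        p = c * p * p
    return p

p0 = 1e-4  # illustrative physical error rate, below the threshold 1/c = 0.01
for k in range(5):
    print(k, concatenated_rate(p0, levels=k))
```

Each level squares the (rescaled) error rate, which is why a modest constant-factor improvement at one level turns into an astronomically small logical error rate after a few levels.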
But what if errors are not independent? Then the answer still depends on how likely we are to encounter clusters of simultaneous errors. Gil points out (e.g. Lemma 1 of this 2008 arXiv paper) that if errors are pairwise correlated, then the probability of k simultaneous errors can be significantly higher than in the independent case. But Chernoff-type bounds still hold. Indeed mere pairwise, or k-wise, correlation of errors is not enough to derail FTQC. Gil mentions that one of his models of correlations corresponds to the Curie-Weiss model, which describes classical magnets. But here too, the chance of k simultaneous errors decays exponentially with k (see Lemma 1 of these notes).
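As a rough illustration of why independent errors are so benign, here is a sketch computing the exact binomial tail for independent errors, contrasted with the fully correlated "power cord" model described below; the parameters n and p are made-up values:

```python
from math import comb

def binom_tail(n, p, k):
    """P[at least k errors] among n independent errors of rate p each."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

n, p = 100, 0.01
print(binom_tail(n, p, 5))    # independent errors: already tiny
print(binom_tail(n, p, 10))   # and it keeps falling fast in k

# Contrast with a fully correlated "power cord" model: with
# probability p ALL n bits fail at once, so P[at least k errors]
# equals p for every k <= n -- no decay in k at all.
```

The independent tail falls off faster than exponentially in k (this is the Chernoff-type behavior), whereas the fully correlated model has a flat tail, which is exactly what dooms error correction.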
Fundamentally this is because the basic interactions in physics involve two or three particles, for instance when a photon bounces off an electron. Here is the standard model describing all known interactions. To get interactions with more particles, you need to go to higher levels of perturbation theory, but as you do so, amplitudes go down exponentially. This is also what we observe in experiments, when we entangle 7 atomic nuclei in NMR or 14 electronic states in trapped ions. Of course, one can imagine dangerously correlated noise models. One such model is that with probability 1−p nothing happens, and with probability p the user trips over the power cord, and all n bits simultaneously fail. But are these more likely in quantum computers? Let’s look at the two ways that Gil claims these are more likely.
Does Entanglement Cause Correlated Errors?
Quantum computing (probably) requires producing large entangled states to be interesting. Gil suggests that this entanglement may be self-limiting, by increasing the rate at which correlated errors occur. This is one of the routes he proposes for generating highly correlated errors in quantum computers.
The key reason I regard this as unlikely is that quantum mechanics is linear, in the same way that stochastic maps act linearly on probability distributions. This means that there is no physical process that can distinguish an arbitrary entangled state from an arbitrary non-entangled state, just as no test can determine whether a probability distribution is correlated, given only a single sample. Specific states can be distinguished, but there is no “entanglement” or “correlation” observable that can be measured. In particular, when “noise” is modeled in the usual manner as a trace-preserving completely-positive linear map, then linearity forbids noise depending on whether the host system is entangled.
More generally, error processes are defined in terms of Kraus operators that do not depend on the state being acted on. The way this works is that the state is a vector, and the Kraus operators are matrices that act on this vector. Each matrix-vector product is a possible outcome, with probability given by its squared length.
For example, suppose our qubit is stored in the electronic state of a trapped ion. Spontaneous emission is a type of error with two Kraus operators: one that sends the excited state to the ground state, and one that merely reduces the amplitude of the excited state. These can be represented by 2×2 matrices, and they act as a physical process on any state, entangled or otherwise. For the Kraus operators to change as a function of entanglement, quantum mechanics would have to become nonlinear, which would mean utter catastrophe, comparable to what would result from being able to act nonlinearly on the probability distribution of the universe.
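For the curious, here is a sketch using the standard textbook amplitude-damping channel as a stand-in for the spontaneous-emission example; the decay probability g is an arbitrary illustrative value. It checks that the two Kraus operators define a trace-preserving map, and that the very same operators apply to half of an entangled pair:

```python
import numpy as np

# Kraus operators for spontaneous emission (amplitude damping)
# with decay probability g.
g = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])  # shrinks the excited amplitude
K1 = np.array([[0, np.sqrt(g)], [0, 0]])      # sends excited -> ground

# Trace preservation: sum_i Ki^dag Ki = I
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

def channel(rho, kraus):
    """Apply the channel rho -> sum_i Ki rho Ki^dag."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# The SAME operators act on half of an entangled Bell pair:
# the channel on qubit A is (Ki tensor I), with no dependence
# on whether the global state happens to be entangled.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)
kraus_A = [np.kron(K, np.eye(2)) for K in (K0, K1)]
out = channel(rho_bell, kraus_A)
assert np.isclose(np.trace(out), 1.0)  # output is still a valid state
```

The point mirrored in code: the operators K0 and K1 are fixed matrices, so by linearity the noise process cannot "look at" the input state and behave differently on entangled inputs.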
How Classical Synchrony Emerges Linearly
But don’t we observe synchronization in real-world systems? Gil gives an example of metronomes: if you have a bunch of mechanical metronomes sitting on a table, then the weak coupling they experience via the common tabletop will cause them to eventually synchronize. There are also quantum analogues in which spin qubits interact with a common bath of bosonic modes. But for the metronomes, what we are really observing is the result of pairwise interactions together with dissipation.
In fact, the process is similar to the error-correction procedure for a repetition code. The differential equation has (schematically) the form

dθ_i/dt = ω_i + ε Σ_j sin(θ_j − θ_i),

where θ_i is the phase of metronome i, ω_i its natural frequency, and ε the weak coupling through the tabletop.
These dynamics will synchronize the metronomes, and thus will generate correlations. But this process has nothing to do with the question of whether the metronomes were correlated in the first place. No physical process will make the coupling term larger if the state is correlated/entangled.
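A quick numerical sketch of this synchronization, using a Kuramoto-style phase model as a stand-in for the metronomes (all parameters are illustrative):

```python
import math, random

# Euler-step simulation of weakly coupled phase oscillators:
#   dtheta_i/dt = omega_i + (eps/N) * sum_j sin(theta_j - theta_i)
random.seed(0)
N, eps, dt = 10, 0.5, 0.01
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
omega = [1.0 + random.uniform(-0.05, 0.05) for _ in range(N)]

def order_parameter(theta):
    """|mean of exp(i*theta)|: near 0 for scattered phases, 1 when synchronized."""
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

r_start = order_parameter(theta)
for _ in range(20000):
    coupling = [eps / N * sum(math.sin(tj - ti) for tj in theta)
                for ti in theta]
    theta = [t + dt * (w + c) for t, w, c in zip(theta, omega, coupling)]
r_end = order_parameter(theta)
print(r_start, r_end)  # weak coupling drives the order parameter toward 1
```

Note that the coupling strength eps is a fixed property of the dynamics; nothing in the model lets it grow because the oscillators happen to be correlated, which is the point of the paragraph above.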
Moreover, these types of correlations can be handled by FTQC. What it means for the error rate to be, say, 10⁻³, is that our control is 1000 times faster/stronger than the error processes. So even if the system would synchronize if left alone, sufficiently fast control can still prevent this by acting before multi-qubit correlations can build up. This argument is developed in detail in a paper by Aharonov, Kitaev and Preskill.
One subtlety concerns the semi-classical approximation. For example, suppose we apply a laser to the trapped ion. If it is the right frequency, the laser will cause X rotations. This can be viewed as applying a controllable Kraus operator. But now the Kraus operator still depends on the state of something; namely, the classical system controlling the laser. This appears nonlinear, but really the true Kraus operators involve two-body interactions between the ion and individual photons in the laser. Each such interaction is very weak, and when we add up the many photons comprising the laser beam, we get what looks like a Kraus operator acting only on the ion, but with parameters that depend on the settings of the laser. There is fundamentally no nonlinearity, and more important, nothing that is even approximately nonlinear in the state of the ion, which is what is relevant to the quantum computer.
Does Computation Cause Correlated Errors?
Another route to correlated errors is by piggy-backing on our computation. For example, suppose that we control our ion-trap quantum computer by blasting it with lasers. The lasers are macroscopic objects, and if there were other ions, or systems that behaved like ions, lurking near the ions we use for qubits, then these would be similarly affected by the lasers. If we were unlucky, these “shadow qubits” might interact with our computational qubits in ways that caused errors, and now these errors would exhibit complicated correlation patterns that would depend on the history of laser pulses we have used. Thus, even though there is no direct way for errors to depend on whether our states are entangled or not, errors could depend on shadow qubits, whose entanglement/correlation could be produced at the same rate as entanglement/correlation in our quantum computer.
This type of correlated noise is possible, and is indeed devastating for some types of error-correction.
But general situations are better behaved. For ion traps, we really have a single ion floating in space, with other ions far away. There are other quantum systems besides the electron, such as the nucleus, or the electrons in the inner shell, or modes of the electromagnetic field. They all look different from the qubits we are using for the computation, and they respond to different frequencies, have different interactions, and so on. The issue is whether these ambient factors can induce correlated errors into the computation qubits.
Shadow Errors Cannot Set the Pace
The errors may be correlated among themselves in various malicious ways. For example, if we move the ion to bring it closer to another ion, then the nucleus (whose spin could be called a shadow qubit) moves right along with the electron that we are using for our computation. But the point is that these effects can be brought within the modeling, and thus dealt with.
In fact, experimental physicists have spent a long time studying decoherence, and a large amount of it is due to what you could call shadow qubits. In many processes, it’s understood how this works, and often there are known methods of reducing decoherence with pulses that decouple shadow and computational qubits. So shadow qubits are something we need to be aware of, but in practice, when people look at implementations, they usually are aware of them.
Could shadow qubits generate the kinds of highly correlated errors that are dangerous for quantum computing? It seems unlikely. In general, they may cause extra two-qubit errors, but for them to track the computation precisely enough to cause multi-qubit errors, their task would be as difficult as that of the computation itself. This may be true of the stock market or international espionage, but atoms and photons are not intelligent or malicious enough to defeat us in this way.
In fact, nobody knows how to design even a hypothetical Hamiltonian for an environment in which single-qubit error rates are below some small constant, but correlated decoherence dooms fault-tolerant quantum computing. A weaker goal would be to design such a Hamiltonian in which FTQC may be possible, but our proofs fail; see Preskill’s comment for thoughts along these lines. One reason for this may be that it is hard to know what a computer is actually doing.
Finally, to bring in a point of my first response, shadow qubits are already present in classical computers. For one instance, 0s and 1s produce different heating patterns in classical computer chips. This heat, in turn, might cause errors in the circuitry. But for the heat to cause bit flips in just the right way to overcome software error correction would require an incredibly unlikely conspiracy of events—a conspiracy that doesn’t have a plausible mechanism, and that isn’t observed in practice.
Brief Rejoinder by Gil Kalai on Correlated Errors and Entanglement
Correlated errors are not caused by entanglement but by the entire computing process, which also leads to the entanglement. Just as in Aram’s example with lasers, when we isolate all other ingredients and examine how the error operation relates to the entanglement of the qubits, the relation is nonlinear; but there is nothing about it that violates QM linearity.
Let me propose a thought experiment. It is so imaginary that I ask you to close your eyes.
Imagine a quantum mechanical world where in order to prepare or emulate a quantum state you need to create a special machine. Each state requires a different kind of machine. Of course, there are some quantum states you can design but cannot prepare at all, and for others it may take decades to build the machine.
The states you build are not perfect. Compared to the ideal pure state you intended to build, they are noisy. The noise depends on the machine. And since the machine is specific to the state that you want to build, there is some systematic relation between the state and the noise. You would like to understand and find the laws for this relation. There is no reason to believe that the noise will be the same for all prepared states (or “linear,” as people refer to such independence). Of course, it is not the state that causes the noise. It is the common properties of all the machines needed to create this specific state that lead to systematic relations between the state and the noise.
And now open your eyes. This is the world we live in.
Possible relations between the created state and the noise are part of our reality. When we build universal quantum computers we will change this reality, but our attempts to build them should be based on this reality. My conjectures attempt to draw such systematic relations between states and noise.
Updates From Comments
- Joe Fitzsimmons noted that particle interactions are known in great detail, and that those found in nature obey a restriction on their Hamiltonians called 2-locality, which limits the scope for correlated noise. Kalai agreed, but noted that the latter applies only to cases where the states along the evolution themselves obey 2-locality.
- Commenter matt pointed out that Kalai’s conjecture 1 would be vacuously true if it concerned the problem of encoding an initially unprotected single qubit, as opposed to a full FTQC scheme with data qubits initialized, operated on, and measured always from within the protection of an error-correction code. Gil clarified that his conjecture states that even with fault-tolerant initialization schemes, the rate of logical errors will remain above some universal constant.
- Back in the first post’s comments, Robert Alicki gave a detailed argument on how the environment could maliciously interfere with a computation. This included remarks on Kraus operators that Aram acknowledged but that do not seem to refute the defense given above. The key point, going further than Aram’s sections above on shadow effects, is that the environment can add friction to the system’s dynamics. This confers stability on classical systems, at the cost of heat dissipation and irreversibility, but quantum systems cannot afford to pay this piper. Aram, however, replied that quantum error correction works as friction in this beneficial manner while avoiding the problems.
Further discussion in this long comment thread led John Preskill to adjudicate some of the contentions based on mathematical modeling. This shows that cooling considerations do not single out difficulties for quantum over classical computation, and that some of Alicki’s objections fail to apply to non-Markovian noise models. He then referenced his first comment, with a 6-page paper giving a less restrictive Hamiltonian noise model under which a threshold theorem conditionally supporting FTQC can be proven. He allowed that it may not be realistic, but it contradicts the objections insofar as they apply to it. The same was observed for a case where a long computation could drive the environmental “bath” into a highly adversarial state.
- In reply to Preskill’s first comment, Cristopher Moore allowed that “issues of correlated noise are subtle and important,” but called out Kalai’s conjectures as lacking sufficient motivation or detail to explain why the errors should conspire against QCs, and as failing to define how noise can be treated as separate from the computation’s physical process when it is really part of that process.
- Meanwhile from the ringside, Boaz Barak asked for an update on what is currently obstructing progress on building QC’s. This drew detailed replies by Joe and Aram.
- Gil showed how the “roadmap” has expanded as a result of these discussions, and tried to answer Aram’s open problem about constructing even a single physical system where (some of) his conjectures hold. Although the initial attempt was headed off by Matt, this led to interesting further exchanges about possible concrete physical tests in that thread, and a separate discussion beginning here.
To you, dear reader, what appears to be the core of the dispute? Is there enough specific mathematical modeling of how errors could correlate in harmful ways? Or do the high-level arguments by Aram and others suffice to show that quantum computations themselves—in at least some reasonable systems supporting them—can out-pace any special error-causing effects, so that the standard fault-tolerance conditions, which assume low dependence among errors, confer enough protection?