
Flying Machines of the 21st Century?

February 6, 2012


First of three responses by Aram Harrow

Dave Bacon began the blog The Quantum Pontiff in September 2003. Thus he was among the earliest voices promoting the theory of quantum computation, and explaining it brilliantly in ways non-experts can understand. He now works at Google in the Seattle area, while his blog is staffed by “A College of Quantum Cardinals”: Charlie Bennett, Steve Flammia, and our second debate participant, Aram Harrow.

Today Aram begins a three-part rebuttal to Gil Kalai’s post with conjectures about entangled noise as an impediment to building quantum computers.

He has chosen Bacon as “patron saint” for this first part. In Bacon’s own three-part catechism on quantum error correction, he wrote that the cardinal question is:

Why is classical computation possible?

To quote Bacon in an earlier post: “When we dig down into the bow[e]ls of our computers we find that in their most basic form, these computers are made up of parts which are noisy or have uncertainties arising from quantum theory.” Thus if Gil’s concern about entangled errors is well-founded, why do they not defeat these parts?

Cristopher Moore was the first of several to say that a better analogy than perpetual motion is heavier-than-air flying machines, which were also thought by some to be infeasible on first principles. This all goes to say that if you take a laptop computer running Red Hat onto a jet whose navigation relies on quantum-level components, nihil obstat: for neither machine did any fundamental obstacle need to be overcome.

This part discusses differences between quantum and classical error correction. One key difference is that qubits can suffer “de-phasing” noise which effectively turns them into classical bits. Bacon answered Robert Alicki, the “patron” for Gil’s post, on this point before, but last year noted his own “de-phasing” by leaving quantum computing research to work in software engineering. He still occasionally posts as “Pontifex Praetorium” on his old blog, but more often can be found at his new blog, Pontiff++. Let us give the floor again to our debaters.

First Response by Aram Harrow

There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling and interacting large numbers of qubits. We may find it technically possible to do this, but too costly to justify the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer.

My defense of quantum computing, unlike Gil’s critique of it, rests not so much on original thinking as on the consensus picture from the field of quantum information. Other quantum bloggers, such as Scott Aaronson and Dave Bacon, have argued similar points (see also the excellent comment discussion on this series’ opening post), but I am not explicitly speaking for anyone else. My main contribution is in specifically responding to Gil’s conjectures.

For background, I like Daniel Gottesman’s explications of QECC (quantum error correcting codes) and FTQC (fault-tolerant quantum computing). Gil has advanced conjectures to argue that FTQC will fail in real systems, either because errors follow the computation and become maliciously correlated with the error-correction, or because they correlate for other intrinsic reasons. By all means, see his original post, his paper, or his more technical paper.

My response is in three parts—only the first part appears in this post:

  1. The classical fault-tolerance test.
  2. Nature is subtle, but not malicious.
  3. Throwing (imaginary) money at the problem.

The Classical Fault-Tolerance Test

If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing.

At first glance, this seems absurd. Classical computing seems so much easier! The conventional wisdom on classical fault-tolerance is that von Neumann and others invented the theory back when classical computers used relays, vacuum tubes and other unreliable components, but now the bit error rates are so low that we don’t need to use their awkward circuits for iterated majority voting.

In fact, every bi-stable flip-flop in a classical computer is implementing a {1\rightarrow n} repetition code, where {n} is comparable to the number of electrons flowing through a single wire. Viewed through this lens, a classical computer is already implementing all of the principles of fault-tolerant computing: it keeps data encoded at all times, never fully decoding (which would correspond to storing a bit in a single electron), and performs gates directly on encoded data. It is also constantly error-correcting, using the non-linearity of the transistors to perform de facto majority voting, and doing so at a rate significantly faster than the GHz clock speed (in other words, classical computers spend most of their “time” correcting errors).
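
For concreteness, here is a minimal Monte Carlo sketch of this point (an illustration of my own in plain Python, not anything from the fault-tolerance literature): a {1\rightarrow n} repetition code under independent bit-flips at rate {p}, decoded by majority vote.

    import random

    def logical_error_rate(p, n, trials=100_000):
        """Estimate how often majority voting over n noisy copies returns the wrong bit."""
        errors = 0
        for _ in range(trials):
            # Encode 0 as n identical copies; flip each copy independently with probability p.
            flips = sum(1 for _ in range(n) if random.random() < p)
            if flips > n // 2:  # the majority of copies is corrupted
                errors += 1
        return errors / trials

    p = 0.01
    for n in (1, 3, 11, 101):
        print(n, logical_error_rate(p, n))

With {p} = 0.01 the logical error rate plunges as {n} grows (for the larger values it simply prints 0.0 at this sample size), which is the sense in which fatter wires, meaning more electrons per bit, buy reliability essentially for free.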

Fault-tolerant quantum computing follows all of these principles as well, with one big exception: there is no quantum analogue of the repetition code. (Why not? The natural candidate of {|\psi\rangle \mapsto |\psi\rangle\otimes |\psi\rangle\otimes |\psi\rangle} not only suffers the flaws inherent to analog computing, but also requires nonlinear encoding and decoding, violating the linearity of quantum mechanics; this is the content of the no-cloning theorem.) Instead, quantum errors come in two types:

  • X, or bit-flip, errors that exchange {|0\rangle} and {|1\rangle}; and
  • Z, or phase, errors that send {|1\rangle} to {-|1\rangle}, while leaving {|0\rangle} unchanged.

Classical computers of course have only X errors. Repetition codes are used everywhere in them, and these codes self-correct much more quickly than the gate time for logical bits. To improve reliability in a classical computer, it’s sufficient just to make the wires and insulators fatter, or the voltages higher. In quantum computers the lack of a repetition code is a big deal. It not only complicates achieving reliability, but also forces us to replace the repetition code with a larger error-correcting code, which is not a homogeneous collection of qubits but a highly structured object requiring active error correction.
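
The distinction between the two error types is easy to see in a few lines of numpy; the following sketch (my illustration, assuming numpy is available) just applies the standard Pauli matrices to the basis states and to the superposition {|+\rangle = (|0\rangle+|1\rangle)/\sqrt{2}}.

    import numpy as np

    zero = np.array([1, 0], dtype=complex)   # |0>
    one  = np.array([0, 1], dtype=complex)   # |1>
    plus = (zero + one) / np.sqrt(2)         # |+>

    X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip error
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase-flip error

    print(X @ zero)  # |1>: an X error is the familiar classical bit flip
    print(Z @ zero)  # |0> unchanged: a classical bit never notices a Z error
    print(Z @ plus)  # |->: a superposition does notice it, and that is what must be protected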

How a QECC Copes

The natural quantum version of a repetition code is the linear map sending {a|0\rangle + b|1\rangle \mapsto a|000\rangle + b|111\rangle}. This would protect against X errors (error rate {p} would become {3p^2(1-p)+p^3}, roughly {3p^2}) while increasing the vulnerability to Z errors (error rate {p} would become {3p(1-p)^2+p^3}, roughly {3p}). A dual version would protect against Z errors while increasing vulnerability to X errors. Only by combining these do we obtain codes such as the 9-qubit Shor code that can correct any single-qubit error, thus transforming error rate {p} on the physical qubits to error rate {O(p^2)} on the encoded qubit.
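
A quick numerical check of those two trade-offs (my own back-of-the-envelope, exact under independent single-qubit errors at rate {p}):

    def logical_x_rate(p):
        # The 3-qubit bit-flip code fails against X when 2 or 3 of the qubits flip.
        return 3 * p**2 * (1 - p) + p**3

    def logical_z_rate(p):
        # A Z error on any single qubit already flips the encoded phase,
        # so the code fails whenever an odd number of qubits suffers a Z error.
        return 3 * p * (1 - p)**2 + p**3

    for p in (0.001, 0.01, 0.1):
        print(p, logical_x_rate(p), logical_z_rate(p))

X errors come out roughly {3p^2} and Z errors roughly {3p}, which is exactly the trade that combining the code with its dual, as in the Shor code, removes.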

By encoding each of these 9 qubits in 9 more, we get an 81-qubit code whose error rate is {O(p^4)}. And if {p} is a small enough constant (like 0.3), then by iterating, we can get error rate {\epsilon} with poly(log({1/\epsilon})) qubits, albeit with some ugly constants. If our decoding is also subject to error, as in FTQC, then we need {p} to be smaller, usually {10^{-6}} to {10^{-2}}, depending on the model. But the same threshold phenomenon occurs, assuming independent errors.
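
The threshold phenomenon is already visible in the bare recursion, before any fault-tolerance machinery. Here is a toy iteration of the level-by-level map {p \mapsto c p^2} (my sketch; the constant {c = 10} is only an illustrative stand-in, not the combinatorial constant of any particular code):

    def concatenated_rates(p, c=10.0, levels=5):
        """Iterate the per-level map p -> c * p**2 that code concatenation gives."""
        rates = [p]
        for _ in range(levels):
            p = c * p * p
            rates.append(p)
        return rates

    print(concatenated_rates(0.05))  # below the threshold 1/c = 0.1: rates collapse doubly exponentially
    print(concatenated_rates(0.20))  # above it: the recursion blows up
    # (Values above 1 are no longer probabilities; they simply signal that the encoding has failed.)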

(An aside about passive error-correction. We don’t know how to do it in fewer than 4 dimensions. See arXiv:1103.1885 for an argument that passive self-correction in 3-d cannot use a scale-invariant code (such as something repetition-like); on the other hand, at least one 3-d system is known with some moderate self-correcting properties. Known no-go theorems in 2-d are stronger, and in 1-d even passive classical memory is ruled out, although examples such as Toom’s rule almost qualify.)

On the other hand, the lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers, but rather explain the enormous difference in the practical difficulty of constructing quantum and classical computers. In other words, the differences between classical and quantum computing mean a large difference in practical difficulty, but could not lead to one being possible and the other fundamentally impossible.

Where is the Objection Strictly Quantum?

The first reason I’m skeptical about Gil’s conjectures is that they don’t appear to make use of these two differences between classical and quantum computing. Rather, Gil questions the assumption that errors are independent. But if highly correlated errors routinely struck computing devices, then they would be a problem for classical computers as well.

Quantum mechanics describes everything, including classical computers. If quantum computers suffer massively correlated errors with non-negligible probability, then so must classical computers, be they Pentiums or abacuses (or DNA, or human memory).

If electrons hop from wire to wire not one-by-one, but a trillion at a time, then those correlated bit-flip errors would defeat a classical repetition code, just like they’d defeat various quantum coding schemes. To distinguish these cases, you would need Z errors (the ones that only quantum computers care about) to be highly correlated, while X errors are not.

It is important to separate the issue of correlation from the issue about single-qubit error rate. If Z errors occur at rate {p_Z} and X errors occur at rate {p_X}, then the threshold theorem relies on the probability of {k} simultaneous Z (resp. X) errors decaying like {\tilde{p}_Z^k} (resp. {\tilde p_X^k}). It’s not crucial for fault-tolerance that {\tilde p_X=p_X} or {\tilde p_Z=p_Z}, only that {\tilde p_X, \tilde p_Z} are both small. In experiments, we observe that {p_X} is often much smaller than {p_Z}, and that {p_X \approx \tilde p_X, p_Z\approx \tilde p_Z} (up to our abilities to measure this). The former fact means that superficially it seems harder to protect quantum data, since the first thing we notice is that nasty high {p_Z} rate. But {p_Z} can with effort be lowered, and what ultimately matters is that {\tilde p_X,\tilde p_Z} are sufficiently small.
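
To see why the decay in {k} is the crux, compare independent noise with a deliberately crude correlated model in which, with some small probability {q}, a single “burst” hits every qubit at once (a toy model of mine, not Gil’s conjectured noise):

    def k_errors_independent(p, k):
        # Probability that k specified qubits all err under independent noise.
        return p ** k

    def k_errors_with_burst(p, q, k):
        # With probability q a global burst hits every qubit; otherwise errors are independent.
        return q + (1 - q) * p ** k

    p, q = 0.01, 1e-4
    for k in (1, 2, 4, 8):
        print(k, k_errors_independent(p, k), k_errors_with_burst(p, q, k))

Independent noise decays like {p^k}, which is what the threshold theorem needs; the burst model flattens out at {q} no matter how large {k} gets, and that is the kind of behavior that would break it.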

To reconcile Gil’s conjectures with classical computing as well as experiment, we would need {p_X}, {p_Z}, and {\tilde p_X} all to be small, but {\tilde p_Z} to be inevitably large. This is like theories of aether, in which dynamics have a preferred direction. How can Gil make this work?

One possibility is a dramatic rewrite of the rules of quantum mechanics, which is not his intent. Another is to argue that in any physical system, however X and Z are defined (because their definitions are as arbitrary as the definitions of the x,y,z coordinate axes), at least one of X or Z will have {\tilde p \gg p}. Basically, Nature would have to maliciously stick her fingers into every computation in unpredictable ways that foil every attempt at quantum computing, but have not yet been detected by exquisitely precise tests of quantum mechanics. I find this unlikely.

{\S}

Short Reply by Gil Kalai

The classical error-correction test is indeed at the heart of matters, and understanding the enormous difference in constructing classical and quantum memory and computing is indeed a fundamental question. I am glad that this is also Aram’s view.

Aram’s objections to my conjectures are based on intuitive grounds rather than formal grounds, since formally my conjectures have no consequences for classical error correction.

In a crucial passage above, Aram wrote: “The lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers… [they] explain the enormous difference in the practical difficulty of constructing quantum and classical computers…, but could not lead to one being possible and the other fundamentally impossible.”

Aram is correct that the repetition code is important. But my conjectures are needed to complete the picture of why it is important. What is it that distinguishes the repetition code from other, more sophisticated and more efficient error-correcting codes? Why is it that in nature and in classical computers, repetition codes are used everywhere? The answer that I propose is that when you consider the repetition code as a quantum code that corrects X-errors, there is no difference between the noise given by the standard noise models and the noise described by Conjecture 1. Therefore, the repetition code allows classical memory from noisy quantum systems even under Conjecture 1.

Conjecture 1 does not involve any preferred direction or symmetry breaking. The conjecture excludes the possibility of quantum error correction codes but allows the possibility of classical error correction via the repetition code.

Incidentally, the lack of a quantum analog of the repetition code and the unique properties of the majority function were the jumping-off points in my first paper on quantum computers in 2005.

Open Problems

Does maintaining a physical difference between the classical model with repetition/copy and the quantum model entail positing an artificial scale limit on the applicability of quantum mechanics itself?

Can one give a physical theory compatible with such inherently-correlated errors? Critical systems experience fluctuations with equal probability at all length scales. Could one of these produce the right correlations in Z errors, while having exponentially small probability of experiencing highly correlated X errors?

Update: We note that John Preskill contributed a comment in the first post, importantly with a link to a 6-page note addressing the noise correlation problem in mathematical detail.

117 Comments
  1. Boaz Barak permalink
    February 6, 2012 10:07 pm

    Aram, to a nonexpert like me, a natural model for quantum noise seems to be that at every step every qubit is measured with some probability p.
    Does such a model make sense? Do error correcting codes help for it?

    • February 6, 2012 10:47 pm

      Yes, this model makes sense, and it describes a large amount of physical noise. Applying a Z gate with probability 50% is equivalent to measuring a qubit, so I talked about Z errors in my post, but that’s equivalent to the way you described it.

      Physicists often talk about T1 and T2 noise (although these are far from the only kinds). T1 is the rate at which qubits relax to thermal equilibrium, and T2 is the rate at which they are effectively measured by the environment. Normally T2 is much faster than T1.

      T1, T2 or any other kind of noise are all handled by FTQC, as long as their strength is below a universal constant (0.01 to 0.000001) and errors are independent.

      • Steve Flammia permalink
        February 7, 2012 2:33 am

        To be a bit more precise, Aram, FTQC is possible as long as the errors aren’t too strongly correlated. See Preskill’s note from the previous thread here, and references therein. (I know you know this, but given the venue, it pays to reiterate that we don’t really need independence of the errors.)

  2. Louise S permalink
    February 7, 2012 12:37 am

    Whose open problems are these? Gil’s, Aram’s, or yours?

    “Can one give a physical theory compatible with such inherently-correlated errors? Critical systems experience fluctuations with equal probability at all length scales. Could one of these produce the right correlations in Z errors, while having exponentially small probability of experiencing highly correlated X errors?”

    Isn’t it Gil’s conjecture that *every* physical system must have correlated errors? Are you saying that we don’t know of *any* system that works this way? Is it really plausible that there are strong correlations in the random errors on Sol and on Alpha Centauri?

    • February 7, 2012 12:43 am

      The first one is mine, trying to address the issue I see raised by Aram’s references to “abacuses”.

      The second, which you quote, is Aram’s as a challenge to Gil to show one physical system that meets (approximately) his postulates.

  3. February 7, 2012 5:37 am

    There are two things I find troubling about Gil’s response.

The first is that the evolution of a classical computation is treated classically, rather than quantum mechanically. This may seem totally reasonable, but in fact it glosses over an important issue. To transition between two computational basis states, there must be some continuous evolution of the system which transitions it between the two states, which necessarily is not diagonal in the computational basis, and hence does not commute with dephasing noise. Importantly, this means that correlated Z noise leads to correlated X errors in the resultant state. Using a classical treatment with infinitely quick gates glosses over this issue.

    The second is the implicit assumption that we don’t know what the noise should look like, and hence are free to conjecture whatever we please about it so long as it allows for classical computation. This, however, is not the case. The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to. The 2-local restriction on Hamiltonians found in nature significantly reduces the scope for correlated noise.

    • Steve Flammia permalink
      February 7, 2012 11:36 am

      Aram touches on exactly your second point at the end, where he mentions the “exquisitely precise tests of quantum mechanics” (the g-2 experiments.) Like you, I am also a bit baffled as to why we should extrapolate from known physics (as I think Gil intends) to the conclusion that noise is always correlated in such a way as to prevent FTQC.

      The locality of physical interactions is something Aram will get to in more detail in an upcoming post.

    • February 7, 2012 12:24 pm

      Joe Fitzsimons writes: There are two things I find troubling about Gil’s response …

      Here Joe utters yet another Great Truth on a GLL topic that already has encompassed many such truths … because these same two aspects are precisely what is exhilarating about Gil’s response.

Specifically, Gil’s conjectures — and more broadly, innumerable lessons-learned in recent decades from QM / QC / QIT research — illuminate for us two Exhilarating Great Truths:

      Exhilarating Great Truth I: The STEM community has not yet achieved a firm grasp of the physical principles even of classical computation (per the survey below).

      Exhilarating Great Truth II: The STEM community has not yet achieved a firm grasp of the algorithmic principles even of classical computation.

      In particular, with regard to EGT-II, much has been conjectured yet little has been proved regarding the limits to simulating quantum physics with classical resources; yet meanwhile empirical quantum simulation capabilities are accelerating at a wonderfully exhilarating Moore’s Law pace, with no foreseeable end to this acceleration yet in view.

      Thus EGT-II amounts to a growing and intrinsically creative resource of broad applicability and tremendous power, coming to us in an era in which the world stands in urgent need of such creative resources.

      There’s nothing troubling about that.   :)

    • February 9, 2012 2:35 pm

Dear Joe, These are two excellent points. I would like you to clarify your first point. In my short response I tried to explain (briefly) how repetition codes + Conjecture 1 suggest an explanation for why classical memory is possible. I can elaborate, but first I would like to better understand the point in your first paragraph. Can you elaborate?

      “The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to. The 2-local restriction on Hamiltonians found in nature significantly reduces the scope for correlated noise.”

Excellent point! I will try to relate to it along with Aram's open question. The 2-local Hamiltonian restriction indeed significantly reduces the scope for correlated noise, but remember that this applies to cases where the states along the evolution also satisfy a 2-local restriction. In any case we will come back to this point in detail.

      • February 13, 2012 4:39 am

        Hi Gil,

        Sorry for the delay in responding. The point I was trying to make in the first paragraph was that even a classical computer is at some fundamental level operating quantum mechanically, and that in passing between computational basis states the system must pass through non-classical states, albeit for very short times. This is simply because the operation causing the transition is not instantaneous and the associated (possibly time dependent) Hamiltonian must not commute with the computational basis. Because of this, it is not clear that the type of noise you conjecture should not affect classical computation. It seems to me that it is necessary to treat even the classical systems quantum mechanically to get an accurate picture of what is and isn’t possible.

      • Gil Kalai permalink
        February 13, 2012 3:13 pm

        Dear Joe, thanks, I agree with you, and I tried to relate to this point in my last remark. It seems like an important issue and it would be nice to explore it more.

  4. matt permalink
    February 7, 2012 8:05 am

    There is one aspect of Gil’s “conjecture 1″ that is not fully clarified. He writes: “The rationale behind Conjecture 1 is that when you implement the decoding from a single qubit to qubits , a noise in the input amounts to having a mixture with undesired code words.” I can take this to mean that Gil conjectures that any physical device which takes a single qubit input in some particular form, say as the state of a spin, encodes it into some larger number of qubits, then waits some length of time, and finally decodes back into the original form, will necessarily have some non-zero probability of an error, no matter how many qubits are used to encode it. If this is what he means, then _no_ researcher in quantum information would disagree with him, because the first steps of the encoding process, where the information is as yet stored in only a few qubits and so is not well-protected, will have some non-zero error rate depending upon physical details of the system. The important claim made by the theory of fault tolerance is that for sufficiently good gates one can encode the single qubit and decode it an arbitrary time later (with the number of qubits used in the code depending upon the time you choose to wait) with the error probability being bounded above by some constant independent of the length of time that you wait.

    Gil’s statement of his conjecture “In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some , independently of the number of qubits used for encoding” actually seems to me to fit with the interpretation above.

    • February 10, 2012 4:56 am

Hi Matt, no, QFT allows one to create encoded protected states where the probability of error in the encoded qubit is as small as we wish. (In fact, exponentially small in the number n of qubits used in the encoding.) My conjecture is that you will always get a substantial mixture with undesired codewords.

      • matt permalink
        February 11, 2012 6:21 pm

Hi Gil, I think you misread my point (or maybe I misread your conjecture…my point was precisely to clarify your conjecture). QFT ("quantum fault tolerance" for those who might otherwise read that as "quantum field theory") allows one indeed to make the probability of error in the encoded qubit as small as desired, as you say. However, if what you are asking (which is what I thought your original conjecture asked about) is whether one can take a given state encoded in just a single qubit (or any O(1) qubits), then encode it into a large number of qubits, and then decode it back into a single qubit, then the answer is that this cannot be made as small as desired, because when the state is still encoded in just a small number of qubits it is not protected by any fault tolerant machinery. This would also be true classically: if the output of a classical circuit is just a single bit, then the probability that you get an error is going to be at least as big as the probability that the very last gate in the circuit makes an error. The protection only holds when the state is encoded in many qubits. This is why I mentioned a long time limit: FT's (classical or quantum FT) claim really is that one can keep the probability of error for such a circuit bounded by some small number no matter how deep the circuit is.

      • February 11, 2012 6:32 pm

        A similar mistake is made in this paper:

        http://arxiv.org/abs/quant-ph/0411008

      • February 11, 2012 11:28 pm

        Hi Matt and Aram,

        My conjecture is about encoding a single qubit with n qubits. Quantum fault tolerance allows one indeed to make the probability of error in the encoded qubit as small as desired, and my conjecture asserts that in every realistic implementation of quantum error correction the error will be significant.

I don't know what led to your interpretation, or whether you or Aram are still uncertain about what Conjecture 1 says. In any case, let's try to make sure that the conjecture is as clear as possible.

An example: Suppose you encode 2 qubits with a toric code which involves n qubits (or 6 qubits with Kitaev's 4D codes). Standard noise models tell you that as the number of qubits used in the encoding grows, the encoded state approaches a single 2-qubit (or 6-qubit) state. My conjecture asserts that we will get a "cloud" of encoded states, and that no matter how many qubits are used the probability of error in the encoded qubit will remain substantial.

  5. anon permalink
    February 7, 2012 8:45 am

    Doesn’t Dave Bacon work at UW, not Google?

    • Steve Flammia permalink
      February 7, 2012 11:28 am

      Dave Bacon left academia and works at Google now.

      • February 18, 2012 5:09 pm

Confirmed :) Also I don't remember ever seeing that picture of me, but it's definitely me because I've got my Berkeley license necklace on. Enjoying the debate from afar, in between cranking out Java code.

      • February 18, 2012 5:14 pm

        Dave, do you ever have days in which all of your code simultaneously develops bugs, thereby foiling your attempts at error correction?

      • February 18, 2012 5:35 pm

        Happened to me in C++ :-). With code from an earlier iteration of our CSE250: Data Structures in C++ course, that is.

        Dave, the pic is from the qubit06 conference. Thanks (to all) for the interest!

      • February 18, 2012 6:15 pm

It is notable that all-or-nothing error correction is fundamentally characteristic of everyday classical informatic technologies like cell-phones, blu-ray discs, magnetic disc drives, and satellite communication channels: either they curate their data with near-perfect fidelity, or they destroy it with near-irreversible finality.

All-or-nothing error correction similarly is ubiquitous in the biological world. For precisely so long as an organism's internal error-correcting mechanisms are working effectively (at the genomic level and at the neural level), we call that organism "alive" … otherwise, not.

        Moreover, this error-corrective distinction between “living” and “nonliving” becomes indistinct for organisms that lack self-repair capability, and thus are dependent upon the repair capabilities of their hosts (e.g., retroviruses). And conversely, it is thought that mammalian genomes are no bigger than they are, precisely because their error correction mechanisms do not scale to genomes larger than $\sim 10^9\ \text{bp}$ with sufficient fidelity to avert fatal genetic drift.

        Two salient points of these observations are: (1) error-correction capability is ubiquitous in classical information-processing systems, and (2) coupled error dynamics and all-or-nothing system failure modes are classically common.

        What lessons-learned for QM/QC/QIT can be derived from this classical ubiquity? Perhaps one broad lesson is that we can advantageously focus our appreciation upon the teachings of QM/QC/QIT regarding noise.

        These reflections render it natural to enquire generally: What new mathematical and/or physical insights regarding noise has the 21st century gained from QM/QC/QIT?

      • February 27, 2012 12:48 am

The first appearance of Conjectures 3 and 4 (as they are called now) was on Dave's "Quantum Pontiff" almost 6 years ago.

        http://dabacon.org/pontiff/?p=1203

John Sidles was the first to comment on them and we had a nice discussion. For my last post there I even prepared a special internet page to represent me and my approach (http://www.ma.huji.ac.il/~kalai/gpont.html); I wondered if anybody ever noticed it.

  6. February 7, 2012 10:45 am

    This is a fun discussion, and so I would like to contribute some fun references.

Wholly for fun, and yet with regard to the common-sense idea that even "exquisitely precise" tests of a (classical or quantum) theory do not imply that theory is true, GLL readers are referred to the wonderful web page that describes Prof. E. T. Hall's Littlemore Clock: the highest-precision (classical) pendulum clock ever built … accurate to about 50 ms/year. Does the existence of the "exquisitely precise" Littlemore clock prove that Newtonian mechanics and Euclidean geometry both are strictly true? :)

    More seriously, a class of dynamical models of computation for which the classical/quantum boundary is particularly well-defined was laid out in John von Neumann’s sole patent “Non-linear capacitance or inductance switching, amplifying and memory organs” (US2815488, 1957). There is an extensive literature on von Neumann’s computing devices, which were commercially marketed as parametrons; among the seminal articles that discuss them are two by Rolf Landauer (“Dissipation and noise immunity in computation and communication”, Nature 1988, and the follow-up article “Dissipation and noise immunity in computation, measurement, and communication”, Journal of Statistical Physics, 1989), which in turn was followed-up by Seth Lloyd’s “Any nonlinear gate, with linear gates, suffices for computation” (Physics Letters A, 1992).

    These works mainly argue for a Harrow-esque point-of-view, in which classical computational processes are specified that are dissipation-free yet robust with respect to noise, thus demonstrating a key similarity of classical computational processes to quantum computational processes.

    It is natural to wonder: What new insights have we gained since 1992 that might alter this conclusion in a more Kalai-esque direction? Without going into detail, a striking limitation of Seth Lloyd’s analysis (and perhaps of von Neumann’s and Landauer’s analyses too) is that the specified nonlinear classical dynamics in general are nonsymplectic, in the sense that nonlinearities are specified that compress state-space volumes.

While such dynamical state-space compression commonly is observed locally (in both classical and quantum systems), we are entirely confident that such compression can never happen globally; this being the substance of the Second Law.

Therefore — in service of Kalai-esque unease with regard to the fundamental feasibility and/or practicability of FTQC — it might be worthwhile casting the classical and quantum arguments of von Neumann, Landauer, and Lloyd into explicitly symplectic form, upon a variety of dynamical state-space geometries, so as to be entirely confident that the fundamental constraints imposed by the Second Law have been fully accounted, even in the large-dimension state-spaces of QC, within which both order and disorder camouflage themselves so effectively.

    • February 7, 2012 11:26 am

      Poking around a bit more, we discover a remarkable instance of inventive synchronicity: John von Neumann’s patent filing (of April 28, 1954) that described parametron-based computers was simultaneous (approximately) with the independent invention of the same computational method, also in 1954, by Japan’s Eiichi Goto, who was a graduate student at the time.

      Moreover, it turns out that another physicist whose work I greatly respect, Stony Brook’s Konstantin K. Likharev, has written extensively on low-dissipation parametron-type computation implemented via Josephson junctions.

      For everyone else whose research my GLL survey (above) unjustly overlooked, I hereby pledge to pay for a pizza-and-beer the next time we meet. :)

  7. February 7, 2012 2:27 pm

Dear all, I am participating these days in a conference in the US Virgin Islands on analysis of Boolean functions, organized by the Simons Foundation. A lot of exciting talks and discussions. One (nonmathematical) thing I heard for the first time: Speaker (Luca) asks: "Do I have five minutes or ten minutes?" Avi: "Whatever comes first". (At the end, Luca took the ten and the five and it was worth it.)

So besides the Boolean functions research and discussions and the recreational activities (and some deadline you don't want to hear about) I will not have much time to actively participate in the discussion before Sunday. There are some excellent points addressed to me in the earlier post and some excellent issues to discuss from Aram's post and the comments here. Of course, I would also like to go over John Preskill's writeup and to relate to some of John's questions.

Let me briefly explain two principles which I set for myself in trying to pursue an explanation within QM for the non-feasibility of quantum fault tolerance. The two principles are:

    1. The explanation should not assume symmetry breaking between X-errors and Z errors.

    2. No geometry will be involved.

    The rationale for these two principles is simple. Both the basic model of quantum computers and the model of noisy quantum computers with standard noise satisfy these principles. Moreover, I felt that geometry should not play a role – the explanation should not depend on the geometry of our quantum computer (or the world), and that a different behavior between X-errors and Z-errors cannot be assumed as part of the answer, although it will be interesting to understand the origin of such asymmetry.

    Indeed all the conjectures that are stated in my papers and in the first post are blind to the difference between bit-flip errors and phase-errors, and do not rely on geometry. Please do look at the conjectures and see it for yourself.

I also want to mention that I do not assume or conjecture that the errors are always correlated. I conjecture that there is a systematic relation between the state of the computer and the errors and all the conjectures I presented in the first post are based on such systematic relation. In particular I conjecture that entangled qubits come with positively correlated errors. I suppose that we will discuss next time whether this relation between noise and signal violates QM's linearity, and how plausible it is. While the conjectures are not in conflict with QM linearity they do not represent a walk in the park either, and as I said they require drastically different noise modeling compared to the standard ones.

In any case, the question Aram raised here about what is the difference between classical error correction and quantum error correction is extremely important regardless of your beliefs, and I am most interested to hear your thoughts about this difference.

    • Anonymous permalink
      February 7, 2012 4:11 pm

      “I conjecture that there is a systematic relation between the state of the computer and the errors and all the conjectures I presented in the first post are based on such systematic relation. In particular I conjecture that entangled qubits come with positively correlated errors.”

      Can you give any example where this happens without violating linearity? It seems by definition that if your Hamiltonian depends on the state, then you have a nonlinear model that contradicts quantum mechanics.

      I don’t see where you are going with this. You are trying to overturn major, well-established physical theories, quantum mechanics and special relativity, but without any experimental evidence to back you up or proposing any new theories to replace them. The main motivation seems to be the belief that factoring should be hard.

      It is depressing to say it, but possibly computer scientists should leave quantum computing to the physicists. Aaronson’s idea that computational complexity conjectures should be granted the status of physical laws has terrible consequences.

      • February 7, 2012 4:27 pm

Anonymous, since we are going to discuss linearity next time let's leave it to next time. (I don't see your point regarding special relativity.) Meanwhile you can share with us your perception of why quantum fault tolerance is so much harder than classical fault tolerance.

      • Wim van Dam permalink
        February 7, 2012 6:15 pm

        “It is depressing to say it, but possibly computer scientists should leave quantum computing to the physicists.”

        Good idea. Will you tell Peter Shor, Michael Ben-Or and Dorit Aharonov this, or shall I do that?

      • February 7, 2012 9:53 pm

        The “converse” of Wim’s comment is that there are also well-known physicists (e.g., Gerard ‘t Hooft) who happily ignore known facts about QM in the course of arguing that quantum computing must be impossible. But I guess some people will seize on any excuse to bash CS…

      • February 7, 2012 11:09 pm

        Scott – It wasn’t Wim. It was the ever-prolific Anonymous.

      • February 8, 2012 6:40 am

        Sorry, the first sentence of my comment was amplifying what Wim said, while the second was referring to the ever-prolific Anonymous.

  8. matt permalink
    February 8, 2012 8:06 am

    Hey, why is my awesomely insightful comment still awaiting moderation? Here it is again for all who didn’t get to see this.

    There is one aspect of Gil’s “conjecture 1″ that is not fully clarified. He writes: “The rationale behind Conjecture 1 is that when you implement the decoding from a single qubit to qubits , a noise in the input amounts to having a mixture with undesired code words.” I can take this to mean that Gil conjectures that any physical device which takes a single qubit input in some particular form, say as the state of a spin, encodes it into some larger number of qubits, then waits some length of time, and finally decodes back into the original form, will necessarily have some non-zero probability of an error, no matter how many qubits are used to encode it. If this is what he means, then _no_ researcher in quantum information would disagree with him, because the first steps of the encoding process, where the information is as yet stored in only a few qubits and so is not well-protected, will have some non-zero error rate depending upon physical details of the system. The important claim made by the theory of fault tolerance is that for sufficiently good gates one can encode the single qubit and decode it an arbitrary time later (with the number of qubits used in the code depending upon the time you choose to wait) with the error probability being bounded above by some constant independent of the length of time that you wait.

    Gil’s statement of his conjecture “In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some , independently of the number of qubits used for encoding” actually seems to me to fit with the interpretation above.

    • February 11, 2012 8:50 pm

      Hi Matt—it was a several-hours delay while I was traveling. As you’ve probably noticed, moderation is only on the first comment; after that you’re in unless you include a lot of spam-filter-catchable stuff.

  9. Boaz Barak permalink
    February 8, 2012 9:57 am

    Hi Aram,
    Do you think you could explain a bit about the reasons why people haven’t been able to build quantum computers with more than a handful of qubits yet?

    • February 8, 2012 11:47 am

      Hi Boaz,

I know you addressed the comment to Aram, but it's an interesting question, and I hope you don't mind me trying to give an answer. Despite what you will often hear in popular accounts (which generally claim decoherence is the limiting factor), there is no one single reason for the current limitations. Different quantum computing implementations are limited by different factors. In some cases noise is indeed the limiting factor, but others are limited by other factors.

For example, optical implementations are partially limited by the fact that it is hard to interact two photons, and so one has to resort to either weak non-linearities (i.e. weak interaction between photons) or the use of measurements to entangle qubits, both of which incur serious overheads, and it is also technically hard to produce single photons on demand. Liquid state NMR is generally limited by cooling. Each of the qubits is very very nearly completely random. One relies on the very large number of spins in the sample, combined with the tiny polarization of the spins, to create a pseudo-pure state (considering only the tiny difference in the populations with different spins). Unfortunately, this difference is an exponentially small fraction of the ensemble, and so eventually you reach a point where the ensemble simply isn't large enough for this fraction to have a detectable signal (and eventually you run out of molecules entirely). For this reason, liquid state NMR is not presently considered scalable. Simple ion traps have difficulty controlling and cooling ions as the number of ions in the trap increases. For that reason in recent years there has been a lot of work on segmented traps, and also on coupling two or more traps optically, but these techniques are not yet as well developed as simpler trap configurations.

      This is only scratching the surface as there are many different implementations, and each has its own limiting factors. The above limitations are by no means definitive, but I list them to give you a feel for the relevant issues in different settings.

      • matt permalink
        February 8, 2012 12:57 pm

That's a great answer. And one of the other answers is that people (with some notable exceptions) haven't seen a reason to build quantum computers with more than a few qubits until they've got the devices with a small number of qubits running at a high fidelity. If you still aren't at threshold yet with 2 qubits then the priority is to improve your experiment there in a more controlled setting before dealing with any other effects introduced by having other qubits nearby.

    • aram permalink
      February 8, 2012 12:05 pm

      Boaz, that’s a good question. (Just noticed Joe answered too; hopefully our replies are somewhat complementary.)

      The consensus is that for a QC (and without radically new architecture ideas) you need something called the DiVincenzo criteria:
      1. identification of well-defined qubits;
      2. reliable state preparation;
      3. low decoherence;
      4. accurate quantum gate operations and
      5. strong quantum measurements.

Often these goals trade off with each other. For example, if you run more wires into the system to perform (4), then these wires will carry noise in them, which makes (3) harder.
      More generally, you want systems to be highly protected from the environment, and yet accessible to the control pulses that you apply. It’s a tough combination.
      NMR on liquids achieves all of these using technology from the 1970s (and in fact old NMR experiments could be reinterpreted as doing controlled-NOTs and SWAPs), but this doesn’t scale because to get more qubits you need to make the molecule bigger, and as you do, the molecules tumble more slowly, and the nuclei become less well protected. In other systems, you have to develop new technologies for things people have never tried before, and that’s hard.

      In a few experiments, like this one: http://arxiv.org/abs/1009.6126
      people have tried to get “many” qubits, which as Joe points out presents some technical challenges that are somewhat orthogonal to the goal of getting good 1 and 2-qubit operations.
So often you see people focusing only on bringing the noise rate down, and not yet trying to get more qubits. If you look at the noise rate, there has been steady progress.
      Check out this awesome plot:

      http://physics.aps.org/articles/v4/103

      The recent results are getting to just about the rate that the FT threshold theorem requires.

      • Boaz Barak permalink
        February 8, 2012 4:15 pm

Thank you so much Joe and Aram – these are very informative answers! (thanks also for the interesting links). From these answers I understand that right now the effort of constructing quantum computers does not seem "stuck" at any concrete bottleneck, but rather has been making significant progress against several difficult challenges.

I guess that if the next decade passes without significant additional progress, then it may make sense to start asking if there are fundamental obstacles to this enterprise, but right now there is no reason to assume this will be the case. Is this a correct interpretation of your answers?

      • aram harrow permalink
        February 8, 2012 6:17 pm

        Boaz: Yeah, pretty much.

        A slightly complicating factor is that there’s a large field of quantum information processing, of which quantum computing and cryptography are just two of the applications. e.g. there’s also precision measurement, and wacky stuff like ultralong-baseline telescopes. So the range of possibilities is not just one bit. But broadly speaking, we’re making fast progress now, and I think we should only start giving up after we stop making progress for several years.

  10. February 8, 2012 9:49 pm

Let me mention some issues that were raised in our discussion. The starred ones are those we plan to discuss, or described as open problems; the double-starred ones are those that came up in our private discussion with Aram, Ken and Dick. Others came up in the discussion here, and a few are my contribution. (I don't suggest discussing all of them.) To make things clearer I wrote "Gil's conjectures" instead of "my conjectures." Sorry if I missed something important.

1*) Classical vs quantum 1 (Aram's first post) Is it possible that quantum error correction fails while classical error correction prevails?

    2*) Classical vs quantum 2 (Aram’s first post) What accounts for the huge practical difference between classical error correction and quantum error correction?

3*) Correlated noise (Aram's second point) Are the conjectures about correlated noise reasonable?

    4*) Entanglement and error-correlation (Aram’s second post) Can there be dependence between the state of the computer and the noise? Does such dependence violate the linearity of quantum mechanics?

5*) Hypothetical QC to the rescue (Aram's third point) Aram's imaginary quantum computers that show that Gil's conjectures cannot be true.

    6*) Intermediate models (Aram’s third post.) Computation based on cluster states, anyons, adiabatic computation and more.

    7*) Noisy nonabelyons and clusters (Gil’s open problem post 1) Does noise described by Conjecture 1 allow universal quantum computing when we consider noisy cluster states and noisy nonabelian anyons?

    8*) Physics examples (Aram’s open problem, his response part 1) Are there any examples from physics supporting Gil’s conjectures?

9) Precise predictions (Aram) Can Gil's conjectures lead to any precise predictions?

10*) Classical/quantum threshold (Ken's open question, Aram's post 1, my understanding): Are the conjectures and this discussion relevant to the threshold transition between classical and quantum behavior?

    11) QM → QC? Does quantum computer skepticism necessarily mean quantum mechanics skepticism? (Chris) Does conjecture C or any conjectured limitation on quantum states contradict QM? (also Chris)

12) What if ¬QEC? Are there any spectacular applications to physics of the failure of quantum error correction? In what ways would the physical world without quantum error correction and universal quantum computers (or before them) be different from a world with quantum error correction? ((Scott) Is the QC-skeptical research just "negative"?)

13) Computational complexity. If quantum error correction fails, what is the computational class needed to describe our physical universe? What are the computational complexity implications of Gil's conjectures? (Boaz and others)

14**) Environment. How to understand the notion of "environment" in controlled quantum evolutions.

    15) Lost in translation? (Related to John Preskill and Robert Alicki comments) Relation between the CS models and Hamiltonian models for quantum computers and noisy quantum computers.

16) Don't we know better? (Joe, John P.) Aren't Gil's conjectures already in conflict with what we know about noise? And also with John Preskill's Hamiltonian description of noise (in his recent manuscript)?

    17) Simulating nature. If universal quantum computers are impossible, does this mean that classical computers can simulate realistic quantum physics?

    18**) Quantum field theory computations. If quantum computers are impossible what does it say about computations in QM that requires exponential resources? (These computations were Feynman’s original motivation for QCs.)

    19) Universe. Is the universe a giant computer? of what kind?

    20) Correlation in classical noisy systems. (*; perhaps) Correlated errors in classical systems. Do the conjectures say or suggest anything about them?

21) Analogies. Analogies between the quantum computer endeavor and: heavier-than-air flight, building classical computers, creating Bose-Einstein states experimentally, controlled fusion, perpetual motion machines, solving equations of degree five, flight to Mars…

    22) Intuition transfer. To what extent does our intuition on the behavior of classical computers extend to quantum computers. (Chris, Boaz)

    23) Geometry. Is the geometry of the quantum computer relevant to the discussion? (Note that in the model of a quantum computer, in the model of a noisy quantum computer, and in my conjectures geometry does not enter.)

    24) Natural high entanglement. (**) Are highly entangled states of the kind we don’t see even in quantum computers represented in the universe?

25) Rate of errors. (Boaz) Why not simply conjecture that the rate of noise will be high? What can we say about the rate of noise?

    • February 28, 2012 11:56 am

      Let me also add another item to the list.

26) Thermodynamics. The study of noise and decoherence, which is at the heart of the issue, is connected to thermodynamics. Robert Alicki expressed his belief that the impossibility of fault-tolerant quantum computation should follow from the existing, perhaps refined, laws of thermodynamics. John Sidles mentioned the connection with the Onsager regression hypothesis.

      • February 28, 2012 4:19 pm

Traditional thermodynamics applies to equilibrium systems, which are not sufficient for studying error correction (systems in equilibrium cannot even store classical data; see http://www.de.ufpe.br/~toom/others-articles/engmat/BEN-GRI.pdf for more on this). Further, not all systems are in thermodynamic equilibrium (one need only look around to convince yourself that we have not yet reached a heat-death universe). So this means that one needs to study the thermodynamics of out-of-equilibrium systems. In that case there are metastable as well as perfectly stable classical memories (2D Ising being an example of the first and the Gacs CA an example of the second). For storage of quantum information, it certainly seems like the 4D toric code is an example of the first, and spatially local QECC is an example of the second. The thermodynamics and out-of-equilibrium dynamics of these systems seem well studied, for specific noise models. If you're going to take a crack at breaking quantum fault tolerance, I think pinning your hope on a more refined thermodynamics is a bit of a longshot, and really it would have to be tied to the specific physics of the system. So it seems to me that your more traditional argument, Gil, is more likely to succeed than some magical development in thermodynamics (I'm not downplaying Alicki's approach: I just don't think his approach is likely to be a more refined thermodynamics, except in the sense that the dynamics of systems with different noise models behave differently. This would then make the refined thermodynamics not universal, and then the question is experimental: what is our actual noise model?)

      • February 28, 2012 4:47 pm

Dear Dave, thanks for the comment. I didn't add any new argument. I tried to gather the issues that were raised by me and by other people in the discussion into a list; the list contained 25 items, and I forgot thermodynamics, so I added it now as the 26th item. (I remember several discussions about thermodynamics and QC on the Quantum Pontiff.) I do have some comments on your comment but I will make them later in the most recent thread.

      • February 28, 2012 5:27 pm

        I would like to extend Dave’s post with two recommendations and one meditation:

Recommendation #1: If you are in downtown Seattle, and enter the commercial building visible on Google Earth at 47.607754° N, 122.338396° W (100 m west of Benaroya Hall), and ascend to the 17th floor ( … not one floor more … not one floor less), then you will arrive at a beautiful, uncrowded, open-to-the-public zen garden, with sweeping views of the Puget Sound and the Olympic Mountains … and also a cafeteria and a high-speed internet connection.

        Best … Seattle … math-doing  … environment … ever! For which guidance, GLL’s appreciation and thanks go to you, daughter Lane! :)

        Recommendation #2: For guidance in simulating quantum systems with due respect for thermodynamical “goodness”, a concise-yet-clear reference is R. K. P. Zia, Edward Redish, and Susan McKay’s “Making sense of the Legendre transform” (arXiv:0806.1147 and Am J Phys, 2009). This article’s mathematically natural and duality-centric exposition of the foundations of thermodynamics is recommended to anyone who struggled to come to terms with thermodynamic potentials (students in particular).

Meditation: To the extent that there are any fundamental difficulties associated to thermodynamic descriptions of FTQC along the lines of the Zia/Redish/McKay formalism, they plausibly are associated with the well-known practical difficulties that are associated to "vacua", by which is meant dynamical baths \mathcal{B} that:

        (1) are coupled to qubits, in a thermodynamic limit in which
        (2) \dim \mathcal{B} \to \infty, and simultaneously
(3) \text{temperature of } \mathcal{B} \to 0^{\pm}, while
        (4) qubit relaxation rates remain constant

Vacua enter idiosyncratically in FTQC (as they do not in ordinary thermodynamics) via the couplings that are associated to unit-efficiency photon emission (i.e., \text{temperature of } \mathcal{B} \to 0^{-}) and detection (i.e., \text{temperature of } \mathcal{B} \to 0^{+}) … all Lindbladian descriptions of qubit measurement being equivalent to these.

        It may or may not be the case that FTQC’s peculiar vacua pose fundamental issues in physics or mathematics, but for sure, the practical engineering of these vacua poses considerable practical challenges. Perhaps in concentrating on the latter class of practical challenges, we may gain insight into the former class of fundamental questions.

    • March 10, 2012 12:36 pm

      Recent posts and comments threads added a few more items to the list.

      27) Numerics. (Suggested by John Sidles). Are there numerical experiments relevant to the debate, to the different suggested noise models, and to Gil’s conjectures?

      28) Quantum-like evolutions on non-Hilbert spaces (Suggested by John Sidles). Models of quantum evolutions where the Hilbert space is replaced by other geometric manifolds.

      This idea comes in three forms:

      a) Regular strength: A computational tool strictly within QM

      b) Maximum-strength: An alternative formulation of QM which is consistent with QM

      c) Prescription-strength: An alternative to QM with nonlinear Schrödinger evolution that can falsify QM.

      29*) Noise? Probability? (Aram’s second thought-experiment). Is noise a fundamental notion of physics or a technical nuisance? A related foundational question: what is the physical interpretation of probability (classical and quantum)?

      30*) Engineering and theory (Ken’s open problem for Aram’s 3rd post). What is the dividing line between an engineering obstacle that cannot be overcome and a theoretical impossibility?

      31*) Censorship. Can one draw a line between quantum states that can be created without (before) universal quantum computers via quantum fault tolerance, and with (after) them?

      32*) Universal James Bond car analogy (Gil’s thought experiment; Aram’s post 2). Is it the case that creating different quantum evolutions and states requires fundamentally different machines?

    • Gil Kalai permalink
      March 19, 2012 1:47 pm

      Recent posts and comments added a few more items to the list

      33) Specific architectures for quantum computers (Aram, others). Can one present specific noise models which show why Gil’s conjectures apply to specific implementations of quantum computers, and to familiar ideas regarding noise for them, like noise from control wires?

      34) Hamiltonian models (John Preskill, Aram). Can one present a Hamiltonian model (or related modelling) of quantum computers that supports Gil’s conjecture?

      35) Locality (Joe, Aram). How can non-local models like those of Gil’s conjectures be reconciled with known physics?

      36) Quantum fault-tolerance in known physics. Is quantum fault-tolerance manifested anywhere in known physical reality?

    • May 20, 2012 4:28 am

      Recent posts and comments added a few more items to the list

      37) Cooling. (Alicki-Preskill discussion and others). Can you always cool individual qubits? Are there types of quantum states that cannot be cooled at all?

      38) MRI and more (John Sidles). To what extent can ideas from fault-tolerant quantum computing apply to improving the sensitivity and resolution of MRI? To electron microscopy?

      39) Deep quantum evolutions, iron (Conjecture C post). Does nature support deep (essentially pure) quantum evolutions? (Namely, evolutions which are not of small bounded depth.) Is the presence of iron in the Earth a demonstration of deep quantum evolutions? of natural quantum fault-tolerance? of computationally superior quantum computation?

      40) Very few qubits. How do quantum computers with 2, 3, 4 and 5 qubits behave?

    • January 14, 2013 3:16 pm

      The last post of the debate and a few later discussions added (or reminded me about) a few more items to the list:

      41. Chaos and computation. What is the relation of chaotic classical dynamics to computation and computational complexity? What is the quantum analog of chaotic classical dynamics, and what are the connections with open quantum systems and with quantum computation?

      42. Smoothed Lindblad evolutions. Can realistic noise and realistic noisy quantum evolutions be described by smoothed Lindblad evolutions? Does such a description cause quantum fault-tolerance to fail? Does it suffice to reduce the computational power all the way to BPP?

      43. Noise and measures of noncommutativity. Can a general lower bound for the magnitude of decoherence in a time interval be based on a noncommutativity measure for the algebra of operators generated by the dynamics in that interval?

      44. Symplectic geometry, quantization and quantum noise. What is the relevance of quantizations of Newtonian mechanics, expressed by symplectic geometry, to quantum noise, and what are the relations of quantum aspects of symplectic geometry with the behavior of open quantum systems and noisy quantum computation? Find relations of Gil’s conjectures with the “unsharpness principle,” and the notions of inherent and systematic noise from symplectic geometry. (See this post for information and links.)

      45. Systematic dependence between the evolution and the noise; symmetries. Is it the case (as Gil’s conjectures posit) that noisy quantum systems are subject to noise which systematically depends on the quantum evolution of the system; that this dependence reflects the dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it performs? In particular, is it the case that the noise of a quantum system also largely inherits the symmetries of the system?

    • John Sidles permalink
      January 14, 2013 4:09 pm

      46. Pullback-smoothed Lindblad evolutions (seeks a natural and concrete unification of 23. Geometry, and 28. Quantum-like evolutions on non-Hilbert spaces, and 31. Censorship, and 42. Smoothed Lindblad evolutions, and 44. Symplectic geometry, quantization and quantum noise). For smoothing concretely defined as a Stratonovich description of Lindblad noise/measurement increments (per Carlton Caves’ on-line internal report Completely positive maps, positive maps, and the Lindblad form) that has been naturally pulled back onto a lower-dimension state-space (and thereby effectively smoothed), upon what state-space geometries (if any), and for what restricted class of noise/measurement processes (if any), is the informatic causality of the immersing Hilbert space rigorously respected by pullback-smoothing onto lower-dimension state-spaces?

    • May 29, 2013 4:53 am

      Some post-debate discussions added (or reminded me about) a couple more items to the list.

      47. Square-root fluctuation. Can we expect Gaussian fluctuation for the number of errors for a noisy system of n interacting particles? Specifically, can we expect that the standard deviation for the number of errors behaves like \sqrt n. What is the situation for digital memory? (In this case, where interactions are weak and unintended,  we can consider both transient errors for the entire memory, and also the microscopic situation for a single bit.)

      48. Uncertainty. What are the connections (if any) of the issue of quantum noise and quantum fault-tolerance with various manifestations of the uncertainty principle in quantum physics?

    • March 27, 2014 7:57 am

      Post-debate discussions added (or reminded me about) a couple more items to the list:

      49) The firewall paradox. The black hole firewall paradox is a recent meeting place for quantum information, quantum computation, and the physics of black holes. Can a principle of “no quantum fault-tolerance” shed light on the “firewall paradox”?

      50) Classical evolutions without fault-tolerance and computation. Describe classical evolutions that do not enable/hide robust information, fault-tolerance and computation.

  11. February 10, 2012 5:14 am

    Aram’s open problem asked if even a single example can be exhibited where my conjectures hold. Joe Fitzsimons’s related remark was that we know what the noise is, so we cannot reinvent it. Here is a proposed case to think about Conjecture 1. We need, though, to replace “qubits” by a much larger Hilbert space representing the state of a single atom.

    Consider Bose-Einstein condensation for (say) a huge number of atoms. We can think about it as a quantum code where the same state of a single atom is encoded in a huge number of atoms. We can ask if Conjecture 1 holds in this case. I suppose that in reality we cannot achieve a pure Bose-Einstein state but rather some noisy, positive-entropy nearby states. Joe, others, do we know what the noise is in this case? Will it be like independent noise acting on each atom separately? Or perhaps it will be closer to what Conjecture 1 predicts: a mixture of different B-E pure states?

    • matt permalink
      February 10, 2012 9:07 am

      Gil, that is not a sensible quantum code to use. It reduces the effect of bit-flip errors, just like a classical repetition code, but greatly increases the effect of phase errors. As for what you get in thermal equilibrium and as shown in experiments, this is a very well-studied problem. You do get a mixture of different B-E pure states (plus some local excitations on top of that), but for reasons that have nothing to do with Conjecture 1 and everything to do with this being a bad quantum code. Noise, even completely uncorrelated noise, will dephase the state in a given basis. The earliest studied example of this that I know (dating back about 50 years!) was Anderson’s original discussion of what happens in a quantum antiferromagnet, such as an antiferromagnet on a cubic lattice with nearest-neighbor interactions. One can prove that the ground state has spin 0, but experimentally we observe instead a breaking of spin rotation symmetry, so that the observed state is not spin 0. This results from two facts. First, there are many states which are almost exactly degenerate with the ground state (the splitting between them is of order 1/(number of atoms in the sample)). Second, in this system, the space spanned by these almost degenerate states does not act as a good quantum code, and so local noise quickly decoheres the state in a certain basis, leading to the observation of spontaneous symmetry breaking. The remarkable thing about topological protection is that there are quantum systems where the ground state subspace does act as a good quantum code (the first “fact” holds but the second “fact” does not) and so this argument does not apply; however, I think it doesn’t make sense to appeal to this B-E or antiferromagnetic system as a claim for correlated noise, because conversely it shows how uncorrelated noise can lead to effects that some people think are due to correlated noise.
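
      A minimal numerical sketch of the trade-off described above, assuming independent single-qubit bit-flip and phase-flip noise on an n-qubit repetition code; the error rates and the function name are purely illustrative:

      from math import comb

      def repetition_code_behavior(n, p_flip, p_phase):
          """Illustrative n-qubit repetition code spanned by |0...0> and |1...1>,
          under independent bit-flip (p_flip) and phase-flip (p_phase) noise."""
          # Majority vote on the logical bit fails only if more than half of the
          # qubits are flipped: an exponentially small binomial tail (n odd here).
          logical_flip = sum(comb(n, k) * p_flip**k * (1 - p_flip)**(n - k)
                             for k in range(n // 2 + 1, n + 1))
          # A phase flip on any single qubit negates the logical off-diagonal term
          # <0...0|rho|1...1>, so independent dephasing multiplies it by
          # (1 - 2*p_phase) per qubit: the logical coherence decays exponentially in n.
          logical_coherence = abs(1 - 2 * p_phase) ** n
          return logical_flip, logical_coherence

      for n in (5, 21, 101):
          flip, coh = repetition_code_behavior(n, p_flip=0.05, p_phase=0.05)
          print(f"n={n:4d}   logical bit-flip ~ {flip:.2e}   logical coherence ~ {coh:.2e}")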

      • February 10, 2012 5:21 pm

        This is very interesting, many thanks, Matt

      • February 12, 2012 12:00 am

        Hi Matt, a few little comments. The B-E example was meant to demonstrate Conjecture 1 and NOT the idea of correlated noise. As I point out in my short reply, the repetition code has the marvelous property that the effect of the noise described by Conjecture 1 is the same as the effect of standard noise.

        According to your interesting description of the B-E example (which was new to me) Conjecture 1, which should apply to bad codes and good codes alike is correct in this case. (Of course I did not mean to offer new predictions on B-E states.) I realize that you think that this has a completely different reason, has “nothing to do” with Conjecture 1, and that you suggest topological protection as a case where Conjecture 1 will fail. That’s interesting.

        Maybe this can be tested already by Abelian anyons which were created experimentally. When you think about them as codes I expect that the states that were created are pretty uncontrolled mixtures of different codewords. What do you expect?

      • Gil Kalai permalink
        February 12, 2012 1:53 pm

        Dear Matt, on second thought, based on your comment, maybe under Conjecture 1 correlated errors occur for every linear quantum code which encodes a single qubit with n qubits (including the repetition code regarded as a quantum code), assuming that indeed all qubits are used in the encoding. Perhaps what is special about the repetition code is that the correlation occurs only in one direction, which allows a stable “classical” direction.

        Of course, Conjecture 1 does not specify the quantum operation representing the noise. You can consider arbitrary unitary evolution leading to the encoding and apply some standard noise of very low rate before applying the evolution. In any case, the (mathematical) question if you always get correlated errors is interesting.

      • matt permalink
        February 12, 2012 3:11 pm

        Hi Gil, regarding the use of Abelian anyons, currently the experiments aren’t far enough along to test such dephasing questions. http://arxiv.org/pdf/1112.3400.pdf is a recent paper with some interesting results. They measure effects consistent with abelian anyon statistics, and observe jumps in phase consistent with changes in the number of anyons in some island. However, in this experiment, topological protection doesn’t imply much because the anyon number does couple to local operators. In order to have a good code using abelian anyons, you need to have large holes in the system; with large holes, it becomes impossible to detect whether or not a hole contains an anyon without braiding another anyon around it, so you can form topologically protected superpositions of states. (To really make this work, you actually need 2 holes, and then form a superposition of different anyon number in the two different holes (such as 0 anyons in hole 1 and 1 anyon in hole 2, superposed with 1 anyon in hole 1 and 0 anyons in hole 2) because you can’t superpose states with different anyon number.) The topological protection in this case is supposed to be exponentially good in the diameter of the holes and the spacing between holes (you need both distances to be large); however, in the experiment of this arxiv paper, the anyons are just pointlike particles rather than trapped in large holes and so there isn’t supposed to be any protection against dephasing between states with different anyon number in different places (because the “diameter of the hole” is O(1)). With non-abelian anyons, it should be possible to have topological protection with only pointlike defects; however, in the paper I mentioned, they argue that coupling between these defects and the edge mode has destroyed the non-abelian features… to make it work you’d need a large enough separation between the defects and the edge. In contrast, the paper http://arxiv.org/abs/0911.0345 claims behavior consistent with non-abelian statistics, but even if that paper is correct it is still some distance away from the point of creating protected superpositions of states which you would like to see. Alas, in these experiments there are effects of impurities, small system sizes, charging effects, etc… which make things messy. Hopefully soon it will be more clear. Perhaps Majorana fermion systems will provide a cleaner test soon.

  12. February 10, 2012 3:03 pm

    Following-up on a (very enjoyable!) chat yesterday evening with Aram (Harrow) and Steve (Flammia), there is now on MathOverflow a newly-posted question titled Harris’ “Algebraic Geometry”, quantum dynamics on varieties, and a $100,000 reward, that seeks to specify concrete math-and-physics foundations for meeting Scott Aaronson’s wonderful challenge:

    “There’s a detailed picture of [what] the world could be like such that QC is possible, but no serious competing picture of what the world could be like such that it isn’t. That situation could change at any time, and we would welcome it as the scientific adventure of our lives.”

    Starting next week — or as soon as it becomes administratively feasible — a substantial MathOverflow reputation bonus will be attached to this MOFL roadmap (motto: “MOFL reputations are beyond price”). Happy STEM treasure-hunting, everyone!   :)

    • February 14, 2012 8:32 am

      In regard to the above QM/QC/QIT roadmap — which might be titled Aaronson’s Search in tribute to Borges’ celebrated story Averroes’s Search — please let me commend the short essay in Section 0.3 of Joseph Landsberg’s new textbook Tensors: Geometry and Applications, titled “Clash of Cultures”.

      By drawing upon various references in the algebraic geometry literature (that were elicited by the above MathOverflow question) I am becoming reasonably hopeful of posting to MathOverflow, in the next week or two, a concrete conjecture in algebraic geometry that will: (1) help reconcile Landsberg’s clash of cultures, (2) concretely further Aaronson’s search, and (3) mathematically illuminate Gil Kalai’s conjectures here on GLL.

      This particular roadmap is suggested by a mathematical aphorism of David Hestenes and Garret Sobczyk: “Geometry without algebra is dumb! Algebra without geometry is blind!”

  13. February 10, 2012 3:21 pm

    Gerhard Paseman posted: I recommend making the title be less crass and have more class by omitting the dollar amount.

    Gerhard’s suggestion was excellent, and so here is a link to the MOFL question thus amended: Harris’ “Algebraic Geometry”, quantum dynamics on varieties, and a more-than-monetary reward. Thanks, Gerhard!   :)

  14. February 13, 2012 1:38 pm

    Those are my principles. (G. Marx)
    Let me draw again my proposed picture for Aram’s important classical fault-tolerance test. Before doing that let me remark that my proposed picture can be supported even under the weak interpretations of my conjectures, so it can coexist with the possibility that quantum computers can be built.

    We start with the (rather informal) strong principle of noise which says that you cannot find an (essentially) noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. I find this principle appealing although admittedly it will require pretty wild (fully within QM) Hamiltonian models (probably wilder than Preskill’s model and Alicki’s models). But this is something we will talk about later.

    Next we move to Conjecture 1: For any encoded qubit we have a substantial mixture of the intended state with a “cloud” of unintended states. (I hope that the discussion with Matt helped to make it clear what the conjecture says.)

    Now we look at the repetition code as a quantum code. For this special code, our non-standard noise (a mixture of undesired codewords) has the same effect in terms of bit-flip errors as the standard noise, and therefore it allows classical error correction. The symmetry breaking and the special “direction” emerge from the repetition code, not from the conjecture.

    This may explain not only why we do not see quantum codes in nature but also why we do not see other, more sophisticated, classical error correction codes in nature.

    The discussion with Matt suggests that perhaps for every quantum code that genuinely uses n qubits to encode a single qubit, Conjecture 1 implies that there will be an effect of error-synchronization. But the repetition code (and perhaps only the repetition code) allows us to extract noiseless classical bits. This is an interesting mathematical question.

    Note that this proposed explanation is geometry-free so it will apply also in four dimensions. Kitaev’s remarkable self-correcting 4D memory will only lead to a cloud of codewords – not a single one.

    • February 13, 2012 3:23 pm

      Gil Kalai asserts: This may explain not only why we do not see quantum codes in nature but also why we do not see other, more sophisticated, classical error correction codes in nature.

      Please let me recommend the review by Sancar et al. “Molecular mechanisms of mammalian DNA repair and the DNA damage checkpoints” (2004, 1,315 citations). The point being, mammalian cells have evolved to incorporate dozens of independent, highly sophisticated error-correction mechanisms, acting upon-and-beyond the “obvious” informatic redundancies that are associated to (1) chromosomal pairing on top of (2) DNA strand pairing on top of (3) the triplet nucleotide code. Heck, Nature exploits for error-correction purposes even the topological information that is encoded in the DNA winding number. “What has been will be again, what has been done will be done again; there is nothing new under the sun.” (Ecclesiastes 1:9, NIV).

    • matt permalink
      February 13, 2012 4:00 pm

      Gil, would you question whether a physical implementation of a classical LDPC code would be able to perform in practice close to what I might predict assuming uncorrelated noise? i.e., I have some physical realization of a communication channel, I make some measurements of its noise properties and succeed in obtaining a rough model of the noise. Then, I choose an LDPC code (or rather, a family of such codes) such that this noise is less than the error threshold for that family. In fact, as a careful engineer, I perhaps allow some safety margin between my measured noise and the threshold for the code. Would you expect that I would be able to see exponential suppression of the error rate in practice or not?

      • February 13, 2012 8:14 pm

        Matt,

        Sure, in such a case you will indeed see exponential suppression of errors.

        The issue I referred to is why we do not see such codes in nature, in situations where encoding processes themselves are noisy.

        Once we have noiseless bits and we can perform noiseless operations on them (and this is possible under my conjectures), we can prepare noiseless complicated error-correcting codes which will be useful against independent noise.

        (Of course, when I say “noiseless” I mean “essentially noiseless,” with an amount of noise that we can keep arbitrarily low.)

      • matt permalink
        February 15, 2012 12:58 pm

        Gil, why is it that you don’t think that noise correlation will affect a “complicated” classical code like the LDPC code? One reasonable answer of course is that LDPC has a linear distance, so even highly correlated noise won’t destroy the information if sufficiently few bits are flipped. But if the key is the distance of the code, note that there are quantum codes also with linear distance.

      • February 15, 2012 11:21 pm

        Matt, if you believe Conjecture 1 (or even its weakest interpretation) then you get the following picture: a) there is no way to use quantum error correction to get protected quantum evolutions (or at least we did not construct or witness such a way yet), b) you can get a protected classical evolution via the repetition code, c) on top of that you can protect it further by classical error correction if you wish.

        Anyway, what is your explanation to the large gap we witness in reality between classical and quantum error correction?

      • matt permalink
        February 16, 2012 8:53 am

        The large gap occurs for three reasons. First, the theoretical threshold for quantum fault tolerance is higher than the threshold for classical fault tolerance. Both are “numbers of order unity”, but the quantum number seems a lot closer to 1, although theoretical advances have dropped this number a lot. Second, the quantum experiments are much harder to do, and so the gates are worse, although getting closer to threshold. I think this isn’t surprising: good classical gates require only technology that we’ve had for a long time and can even be built using gears and steam engines if need be. You wouldn’t get very good miniaturization that way, but at least you could get to clearly demonstrate some kind of classical computation, with good enough gates that fault tolerance isn’t required. Finally, the simplest classical code, the repetition code, is naturally implemented by many systems and is in fact why we see classical reality emerge out of quantum mechanics, so fault-tolerant hardware is easier classically. Conversely, fault-tolerant quantum hardware, such as a topological state, occurs in fewer systems. As a condensed matter physicist, one intuition is that systems just “like to order” classically, and that ordering gets in the way of having a topological phase.

        tl/dr: several qualitative facts make it a lot harder quantum mechanically. Suppose we were debating whether or not it is possible to build a spacecraft that travels at 10 percent of the speed of light. This debate would have three parts: 1) is it possible under the fundamental laws of physics? 2) is there too much space dust to build such a craft, or not enough fuel available to us to propel it? (i.e., things that aren’t fundamental objections, but are perhaps insurmountable features of the place where we live—the analogue of a quantum theorist who believes that there is a threshold but that gates at that accuracy can’t be built practically) 3) is this something that our society will ever do? (is there enough money and will to do this?) Each of those has an analogue in the case of building a quantum computer. My feeling is that a quantum computer is like the problem of, say, building a spacecraft to go to Pluto (hard but possible), while your fear is that it is fundamentally impossible (you object on question 1). The least interesting case is that it is impossible for reasons analogous to question 2.

      • matt permalink
        February 16, 2012 8:55 am

        btw, to clarify: when I say “good classical gates” can be built with old technology, I mean good in the sense of high fidelity. Smaller/faster gates are definitely a technological feat! (and fidelity drops as size drops)

      • matt permalink
        February 16, 2012 9:04 am

        at the risk of talking too much, I guess the history of classical fault tolerance is interesting too. Vacuum tube machines had people running around trying to fix burnt-out tubes, and Feynman has some nice Manhattan project stories of starting small computational jobs to patch up part of a larger job when an error occurred, so fault tolerance at the time had a human component. And people using hand-cranked calculators tell me that they had to be careful not to turn the crank too quickly. So, I guess it is not that wide a window in technological time where we have really high-fidelity classical gates.

      • February 16, 2012 9:40 am

        Matt, your post and mine on the same subject overlapped in time and substantially agreed in content.

        Even today substantially all large-scale classical memories and communication channels are error-corrected. As classical technologies increasingly press against quantum limits to size, speed, and (especially) power-efficiency, it is not surprising that classical error-correction techniques are becoming steadily more sophisticated, deeply nested, and ubiquitous.

        When we apply modern dynamical simulation methods to predict error rates in error-corrected quantum computer memories, we discover ample reason to conceive that Gil Kalai’s conjectures may be correct.

      • matt permalink
        February 16, 2012 2:41 pm

        Indeed John, I used the word “window” for a reason. As classical gates are getting made smaller and larger computations are being done, some form of classical error correction becomes more important.

      • February 16, 2012 3:38 pm

        Yes Matt, your “window” point is very well taken … it was your post that led me to appreciate that pretty much all of today’s (classical) computer memory technologies and (classical) communication channels are either error-corrected, or else they are (necessarily!) power-hungry.

        Soberingly, in human beings, inborn DNA repair-deficiency disorders often are associated with a grave medical prognosis.

        Thus we should none of us overlook the ubiquity, necessity, and physics-level subtlety of classical error-correction dynamics.

    • February 14, 2012 7:03 am

      Well, John and Matt, you are indeed correct: when I wrote “The discussion with Matt suggests that perhaps for every quantum code that genuinely uses n qubits to encode a single qubit, Conjecture 1 implies that there will be an effect of error-synchronization. But the repetition code (and perhaps only the repetition code) allows us to extract noiseless classical bits. This is an interesting mathematical question.”

      the “perhaps only the repetition code” is too good to be true. We can simply start with a repetition code and then apply an additional, as complicated as we wish, classical error correcting code. (Both your examples look essentially like this.) Anyway, conjecture 1 allows classical error correction but not quantum error correction. The repetition code indeed seems to be especially suited for noise obeying Conjecture 1 but it is not clear to me how exactly to formalize it.

      The interesting possibility that came up in the discussion that Conjecture 1 always implies error-synchronization is new to me and we can further think about it. As always, counterexamples are most welcome.

      • February 14, 2012 8:04 am

        Gil, I have a lot of respect for Conjecture 1. For me, this conjecture can be focused equally plausibly upon lower bounds to noise levels associated to synchronized error processes, or alternatively upon upper bounds to measures of spatial localization and isolation, and this was the point of my earlier post regarding the emergence of macroscopic spatial parameters (in transport equations) from microscopic parameters (in qubit Hamiltonians).

        In the hardware design of QCs, a reliable rule-of-thumb is that as detectors approach near-perfect efficiency and near-zero noise, spatially correlated qubit couplings (and thus spatially correlated noise) become stronger-and-stronger, via cavity QED effects. It is natural to ask, does this engineering rule-of-thumb regarding correlated noise reflect a fundamental law of physics regarding the limits to spatial localization?

        Thus, one way to read your Conjecture I (as it seems to me) is as a suggestion that we should examine more closely the nuts-and-bolts processes by which the set of macroscopic transport equations that we collectively regard as comprising “three-dimensional space and one-dimensional time”, emerge naturally from the microscopic quantum structure of qubit couplings.

        Needless to say, a full century of arduous wrestling with the notion of “quantum field theory” — both theoretically and experimentally — tells us that these investigations into the quantum dynamics of spatial localization will not be easy. For me, a great virtue of Kalai Conjecture I in particular, and of modern advances in QM/QC/QIT in general, is that they encourage us to think about these space-time problems in new ways, with new mathematical tools.

  15. February 16, 2012 6:11 pm

    Thanks, John
    I can especially identify with the sentiments of your last two paragraphs (without claiming to understand the technical jargon).

    • February 16, 2012 10:06 pm

      Gil, please let me say that I earnestly regret the use of technical jargon. The real path by which our group’s QM/QC/QIT ideas arise is by the canonical random walk: (1) contemplate any practical problem that pushes against quantum limits to sensitivity, speed, power, size (etc.), (2) analyze the associated dynamics by any computational strategy that works, then (3) gaze slack-jawed at the resulting computational miracles and say to oneself mathematicians surely have a name for this.

      • February 17, 2012 7:57 am

        Whoops, the above link is paywalled (which is deplorable) and not much fun (which is worse). Much more fun are the lectures of last summer’s Topics in Tensors: A Summer School by Shmuel Friedland (Coimbra, 2011).

        It is striking that certain key themes of the Kalai/Harrow GLL/QM/QC/QIT debate were sounded too at these Coimbra Lectures … as reflected in the fun-house mirrors of modern algebraic geometry.

        For example, to appose Dick’s quote of Einstein’s great truth “God is subtle, but He is not malicious” we have Shmuel Friedland’s dual aphorism “Matrices were created by God and tensors by the Devil.”

        Similarly, to appose the Aaronson Quantum Prize of $100,000 generously offered for (essentially) a viably non-Hilbert state-space for quantum dynamics, we have the Allman Salmon Prize, generously offered by the geometer/biologist Elizabeth Allman for (essentially) a better understanding of certain algebraic state-spaces that, by the synchronicity so usual in math-and-science, are reasonable candidates too for the Aaronson Quantum Prize. Moreover, because Prof. Allman teaches at the University of Alaska, her tender of a tasty wild salmon is a prize well worth winning!

        And finally, a virtue shared by this GLL debate and by the Coimbra Lectures is this: challenges are construed broadly rather than narrowly, such that fundamental problems in representation theory are regarded as encompassing quantum dynamics and phylogenetic trees and sparse decompositions and image compression … such that each discipline and each application enjoyably illuminates the others.

  16. August 7, 2012 2:47 pm

    The last post in my debate with Aram will come rather soon. We had an interesting and useful off-line discussion on some of the issues that I raised in my response to Aram’s posts. Here is something that came up: when you have independent noise on qubits, the number of errors decays exponentially above the rate level. This is a crucial property for quantum fault tolerance proofs. Another property of independent noise is that the number of qubit errors is highly concentrated. This property seems to me unrealistic, e.g., in the context of the repetition code and Aram’s microscopic description of digital computers. The exponential decay property which enables FTQC is witnessed also in the more general models of Aharonov-Kitaev-Preskill and Preskill. I expect (or suspect) that for these models the high concentration of the number of errors also holds. (But this requires a proof.) In contrast, for the noise described by Conjecture 1, applied to the repetition code, the number of errors will be “smeared” around the rate. This observation seems to me to give some support to Conjecture 1.

    • August 7, 2012 6:31 pm

      Why should this concentration of errors be unrealistic? In classical computers, the distribution of errors is fairly well understood, and is broken into 1/f noise, unwanted capacitive/inductive couplings, Johnson noise, manufacturing faults, and so on. These are all what I would call realistic, and generally lead to the observed infinitesimal failure rates of the repetition-encoded bits that transistors use. What is the realistic noise source that we have not yet noticed that will derail quantum computers?

      • John Sidles permalink
        August 7, 2012 7:33 pm

        Aram, please let me say that the commitment, and effort, and the imagination that you and Gil both bring to your debate is greatly appreciated, by me and (I’m sure) many folks.

        Aram asks  “What is the realistic noise source that we have not yet noticed that will derail quantum computers?”

        As usual, the history of science does not supply a specific answer, but it does provide a reasonably reliable generic answer: “The best way to discover this mechanism is to attempt to build practical quantum computers.”

        E.g., our theoretical appreciation of plasma instabilities has been hugely enhanced by numerous humility-inducing attempts to build practical fusion reactors. As the Soviet theoretician Lev Artsimovich famously prophesied: “Fusion will be there when society needs it.” Now the Soviet Union is gone, and society urgently needs fusion power … and it is not here. Ouch!

        A little closer to home, please allow me to commend the historical review in Orbach and Stapleton’s article “Electron Spin-Lattice Relaxation” which is a chapter in Electron Paramagnetic Resonance (1972, S. Geschwind, ed.), for its humility-inducing account of the struggle to account for spin-lattice relaxation mechanisms … a multi-decade struggle in which Nature’s ingenuity in creating spin relaxation mechanisms consistently dominated the ability of theorists to foresee those mechanisms.

        As concrete evidence that this struggle continues even to the present era, we see on the arxiv server the very interesting preprint by Lara Faoro, Lev Ioffe, and Alexei Kitaev titled “Dissipationless dynamics of randomly coupled spins at high temperatures” (arXiv:1112.3855v1, 2011).

        Perhaps a good question to ask, therefore, is this one: “How many realistic noise sources that we have not yet noticed will derail quantum computers?” As an (entirely sincere) tribute to the strength and creativity of Alexei Kitaev’s work, one that is informed by the above history, I have posted on Caltech’s new Quantum Frontiers weblog a brief essay to the effect that the best we can hope for is the Kalai-esque yet utopian answer “Plenty!”

      • August 9, 2012 1:37 pm

        Aram, the observed failure rate of digital memory is supported both by the independent error model and its variations, and also by Conjecture 1. The difference is that under the independent model the fraction of faulty qubits is highly concentrated around the expectation. This concentration behaves like one over the square root of the number of microscopic spins, and this is what I regard as unrealistic. Under Conjecture 1 the fraction of errors is not concentrated.

      • August 10, 2012 12:52 am

        What makes you say that O(sqrt(n)) fluctuations are not realistic?
        Here is one simple example:

        A capacitor with capacitance C charged to a constant voltage will have O(C) more electrons on one plate than on the other. But at constant temperature, the fluctuations in this number will be O(sqrt(C)).

        http://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise#Noise_at_very_high_frequencies

        To keep citing wikipedia for everything, this article claims that the shape of the noise is Gaussian:

        http://en.wikipedia.org/wiki/Noise_(electronics)#Thermal_noise

        I promise I didn’t just edit those two articles to say that. :)

        Now it’s your turn: what’s a realistic source of noise that would give rise to Omega(n) fluctuations with exp(-o(n)) probability?
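
        A back-of-the-envelope check of the scaling in the capacitor example, assuming only the equipartition result \langle q^2 \rangle = k_B T C for thermal charge fluctuations; the capacitance values below are illustrative:

        import math

        k_B = 1.380649e-23   # Boltzmann constant, J/K
        e = 1.602176634e-19  # elementary charge, C
        T = 300.0            # room temperature, K

        # The mean charge at a fixed voltage scales like C, while the thermal (kTC)
        # charge fluctuation scales only like sqrt(C).
        for C in (1e-15, 1e-12, 1e-9):        # 1 fF, 1 pF, 1 nF
            dq_rms = math.sqrt(k_B * T * C)   # RMS charge fluctuation, in coulombs
            print(f"C = {C:.0e} F   sigma_q ~ {dq_rms / e:.0f} electrons")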

  17. August 8, 2012 8:24 am

    Dear Aram and John,

    The issue of my comment was not so much to identify new sources of noise (which is quite interesting) but to address the question of correct modelling of the noise we are already familiar with in the context of more general noisy quantum evolutions. Especially in the context of quantum codes, and the familiar repetition code.

    Aram proposed to regard the memory of a digital computer as representing the repetition code which represents ‘0’ by |00000…0> and ‘1’ by |111…1>. The zeros can correspond to microscopic up-spins and the ones to down-spins. The noise causes some spins to flip. When we consider the model of independent errors with probability p, the fraction of flips will be highly concentrated at p. We do not need this high concentration for the purpose of classical error correction (and not even for quantum error correction); it is enough that the probability that a large fraction of spins will be flipped is negligible. But we get this concentration nevertheless.

    Realistically, I would expect that the fraction of flipped spins is not concentrated as the number of spins n grows but rather is smeared around a certain level while maintaining the crucial property that the probability of many flips which change the value of the majority function is negligible.

    I suspect that the sharp concentration witnessed for independent noise also holds for the Aharonov-Kitaev-Preskill model and the recent Preskill Hamiltonian model in the cases where it was shown that FTQC is possible. (A new version of Preskill’s paper was recently arXived.) One nice feature of Preskill’s recent model is that FTQC is possible if a certain norm associated to the noise is bounded. (Preskill proved that this is the case if some correlations are decaying in certain geometric models.) Can we expect that the norm described by Preskill will indeed be bounded in realistic situations? As I said, I expect that this model, like a model of completely independent noise, will predict that the fraction of flipped spins is highly concentrated.

    Conjecture 1 suggests that the noisy repetition code will lead to states which are the encoding of certain “neighborhoods” of |0> and |1> in the Bloch sphere, and this is consistent with the fraction of spin-flips being “smeared” and not concentrated on a single value. Conjecture 1 will still allow us to have robust classical bits from the repetition code but will not allow quantum error correction. (Of course, one can also describe models of noise based on independent noise where the number of errors is not concentrated but events of having many errors are negligible.)

    Three further remarks.

    1) The 1/f noise that Aram mentioned was also mentioned to me by Nadav Katz, and it will be interesting to look at how, and whether, 1/f noise is relevant to our discussion.

    2) Robert Alicki made an interesting remark regarding the boundedness of the norm used by Preskill. He considered more and more refined descriptions of the same familiar quantum evolution. The coarser description can be regarded as a noisy version of the finer description, or as a noisy description of the limit evolution. Alicki expected that the norms associated to the noise, compared across larger and larger refinements, are unbounded.

    3) My last paper on the topic also describes a somewhat different way to think classically about the repetition code, which explains why in certain cases where the analog of Conjecture 1 holds, classical memory and computation are still possible.

    • August 8, 2012 8:26 am

      Let me add relevant links to my comments above: Preskill’s arxived article, Alicki’s comment which followed this comment by Preskill (My description is based on my understanding of some explanations by Robert); 1/f noise (scholarpedia article).

    • John Sidles permalink
      August 9, 2012 9:56 am

      Gil and Aram, with regard to the quantum origins of noise in general, and more specifically with regard to “1/f” noise, and most specifically with regard to “1/f” noise originating in spin dynamics, please let me commend the long arc of literature (6+ decades!) that extends from Hendrik Casimir’s “On Onsager’s principle of microscopic reversibility” (Reviews of Modern Physics, 1945) — a work that I highly commend — through Azriel Genack and Alfred Redfield’s “Theory of nuclear spin diffusion in a spatially varying magnetic field” (Physical Review B, 1975) to Lara Faoro, Lev Ioffe, and Alexei Kitaev’s recent preprint “Dissipationless dynamics of randomly coupled spins at high temperatures” (arXiv:1112.3855v1, 2011).

      This long arc of literature is written entirely in the “classical” dynamical and thermodynamical language of Dirac and Onsager, which today is reflected in eponymous textbooks by (for example) Slichter, Landau and Lifshitz, Nielsen and Chuang, and Kittel (among dozens of excellent works).

      And yet, during the 1950s, and continuing to the present day, a second dynamical language was conceived, emphasizing naturality and geometry, that is associated to names like Élie Cartan, Samuel Eilenberg, Saunders Mac Lane, and Vladimir Arnold.

      Simultaneously, a third dynamical language was conceived, emphasizing the unraveling of open systems into ensembles of dynamical trajectories, that is associated to names like Göran Lindblad and Howard Carmichael.

      And nowadays, what seems to be a fourth dynamical language is being conceived, emphasizing entropy-area properties of dynamical state-spaces, and their intimate relation to the existence of efficient computational simulation algorithms … we don’t yet know whose names will be associated with this new dynamical language!

      It is a straightforward and instructive exercise to set down the natural equations that govern these four points-of-view — a single page suffices — and yet it will be a very long time (decades at a minimum) before we grasp all of their implications … and still longer before any textbook is written that synthesizes these four perspectives.

      Therefore (for me) the most valuable outcome of the Kalai-Harrow debate will not be any narrow determination of “who’s right” — which is an unlikely outcome in any event — but rather an already-gained appreciation that each of the above four languages contributes substantially to our 21st century’s integrated appreciation of quantum dynamics, and therefore, a concrete near-term opportunity is to integrate, synthesize, and apply the dynamical insights that we already have achieved.

      Therefore, with regard to building practical quantum computers (or determining finally that they are infeasible), we need not hurry, because we have *plenty* of useful work to do, and good ideas to pursue, right now. Which is GOOD, eh? :)

    • John Sidles permalink
      August 9, 2012 6:20 pm

      Here is an ad hoc suggestion for a class of noise models that are (1) maximally “toxic” (perhaps) to quantum computation in the sense of Kalai, and yet are (2) realistic (maybe) in the sense of Faoro, Ioffe, and Kitaev’s arXiv:1112.3855v1, and furthermore are (3) theoretically tractable (maybe) in the sense of Aaronson and Arkhipov arXiv:1011.3245v1.

      We’ll start with an error-corrected ion-trap quantum computer, and for noise we’ll shine a (weakly coupled) laser diode on *one* of the ion qubits, that is itself an element of *one* of the computational basis qubits. So far, no problem for error-correction. Now direct the diode output into *one* of the 50 input fibers of a 50×50 Aaronson-Arkhipov (passive) optical coupler, and shine each of the 50 output fibers onto a *different* ion.

      Hmmm … now *this* noise is problematic for any error-correction method (known to me), for the physical reason that we can (in principle) collect all 50 of the outgoing photons in *another* Aaronson-Arkhipov coupler, thus creating a 50-in/50-out photon interferometer that continuously monitors high-order quantum correlations among the ion qubits, and thus acts to quench those self-same correlations. Ouch!

      To make the above noise model more physical, we first dispense with the output interferometer (which is irrelevant by ambiguity of operator-sum representation), and then we dispense with the input photons (substituting unpaired electrons in the walls of the optical cavity holding the ions), and finally we dispense even with the input Aaronson-Arkhipov interferometer, substituting optical modes and/or conduction-band modes of the ion trap.

      Here the point is that models like the above render it non-obvious (to me) that nasty high-order quantum noise mechanisms do not ubiquitously lurk in innocuous-seeming laboratory apparatus. In this regard there is no substitute for going into the laboratory and attempting difficult experiments. In this, everyone agrees!

      • August 10, 2012 1:00 am

        This is a very nice idea, combining experimental falsifiability with theoretical tractability. I need to think more about its realism (especially with “50” replaced by a number that grows with n), but let’s leave this aside for the moment.

        Let me mention, though, that “problematic for any error-correction method” is a tricky thing to claim. Yes, “problematic” in that we cannot trivially say that QECC will work simply by appealing to the minimum-distance property. However, it is not enough for the noise to have a non-negligible probability of being high weight. It’s crucial that the noise operators also are close to logical operators for the code, otherwise they may still be correctable. This is especially true if we get to measure and characterize the noise before choosing our error-correction scheme. For example, we _do_ observe collective dephasing, and this is one of the easiest types of noise to correct for, simply by using a decoherence-free-subspace (cf. Dave Bacon’s dissertation).

      • John Sidles permalink
        August 10, 2012 8:47 am

        Aram, with regard to computation-challenging quantum noise-and-transport models, please don’t look (to me) for any final conclusions very soon.

        In my own reading of the history of science, quantum noise-and-transport problems are remarkable for: (1) the distinguished cadre of physicists who have worked upon these problems, and (2) the stately pace of progress in the past century, and (3) the immense journey still before us, before we arrive at a reasonably comprehensive and natural understanding.

        So the good news is, there are very many, very interesting, very important open problems in quantum noise-and-transport. The sobering news is, few or none of these open problems are easy.

      • John Sidles permalink
        August 10, 2012 9:04 am

        Oh, and perhaps I’d better mention, that physics disciplines like cavity QED in general — and single-photon emission as a canonical example — for me are “transport” disciplines, in the sense that these features are present:

        • the coherent dynamics is Hamiltonian,

        • the decoherent dynamics is Lindbladian,

        • the transport dynamics is “Onsagerian”, and

        • the informatic dynamics is “Shannonian”

        An objection might be, that these elements are present in *most* physics articles, and (explicitly or implicitly) these elements are present in *all* quantum computing articles.

        To which the answer is: yes.

      • John Sidles permalink
        August 12, 2012 8:55 pm

        And finally, the above class of qubit noise models plausibly (and perhaps naturally) violates the k-qubit noise scaling that is postulated in Preskill’s recent “Sufficient condition on noise correlations for scalable quantum computing” (arXiv:1207.6131v1). Most likely, it will require a considerable effort, and a lot of ingenuity, to experimentally verify and theoretically validate our understanding of these noise-related quantum dynamical processes.

      • John Sidles permalink
        August 13, 2012 6:53 am

        And as a coda to the above piece-wise meditation, here is a delightful passage from the introductory chapter to David Deutsch’s book Fabric of Reality (a book that is strictly Harrow-compatible):

        In the future, all explanations will be understood against the backdrop of universality, and every new idea will automatically tend to illuminate not just a particular subject, but, to varying degrees, all subjects.

        By Deutsch’s universality principle we can infer that a mathematical/physical inquiry that casts a sobering light upon scalable quantum computing, almost surely will provide inspiring illumination to other research programs and technologies. We need only look, and we will see. Good!   :)

  18. August 12, 2012 11:33 am

    Aram: “Now it’s your turn: what’s a realistic source of noise that would give rise to Ω(n) fluctuations with exp(-o(n)) probability?”

    It is good that we identified, Aram, a simple clear-cut technical place of disagreement.

    A simple model with Ω(n) fluctuations

    The simplest description of realistic noise with Ω(n) fluctuations is for a bit to be faulty with probability p, where p is not a constant but rather is determined by some probability distribution on [0,1]. I would regard the description where p is a constant as unrealistic. For memories of digital computers it is realistic to assume that p is supported on an interval close to zero (like [0.05, 0.1]). This allows noiseless bits and classical computation.

    (Ω(n) fluctuations may still allow quantum fault-tolerance. If the distribution of p is (entirely) supported on an interval sufficiently near to zero, or decays fast enough, then quantum fault tolerance is possible.)
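
    A minimal Monte Carlo sketch of the contrast between the two pictures; drawing p uniformly from [0.05, 0.1] is just one illustrative way to realize the description above:

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 20000

    for n in (10**3, 10**4, 10**5):
        # Model A: every bit is faulty independently with a fixed p = 0.075, so the
        # number of errors is Binomial(n, p) and its spread is O(sqrt(n)).
        errors_fixed = rng.binomial(n, 0.075, size=trials)
        # Model B: first draw p uniformly from [0.05, 0.1], then flip bits
        # independently with that p; the spread of the error count is Omega(n).
        p_random = rng.uniform(0.05, 0.10, size=trials)
        errors_mixed = rng.binomial(n, p_random)
        print(f"n={n:6d}   std/n, fixed p: {errors_fixed.std() / n:.5f}   "
              f"std/n, random p: {errors_mixed.std() / n:.5f}")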

    Preskill’s model

    Noise models where p is concentrated and the fluctuation is O(\sqrt n) are, in my opinion, unrealistic. (Of course, they can still be useful.) This (I think) applies also to the new general model of Preskill. It would mean that Preskill’s model leaves out a major ingredient of realistic noise. This ingredient, which is abstracted away, is needed to tell whether FTQC is possible or not. This also means that the norm described by Preskill is unbounded in realistic situations.

    Why I regard O(√n) fluctuations as unrealistic for memories of digital computers

    A few sentences on why I regard, for memories of digital computers, a highly concentrated p as unrealistic. A single bit in digital computers can be modeled by a 2-D Ising model. Here at cold “temperatures” T there are two stationary states: in one, a fraction p of the spins are up and a fraction 1-p are down, and it is the other way around in the other. The value p of the fraction of up-spins depends on T and indeed if T is fixed then so is p and we have square-root n fluctuations. Realistically, I would expect that T is not fixed but fluctuates as well. Putting more effort into the engineering you can make T fluctuate less and less, but this will only affect the constants, and the overall fluctuations of the number of up-spins will be linear and not square-root.

    • August 12, 2012 12:27 pm

      Gil, two questions

      1. You say “A single bit in digital computers can be modeled by a 2-D Ising model”. Why would a zero-field Ising model be a good model for the bits in a computer memory?

      2. “The value p of the fraction of up-spins depends on T and indeed if T is fixed then so is p and we have square-root n fluctuations.”
      Why does this lead to square root fluctuations? For the Ising model in the low temperature phase, Martin-Löf has proven that the magnetisation converges to a Gaussian, and so has very nice concentration properties.
      See http://proxy.ub.umu.se:2103/content/n65471h167w25j58/?MUD=MP

      • August 12, 2012 12:51 pm

        Klas, thanks for the comment! But can you please fix the link for those of us not at your university?

      • August 12, 2012 12:54 pm

        Aram, sorry about that, I used the proxy since I logged in from home. Here is the correct link

        http://www.springerlink.com/content/n65471h167w25j58/

      • August 12, 2012 12:59 pm

        Thanks! One more editorial comment: Springer is apparently offering to sell the article for $39.95, while Project Euclid offers CMP articles from 1965-1997 for free.

        http://projecteuclid.org/euclid.cmp/1103859069

        I mention this only in the vain hope that everyone will be inspired to always put all of their articles on arxiv.org.

      • August 12, 2012 2:57 pm

        Klas, regarding the second question, don’t we say the same thing? Note that n is the number of spins and that when T is fixed and we condition on the state representing ‘one’, we have a very nice concentration, which gives that the standard deviation of the number of up-spins is sqrt n. More precisely, conditioned on the majority of spins being up, the distribution of spins is essentially binomial with the probability of a spin being up equal to p. (p depends on T.)

        Regarding the first question: The 2-D Ising model is regarded as a “role model” for a two-state system with spontaneous (or active) error correction. So I think it is common to think of digital computer memory as modeled by it.
        (In some sense Kitaev’s 4D quantum model is an analogue of the Ising 2-D model as the toric code is an analog of the 1D Ising model.)

      • August 13, 2012 2:56 am

        Gil, yes I think we are saying the same thing about the standard Ising model. What I was really wondering is how you get the linear fluctuations from that?

        Do you intend to have an individual, somehow random, T-value for each spin? In that case I don’t see right away why you would see large fluctuations, rather than something close to the Ising model at a different, but fixed, temperature. Here I’m assuming that you do not allow local temperature variations which are large enough to move parts of the system into the high temperature region.

      • August 13, 2012 8:07 am

        Dear Klas, it is reasonable to assume that T (or the average value of T) will fluctuate as a function of time. (It can also change for individual spins.)

      • August 13, 2012 12:16 pm

        Gil, I am not questioning that. I just wonder how large you must make those fluctuations in order to get as large a fluctuation in the magnetization as you propose.

        When one is not close to the phase transition the magnetization M, and so the effective value of p, is not very sensitive to changes in T. For low enough T the change in magnetization would be sub-linear in the change in T, so for small expected T it looks like the fluctuations in M should be small even with a fluctuating T. However closer to the phase transition this could change. So maybe there is a phase transition in terms of the expected value of T here?
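
        For the 2D zero-field Ising model specifically, this point can be made quantitative with Onsager’s exact spontaneous magnetization M(T) = (1 - \sinh(2J/k_B T)^{-4})^{1/8} for T < T_c. A small sketch (in units with J = k_B = 1; the chosen temperatures are illustrative) of how little M moves under a 1% drift in T deep in the ordered phase, and how sharply that changes near T_c:

        import math

        T_c = 2.0 / math.log(1.0 + math.sqrt(2.0))   # ~2.269 in units of J/k_B

        def magnetization(T):
            """Onsager's exact spontaneous magnetization of the 2D Ising model."""
            if T >= T_c:
                return 0.0
            return (1.0 - math.sinh(2.0 / T) ** -4) ** 0.125

        # How much does M move if T drifts upward by 1%?
        for T in (1.0, 1.5, 2.0, 2.2, 2.25):
            dM = magnetization(1.01 * T) - magnetization(T)
            print(f"T = {T:5.2f}   M = {magnetization(T):.4f}   change for +1% in T: {dM:+.5f}")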

  19. August 14, 2012 4:40 pm

    Hi Klas, I prefer to think directly about the fluctuation of the fraction p of up-spins (and not in terms of the parameter T). Suppose that your computer bit is represented by 10^8 spins, and a ‘0’ state means roughly a fraction of 0.01 spins up and a ‘1’ state means roughly 0.99 spins up. Square root fluctuations mean that these values will be extremely stable, so the 0.01 will be 0.01 ± 0.00001 or so.

    Such a strong stability is not needed for computation (which is essentially based on the majority function), which will work perfectly if p is constrained to an interval of length proportional to 0.01. It also looks counter-intuitive to me that such a strong stability is maintained, rather than p fluctuating and oscillating, while being kept in a small interval, conditioned on some physical parameter of the system. But Aram thinks differently. Probably some people who understand the physics of digital memory much better can enlighten us.
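
    For the record, a two-line check of the square-root arithmetic in this example (n = 10^8 spins, p = 0.01, treating the spins as independent):

    import math

    n, p = 10**8, 0.01
    sigma = math.sqrt(n * p * (1 - p))                    # std of the number of up-spins, ~995
    print(f"fractional fluctuation ~ {sigma / n:.1e}")    # ~1.0e-05, i.e. 0.01 +- 0.00001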

Trackbacks

  1. Aram’s rebuttal | The Quantum Pontiff
  2. How Deep is Your Coloring? « Gödel’s Lost Letter and P=NP
  3. Nature Does Not Conspire « Gödel’s Lost Letter and P=NP
  4. Updates, Boolean Functions Conference, and a Surprising Application to Polytope Theory | Combinatorics and more
  5. The Quantum Super-PAC « Gödel’s Lost Letter and P=NP
  6. Assorted Links (Theory) | David Cerezo Sánchez
  7. Flying dragons of the 3rd millennium? | Are You Shura?
  8. The Quantum Fault-Tolerance Debate Updates | Combinatorics and more
  9. Can You Hear the Shape of a Quantum Computer? « Gödel’s Lost Letter and P=NP
  10. Quantum Repetition « Gödel’s Lost Letter and P=NP
  11. Quantum Supremacy or Classical Control? « Gödel’s Lost Letter and P=NP
  12. The Quantum Debate is Over! (and other Updates) | Combinatorics and more
  13. My Quantum Debate with Aram II | Combinatorics and more
