Quantum Supremacy or Classical Control?
Final summations of the Kalai-Harrow debate
William Unruh is a professor in the Theoretical Physics group of the University of British Columbia. He is known for the Unruh effect, which predicts that an accelerating observer will feel radiation that an observer in constant motion will not. This does not contradict relativity, but experimental tests have so far produced debatable results. He also wrote a paper soon after Peter Shor’s famous algorithm appeared in 1994, showing how quantum noise would swamp a direct implementation of it. This was an early spur for quantum error correction, and Unruh is said to have withdrawn his skepticism after the quantum fault-tolerance theorem was established.
Today we conclude our debate between Gil Kalai and Aram Harrow on the scalability of quantum computers. What is our computational world—our mundus computationalis—one with quantum supremacy or classical control?
Unruh’s Science Canada profile (see also this Q&A) includes this quotation in his own voice:
“William Blake said ‘If a fool would persist in his folly, he would become wise.’ I’m grateful that society allows me and other scientists to persist.”
On behalf of Aram we note that Unruh has not persisted in his critical stance—at least the quantum computing notes on Unruh’s group webpage are skepticism-free. On behalf of Gil, we note the value of persisting to make us wiser, besides the possibility of ultimately being right. We have persisted since January, and on behalf of Dick, I (Ken) thank both principals.
Our debate ends with a summation by Gil and then by Aram. The first part of the summation turned into a renewed discussion of Conjecture 1. In this post Gil begins by responding to Aram’s two other detailed defenses in our first round of the debate, posts II and III. He then reviews his original conjectures in light of these responses, our debate’s second round in which one conjecture was partially refuted, and copious back-and-forth discussion in many comments to these posts. Then Aram gives his summation, updating the considerations that answered Unruh’s questions long ago. We conclude with thanks and some “what could be next?” considerations.
Gil Kalai: An overall look at Aram’s posts
I will continue to discuss the arguments that Aram made in his interesting posts. Aram raised important technical and conceptual issues, and some of the conceptual differences between our points of view will certainly stay unsettled for quite a while. However, with the exception of Conjecture C as we discussed earlier, all the technical arguments against my conjectures raised by Aram are either weak to start with or incorrect.
Aram’s classical fault-tolerance test indeed poses a major challenge to quantum information theory, but I regard my explanation that is based on Conjecture 1 not only as adequate but also as advantageous compared to other explanations. Aram’s recent study of Conjecture 1 is useful, but his formulation is incomplete and his interpretation of the conjecture is incorrect. We discussed these matters in a previous post.
The concern that my conjectures are in conflict with the linearity of quantum mechanics is wrong—as exemplified by (among other things) Aram’s own shadow-qubit example. The technical claim from the second post, that pairwise positive correlation is insufficient to imply error-synchronization, is incorrect. Aram’s thought experiments from the third post are interesting but do not support his claim. The specific claims that correlations of the kind predicted by my conjectures are unphysical are not correct, and this general issue represents an important difference on the appropriate modeling of noise. We will discuss some of these matters now.
Correlated Errors and Entanglement
A quick review
Central to my argument is the assertion that noise can be highly correlated and the assertion that errors will be highly correlated when we try to create highly entangled states. Before discussing Aram’s objections to my conjectures I would like to emphasize two important points.
The first is that the ultimate obstacle to universal quantum computers is simply that the rate of noise will be too high. Correlation is an important part of the picture, but the first-order obstacle is the rate of noise in terms of qubit errors. High correlation and many qubit errors go hand in hand because it is reasonable to assume that the rate of noise in terms of trace-distance is stable over time for quantum evolutions, since trace-distance is invariant under unitary operations. But when the noise is very correlated, the rate in terms of qubit errors scales up linearly compared to the rate in terms of trace-distance.
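The invariance claim here is easy to check numerically. The following sketch is my own illustration (not from the debate), assuming NumPy; it verifies that the trace distance between two density matrices is unchanged by conjugation with a random unitary:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(n=2):
    # A random n x n density matrix: positive semidefinite, trace one.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) ||rho - sigma||_1, half the sum of singular values.
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

rho, sigma = random_density(), random_density()

# A random unitary (QR decomposition of a complex Gaussian matrix).
q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

d_before = trace_distance(rho, sigma)
d_after = trace_distance(q @ rho @ q.conj().T, q @ sigma @ q.conj().T)
assert abs(d_before - d_after) < 1e-12  # invariant under unitary evolution
```

The same invariance fails for the number of qubit errors, which is exactly why correlated noise can keep the trace-distance rate stable while the qubit-error rate grows.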
The second point is the distinction between special-purpose devices and general-purpose devices which was discussed in the previous post. I regard my conjectures as describing noisy special-purpose quantum devices, and this description will explain why general-purpose quantum devices cannot be built.
Many of the arguments and intuitions against my conjectural picture of noisy quantum devices are based on taking for granted a scenario of a universal quantum device. For example, Peter Shor commented:
“…the universe needs to know what code you are using in order to foil you. This attributes both more intelligence and more malice to the universe than I am willing to believe that it has.”
This makes sense if the code is created on a universal machine. But if every code (that can be created at all) requires a specific machine, then neither malice nor intelligence on the part of the universe to tell them apart is required.
The main objections to my conjectures
Conjectures 3 and 4 proposed an explicit relation between the intended state and the noise. Conjecture 3 asserts that a pair of entangled qubits is subject to error (information leak) with positive correlation. Conjecture 4 asserts that qubits whose state is very entangled are subject to error-synchronization, namely there is a substantial probability of errors affecting a large fraction of all qubits. In my papers I present a formal mathematical description of Conjectures 3 and 4 and point out that a certain strong form of Conjecture 3 implies Conjecture 4. One idea that was raised in the discussion (especially with commenter matt) is that error-synchronization follows from Conjecture 1, for every quantum error-correction code that genuinely depends on its qubits.
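A toy classical simulation (my own illustration, assuming NumPy) shows the distinction Conjecture 4 points at: two noise models with the same per-qubit error rate, one independent and one fully synchronized, differ dramatically in the probability of errors hitting a large fraction of all qubits at once:

```python
import numpy as np

rng = np.random.default_rng(1)
n_qubits, p, trials = 50, 0.1, 10000

# Independent noise: each qubit errs on its own with probability p.
indep = rng.random((trials, n_qubits)) < p

# Synchronized noise: with probability p the whole register errs at once,
# so the per-qubit (marginal) error rate is still exactly p.
sync = np.repeat(rng.random((trials, 1)) < p, n_qubits, axis=1)

# Only the synchronized model has a substantial probability of errors
# affecting a large fraction (here, more than half) of all qubits at once.
frac_indep = (indep.sum(axis=1) > n_qubits // 2).mean()
frac_sync = (sync.sum(axis=1) > n_qubits // 2).mean()
assert frac_indep < 0.01 and frac_sync > 0.05
```

This is only the combinatorial shadow of the conjecture, of course; the quantum statement concerns correlated information leaks, not classical bit flips.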
The main objections to my conjectures were:
The conjectured properties of noise violate the linearity of quantum mechanics.
To the best of my judgement this is not true. Let me mention three points:

1. The class of Hamiltonian quantum evolutions (namely smoothed Lindblad evolutions) that describe the “bad noise” forms a subclass of all Hamiltonian evolutions of system and environment.

2. Aram described a quantum-theoretic scenario (shadow qubits) that may support the phenomenon I conjecture. The same applies to variations on John Preskill’s models.

3. There are strong reasons to believe that the conjectures do apply in many cases, e.g., for small quantum computers, say those with at most 20 qubits. Why isn’t this a violation of the linearity of quantum mechanics?
My conjectures can be regarded as nonlinear behavior of decoherence that is fully supported by quantum mechanics. The claim regarding QM’s linearity also manifests a confusion between the behavior of special-purpose machines and of general-purpose machines.
The conjectured properties of noise violate “known physics.”
My response to this important point raises the central issue of choosing between two approaches for modeling:
• The first approach is that in modeling open quantum systems we should take “known physics” as part of the noise model.
It turns out that if you impose the “known physics” on the noise, it leads to surprising “unknown physics” that we will be able to create by controlled quantum evolutions via quantum fault-tolerance. Joe Fitzsimons’ 2-locality suggestions as well as John Preskill’s model represent this approach.
• The second approach is to let the “known physics” (both for the noise and for the signal) emerge from the model itself.
My modeling follows the second approach. I expect that my models of noise will confirm these “known physics” insights both for the noise and for the signal. I find the approach appealing since there is no reason to believe that when we have a natural quantum system the laws that describe the interaction of the system with the environment will be different from those that describe the interactions within the system.
Joe Fitzsimons’s 2-locality argument was a specific attempt to explain why my conjectures violate known physics. It turned out, however, that a strict version of it is too strong to be realistic (see this comment by Preskill), while a weak form of 2-locality is too weak to be in conflict with my conjectures. But again, my main objection to Joe’s attempt was different: insights of this kind should emerge from the model rather than being imposed on the model.
While I disagree with these two out-of-hand reasons to reject my conjectures, it is quite possible that my conjectures are simply false. They should be carefully examined on a case-by-case basis. Be that as it may, my conjectures are proposed also as a formal description for non-FT quantum evolutions even in a reality that allows or even demonstrates quantum fault-tolerance.
Towards a theory of non-FT quantum evolution
Switching the Quantum FT option OFF
Suppose that we have universal quantum computers, on every desk and every lap. One way to think about my conjectures is as follows. Let the quantum computers be equipped with a quantum-FT on/off switch, such as noise-cancelling earphones have. My conjectures describe the behavior of noise when the quantum-FT switch is off. Moreover, many of the attempts to build quantum computers or to create experimental quantum evolutions do not apply fault-tolerance schemes to start with, so quantum computers with the quantum-FT turned off should suffice to describe them.
A basic noise example
The most basic example of noise when the quantum FT is switched off is the following.
You have an algorithm described by a unitary operator $U$. Let $E$ be a standard simple noise operation. Consider the noise operation $E' = U E U^{-1}$. In words, this means that the noise at the end of the evolution is like applying $E$ at the beginning and then running the unitary operator noiselessly. My conjectures say that some part of the overall noise behaves just like this example.
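This basic example can be checked directly. The sketch below is my own illustration (assuming NumPy; the Pauli X on one qubit is a stand-in for a “standard simple” error): applying the conjugated noise $E' = U E U^{-1}$ after the program $U$ equals applying $E$ before it.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "program": a random two-qubit unitary U (QR of a complex Gaussian).
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# A standard simple noise operation E: here a Pauli X on the first qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
E = np.kron(X, np.eye(2))

# The conjugated noise operation E' = U E U^{-1}.
E_prime = U @ E @ U.conj().T

# Noise E' at the *end* of the evolution equals noise E at the *beginning*:
assert np.allclose(E_prime @ U, U @ E)
```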
Smoothed Lindblad evolutions
Start with a general (time-dependent) Hamiltonian evolution defined between times $0$ and $1$ on the system and the environment. Let $\mathcal H$ denote the Hilbert space describing the system. The evolution can be described infinitesimally via the sum of two superoperators $A_t$ and $B_t$, where $A_t$ describes a unitary evolution on $\mathcal H$ and $B_t$ describes the “noise.” (I assume that $B_t$ is Lindbladian and corresponds to POVM measurement. But we can consider more general or more restricted scenarios.) Let $U_{s,t}$ be the unitary operator describing the evolution between times $s$ and $t$. Note that unitary operators on $\mathcal H$ induce an action on superoperators.
Let $K$ be a positive kernel defined on $[-1,1]$. (In order to include standard noise we can allow an atom at $0$.) The smoothing operation is obtained by replacing $B_t$ by a weighted average of the 1-parameter family of superoperators of the form $U_{s,t}^{-1} B_s U_{s,t}$ with weight $K(t-s)$. (Here $s$ varies over all times between 0 and 1; it seems important to take times in the future as well as times in the past.) The discrete-time model is especially simple. I propose smoothed-in-time noise of this kind as a formal description of noise “without fault-tolerance”. Namely, this type of approximation to pure evolutions expresses formally the idea that quantum fault-tolerance was not activated and at the same time will not allow it.
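A minimal discrete-time sketch of this smoothing, under stated assumptions of my own (toy Hermitian matrices as stand-ins for Lindblad noise terms, a Gaussian kernel, NumPy), might look like this; note that it averages over future as well as past times, as the text requires:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 6  # number of discrete time steps

def rand_u():
    # Random 2x2 unitary (QR of a complex Gaussian matrix).
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q

def rand_herm():
    # Toy Hermitian "noise generator" -- a stand-in for a Lindblad term.
    m = rng.normal(size=(2, 2))
    return (m + m.T) / 2

U = [rand_u() for _ in range(T)]     # program unitaries U_1 .. U_T
B = [rand_herm() for _ in range(T)]  # per-step noise terms B_1 .. B_T

def U_between(s, t):
    """Unitary carrying the system from step s to step t (s <= t)."""
    out = np.eye(2, dtype=complex)
    for k in range(s, t):
        out = U[k] @ out
    return out

def smoothed_noise(t, width=1.0):
    """Replace B_t by a kernel-weighted average of the conjugated noise
    terms at all times s -- past and future alike."""
    weights = np.array([np.exp(-((t - s) / width) ** 2) for s in range(T)])
    weights /= weights.sum()
    total = np.zeros((2, 2), dtype=complex)
    for s in range(T):
        if s <= t:
            V = U_between(s, t)           # push noise forward from s to t
        else:
            V = U_between(t, s).conj().T  # pull noise back from s to t
        total += weights[s] * (V @ B[s] @ V.conj().T)
    return total

# As the kernel narrows, smoothing reduces to the ordinary noise at time t.
assert np.allclose(smoothed_noise(2, width=1e-3), B[2])
```

The conjugation direction and the exact kernel are modeling choices, not fixed by the text; the point of the sketch is only the structure of the weighted average.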
Other Topics in Brief
The debate touched on many technical and conceptual issues, and I tried to round up many of them in this comment, plus some followups. Let me relate some here:
Hamiltonian models and concentration of the number of errors
In various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. I regard this feature as unrealistic, and believe such models abstract away a crucial aspect of noise that may or may not allow fault tolerance. See this comment and the previous post overall.
Bose-Einstein condensation; superconductivity; teleportation
I proposed to examine Conjecture 1 (adapted) on Bose-Einstein condensation, and discussed it mainly in this thread.
David Deutsch proposed (privately) superconductivity as a counterexample for my strong principle of noise. Peter Shor proposed Kitaev’s teleportationbased quantum computing as a counterexample to my conjectures. (I have some reasons to think that smoothed Lindblad evolutions exclude this type of quantum computing.)
Perturbation methods; renormalization group
Understanding the feasible approximations of pure quantum states and evolutions is of interest not only in the context of controlled quantum evolutions. Non-FT evolutions are relevant for describing a quantum evolution on a small Hilbert space that neglects some of the degrees of freedom of a quantum system.
John Sidles, Aram, Hal Swyers, and other commentators mentioned some connection with perturbation theory, and indeed this is a direction that I find very interesting.
In a private discussion Aram mentioned the renormalization group as an example that there can be an effective theory at some scale (i.e., low energy, or large distances) that results from “integrating out” dynamics at higher energies/shorter distances/higher frequencies/etc. Again, this can be related to questions about modeling noise coming from neglecting internal structures of our qubits (or qudits).
Censorship
My Conjecture C from the first post was refuted by Aram in conjunction with Steve Flammia. We discussed this matter in this post, where we also discussed a related 1980 paper by Anthony Leggett. The post on censorship discusses bounded-depth quantum circuits as “Sure/Shor separators,” an idea that goes back to Unruh. Other alternatives for Conjecture C were offered by John Sidles in this comment based on tensor-rank, and also here, and by me in this comment (based on Pauli expansion).
If small-depth quantum computation is indeed the limit of approximated pure evolutions, this suggests not only that quantum computation offers no computational superiority, but that interesting computation in our world is classically controlled.
Can the computation hide the geometry and physics?
One insight from classical simulation that people tend to take for granted in the quantum case is that computation abstracts away the physics and geometry of the simulated device. I challenge this insight, and our third-round post was devoted to this issue.
Anyons
There are fascinating suggestions for robust-to-noise states that (in principle) can be created experimentally; see this review paper.
The high-level concern is that since the proposed experimental process does not involve FT, the noisy states will satisfy Conjecture 1 and will not exhibit the expected robustness to noise. However, the description of how the stability of certain states increases, such as when two anyons are taken apart, is quite convincing—and it will be worth the effort to try to give a specific physical mechanism that supports my conjectures in this case.
Simulating quantum physics on classical computers
The connections with classical simulations of quantum physics were mentioned quite a few times. A basic question is: what can be said about computations in physics that clearly require an exponential number of steps on a digital computer? Aren’t such computations (which nature apparently carries out routinely) clear evidence for “quantum supremacy”? My guess is that in cases where the computation is intractable, the answer is irrelevant.
The rate of noise
Neither the ordinary models of noise nor my conjectures for non-FT noise give a lower bound on the rate of noise. One proposal in my paper, drawing on work by Sergio Boixo, Lorenza Viola, and Gerardo Ortiz, is:
Conjecture: The rate of noise in a time interval is bounded below by a measure of noncommutativity of the unitary evolutions in this interval.
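The conjecture does not fix what “measure of noncommutativity” means. As a purely hypothetical stand-in (my own, not from the paper), one could take the largest operator norm of a pairwise commutator among the unitaries applied in the interval, which vanishes exactly when they all commute:

```python
import numpy as np

def noncommutativity(unitaries):
    """A hypothetical measure: the largest operator norm of a pairwise
    commutator [U, V] = UV - VU among the given unitaries."""
    worst = 0.0
    for i, u in enumerate(unitaries):
        for v in unitaries[i + 1:]:
            worst = max(worst, np.linalg.norm(u @ v - v @ u, 2))
    return worst

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

assert noncommutativity([X, X]) == 0.0  # commuting gates: no lower bound
assert noncommutativity([X, Z]) > 0.0   # anticommuting Paulis: positive
```

Under such a measure the conjecture would put no lower bound on the noise of a classical (commuting) computation, which is the behavior one would want.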
Nonflat substitutes for Hilbert spaces
John Sidles offered, over many comments, a certain geometric theory of quantum evolutions on nonflat spaces, and connected it with various aspects of our debate. This is certainly very interesting.
Non FT classical evolutions
It can be of interest (and perhaps harder compared to the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.
Aram Harrow: Rebuttal and Summation
Quantum mechanics has been a counterintuitive theory from its birth, and the theory of quantum information in a sense aims to distill the strangest aspects of quantum mechanics. So it is not surprising that they both have faced a chorus of skepticism from the start, including echoes of today’s debate. The problem with the skeptic’s task—from the original EPR paper to Gil’s contemporary objections—is that their arguments entail an alteration of this supremely successful theory. Not only does the evidence unequivocally support quantum mechanics, there is not even any candidate theory that could replace it, waiting in the wings for experimental confirmation.
Gil’s conjectures, then, are really preconjectures, meaning that they are neither purely mathematical conjectures nor specific claims related to observable phenomena, but rather organizing principles around which we might look for such claims. He conjectures that a consistent theory can be found that will encompass both the impossibility of quantum computers and the last century of experiment supporting quantum mechanics. This is a worthy goal and a fascinating intellectual challenge. But it is at least very hard, and is probably impossible, as evidenced in part by the difficulty that Gil faces in making precise statements about his conjectures.
More concretely, he can make precise statements about his conclusions, such as “realworld quantum computers experience noise that makes them classically simulable,” but not about any microscopic claim about physics, even in any concrete model, that could plausibly lead to these conclusions. Similarly, his conjectures have difficulty avoiding reference to the “intended” operation of a quantum device, although any physical principle we discover is unlikely to depend on our intentions.
Of course, Gil does suggest a number of physical mechanisms that are candidates for derailing quantum computing, or at least refuting the assumptions of the FT threshold theorem. He discusses self-synchronizing metronomes, smoothed Lindblad evolutions, constant-size fluctuations in temperature, special-purpose quantum computers introducing correlations in errors, and other ideas. Without at this point engaging with the details, I believe that each of these mechanisms suffers from some combination of failing to rule out quantum computing, inconsistency with existing observations (e.g., working classical computers), or imprecise predictions.
Why I believe quantum computing should be possible
I have tried to explain why I don’t believe Gil’s conjectures. But it is further important to ask why I, or anyone else, should believe that quantum computing is a plausible technological possibility.
Part of it, as I’ve said, is believing in quantum mechanics itself, with its exponentially large Hilbert spaces, its entanglement, its interference, and all its other counterintuitive features. Quantum mechanics did not succeed by being a more intellectually appealing theory than its competitors. On the contrary, the route from Schrödinger’s equation to EPR to Bell’s theorem to secure quantum communication took over 50 years, even though today it can be covered in a few hours of lectures. This delay is basically because many of the principals—most notably Albert Einstein—didn’t want quantum mechanics to be true. Instead, many preferred to think about interpretations such as Bohmian mechanics that hold close to older intuitions, but are dogs’ breakfasts to work with.
Quantum mechanics succeeded by simply giving the right answer, over and over again, while no other theory could come close. And now, with the modern quantuminformation perspective, many of the conceptual difficulties of the theory are being tamed. This textbook is a great example of the new pedagogical approach.
But couldn’t we have quantum mechanics without quantum computing? Isn’t that what this debate is all about? After all, as John Sidles has mentioned several times, many perturbation expansions are hard to compute, or not even convergent, and these can give rise to complicated manybody terms even when the original Hamiltonian is simple. Nothing about Schrödinger’s equation guarantees that we’ll ever see anything resembling textbook quantum mechanics in the real world.
Let me give a simple example of how things could have been much worse for us. Electrons are thought to be pointlike particles, but suppose that one day we discover they are, like protons, composites of much smaller particles. These particles could have their own dynamics that could ruin a two-slit experiment by effectively decohering the “which path” information. However, this turns out not to happen, and we do observe electron interference. Moreover, such alternative theories would, via the Pauli exclusion principle, alter the periodic table as we know it, so we can rule them out without any fancy experiments.
In practice, we see many examples of quantum coherence even in current technology. We have multiple-quantum coherence in NMR, Bose-Einstein condensates, superconductivity (with topological effects and protection), lasers, and simply the fact that transistors work the way we expect. Higher-order perturbation theory, or some new physics, could have derailed any of these technologies, but did not—a fact explained in part by renormalization.
Thanks
(Gil:) This debate gave me a great opportunity, for which I am thankful, to discuss my work on quantum computers. First and foremost let me thank Aram Harrow for initiating a public discussion of my paper and for being my partner in this debate. Aram did a great job in presenting the case for why quantum computers are feasible. He presented serious and interesting objections to my conjectures, and made many enlightening comments on both technical and conceptual levels. I also thank the many commenters; allow me to mention John Sidles in particular for the breadth and spirit of his involvement. There were many excellent comments and questions by many people; I tried to respond to the questions raised to me, and I enjoyed reading some of the further discussion and interesting ideas not related to my work.
Of course, I am thankful to Ken Regan and Dick Lipton for hosting and editing the debate, for summarizing the earlier threads, for adding exciting introductions, for illuminating private discussions with Aram and me on the matter, and for very interesting related and unrelated blog posts during this time.
(Aram:) I also want to thank Gil for his discussions both here, and in many private emails. It’s been useful for me to think through these things, and has caused me to think about skepticism in a new way. (For example, am I like one of those establishment scientists who rejected continental drift out of hand? How certain can I be about my position on other controversial things, like the probability of global warming, or relativity, or my own atheism?) It has led to my work with Steve Flammia on Conjecture C, and also a new project on adversarial noise joint with Matt Hastings and Anup Rao, which I’ll talk about soon on the Pontiff blog.
Also, I wish to thank the many commenters, who have gone far beyond the original posts into many interesting realms. I hope that my participation here will turn out well for the field of quantum information, which has faced a large amount of skepticism and misunderstanding from the field of computer science. Many arguments in favor of quantum computing inevitably stem from physics; however, I hope that this will help encourage dialogue and mutual understanding between physics and computer science more generally. Finally, I am grateful to Ken and Dick for hosting, editing and framing this debate, and for the contributions of their blog more generally.
Conclusions
(Gil:) The construction of universal quantum computers is a serious scientific possibility and an excellent working assumption. Building universal quantum computers will be a once-in-a-lifetime scientific and technological achievement. Operating quantum computers will lead to further terrific scientific and technological fruits, for decades, just like the discovery of X-rays. Quantum error-correction is an important concept for building fault-tolerant quantum computers and is a fundamental scientific concept in its own right. Building quantum computers is an endeavor which should be vigorously pursued, and indeed it is pursued theoretically and experimentally by top researchers who have already established impressive achievements. My university, The Hebrew University of Jerusalem, just established a new Quantum Information Science Center.
Yet, it may well be the case that universal quantum computers are not possible, and that “no quantum fault-tolerance” should be taken as the guiding principle in modeling quantum decoherence. Developing a theory of non-fault-tolerant quantum evolutions, and studying how quantum computers can fail, is an interesting and important endeavor. Non-FT quantum (and classical) evolutions may shed new light on existing models, offer new relevant models for quantum evolutions, and lead to substantial new insights on related physical questions, models, theories, and computational methods.
Finally, it can be useful to draw two extreme scenarios as possible conclusions for this debate. One commonly believed scenario, coined “quantum supremacy” by Preskill, asserts that the quantum world offers and enables superior forms of computational complexity. The other scenario, which we can call “classical control,” asserts that computation in our quantum world can only be based on classical information—via a repetition code or a similar mechanism—and that realistic nearly-pure quantum evolutions are (approximately) of bounded depth.
(Aram:) The quest to build quantum computers has been fraught with difficulties, but to my knowledge none of these are rooted in error correlations increasing with system sizes. Rather, they involve the sort of problems that elephants would have in dissecting fleas. Sometimes we don’t know how to decouple noise sources, sometimes we have a hard time addressing qubits individually instead of collectively, sometimes our fabrication techniques are unreliable, and sometimes we can initialize or coherently manipulate or measure, but not all three. All of these are important challenges, and some could lead us to abandon a technology (e.g. if we need an optical table the size of a football field), but none suggests a fundamental showstopper.
At a certain point, we gain confidence that there is no monster hiding under the bed, not only because there hasn’t been one before, but because nothing about our knowledge of the world would suggest such a monster.
Open Problems
Choose a platform that is currently being considered for an experimental implementation of quantum computing (e.g. ion traps, linear optics, superconducting qubits, etc.) and come up with a concrete noise model (i.e. derived from Schrödinger’s equation) that (a) is consistent with all existing experiments, and (b) provably rules out scalable quantum computing. Can one do this? or is it quixotic?
Does Conjecture 1 suffice to reduce the computational power of noisy quantum computers to BPP? Show that basing decoherence on smoothed Lindblad evolutions supports Conjectures 1-4 and leaves us with BPP.
Can smoothed Lindblad operations be used in a nontrivial way to show how classical memory and computation emerges under noise?
What is the correct picture of our mundus computationalis (Latin for “computational world”): quantum supremacy, classical control, or something in the middle?
Gil, there seems to be a possibility that will satisfy everyone: phase transition. Single atoms are well described by quantum mechanics (small circuits), and it is not completely obvious what the effects would be on the large scale. In particular, there is a gaseous phase with which nothing interesting can be done, except to fill balloons. On the other hand there is the solid state (highly overlapping wavefunctions/strong interactions), and because of the banded electronic structure we can create electronic circuits and perform classical computation. It is quite possible that large-scale entangled bodies will produce the “noise/correlated” bands that are required for computations, and at the same time produce large inevitable correlations.
Dear mkatkov, as far as I am concerned everything is described by quantum mechanics. (Not that you said otherwise, but this is how what you wrote can be read.) In fact, my conjectures make no distinction between microscopic and macroscopic behavior and do not even refer in any other way to the geometry of the quantum device. They only postulate a certain behavior for feasible approximations of pure quantum evolutions. Of course, the distinction between microscopic and macroscopic behavior and how it emerges from the rules of quantum mechanics is a familiar and interesting issue. Whether the emergence of classical information is a threshold phenomenon is an interesting question. This is an appealing possibility, but it is also reasonable to believe that the emergence of classical information occurs at all scales.
In part because I don’t know how else to subscribe to comments, let me start a little discussion about Gil’s smoothed Lindblad evolution. Many of my other objections apply to them: I think they lack any physical basis (either theoretical or experimental) and I think that they depend too much on knowing what is “intended.”
I thought I’d also say that they’d refute classical computing. But I don’t think they do, nor do I think they refute quantum computing. Here’s why.
Suppose we apply gates U_T … U_3 U_2 U_1. (The ordering reflects the order in which we apply them.)
If a noise operator E (say a Pauli matrix, so everything is unitary; this is more or less WLOG) happens after step 1, then we obtain
U_T … U_3 U_2 E U_1
Gil’s smoothed noise process says that instead there is a chance we will get
U_2^{-1} E U_2 happening after step 1
or
U_2^{-1} U_3^{-1} E U_3 U_2 happening after step 1
etc.
Let’s suppose we get E’ = U_2^{-1} U_3^{-1} E U_3 U_2 happening after step 1. Then our resulting evolution is
U_T … U_3 U_2 E’ U_1 =
U_T … U_3 U_2 (U_2^{-1} U_3^{-1} E U_3 U_2) U_1 =
U_T … U_4 E U_3 U_2 U_1
i.e. it’s identical to the original noise operator E happening after step 3.
More generally, Gil’s smoothing operator causes noise to effectively move backwards or forwards in time. However, if the original noise is isotropic in time (e.g. 0.1% noise rate in each time step) then the smoothing makes no difference. Yes, it does change the noise operators. But it won’t change the effect of these operators once you combine them with the unitaries being applied.
Ok, there are some subtleties here, but they mostly relate to the model being vague. E.g., what if U_1, …, U_T are replaced by quantum operations that are not always unitary (as is crucial to FT computing, either classical or quantum)? Then the smoothed evolution isn't defined, as far as I can tell. Are the noise operators themselves included in the U_1, …, U_T used to compute the smoothing? Or is there a separation between intended and unintended gates? This might mean that things could be different when noise operators move past each other. So I'm not totally sure about these details, but at least the basic model appears to be one that
(a) violates the assumptions of the FTQC threshold theorem, by having high-weight correlated noise, but
(b) nevertheless is no more of a problem for FT computing than standard noise, because its effects are identical.
Dear Aram, of course I will be very happy to look carefully at my smoothed Lindblad evolutions, and thanks for this interesting technical point (in fact, a few technical points). I look forward to examining your critique closely! It will probably take several days.
Just curious: did anyone consider resonances due to loops in the quantum circuit, pumped either by the computational signal or by vacuum fluctuations?
Mkatkov, the constructions associated to the postulated "Roadhouse of the Larger State-Space" rely essentially upon symmetry-breaking "resonant loops" to synthesize time-dependent computational dynamical elements — for example, system clocks, memory flip-flops, and gate arrays — wholly by unravelling time-independent Lindbladian generators.
And of course, this is precisely the way that real system clocks, memories, and gate arrays are constructed … consult the Wikipedia article "Multivibrator" for an introduction to these methods.
… as a correlated source of the noise ;), since the physical signal used for computation is not synced with those modes. And John (about multivibrators): I know that the first Moscow radio station used parametric resonance for amplifying the signal, and finally, stable electronic orbits in atoms correspond to "resonances" (what you would call a spectrum); everything else decays. If you have a loop in a quantum circuit, it has to have a stable mode. You may be able to avoid exciting it, but it should exist. If you have a complicated circuit, then you may have a distribution of loops, and who knows what spectrum it would have.
Gil’s “smoothed” Lindblad generators are a terrific idea, and yet Aram has a good point that they lack an obvious physical basis (either theoretical or experimental).
Perhaps a reasonable middle ground is to smooth the Lindbladian generators not in the time domain, but in the state-space domain, along the lines of Spielman and Teng's justly celebrated Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time … with a long-term view toward a parallel proof broadly along the lines of "Smoothed analysis of Lindbladian dynamics: why quantum trajectory unravelling usually takes polynomial resources."
This state-space Lindblad-smoothing has a reasonably natural physical basis: the quantenfeldtheoretisch Dreckeffecten (quantum field-theoretic "dirt effects") that were mentioned in previous GLL comments.
As Gil says, "It will probably take several days-weeks-months-years-decades" to work through the details. 🙂
Dear John, the idea behind the smoothed Lindblad evolution is to formalize how a noisy quantum evolution that does not use quantum fault-tolerance can be described (or approximated). It goes without saying that this idea needs (besides various sanity checks of the kind Aram poses in his comment) to be confronted with existing experimental and theoretical knowledge.
"Perhaps a reasonable middle ground is to smooth the Lindbladian generators not in the time domain, but in the state-space domain … This state-space Lindblad-smoothing has a reasonably natural physical basis: the quantenfeldtheoretisch Dreckeffecten that were mentioned in previous GLL comments."
Dear John, sure, we can think about your suggestion on its own, not as some middle ground. I must confess that another appealing thing for me in the time-smoothing I propose is that it takes a few sentences to motivate and a few more sentences to describe. (See above.) So if a short, self-contained, from-scratch, but fairly complete explanation of your proposal is possible, not referring to arXived papers or textbooks, this will be especially useful for me.
Dear Gil, I will try to post a concrete, proof-susceptible, one-page conjecture this Sunday!
Perhaps I should mention too, that you and I (and Ken Regan too) have posted D-Wave-related comments on Scott Aaronson's weblog.
An especially good outcome of the Kalai/Harrow debate (as it seems to me) would be a set of conjectures whose truth (or falsehood) is suitable for discussion in the first chapter of coming generations of quantum physics textbooks.
To be useful to students, these "first-chapter" next-generation conjectures would (one hopes) usefully blend physical naturality with mathematical naturality … and the prospects for this union are the thrust of the Shtetl Optimized comment linked above.
DOH! … it looks like one more weekend is going to be required … with a view toward specifying a state-space smoothing that is manifestly "intention-free" in regard to both naturality and physicality. 🙂
Dear John, I thought more about the issue of geometric models. Overall, I agree with your assertion that geometric modelling can be a middle ground which may give my conjectures some familiar (yet rather general) physical basis. The model of the quantum computer itself has no geometry in it, and neither, essentially, do the basic models of noisy quantum computers. This is why my approach is to model non-FT quantum evolutions on this level of abstraction, where indeed a lot of the physics is abstracted away.
Since we are talking about approximations, I don't see the advantage of replacing the Hilbert space itself with non-flat spaces; if our evolution is related to some complicated manifold, in the end we will work with tangents to this manifold, which are "ordinary" Hilbert spaces. But for the sake of approximating complicated quantum evolutions on small Hilbert spaces, the target Hilbert space need not be fixed, and the way it changes may very well have a geometric meaning.
Dear Gil, although I *love* simulating quantum trajectories by (geometrically natural) pullback onto curved manifolds, that is *not* the conjecture that I have in mind to post (on MathOverflow, hopefully). Rather, the program is to (1) specify a time-independent set of Hamiltonian/Lindbladian generators that unravel as FTQC trajectories (on a full/flat Hilbert space), and then (2) perturb the generators per Spielman and Teng's "Smoothed analysis of algorithms."
The intuition is that idealized Hamiltonian/Lindbladian generators, whose Lie algebras have small finite dimension, may in practice be generically nonideal, in the sense that their Lie algebras don’t quite close, in consequence of the quantenfeldtheoretisch Dreckeffecten that were mentioned in previous GLL comments.
Then the conjecture asks: Do the perturbed Hamiltonian/Lindbladian generators unravel as FTQC trajectories?
Or more strongly (and geometrically): Do perturbed Hamiltonian/Lindbladian generators generically compress quantum trajectories onto manifolds of polynomial dimensionality?
The point being, that if the latter (time-independent and purely geometric) conjecture is true, such that dynamical trajectories can be efficiently simulated, (a) with polynomial resources, (b) by pullback methods, (c) on quantum state-spaces that are effectively non-flat … then the resulting computational dynamics are unlikely to yield exponential improvements over purely classical simulation of those dynamics.
Posing this as a serious conjecture on MathOverflow (as I hope to do) turns out to be a considerable undertaking (the references alone are a lot of work!), and so I apologize that this is taking a while.
Gil, please accept my appreciation and thanks, to you and Aram (and to Dick and Ken too) for sustaining this outstanding and immensely enjoyable FTQC debate!
In regard to the above, there is also a geometric inspiration from (computational) symplectic integrators.
We recall the central idea of symplectic integration: a Hamiltonian function that is nominally exact, but whose timestep is only approximately a symplectomorphism, is exchanged for a Hamiltonian function that is approximate, but whose timestep is exactly a symplectomorphism.
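As a toy numerical illustration of this exchange (a standard leapfrog step for the harmonic oscillator; the example is mine, not drawn from any of the texts cited in this thread), the step only approximates the true flow of H = (p² + q²)/2, yet it is exactly a symplectomorphism, so energy errors oscillate but never drift:

```python
import numpy as np

dt = 0.1

def leapfrog(q, p):
    # one leapfrog step for H = (p^2 + q^2)/2, i.e. force = -q
    p = p - 0.5 * dt * q      # half kick
    q = q + dt * p            # drift
    p = p - 0.5 * dt * q      # half kick
    return q, p

# The step is linear, so its Jacobian can be read off from basis vectors.
M = np.array([leapfrog(1.0, 0.0), leapfrog(0.0, 1.0)]).T
print(np.linalg.det(M))       # 1.0: phase-space area exactly preserved

# Energy along a long trajectory oscillates but does not drift.
q, p = 1.0, 0.0
energies = []
for _ in range(10000):
    q, p = leapfrog(q, p)
    energies.append(0.5 * (q * q + p * p))
print(max(energies) - min(energies))  # small and bounded (order dt^2)
```

The determinant-one Jacobian is the discrete analogue of the "exact symplectomorphism" in the comment above; the bounded energy oscillation reflects the exactly conserved *approximate* Hamiltonian.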
Similarly, in computationally unravelling FTQC trajectories, we begin by specifying a time-independent set of Hamiltonian/Lindbladian generators that are nominally exact — or more realistically, that are subject only to such imprecisions as are compatible with the "FT" method chosen — such that the unravelled trajectories fill a state-space of intractably large dimension, so that FTQC is accomplished in accord with the standard error-correction and fault-tolerance theorems (that John Preskill has so admirably described).
Then we suppose that these generators are generically perturbed in the sense of Spielman and Teng — such that the standard error-correction and fault-tolerance theorems no longer apply — so that we may reasonably conjecture that generically the unravelled trajectories are dynamically restricted (via the Lindbladian stochastic flow) to a state-space of tractable dimensionality, that is, an effective state-space dimensionality that is polynomial in the number of Hamiltonian/Lindblad generators.
By intent, this conjecture is so constructed as to be naturally subject to geometric analysis methods, that complement the algebraic methods of traditional FTQC constructions.
That’s the main idea, Gil! 🙂
Concrete examples of the quantenfeldtheoretisch Dreckeffecten that (as it seems to me) geometrized versions of the Kalai conjectures need to treat realistically are provided by Peropadre et al., "Approaching perfect microwave photodetection in circuit QED" (Phys. Rev. A, 2011).
This article is of course barely a beginning … near-perfect detectors are of very little utility unless they are coupled to near-perfect emitters … and it is far from evident that both kinds of perfection can be optimized simultaneously.
That's why finalizing the "geometrized" Kalai conjectures is taking a while. And more broadly, that's why scalable presentations of (for example) the Aaronson/Arkhipov linear-optics experiments are so very difficult to specify as realizable Hamiltonian/Lindbladian processes!
Dear John, Perhaps my recent post Symplectic Geometry, Quantization, and Quantum Noise, and the works by Leonid Polterovich (and others) on quantum noise mentioned there, may be related to the quest of some geometric middle grounds that we discussed over this thread. Namely, a middle ground, with physical basis, for quantum computers and quantum evolutions, in between general noisy models and particular experimental implementations.
Dear Gil … I saw that post and it is terrific!
My own interest in geometric quantum mechanics runs precisely in the opposite direction: instead of geometric quantization (as the references you provide call it), the topic of interest is taken to be geometric dequantization (as it might be called), that is, the pullback of quantum dynamics from Hilbert space onto lower-rank submanifolds of Hilbert space.
There is no one book on dequantization, but it turns out to be pretty straightforward to learn the framework from John Lee's outstanding textbook Introduction to Smooth Manifolds — the concluding chapter of Lee's brand-new 2nd edition covers symplectic manifolds and Hamiltonian flows — and then translate the dynamical elements of Nielsen and Chuang's Quantum Computation and Quantum Information and of Landau and Lifshitz's Statistical Physics Part 1 (especially the chapters on Onsager relations), selected examples from Bird et al.'s Transport Phenomena, and finally selected examples from our QSE Group's own Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems, all into the unifying geometric language of John Lee's outstandingly clear (2nd edition) textbook.
There is no one step in this program that is difficult (or even new!) … but there are a *lot* of steps … and regrettably the effort is doubled because everyone’s mathematical notation (except Lee’s) is awkward and/or outdated … and then the effort is redoubled because no two texts use the same notation or even the same physical intuitions. Ouch!
Eventually I'll post the resulting math/physics/engineering synthesis — most likely as a *long* article on the arXiv server — of which one conclusion will (informally) amount to:
The restriction regarding the validation test serves to accommodate "good enough" P-resource simulation of Aaronson-Arkhipov BosonSampling processes.
This conclusion (for me) represents a viable synthesis of the very best elements of the Kalai/Harrow debate. In the words of the famous joke: You and Aram are both absolutely right!
Happy New Year to everyone! 🙂
——————–
New Year's bonus: a website with free WordPress LaTeX preview! Shhh! Don't tell anyone! 🙂
Dear Gil, I have posted a longish comment to your site; a comment that concludes with my heartfelt appreciation & thanks to you and Aram Harrow (as expressed by a joke). 🙂
Here is a response to Aram’s comment regarding smoothed Lindblad evolutions.
Smoothed Lindblad evolutions are offered as a mathematical tool to describe noisy quantum evolutions which do not involve quantum fault-tolerance. The smoothing operation is described in the post above, and precise continuous-time and discrete-time descriptions can be found in my paper "When noise accumulates," Section 2, formulas (3) and (5). The smoothing operation depends on a positive kernel, which we can simply take to be Gaussian. (We can also allow an atom at zero, to take into account an additional amount of standard noise.)
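For concreteness, here is a minimal discrete-time sketch of the kind of smoothing described above (an illustrative reading only: the one-sided truncation of the Gaussian kernel and the way the conjugated operators are averaged into a channel are my simplifications, not a transcription of formulas (3) and (5) of the paper). A noise operator E at step t is replaced by a kernel-weighted mixture of its conjugates under the intended evolution:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR of a random complex Gaussian matrix yields a unitary matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

T, n, sigma = 8, 2, 2.0
U = [random_unitary(n) for _ in range(T)]        # intended gates U_1..U_T
E = np.array([[1, 0], [0, -1]], dtype=complex)   # a Pauli-Z noise event

def forward(t, s):
    # intended evolution from step t to step s (identity if s <= t)
    V = np.eye(n, dtype=complex)
    for k in range(t, s):
        V = U[k] @ V
    return V

def smoothed_noise_output(t):
    # Gaussian kernel over later times s = t..T-1 (one-sided toy version)
    weights = np.exp(-0.5 * ((np.arange(t, T) - t) / sigma) ** 2)
    weights /= weights.sum()
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # test state
    rho = np.zeros((n, n), dtype=complex)
    for w, s in zip(weights, range(t, T)):
        V = forward(t, s)
        Es = V.conj().T @ E @ V        # conjugated noise operator
        phi = Es @ psi
        rho += w * np.outer(phi, phi.conj())
    return rho  # generally a mixed state: the smoothed noise is a channel

rho = smoothed_noise_output(2)
print(np.isclose(np.trace(rho).real, 1.0))  # True: trace preserved
```

Note how the mixture of conjugated unitaries generally produces a correlated, high-weight channel rather than an independent single-step error, which is exactly the feature Aram's comment probes.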
Aram suggests that, somehow, if the noise before smoothing is time-independent, then fault-tolerance schemes will still work (in the end), since the "smoothing" only moves noise around in time. However, Aram is describing his own version of smoothing-in-time (in some much-simplified setting), which is different from mine. (Greg Kuperberg made a similar observation a few years ago.) Thus, Aram's assertion that the smoothed Lindblad evolution model is "no more of a problem for FT computing than standard noise, because its effects are identical," is not correct.
As an aside, let me mention that assuming that the noise is time-independent, and even that the noise rate is time-independent, is, in my opinion, unrealistic. My criticism of John Preskill's new general noise model is that this model exhibits a strong concentration of the number of qubit errors. Such concentration suggests that a major aspect of realistic noise is missing from the model. (This criticism is "orthogonal" to the issue of quantum fault-tolerance.)
In regard to "unsmoothed" (standard) Lindbladian dynamical evolutions, as contrasted with "smoothed" Lindbladian dynamical evolutions, there is a body of work in geometrical thermodynamics that suggests that a natural transition between the unsmoothed/smoothed dynamical regimes is associated to a diffusion constant D_OZ that is given in terms of fundamental constants.
Here "OZ" stands for Onsager-Ziegler, and the expression for D_OZ results from joining orthodox Hamiltonian/Lindbladian dynamics (of Michael Nielsen & Isaac Chuang, Howard Carmichael, Carlton Caves, etc.) with orthodox thermodynamics (of Lars Onsager and Hans Ziegler, etc.) via natural mathematics (of Saunders Mac Lane, Vladimir Arnold, etc.) in geometric notation (of Jack Lee's Smooth Manifolds text, for example).
Eventually the above train of analysis will be posted as a commentary on the Kalai/Harrow debate; its main conclusions are evident already, in the small magnitude of the above fundamental diffusion scale:
• For purposes of quantum computation, nature's state-space is effectively a Hilbert space … even though at sufficiently small spacetime scales, nature's dynamics becomes ("stringily"?) non-Hilbert.
• For practical purposes of thermodynamical simulation, the effective diffusion scale is very much larger than the above value, such that the associated dynamical state-spaces are effectively non-Hilbert … and for this reason, natural dynamical systems can be computationally simulated with PTIME resources.
Needless to say, both of these two conclusions are very good news for 21st century science, technology, engineering, and mathematics. And upon this optimistic concluding note, best wishes for a happy holiday season are extended to everyone!
I also do not know how else to subscribe to comments, so I am recollecting a problem for skeptics. It is often discussed that Yuri Manin suggested the idea of a quantum computer (in fact, a "quantum automaton") around 1980. It is less widely discussed that it was in relation to a question about DNA functionality; see the translation in the appendix to his own paper http://arxiv.org/abs/quant-ph/9903008
A question for skeptics: to show that DNA may not be a quantum computer (automaton). If the question is too difficult, to consider a simpler problem and to prove that DNA is not a universal quantum computer. Maybe there were some not very accurate speculations around that idea, but anyway it is a method to think about all that in a more constructive way.
Dear Alexander, this is an interesting question, but I do not think that your question (or similar questions) should be addressed just to skeptics. Manin made (very early; I suppose even before Feynman's paper on quantum computers) an argument that DNA's replication requires the superior computational complexity of a quantum computer of some sort. I think that Manin is credited as one of the very early people to suggest quantum computers, but I am not sure what the status of his suggestions regarding DNA is.
Dear Gil, I do not think that the consideration of models of quantum gates on polymer molecules is much more unreasonable than on ions in traps or on photons in cavities. The problem may be precision, and hence the doubtful possibility of "too digital" quantum algorithms like the factoring of huge numbers.
Today a story on SlashDot had a remarkable headline:
It seems (to me) that recent groundbreaking insights from the Aspuru-Guzik group may reasonably justify this investment in quantum computing … and it seems too (to me) that the Kalai/Harrow debate issues are at the heart of the considerations that will determine the long-term viability of this investment.
MIT Technology Review has a follow-on article, The CIA and Jeff Bezos Bet on Quantum Computing, that includes plenty of snappy quotes — both optimistic and skeptical — from "the usual suspects" in the quantum computing community.
It is plausible that D-Wave's enterprise value reflects breakthroughs in both hardware and algorithms … breakthroughs that D-Wave's investors regard as real, and D-Wave's skeptics regard as illusory.
Assuming the former leads us to consider whether there may exist natural variants of Gil Kalai's conjectures that are associated to augmented algorithmic power, possibly along the lines that Aspuru-Guzik and colleagues have been suggesting (per the comment above).
Assuming further that D-Wave's advances have an algorithmic aspect, it remains to be determined whether quantum-entangled hardware is strictly necessary for implementing these novel algorithms … either way, D-Wave and its investors are well-positioned to benefit.
That's why D-Wave seems like a reasonable investment to me! 🙂
When it comes to classical control, let me mention the problem of how to express in Latin the term "computational universe" or "computing universe." I will spare you my own initial suggestions based on Google Translate, but both Ken and I consulted experts in classical studies (and Ken himself seems quite strong in Latin), and were offered or considered "res mathematicae," "res arithmeticae," "mathesis universalis," "calculandum circumstant," and "mundus computationi," before making the choice "mundus computationalis." In some draft, Ken even wrote the whole sentence "What is our computational world?" in Latin, which was very exciting. But in the end this sentence did not make it, and now I forget how it goes.
Of course, we also need to mention that our final debate post was launched a few hours after the first US presidential debate concluded. Aram has an awesome line related to the presidential debate as well as to our debate, but did not find the context to say it. You should stay tuned, if for nothing else then just for that awesome line.
So I watched yesterday's debate with interest (I am at Yale right now). Now, I am hesitating a little over whether I should reply to Aram's concluding section addressing my conjectures.
Take, for example, the matter of intentions. Aram's claim that the conjectures depend on intentions reminded me of the following nice story (or a profound philosophical joke, according to Wittgenstein) from Littlewood's Miscellany:
"Schoolmaster: 'Suppose that x is the number of sheep in the problem.' Pupil: 'But, Sir, suppose that x is not the number of sheep.'" (See also this post for a similar story.)
So is it really difficult to state my conjectures without any reference to "an intended state" or "an intended evolution"? It is quite easy, at least, to go over the conjectures (or pre-conjectures) one by one and check this matter. In any case, my own intention is that these conjectures apply (in the context relevant to each one) to general realistic approximations of pure quantum evolutions. It's really all about approximation…
Never in history did an invention or a scientific discovery make it possible to suddenly solve all problems more efficiently. Even the greatest ones were made to solve only a class of problems. For example, the wheel was invented for transportation and topological spaces for continuity – but not the other way around.
In contrast, if you proved P=NP or built a quantum computer, from that same minute you would be able to solve all problems more quickly. This runs counter to what experience tells us – you can't acquire infinite power in finite time.
There used to be an old motto in computer science: all that can be done with hardware may also be done with appropriate software. For this reason, I believe it will take as long to build a quantum computer as to find a polynomial algorithm for integer factoring.
Unlike large parts of the debate, where Aram raised specific points against my conjectures which we thoroughly discussed, Aram's summation is of a more general nature. Let me repeat seven points that Aram made. The first two give general reasons to believe in quantum computers:
1) QM is an extremely successful theory. A reality that allows QM but forbids QC would require an alteration of QM.
2) QM could have "conspired" even against quantum-theoretic phenomena that we can witness, like the two-slit experiment, but this is not the way nature works.
The other five points are claims of a general nature regarding my conjectures:
3) The conjectures are not supported by any physical mechanism on the microscopic level, or by a specific model.
4) Rather, the conjectures suggest precise statements only about their conclusions. Aram mentions especially the conclusion that "real-world quantum computers experience noise that makes them classically simulable."
5) The conjectures give only some framework for a hypothetical theory; they are neither pure-math conjectures, nor do they give precise predictions about observations.
6) It is difficult to state the conjectures without referring to an "intended" state or evolution. In contrast, in natural processes there are no "intentions."
7) There are indeed technical difficulties in the way of building quantum computers, but in no case do these difficulties come from the error-correlations that the conjectures talk about.
All these points are very good! (Some of the points remind me of Cris Moore's very first comment in this debate, which led to a fruitful discussion.) The second point was quite new to me, although I raised a similar point myself in this comment. I plan to address some of Aram's summation points in later comments.
On another matter, let me mention that a few hours ago Serge Haroche and David J. Wineland, whose work is pretty much in the direction of building quantum computers, were named the 2012 Nobel prize recipients in physics "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems." Warm congratulations!
Two papers of interest:
A survey on quantum memories http://arxiv.org/abs/1210.3207
Experimental tests on a stable qubit http://www.nature.com/nphys/journal/v5/n1/full/nphys1151.html
Let me mention a few relevant posts (by John Preskill) on "Quantum Frontiers." The recent post on the 2012 physics Nobel prizes mentions a skeptical paper from 1996 entitled Quantum computing: dream or nightmare? by Serge Haroche and Jean-Michel Raimond, and a more optimistic note from that time written by John Preskill. An earlier post entitled "Supremacy now" describes a recent paper by Martin Zwierlein's group (arXiv version) that suggests that "quantum supremacy" may have already happened. A yet earlier post talks mainly about the Higgs but draws an interesting analogy with quantum information science.
Let me report on other relevant papers, posts, and prizes. Scott Aaronson's recent post argues with a recent paper of physicist Michel Dyakonov which is skeptical about building quantum computers. Michel amusingly compares building a QC to teaching donkeys to speak, and Scott attempts to carry this analogy to interesting places. This donkey analogy is an additional zoological ingredient in the QC debate, after Aram's amusing comparison of the task of skeptics to passing a camel through the eye of a needle (which I tried to carry to interesting places). Michel, Scott, and the commentators go on to discuss aspects which I am very glad we did not discuss over here, like funding and the sociology of science.
Ignacio Cirac and Peter Zoller were announced as winners of the 2013 Wolf Prize for physics, “For groundbreaking theoretical contributions to quantum information processing, quantum optics and the physics of quantum gases.” Warm congratulations!
The points listed above that Aram made in this post are not as developed and concrete as those in his original three posts, but nevertheless they are quite interesting. Aram regards his point about "intentions" as central, and at some point he even wished that he had constructed his reply around it. So it is not too late to discuss it in some detail.
Intentions: Aram wrote that my conjectures “have difficulty avoiding reference to the “intended” operation of a quantum device, although any physical principle we discover is unlikely to depend on our intentions.”
The reason I regard this as an important issue is that while the question of the limits of man-made controlled quantum systems is important, the question of the nature of approximations to pure states and pure evolutions in QM is of a larger scope and is perhaps even more important. As many commentators remarked, the question of approximation is of crucial importance in quantum physics, and this is the correct scope for many issues of this debate. Methods for approximating a state on a large Hilbert space using a much smaller Hilbert space are at the center of many of the glorious successes of quantum mechanics.
Aram is correct that any physical principle should not depend on our intentions, and this gives a clear consistency-check for my conjectures. Aram himself did not check the individual conjectures, but we can do it together. Generally speaking, my conjectures talk about an "intended" pure evolution and its realistic noisy approximation, described using a noise operation E (which I assume is a POVM measurement). Generally speaking, you can simply omit the word "intended," which originated in our specific context of controlled evolutions.
Conjecture 3 asserts that when x' and y' are a noisy realistic approximation, described by a noise operation E, of two qubits x and y in a pure joint state, then if x and y are substantially entangled, the information-leaks represented by E are substantially positively correlated for the two qubits. (The conjecture describes this in precise terms.) If your noisy evolution creates a mixed joint state on two qubits which is approximately pure, the conjecture applies to every pure state that the noisy state well-approximates. (As a matter of fact, you can even base the conjecture on the entanglement between the noisy qubits.) Similarly, Conjecture 4 refers to arbitrary noisy approximations of pure highly entangled states.
The old and new versions of Conjecture C deal with the question of realistically approachable pure states and make no mention of intention. Take the conjecture, which can be traced back to Unruh, that when a realistic mixed state well-approximates a pure state, then it is (approximately) bounded-depth. No intentions are involved. The dependence on "intentions" in the informal version of Conjecture 1 can easily be removed, and the formal version of Conjecture 1 from the last post (which already took Aram's "intention" critique into account) avoids referring to an intended state altogether. Conjecture 2 makes no mention of intentions.
While I agree with Aram that physical principles do not depend on our intentions, I want to emphasize again that a quantum device itself, human-made or natural, may very well depend on the evolution it describes.
Let me also mention that one of the reasons for considering POVM measurements in my papers, and for discussing leaks of information, is precisely to make the conjectures intention-free. (And this is discussed in the papers.) This is needed because you can think about noise which is hard to discuss intention-free: for example, when instead of one pure evolution your device produces another pure evolution. The issue of intentions is harder to deal with when it comes to classical noise. But, of course, noise and denoising are very important issues for classical processes. Some aspects of the classical case are conceptually harder.
Physical explanation (on the microscopic level): Aram remarked that my conjectures are not supported by physical mechanisms on the microscopic level, or by a specific model describing the microscopic behavior. This is, of course, a very important point, and, as I said all along, my conjectures should be tested theoretically, experimentally, or both, on a case-by-case basis.
The conjectures should be tested for controlled quantum devices, and also for approximations to pure evolutions in other physical contexts. My own attempts at a case-by-case examination of my conjectures are pretty modest and are mainly based on discussing the matter with a few colleagues in theoretical and experimental physics over the years. This is certainly interesting, but it is a separate project (and perhaps also a separate debate with other debaters). Let me mention a couple of cases that I looked at.
Topological quantum computing
As I wrote in the first post I think that it will be easier to support (theoretically or experimentally) my conjectures for proposals which create, using conventional experimental methods, certain gadgets that are used later for the quantum computations. The proposal regarding (essentially) noiseless qubits based on anyons is especially interesting in this respect. (Another interesting direction is measurementbased quantum computation.) Indeed, the open problems (Questions 1 and 2) in the first post were devoted to these two proposals.
As far as I understand, the stable properties of qubits based on pairs of anyons, and related suggestions do not take into account decoherence in the physical process used to create these anyonic systems. Since there is no reason to believe that quantum errorcorrection and quantum faulttolerance will be used in the proposed physical processes that create these anyons it is hard to believe that these anyons will describe highly stable single qubits (which regarded as encoded qubits violate Conjecture 1).
Of course, experimentally creating quasiparticles obeying non-abelian statistics in laboratory systems is important for many reasons. We do not have to insist on highly robust qubits, and we can ask: looking at (pairs of) anyons as encoded qubits, what kind of mixing of the states of the encoded qubit is supported (i.e., approximated) by a bounded-depth quantum circuit? The maximal-entropy state is perhaps boring, but if we can figure out lower bounds for the entropy of such a mixture, and on the other hand create interesting such states experimentally, this will be very nice.
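To make the entropy question concrete, here is a minimal numerical sketch (a toy illustration of my own, not taken from the papers under discussion): an encoded qubit that emerges from an imperfect creation process as a depolarized mixture rather than a pure state, together with its von Neumann entropy as a function of the depolarization strength.

```python
import numpy as np

def depolarize(rho, lam):
    """Depolarizing channel: rho -> (1 - lam) * rho + lam * I/2."""
    return (1 - lam) * rho + lam * np.eye(2) / 2

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues
    return float(-(evals * np.log2(evals)).sum())

ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # the pure state |0><0|
entropies = {lam: von_neumann_entropy(depolarize(ket0, lam))
             for lam in (0.0, 0.5, 1.0)}
```

The entropy interpolates between 0 (pure encoded qubit) and 1 bit (the maximally mixed, "boring" state); a lower bound on this entropy for anyon-creation processes is exactly the kind of quantity the question above asks about.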
Ion-trap quantum computers
I tried to explore a little the situation for ion-trap computation. It is interesting to examine specific suggestions for implementation in light of my conjectures. For example, one concern is that simultaneously controlling the locations of the ion traps during computation can potentially lead to error-synchronization. Recall that error-synchronization for a tiny fraction (in terms of trace-distance) of the overall noise suffices to make fault-tolerance fail. Of course, even this specific concern should be checked carefully, case by case, within the wide area of ion-trap computation. There are some teleportation-based ion-trap architectures for which there is no need to move the qubits around at all. These suggestions (which are fairly remote from experimental reality) should be examined separately.
Ion-trapped qubits with internal Z-evolutions
Specifically, the concern that simultaneously controlling the locations of the qubits may lead to error-synchronization (for some tiny fraction of the overall noise) applies to one type of ion-trapped qubit that I want to mention now. The most stable ion-trapped qubits created today (as far as I know) are not static but rather have an internal Z-evolution with known frequency. However, they rely on reference clocks that are extremely precise on the relevant timescale. Imprecise knowledge and drifts, all with respect to this clock, are considered contributors to unwanted noise, and the overall noise level for this kind of qubit is very low. In other words, the Z-component of the qubit is subject to a very stable periodic evolution. The gap between the ideal carrier (which is some sine wave, I suppose) and the actual carrier is taken as part of the noise and is very small.
Now, we can suppose that the actual carrier can be described as a function of time with some additional higher harmonics, so in this case the noise has a strong time-dependence.
My concern is that these extra harmonics (which are part of the overall noise) can be synchronized for many simultaneously controlled qubits. (This is a place where the analogy with metronomes might make sense.) Such error-synchronization can defeat scalability (and fault-tolerance) even if the rate of the harmful noise to start with tends to zero with the number of qubits (or, in fact, is barely larger than one over the number of qubits).
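A toy numerical illustration of why even a tiny synchronized component matters (this is a cartoon model for illustration only, not a model of any specific ion-trap architecture): independent errors concentrate sharply around their mean, so a threshold code never sees too many simultaneous faults, while a rare fully synchronized event overwhelms any threshold at exactly its own rate, however small.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_catastrophic(n_qubits, p_indep, p_sync,
                          threshold=0.2, trials=200_000):
    """Fraction of time steps in which more than `threshold` of the
    qubits err simultaneously, under independent errors of rate
    p_indep plus a rare synchronized event (rate p_sync) in which
    every qubit errs at once."""
    indep_counts = rng.binomial(n_qubits, p_indep, size=trials)
    sync = rng.random(trials) < p_sync
    counts = np.where(sync, n_qubits, indep_counts)
    return float((counts > threshold * n_qubits).mean())

# With only independent 1% errors on 100 qubits, more than 20
# simultaneous errors essentially never happens; adding a tiny
# synchronized component (rate 1/1000) makes catastrophic steps
# occur at rate ~1/1000, independent of the number of qubits.
no_sync = fraction_catastrophic(100, 0.01, 0.0)
with_sync = fraction_catastrophic(100, 0.01, 0.001)
```

The point of the sketch is the asymmetry: the independent component can be fought by error correction, while the synchronized component, however small its trace-distance weight, defeats it at a fixed rate.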
Back to generalities.
Finally, let me recall that a quantum computer or a quantum circuit is a hypothetical abstract physical construction. Quantum computers abstract away much of the physics. It is a mistake to talk about a common detailed physical explanation on the microscopic level for my conjectures on the level of abstract quantum computers.
The model of a quantum computer (or a quantum circuit) is a useful umbrella for discussing together phenomena and devices in many different areas of physics, but for building quantum computers, as well as for unbuilding quantum computers (i.e., for showing that quantum computers cannot be built), you need to dive in and make a case-by-case study. In both the positive direction of building quantum computers and the negative direction of understanding why they cannot be built, the notions of quantum error-correction and quantum fault-tolerance appear to be crucial.
There is also the important question of models of noisy quantum evolutions which are, on the one hand, more specific than quantum computers, so that after the correspondence into the classical world we do see (mathematically, of course) some familiar classical physics; but, on the other hand, more general than a specific implementation of a quantum computer as described above. This is also a different project (not to say, comment), but we can return to this one day.
I heard today in our Quantum Information Science Center a talk by David DiVincenzo entitled “Making superconducting qubits scalable.” After describing the progress in creating stable superconducting qubits (much of it by the “Yale group”), David talked about a remarkable proposal for quantum-computer architecture based on planar arrays of qubits, which allows applying a simple sequence of CNOT gates and measurements on half the qubits to create a toric code on the remaining half! Implementations of this idea are on their way, at IBM among other places. (The IBM project was mentioned in an earlier comment of the debate.) The theory behind this suggestion involves a lot of recent theoretical advances in quantum error correction. Here is a related paper (I think), From Majorana Fermions to Topological Order by Barbara Terhal, Fabian Hassler, and David P. DiVincenzo. David’s paper from 2000, The Physical Implementation of Quantum Computation, is also relevant to various aspects of our debate.
The lecture was quite appealing and rather convincing. In this implementation you do not even need to move the qubits in order to implement the CNOT gates. My Conjecture 1 says that you will not get the desired codewords, but rather a cloud of codewords (and additional standard noise). My other conjectures predict a strong effect of error-synchronization, and also that the error rate in terms of qubit errors will scale up with the number of qubits. It would be nice to examine these predictions (theoretically or experimentally) for this case.
Gil, your statement (it seems to me) makes a fundamentally important point, that we quantum systems engineers formalize as follows (and we shared this perspective with Aram at our seminar this week).
Let us suppose that we have a Hamiltonian system (which can be classical, quantum, Kählerian, etc.) that conserves a vector of quantities and whose dynamics are localizable on some spatial manifold .
In general physical localizability is associated to the existence of Lindbladian dynamical processes that generate a continuous stream of indexed, spacedependent, timedependent, measurements that satisfy
where is a vector of (conserved) global values.
By the following chain of reasoning we appreciate that the spacetime fluctuations of have fundamental physical significance.
• Ludwig Boltzmann teaches that Hamiltonian systems are associated to a global entropy function
• Lars Onsager teaches that localized entropy is associated to a function , whose local values fluctuate in space and time, but whose global integral is constant:
• Göran Lindblad teaches that entropy is spatially extensive — that is, the local entropy function is welldefined — iff the microscopic dynamical evolution is not strictly Hamiltonian, but rather is “Lindbladian.”
• Howard Carmichael teaches that the localized values are concretely computed as stochastic datastreams that are associated to the continuous observation of unravelled (the term unravel was originated by Carmichael) dynamical trajectories.
The fluctuating Lindbladian observations and the thermodynamic statefunction are concretely linked by a relation that was first appreciated by Onsager:
Here the are a basis (that is, the comprise the set of physical units that is associated to the quantities that are conserved by the microscopic dynamics, e.g. kilograms, Joules, Coulombs, Ampere-meters^2, etc.). Per Onsager, the partial derivatives that are observationally specified from correlations of (and that, according to our 21st century understanding, necessarily originate in a Carmichael unravelling of a Lindblad process) are asserted to be Frobenius-integrable to a Boltzmann entropy function.
Note: The above relation (or relations that are readily shown to be equivalent to it) can be found in multiple sources, including: Casimir’s article On Onsager’s principle of microscopic reversibility (1945, eq. 4), Kittel’s textbook Elementary Statistical Physics (1958, chs. 33 and 34), and Landau & Lifshitz’s textbook Statistical Physics: Part 1 (1980, following eq. 120.10). A modern survey is Martyushev and Seleznev’s Maximum entropy production principle in physics, chemistry and biology (2006), and the mathematical discussion in the Zia, Redish, and McKay article Making sense of the Legendre transform (2009) is so outstandingly clear and natural as to be essential reading.
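For the reader's convenience, the standard textbook form of the relation that these sources present (my paraphrase in modern notation, not a quotation from any of them) is the Onsager reciprocal relation: near equilibrium, the fluxes of the conserved quantities $q_i$ are linear in the thermodynamic forces, with a symmetric coefficient matrix:

```latex
\dot{q}_i = \sum_j L_{ij}\, X_j,
\qquad
X_j = \frac{\partial S}{\partial q_j},
\qquad
L_{ij} = L_{ji}.
```

Casimir's 1945 article generalizes the symmetry condition to quantities of mixed time-reversal parity.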
A crucial insight that is absent in the above references — and that owes its appreciation very largely to quantum computing research — is recognition that is properly given as resulting from a Lindblad/Carmichael observational unravelling, and not as a Boltzmannstyle classical observation, and neither as a von Neumannstyle operator expectation. The dynamical backaction that is associated to Lindblad/Carmichael observational unravelling, and that is absent in Boltzmannstyle and von Neumannstyle formalisms, turns out to be essential in understanding multiple thermodynamical phenomena; the Third Law for example.
With Lindbladian backaction, and thus localization dynamics, properly accounted, the above relation authorizes us to analyze (at our discretion) either backward to microscopic Hamiltonian dynamics (from the observed values via Carmichael and Lindblad), or alternatively forwards to macroscopic thermodynamics (from the entropy function via Onsager and Boltzmann).
Obviously, there is far more to be said regarding these two immensely broad topics! We thus appreciate that quantum computing research greatly clarifies our 21st century understanding of thermodynamical phenomena. And this broader, more natural understanding has emerged as a main theme of our UW QSE Group’s 2012 seminar: Natural Elements Of Separatory Transport.
From an FTQC point of view, the above formalism suggests that the statement “FTQC is feasible” is true only if we can construct macroscopic physical systems for which notions of thermodynamics are not well-defined. This is a question that (as it seems to me) is hardly likely to be settled by purely mathematical arguments.
We can all be glad that advances in our understanding of quantum computation are proving to be so strongly and naturally linked to exceedingly useful advances in our practical engineering understanding of quantum thermodynamics. And this is (for me) a wonderful outcome to the KalaiHarrow debate! Thank you, Aram and Gil! 🙂
Oh dang the lack of a preview! A concluding “1” exponent is missing from the above relation, which should read:
Eventually I will post an analysis of the Frobeniusintegrability of these relations as an answer to Gil’s fine MathOverflow question What is an integrable system? within a quantum thermodynamical context that naturally unites Gil’s integrability question with his Physics StackExchange request for Onsager’s regression hypothesis, explained and demonstrated. That is a pair of deep, natural, wonderful, practically important questions, Gil! 🙂
Greetings, everybody, from beautiful snow-covered Jerusalem. Such a substantial snowstorm here is a once-in-a-decade event, and it is great that it is happening today.
The discussion on Scott’s blog (190 comments and counting, continuing in a second post) about Dyakonov’s paper has led to a full-scale quantum-computer debate and to several interesting comments related to our own debate, including some items on my long “agenda” which were not discussed here and some further discussion of things that we did discuss here. Here is a partial list.
1) Does QC skepticism amount to QM skepticism? (This was item 11 on the agenda.) A comment by me explaining why the claim that QM logically implies the feasibility of QC (which I attributed to Scott) is flawed; a reply by Scott explaining what his approach is. Eventually we agreed that if quantum computers are not possible, this reflects an additional layer of physical laws based on QM (and on a few other things, see below).
2) Implications for QFT/QED computations. Our point 18 on the agenda was: quantum field theory computations. If quantum computers are impossible, what does that say about computations in QM that require exponential resources? (These computations were Feynman’s original motivation for QCs.) Jay asked me specifically about QED computations, and I replied: if superior quantum computational power is not possible, then QED computations in the range that requires superior computational power are irrelevant to our physical reality. Aram wrote that failure of QED would require changes to the operating system of the universe (quantum mechanics).
3) Two comments (1 and 2) summarizing my approach, with an item on “What is the environment?” (item #14 on the agenda) and my picture of our computational world of “classical control”.
4) Discussion between me and John Preskill on his noise model and the “Gaussian” concentration of the rate of noise, which I regard as unrealistic. (My concern regarding the concentration – 1, John’s reply and question about classical bits – 2, my guess about digital bits – 3)
5) A comment by Michel Dyakonov and responses by Scott and by me.
6) My very short explanation for why QC cannot work. Let me repeat it here (slightly edited).
Why QC must fail – my very short explanation
Quantum systems based on special-purpose quantum devices are subject to noise which systematically depends on the quantum evolution of the system; this dependence reflects the dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it is doomed to perform.
This systematic dependence can be expressed formally as follows: quantum systems are subject to noise which is expressed by smoothed Lindblad evolutions. The amount of noise in a time interval is bounded from below by a non-commutativity measure of (the projections in) the algebra spanned by the unitaries describing the evolution in this interval. Smoothed Lindblad evolutions reflect the idea that quantum fault-tolerance is impossible (or not active) and quantum noise must accumulate.
This property of quantum systems based on special-purpose quantum devices explains why universal quantum devices are forced to fail. (The reference to “devices” applies both to human-made devices and nature-made devices.)
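The last ingredient, that without active fault-tolerance noise simply accumulates, can be seen in a minimal numerical sketch (a toy depolarizing model chosen only for illustration; it is not the smoothed-Lindblad framework itself): with no error correction acting, the fidelity with the ideal evolution decays geometrically in the number of noisy steps.

```python
import numpy as np

def fidelity_after(p, steps):
    """Fidelity of a single qubit with its ideal (identity) evolution
    after `steps` gate steps, each followed by depolarizing noise of
    strength p, with no error correction acting."""
    rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # start in |0><0|
    for _ in range(steps):
        rho = (1 - p) * rho + p * np.eye(2) / 2
    return float(rho[0, 0])                     # <0| rho |0>
```

Analytically this is 1/2 + (1/2)(1-p)^T after T steps, decaying toward the maximally mixed value 1/2; the entire point of quantum fault-tolerance is to break this accumulation, and the conjecture above is that nothing breaks it.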
7) Robert Alicki’s explanation and reference.
Robert Alicki’s very very short explanation for why QC cannot work:
“For me QC was a challenge similar to perpetuum mobile in the XVIII–XIX centuries. After 12 years of trials and errors I believe I know the answer: a fundamental conflict between stability of information storage and reversibility of its processing, which does not harm classical but kills quantum large-scale computing (see arXiv:0901.0811, OSID, vol. 19, no. 03).
BTW, I think also that I deserve Scott Aaronson’s 100 000 USD prize!”
8)
Five topics that Scott Aaronson and I agree on
(The usual warning: we can both be wrong. QM = quantum mechanics, QC = quantum computers; my comment, Scott’s reply)
1) The basic framework to describe our physical reality is the framework of QM. The “operating system” of our physical reality is written in this language.
2) There are additional layers of physical laws (or specific physical models) that we refer to as “program applications” written in the language of QM. Practically speaking, in some cases those layers require specific (sometimes very complicated) mathematical formulations that we do not always know how to express in terms of QM.
3) Failure of universal QC would require a profoundly different way to understand QM, again on the level of the additional layers of physical laws.
4) Building universal QC (or “just” computationally superior QC) would represent a completely new reality in terms of controlled and observed quantum evolutions, and also a new computational-complexity reality. (But most likely not the ability to solve NP-complete problems.) On this point our agreement is limited.
5) The idea/model of quantum computers itself is a profound revolutionary way to look at quantum physics (I completely agree with Ethan about that). The notions of quantum errorcorrection and quantum faulttolerance are additional profound new theoretical physics.
Gil, here is yet another view regarding the attainability of FTQC and/or BosonSampling (regrettably its links are incompatible with Shtetl Optimized‘s weblog filter).
—————————–
That is a reasonable request!
Ambiguity [1] Nature’s quantum state-space is infinite-dimensional and perturbatively divergent, and moreover it is well-established that both attributes impact the noise observed in real devices and also their thermodynamical properties; therefore it is concerning that standard FTQC models have neither trait.
Disambiguation [1] the term realistic must be realistically defined, with scrupulous respect for the continuum interactions of quantum field theory.
————
Ambiguity [2] All mathematical proofs (that present-day mathematicians accept as valid) are valid iff they are verifiable with polynomial resources; therefore it is concerning that complexity theorists do not similarly impose a polynomial constraint upon simulation verification resources (or even upon verification of membership in complexity classes).
Disambiguation [2] the term simulate must be defined with scrupulously realistic respect for polynomial restrictions upon verification resources.
————
It’s true that disambiguation means more work (or … does … it … ?), and surely we believe the truth will come to us, and hopefully scrupulous attention to verification will speed the day that truth arrives at our door! 🙂
Dear Gil, thank you for the review. It seems to me that the operating system is not a very good metaphor. We may see in S. Weinberg’s or other books that QFT is the inevitable relativistic form of quantum theory. We do not consider special relativity as some application program running on a theory with action at a distance or something like that. Why should we talk about QFT in such a way? I myself used “QFT” in my comments as a synonym for “relativistic quantum theory”.
Dear Alexander, this metaphor was useful for Scott and me to reach the common ground that I quoted. (I certainly am aware that it is problematic.) We refer by QM to the very wide framework of quantum probability. With this view of what QM refers to, QFT (or relativistic quantum theory) can be seen as a theory which fits in the framework of quantum probability (in the sense that it does not violate the principles of quantum probability), so (in principle, at least) QFT can be written in the language of quantum probability. I think that this is a reasonable “framing” of the situation, especially in the context of the QC debate.
Dear Gil, it seems the idea of quantum probability was introduced into the discussion by Greg, but I did not quite realize then what it means in particular. Could it be applied to individual systems/qubits?
Alexander, thinking about QM as a theory of quantum probability is a very pleasant interpretation (especially to a mathematician). Greg prefers this point of view; see A concise introduction to quantum probability, quantum mechanics, and quantum computation (36 pages) by Greg Kuperberg. (Itamar Pitowsky also preferred to think about QM in this way; see this post: Probability in physics, where does it come from?) Both Itamar and Greg influenced my own way of thinking about QM.
Gil, perhaps some Gödel’s Lost Letter readers will be interested in the latest post on your weblog Combinatorics and More, titled Meeting with Aram Harrow, and my Lecture on Why Quantum Computers Cannot Work.
Thomas Vidick’s fine new quantumrelated weblog My CQState has an essay titled A Quantum PCP theorem? that (as it seems to me) nicely complements your MIT talk as follows:
Then an emerging (very limited) skeptical model is
This is another way of saying that the UC Berkeley Simons Institute workshop for Spring 2014, titled Quantum Hamiltonian Complexity, surely will be a very lively (and important) workshop, as appreciation increases that (in essence) all scalable quantum information technologies require a deeper appreciation than we presently possess of the dynamical and informatic subtleties that are associated to quantum field-theoretic vacuum interactions.
Thanks for linking to my post and the other interesting links and ideas, John. Indeed, a week ago I gave a talk at MIT, with very lovely participation, and met Aram for the first time (and also met quite a few old friends and colleagues, and a few new ones).
The quantum PCP question that you mentioned is indeed a great question. Its informal description on the Simons site is: “Does the exponential complexity of general quantum systems persist at high temperature?”, which I do not entirely understand (I suppose exponential complexity refers to the gap between QMA and NP, but I am not sure). This is quite close in spirit to a natural problem from my point of view: “Does the exponential complexity of general quantum systems persist at high temperature, without quantum fault-tolerance?” Here “high” just means bounded away from zero (as in Unruh’s old paper). Of course, a main issue for me is to describe formally “general quantum systems without quantum fault-tolerance.”
Both in my lecture and in private discussions, we came back to my “smoothed Lindblad evolutions.” At the time, Aram and I were a bit too tired to pursue them further here at the end of the debate, but now we may come back to them.
A long and interesting discussion about smoothed Lindblad evolutions took place on my blog.
Below are links to a videotaped lecture in two parts entitled “Why quantum computers cannot work,” recorded at the Simons Institute for the Theory of Computing in December 2013, and two additional videos: a short talk on topological quantum computers and a twelve-minute overview of this whole movie project. The lectures draw on my year-long debate with Aram here on GLL, and, of course, on my earlier papers, which also motivated the debate itself.
Here are the links: Overview, Part I, Part II, Topological QC.
A related blog post, “Why quantum computers cannot work: the movie!”, appeared on Combinatorics and More, followed by a post by Steve Flammia, Quantum computers can work in principle, on The Quantum Pontiff.
Without regard for differences of opinion regarding the feasibility (or not) of scalable quantum computing, please let me say that the above resources are terrific, and that I admire especially the concluding summary (minute 13:16 of “Why Topological Quantum Computers Cannot Work”)
Thank you, Gil Kalai, and Aram Harrow too, and all of the many folks who contribute so generously and creatively to quantumcomputing discourse.
And thank you especially, Dick Lipton and Ken Regan, for GLL’s sustained hosting of this wonderful discourse!
Let me mention (in two comments) two topics which we did not discuss in the debate but are potentially quite relevant. One topic is the relevance of quantum computation and information to the black hole information paradox. The second topic is BosonSampling.
The firewall black-hole paradox is discussed, e.g., here and here. (There are related exciting issues in the classical mathematical theory of gravitation, and moving to the quantum theory makes the discussion more hypothetical and philosophical but still very exciting.) There is a recent attempt by Harlow and Hayden to resolve the AMPS firewall paradox using computational complexity. This is quite problematic, since the fact that something is not efficiently computable does not mean that we do not have the information needed for the computation. (Harlow and Hayden mention in their paper various explanations around this, but those are not so convincing.) Since the issue is essentially about information and not about computation, we may try to assume that the computational power (for everybody: Alice, Bob, Charlie…) is unlimited, and only information is limited.
When we take this approach we need to pay a price: we cannot talk about closed quantum systems. For those, everybody can (in principle) compute everything from the initial conditions, and if we allow this type of computation the paradox (along with many other basic physical insights) goes away. What we can do is talk about open quantum systems, namely to assume, to start with, that we consider a unitary evolution on a large Hilbert space where we have access only to a smaller subspace. (So this is the setting of noisy quantum evolutions.) The smaller Hilbert space will represent a good chunk of the physical situation outside and inside the black hole.
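The open-system setting described above is exactly the textbook partial-trace construction, and a tiny two-qubit example makes it concrete (the entangling unitary below is an arbitrary toy interaction, chosen only for illustration): the global evolution is perfectly unitary, yet the observer with access only to the subsystem sees a noisy, mixed state.

```python
import numpy as np

def partial_trace_env(rho, d_sys, d_env):
    """Trace out the environment from a (d_sys*d_env)-dim density matrix."""
    return np.trace(rho.reshape(d_sys, d_env, d_sys, d_env),
                    axis1=1, axis2=3)

# System qubit and environment qubit both start in |0>.
psi = np.kron([1.0, 0.0], [1.0, 0.0])
rho = np.outer(psi, psi)

# A global unitary entangling system and environment: a Hadamard on
# the system followed by a CNOT onto the environment.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
U = CNOT @ np.kron(H, np.eye(2))
rho = U @ rho @ U.T          # global state: the pure Bell state

rho_sys = partial_trace_env(rho, 2, 2)
# Although the global state is pure, the observer who sees only the
# system finds it in the maximally mixed state I/2.
```

This is the sense in which "unitary evolution on a large Hilbert space with access only to a smaller subspace" is the same thing as a noisy quantum evolution.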
I see reasons to hope, based on standard assumptions on quantum noise, that the black-hole firewall paradox remains equally powerful (even with stronger informational foundations) for open quantum systems as well. (For example, we can send many Alices rather than one to overcome the noise, and we can take quantum computation for granted.) Maybe this is already known.
But perhaps, conditioned on a no-quantum-fault-tolerance principle, we have a chance to resolve the paradox. In particular, the hypothesis of positive correlations for information leaks for entangled subsystems seems relevant. If so, then in turn we may gain insight into the quantitative aspects of this hypothesis. Note that my no-quantum-fault-tolerance conjectures are of an information-theoretic nature, not of a computational-complexity nature.
In a series of recent papers (arXiv:1306.0533, arXiv:1402.5674, arXiv:1403.5695) Leonard Susskind (and collaborators) studied the paradox and posit that it goes away under an appropriate understanding of the ER=EPR principle of Maldacena and Susskind. A crucial aspect of the argument is that Alice does not have “sufficient quantum-computational power,” but rather demonstrates “ordinary interaction with the environment.” Thus a crucial ingredient of Susskind’s endeavor is to find a mathematical distinction between “ordinary interaction with the environment” and “quantum computational power.” Such a mathematical distinction (if possible at all) is, of course, at the heart of my debate with Aram. But instead of making a distinction that will apply just to poor Alice, perhaps the simplest principle to consider is simply that “quantum computational power” beyond classical computation is universally impossible.
BosonSampling is a proposal by Scott Aaronson and Alex Arkhipov to demonstrate “quantum supremacy” without quantum faulttolerance. There are several experimental avenues towards this goal with fruits expected in the next few years.
I truly like BosonSampling as a theoretical idea, and back in 2010 I even devoted a post on my blog to discussing it. However, I did not think that BosonSampling would exhibit “quantum supremacy” without quantum fault-tolerance. In fact, I thought, until Scott explained to me otherwise, that this was the common opinion. This perhaps explains why we almost did not mention BosonSampling in our debate. (John Sidles mentioned it in this thread.) Recently, Guy Kindler and I wrote a paper about noise sensitivity and BosonSampling. The paper demonstrates that for viable BosonSampling with n bosons the level of noise should be below 1/n per boson, and it demystifies (above this level of noise) the claims of “quantum supremacy” via BosonSampling. As we learned, this 1/n threshold is consistent with some researchers’ expectations, and there are strong beliefs that the noise level can be pushed below 1/n per boson for 20-30 bosons. So we need to wait and see.
The work by Guy Kindler and me relies on a theory of noise sensitivity that was developed in the late 90s by Itai Benjamini, Oded Schramm, and myself. BosonSampling and noise sensitivity are briefly mentioned in the second part of my videotaped lecture on quantum computers. (Click here to jump to the relevant place.) A pleasant more recent feature of my work with Guy is the ability to make concrete computations via Fourier-Hermite-like expansions.
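The 1/n threshold admits a crude back-of-envelope illustration (this is only the leading heuristic of the noise-sensitivity picture, not the Fourier-Hermite analysis of the paper itself): under independent per-boson noise of rate eps, contributions to the ideal n-boson distribution that depend on all n bosons are damped by a factor (1-eps)^n, so any fixed noise rate wipes them out exponentially, while a rate well below 1/n leaves them essentially intact.

```python
def top_degree_damping(eps, n):
    """Heuristic damping factor (1 - eps)**n applied to the parts of
    the ideal n-boson distribution that depend on all n bosons, under
    independent per-boson noise of rate eps."""
    return (1.0 - eps) ** n

n = 100
fixed_noise = top_degree_damping(0.1, n)          # rate far above 1/n
below_threshold = top_degree_damping(0.1 / n, n)  # rate well below 1/n
```

With eps fixed at 10% the damping factor for n = 100 is below 0.0001, while with eps = 0.001 it stays above 0.9, which is the quantitative sense of "noise below 1/n per boson."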
A paper by Greg Kuperberg and me, Contagious error sources would need time travel to prevent quantum computation, gives a result in the positive direction! It shows that fault-tolerant quantum computing based on teleportation can work against a large class of correlated noise.
Here is the arXiv version of my paper with Guy Kindler on noise sensitivity and BosonSampling (mentioned above) and here is a videotaped presentation.
I forgot to mention that Greg and I heavily rely on a method developed by Manny Knill.
Last month we had here at HUJI a workshop sponsored by the American Physical Society and our Racah Institute of Physics entitled “Quantum computers: achievable reality or unrealistic dream?”, featuring me and Professor Nadav Katz. My lecture was entitled “What can we learn from a failure of quantum computers” and Nadav’s lecture was on “Quantum information science – the state of the art.” Here is a blog post about it and here is a video of the event:
Thank you Gil, for posting links to your workshop “Quantum computers: achievable reality or unrealistic dream?” … I’ll review it just as soon as The Quantum Pontiffs finish their survey of another fine workshop: “QIP 2015”.
Please let me note too that Scott Aaronson’s Shtetl Optimized has just posted an essay “Memrefuting” that critiques — not very sympathetically — recent exceedingly interesting (to me anyway) computation/simulation papers like “A coherent Ising machine with quantum measurement and feedback control” (arXiv:1501.07030 [quantph]) and “Memcomputing NPcomplete problems in polynomial time using polynomial resources and collective states” (arXiv:1411.4798 [cs.ET]).
It’s challenging to write sympathetically and seriously about radical quantum ideas … invariably the flaws are more evident than the virtues, and so the elements of comedy are present. The following appreciation attempts both comedy and sympathy, in assessing these works.
——————
Fans of the Monty Python sketch “The Annoying Peasant” will appreciate the eerie echoes of its themes that are evident in this week’s Shtetl Optimized essay “Memrefuting”:
Resolved for debate The scientific literature provides abundant “annoying peasant” arguments that effectively refute the “Arthurian” quantum computing mythos.
For the affirmative Wikipedia’s article “quantum superposition” reminds us that
There is at present no strong scientific consensus regarding the origins of quantum nonsuperposition, and perhaps this is why the phrase “physical law” does not appear anywhere in Wikipedia’s account.
A regrettable lack On Physics StackExchange the highest-rated question (by far) regarding quantum superposition issues is Andrea Amoretti’s “Linearity of quantum mechanics and nonlinearity of macroscopic physics” (2010). Regrettably (and surprisingly to me), Amoretti’s question has attracted no high-quality answers to date (and it is specially moderated for this reason).
Here is Amoretti’s Physics StackExchange question in its entirety:
Whence this lamentable shortage of good answers to Amoretti’s Question? The world wonders!
The question is entirely reasonable and natural (as it seems to me): any student might ask it … and perhaps most do.
Surveying the literature When we seek good answers to Amoretti’s Question in the literature, we are led to:
Here “linear” is a shorthand for “dynamical flows that are isometries of largedimension complex Hilbert statespaces”, and “strong” is a shorthand for “mathematically natural and physically verified”.
A considerable literature that supports Observation One — and it is supporting plenty of prospering 21st century academic and commercial enterprises too! — traces back twenty-six years, to Asher Peres’ critique “Nonlinear variants of Schrödinger’s equation violate the second law of thermodynamics” (PRL, 1989), to which Steven Weinberg’s concluding response (in “Weinberg replies”, PRL, 1989) has proved to be prescient (as I read it anyway)
Observation: An expanding universe of 21st-century quantum dynamical simulators has in practice succeeded in “generalizing the quantum definition of entropy” (in Weinberg’s phrase). In this regard the free-as-in-freedom manuals and online videos of the Schroedinger Corporation are commended to Shtetl-Optimized readers, as presenting practical recipes for predicting a vast array of entropy-associated parameters of complex quantum dynamical systems (free energies in particular).
Where’s the mathematical beef? Of the 21st century’s fourteen Fields Medal winners (to date), the work of at least eight — Terence Tao and Wendelin Werner (in 2006); Elon Lindenstrauss, Stanislav Smirnov, and Cedric Villani (in 2010); and Artur Avila, Martin Hairer, and Maryam Mirzakhani (in 2014) — presents quantum researchers (younger ones especially) with fresh mathematical universes that help in appreciating and extending the burgeoning capabilities of 21st-century post-Hilbert quantum dynamics.
A working answer to Amoretti’s Question: Practical experience teaches that macroscopic observations of a classical and thermodynamical world emerge as the mathematically natural unravelling of dynamical trajectories, as the integral curves of symplectomorphic flows, upon microscopic state-spaces that effectively are low-dimension algebraic varieties.
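A toy instance of a “low-dimension algebraic variety” inside a Hilbert state-space may help here (a hedged sketch of our own, not the commenter’s construction): the product states of two qubits correspond exactly to the rank-one 2×2 amplitude matrices, an algebraic variety inside the full 4-dimensional state-space, and the nearest point on that variety is obtained by truncating the singular value decomposition to its top term.

```python
import numpy as np

# The product states of two qubits form a low-dimension algebraic variety
# (the rank-one 2x2 amplitude matrices) inside the 4-dimensional Hilbert space.
rng = np.random.default_rng(1)

# A random (generically entangled) normalized two-qubit state.
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
M = psi.reshape(2, 2)  # amplitude matrix c_ij of |psi> = sum_ij c_ij |i>|j>

# The nearest point on the variety is the top rank-one term of the SVD.
U, s, Vh = np.linalg.svd(M)
product = np.outer(U[:, 0], Vh[0, :])  # closest normalized product state

# Fidelity retained by pulling back onto the variety (at least 1/2 for two qubits,
# since the larger of the two squared singular values dominates).
fidelity = s[0] ** 2 / np.sum(s ** 2)
print(0.5 <= fidelity <= 1.0)  # True
```

The same rank-truncation idea, iterated across many sites, underlies tensor-network state-spaces, which is one mathematically concrete reading of simulating quantum dynamics on non-Hilbert algebraic varieties.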
Summary of the affirmative argument: Every quantum experiment ever conducted — including experiments regarded as precision tests of the linearity of quantum mechanics — can be accurately simulated with classical resources by dynamical models that can be mathematically appreciated as the natural pullback of the 20th century’s Hilbert dynamics onto the 21st century’s non-Hilbert algebraic state-spaces.
Practical realities: These transformational capabilities are already being realized in academic and commercial enterprises that are the 21st century’s best hope (as it seems to me) of creating family-supporting jobs in the abundance that the next generation of quantum researchers will require.
The Annoying Peasant’s last word: In Pythonesque language, the best hopes of young researchers for prosperous family-supporting careers are grounded in 21st-century notions of universality, naturality, physicality, practicality, and responsibility … not some farcically flat 20th-century state-space of mythically many dimensions!