
Quantum Supremacy and Complexity

April 22, 2016


An AMS article by Gil Kalai updates his skeptical position on quantum computers

[Photo of Gil Kalai, cropped from Rothschild Prize source]

Gil Kalai is a popularizer of mathematics as well as a great researcher. His blog has some entries on Polymath projects going back to the start of this year. He has just contributed an article to the May AMS Notices titled, “The Quantum Computer Puzzle.”

Today we are happy to call attention to it and give some extra remarks.

The article includes a photograph of Gil with Aram Harrow, who was his partner in a yearlong debate we hosted in 2012. We say partner because this was certainly more constructive than the political debates we have been seeing this year.

We’ve missed chances to review some newer work by both of them. In 2014, Gil wrote a paper with Greg Kuperberg on quantum noise models and a paper with Guy Kindler relating to the sub-universal BosonSampling approach. Among much work these past two years, Aram wrote a position paper on the quantum computing worldview and this year a paper with Edward Farhi. The latter reviews possible ways to realize experiments that leverage complexity-based approaches to demonstrating quantum supremacy, a term they credit to a 2011-12 paper by John Preskill.

Supremacy

Quantum supremacy has a stronger meaning than saying that nature is fundamentally quantum: it means that nature operates in concrete ways that cannot be emulated by non-quantum models. If factoring is not in {\mathsf{BPP}}—let alone randomized classical quadratic time—then nature can do something that our classical complexity models need incomparably longer computations to achieve. We like to say further that it means nature has a “notation” that escapes our current mathematical notation—insofar as we use objects like vectors and matrices that have roughly the same size as classical computations they describe, but swell to exponential size when we try to use them to describe quantum computations.
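To make “swell to exponential size” concrete, here is a minimal Python sketch of our own (nothing from Gil’s article) tallying the classical memory a dense {n}-qubit state vector requires:

    # A back-of-the-envelope sketch: classical memory for a dense n-qubit
    # state vector, assuming 16 bytes per complex amplitude (two 64-bit floats).
    def statevector_bytes(n_qubits: int) -> int:
        return (2 ** n_qubits) * 16  # 2^n amplitudes

    for n in (10, 30, 50):
        print(f"{n} qubits: {statevector_bytes(n):,} bytes")
    # 10 qubits: ~16 KB; 30 qubits: ~16 GiB; 50 qubits: ~18 petabytes.

Nothing deep is happening in this bookkeeping, but it is why brute-force classical emulation is hopeless well before 100 qubits.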

Aram’s paper with Farhi leverages complexity-class collapse connections shown by Michael Bremner, Richard Jozsa, and Daniel Shepherd, and an earlier paper by Sergey Bravyi, David DiVincenzo, Roberto Oliveira, and Barbara Terhal. For instance, via the former they observe that if the outputs of certain low-depth quantum circuits can be sampled classically to good approximation, then the polynomial hierarchy collapses to level 3. This remains true even under an oracle that permits efficient sampling of a kind of quantum annealing process. This is arguably more of a hard-and-fast structural complexity collapse than factoring being in {\mathsf{BPP}} would be.
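In rough symbols, and as our paraphrase rather than a verbatim statement of the Bremner–Jozsa–Shepherd theorem:

    \displaystyle \text{efficient classical sampling of IQP-circuit outputs} \;\Longrightarrow\; \mathsf{PH} \text{ collapses to its third level.}

Here IQP denotes their commuting-gate (“instantaneous quantum polynomial-time”) circuits, a restricted and seemingly feeble class of quantum computations.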

Quantum supremacy also entails that the quantum systems be controllable. Preskill’s paper raises concrete avenues besides ones involving asymptotic complexity classes. Gil’s article picks up on some of them:

  1. Attempts to create small universal quantum circuits with up to “a few tens of qubits.”

  2. Attempts to create stable logical qubits based on surface codes.

  3. Attempts to have Boson Sampling for 10–50 bosons.

  4. Attempts to create stable qubits based on anyonic states.

  5. Attempts to demonstrate quantum speedup based on quantum annealing.

As Gil notes, the last has been undertaken by the company D-Wave Systems. This has come amid much controversy but also admirable work and give-and-take by numerous academic and corporate groups.

Some Remarks

Our first remark is that Gil’s paper highlights a nice example of how computational complexity theory informs and gives structure to a natural-science debate. Aram and others have done so as well. We especially like the connection between prevalent noise and bounded-depth circuits vis-à-vis low-degree polynomials. We believe the AMS Notices audience will especially appreciate that. We’ve wanted to go even further and highlight how Patrick Hayden and Daniel Harlow have proposed that complexity resolves the recent black hole firewall paradox.
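To give the flavor of that connection, in symbols of our own choosing rather than notation from the article: applying independent noise at rate {\epsilon} to the inputs of a Boolean function {f} damps its Fourier expansion degree by degree,

    \displaystyle T_{\rho}f \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\,\widehat{f}(S)\,\chi_S, \qquad \rho = 1 - 2\epsilon,

so each degree-{d} Fourier coefficient is multiplied by {\rho^{d}} and only the low-degree part of {f} survives appreciable noise. That is the sense in which prevalent noise pushes bounded-depth behavior toward low-degree polynomials.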

Our second remark is that this is still largely a position paper—the arguments need to be followed into the references. For example, the fourth of Gil’s five predictions reads:

No logical qubits. Logical qubits cannot be substantially more stable than the raw qubits used to construct them.

On the face of it, this is just the negation of what the quantum error-correcting codes in the fault-tolerance theorem purport to do. Gil follows with a section that counters quantum fault-tolerance in stronger fashion and with more technical detail, but it still asserts positions.

Our third remark is that the nub—that “when we double the number of qubits in [a quantum] circuit, the probability for a single qubit to be corrupted in a small time interval doubles”—is presented not as new modeling but “just based on a different assumption about the rate of noise.” We think a fundamental provenance needs to be given for the increased noise rate.
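To pin down the distinction in the simplest independent-noise picture (a schematic of our own, not Gil’s formal model): under the standard assumption the per-qubit rate {r} stays fixed, so only the chance that some qubit errs grows with the number {n} of qubits, whereas the pessimistic reading has the per-qubit rate itself growing:

    \displaystyle \Pr[\text{at least one of } n \text{ qubits corrupted}] \;=\; 1 - (1-r)^{n} \;\approx\; nr \quad \text{(standard: } r \text{ fixed)},

    \displaystyle r(n) \;\approx\; c\,n\,r_{0} \quad \text{(pessimistic reading: the per-qubit rate scales with } n\text{)}.

The question we are pressing is what physical mechanism would make the second line, rather than the first, the right accounting.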

For instance, a silly way to “double” a circuit is to consider two completely separate systems {C_1} and {C_2} to be one “circuit.” That alone cannot jump up the rate, so what Gil must mean is that this refers to doubling up when {C_1} and {C_2} already have much entanglement. But then the increased rate must be an artifact of entangling them, which to our mind entails a cause that proceeds from the entanglement. Preskill on page 2 of his paper tabs the possible cause of supremacy failing as “physical laws yet to be discovered.” We’ll come back to this at the end.

The Puzzle

Gil puts the overall issue as being between two hypotheses which he states as follows:

  • Optimistic Hypothesis: It is possible to realize universal quantum circuits with a small bounded error level regardless of the number of qubits. The effort required to obtain a bounded error level for universal quantum circuits increases moderately with the number of qubits. Therefore, large-scale fault-tolerant quantum computers are possible.

  • Pessimistic Hypothesis: The error rate in every realization of a universal quantum circuit scales up (at least) linearly with the number of qubits. The effort required to obtain a bounded error level for any implementation of universal quantum circuits increases (at least) exponentially with the number of qubits. Thus, quantum computers are not possible.

I have a position, which Dick and I have discussed, that sounds midway but is really a kind of pessimism because it might make nobody happy:

  • “Medimistic” Hypothesis: A full binary tree of {n-1} Controlled-NOT gates on {n} qubits, possibly with 1-qubit Hadamard and {T}-gates in-between, takes {\tilde{\Theta}(n^2)} not {\tilde{O}(n)} effort in our world. So the effort required to obtain a bounded error level for any implementation of universal quantum circuits increases quadratically with the number of qubits. Thus, quantum computers are possible but the most general ones are sandbagged.

This would put factoring in {O(n^4)} quantum time. That would still leave public-key cryptography as we know it under a cloud, though it might retain better security than Ralph Merkle’s “Puzzles” scheme. But the quadratic scale would be felt in all general quantum applications. It would leave everyone—“makers” and “breakers” alike—operating under the time and hardware and energy constraints of Merkle’s Puzzles, which we recently discussed.
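For orientation, the quadratic gap in Merkle’s Puzzles (the standard accounting, nothing particular to Gil’s article) is

    \displaystyle \text{work for the honest parties} \;=\; O(N), \qquad \text{work for the eavesdropper} \;=\; \Theta(N^{2}),

so under the “medimistic” sandbag, quantum code-makers and code-breakers alike would be living with a similarly modest, merely quadratic, kind of advantage.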

Thus we have a “pun-intended” opinion on how the “Puzzle” in Gil’s title might resolve. However, I have not yet solved some puzzles of my own for marshaling the algebraic-geometric ideas outlined here to answer “why quadratic?” They entail having a mathematical law that burns in the kind of lower-bound modeling in this 2010 paper by Byung-Soo Choi and Rodney van Meter, in which they prove an {n^{1/d}} depth lower bound for emulating such CNOT trees with limited interaction distance, where {d} is a dimensionality parameter. This brings us to our last note.
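The shape of such a bound is easy to motivate (our gloss, not their proof): with nearest-neighbor gates in a {d}-dimensional layout, {t} time steps can spread information only {O(t)} sites in each direction, so only a region of {O(t^d)} qubits can interact; reaching all {n} qubits therefore forces

    \displaystyle t \;=\; \Omega\!\left(n^{1/d}\right).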

The Onus of Novelty

The main issue—which was prominent in Gil and Aram’s 2012 debate—is that everything we know allows the quantum fault-tolerance process to work. Nothing so far has directly contradicted the optimistic view of how the local error rate {r} behaves as circuits scale up. If engineering can keep {r} below a fixed constant rate {r_0} then the coding constructions kick in to create stable logical qubits. If some {r'_0 > r_0} is a barrier in our world, what might the reason be?
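The quantitative content of “kick in” is the threshold theorem. In its concatenated-code form (the textbook statement, nothing special to this debate), once {r < r_0}, using {k} levels of concatenation drives the logical error rate down doubly exponentially,

    \displaystyle r_{\mathrm{logical}} \;\lesssim\; r_{0}\left(\frac{r}{r_{0}}\right)^{2^{k}},

while the overhead in qubits and gates grows only polylogarithmically in the target accuracy. The pessimistic hypothesis denies that the premise {r < r_0} can be maintained as systems scale, which returns us to the question of why.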

It could be that this is a condition of our world, perhaps having to do with the structure of space and entanglement that emerged from the Big Bang. Gil ends by arguing that quantum supremacy would create more large-scale time-reversibility than we observe in nature. It would also yield emulations of high-dimensional quantum systems on low-dimensional hardware of kinds not achieved in quantum experiments to date—on top of our long-term difficulties of maintaining more than a handful of qubits.

This hints that an explanation could be as hard as explaining the arrow of time, or that {r'_0} could be a fundamental constant like others in nature for which string theory has trended against unique causes. Still, these other quantities have large supporting casts of proposed theory. If the explanation has to do with the state of the world as we find it, then how do initial conditions connect to the error rate? More theory might indicate a mechanism by which initial conditions matter, and should at least help indicate why {r} isn’t simply an engineering issue.

Thus there is still an onus of novelty for justifying the pessimistic position. It may need to propose a new physical law, or a deepened algebraic theory of the impact of spatial geometry that retains current models as a limit, or at least a more direct replacement for what Gil’s article tabs as “the incorrect modeling of locality.”

Open Problems

How effective is the “Puzzle” for guiding scientific theory?

We note that the much-acclaimed and soberly evaluated answer on quantum computing by Canada’s Prime Minister, Justin Trudeau, had this context: A reporter said, “I was going to ask you to explain quantum computing, but… [haha]…when do you expect Canada’s ISIL mission to begin again…?” Trudeau ended his spiel by saying, “Don’t get me going on this or we’ll be here all day. Trust me.” Would he have been able to detail the challenges?

Update: Gil released an expanded version on the arXiv.


[added qualifier on Choi-van Meter result, added main sources for Farhi-Harrow]

Comments
  1. April 24, 2016 1:00 pm

💡 ❗ Congratulations on initiating and hosting this great/historical/prescient debate on noise and the scalability of QM computing systems by Harrow vs. Kalai, which is indeed turning into a huge linchpin of the field, but possibly still downplayed by some. There was a recent claim of a “scalable Shor system,” but my thinking is that the label “scalable” is premature here. Trudeau maybe is familiar with QM based somewhat on the world-famous D-Wave, which arguably has demonstrated the most “scalable” QM systems on the planet so far, but by a near-radically different approach of adiabatic computing that is not to the taste of many QM scientists, or maybe of the mainstream research which focuses on “clean” qubits. Coincidentally I just posted on related areas; please drop by for many of the latest links and developments:

    QM computing and eg the new $1B EU initiative

  2. April 26, 2016 11:39 am

    Thanks for the lovely post and kind words, Ken! Regarding the “burden of proof” issue, I must admit this always puzzled me. I couldn’t understand the claim that the optimistic assumption that with feasible engineering effort we can achieve small error rate per qubit (for larger and larger systems in the worst case) requires no justification, but the pessimistic assumption that the required engineering effort explodes with the number of qubits does require justification. (To be clear, I thought that both these assumptions require justification.)

One thing that has changed since the debate ended is that my paper does offer an explanation (actually, based on computational complexity insights) for the pessimistic hypothesis when you start with the very standard noise model of statistically-independent constant error-rate per qubit. This is related to the analysis of small-scale quantum systems, like in my work with Guy Kindler.

  3. April 26, 2016 10:38 pm

    I’ve never kept up with all the papers on quantum error correction, and I definitely don’t claim to understand the gory details in most of them, but the ones I’ve seen all seem like they’re implicitly assuming an absurd premise as if it made so much sense that it didn’t even need stating: that the so-called “error rate” can ever be less than 1.

    The proportion of gate operations that will introduce some amount of error is always 1, because the operations are done by inexact devices and error is being continually introduced by the environment, even when no (non-identity) operation is occurring. The relevant metric is not what portion of the operations “failed”, but how much error is introduced, and the distribution and correlations of that error. Some error will be perfectly correlated every time, due to the limits of calibration; some will be effectively independent; some will have medium- and long-time correlations or correlations depending on the particulars of some wavefunction.

    Maybe there’s some obvious mapping between that and the model where a wavefunction is either perfect or completely wrong, and it could very well be that I’ve simply missed it, but barring something like that, I find that I have difficulty taking most discussion of quantum error correction seriously. (That’s not even to mention the bogus assumptions people seem to make when talking about error correction for AQC or QA.) Where that leaves us on the optimism vs. pessimism scale, I don’t know, because it seems like people haven’t really looked into it enough to know much one way or the other.

    • Matthew Cory permalink
      April 27, 2016 12:23 pm

      The entire enterprise is based on a speculation about modeling quantum systems but you can’t run the argument backwards. Quantum simulations have a high Turing complexity but this doesn’t mean a non-classical reality requires some hypothetical exponential computation on psi-ontic wave functions. We only see single outcomes in quantum mechanics and predict the expected value of observables. There are strictly no observed probabilities or “quasi-probabilities.” There are no simple “amplitudes” but derived quantities. We get probabilities by assuming particles but there are no classical particles. All of this stuff goes beyond what is strictly empirical physics. Even more strongly stated than Gil’s argument about noise, we can simply say that theorists may be taking QM abstractions too literally. The debate gets quite far ahead of itself.

  4. April 27, 2016 4:34 pm

Apologies for typos; typing from Android. I have very basic problems with quantum computations. The basics are in the Schrödinger equations, where we have a wavefunction depending on spatial coordinates. There are a limited number of them. I do not see where you get an exponential number of superpositions. The implicit assumption here is that the creation operator adds dimensions. For that you need to specify what a particle is. Under any reasonable assumption there are only a finite number of particles in the volume, unless you assume nature operates on real real numbers. In other words, you assume that the infinity introduced in Cantor’s argument about uncountable reals is a physical reality and not an artificial construction to simplify calculations. Actually not even simplifying, but a seemingly consistent mathematical construction. Any real physical noise, I mean real random physical noise, just breaks down this construction. You had better ask the more basic questions of what a particle is and what collective entanglement effects are, directly and not indirectly through the complexity issues. Although this might seem to be the only tractable way to come up with experiments testing the basic issues of quantum mechanics. Just a crackpot thought. We have scalar, vector, and chromatic particles. Isn’t it just an expansion around, say, some singularity representing the world?

    • April 27, 2016 4:37 pm

I mean that the minimal action principle is simply a wrong construction, though good enough to explain currently visible effects.

    • Matthew Cory permalink
      April 27, 2016 6:23 pm

I believe we are simply witnessing the Schrödinger’s cat fallacy by QM advocates. They are pretending probabilities are like separate realities. In fact, QM doesn’t need probabilities any more than other branches of science. We only observe single outcomes and QM just predicts the expected value of observables. Essential randomness is a meaningless term and Bohr was right all along. QM is just a theory about observables.

Crackpots like John Bell have just confused us with arguments about entanglement and hidden variables but, contrary to amateur opinions, the Bell test experiments didn’t strictly say anything about randomness or locality. It was just an attempt to prove QM wrong and they couldn’t do it. Quantum is not classical, and the sooner people face it, the faster we can move on.

      People are going beyond what is empirically known in QM. That’s the major lie involved with QCs.

Note: Cantor’s mathematics is not even self-consistent classical logic, and Solomon Feferman has addressed this point. Ignorant people don’t read the logic literature and pretend to know things about mathematical foundations. Norman Wildberger can’t even get a response to his published points about analysis. It’s all degenerate discreteness and the proofs are a sideshow hiding finite arguments. The ultrafinitists are really kicking out the kooks.

      • April 28, 2016 12:46 am

Matt, I think we are in opposite camps. I’m trying to find the limits of knowledge, not denying current findings. It’s like a question to experts: if they cannot answer in 5 minutes, they do not really know. But that is their internal indication of their ignorance, and it is up to their will to progress. Haute couture also has nothing to do with reality but still persists. Somebody likes it.

      • Matthew Cory permalink
        April 28, 2016 8:05 am

        Camps are the problem because they legitimize crude appeal over soundness. Talk about haute couture! There is nothing as stupid as pointless labels.

Fuddy-duddies are denying current findings, and that was my point. Look at how many people are still trying to prove non-locality via Bell. After the recent loopholes were closed for local hidden variable theories, we have been making experimental progress disproving non-local hidden variables (Leggett). Amateurs don’t understand what debates about hidden variables even amount to, and they exaggerate the importance of such arguments.

        You can’t make up the rules and then talk about the limits of knowledge and that also goes for the rules of proof. Assuming a self-contradictory premise like complete infinity was just Cantor’s religious Scholasticism and he made no effort to hide his intentions. Nobody cares about the limits of fantasy.

      • April 28, 2016 10:08 am

Look, to advance knowledge you need to come up with a new construction, dwell in it until saturation, and then come up with a new construction. The criterion in physics is clear: a new construction should explain existing knowledge and predict a new experimental finding. There is no such criterion in math; it is only indirect, through the physics. Otherwise it is haute couture with no practical meaning. Just denying a practical explanation is a word exercise.

  5. April 28, 2016 10:16 am

All this discussion is asking whether the quantum world is different from the classical world or not, in particular whether quantum computation is really different from classical computation. I personally like the complexity approach to testing the limits of quantum mechanics. It will eventually saturate. Just making complicated mathematical constructions leads to string theory, a theory about everything and at the same time about nothing. It has no predictive power, meaning we are missing the basic intuition.

    • Matthew Cory permalink
      April 28, 2016 4:03 pm

Quantum systems ARE different from the classical world, but that still doesn’t imply quantum computers. Tests are important but, like fusion, I will not be suckered into waiting (ITER, DEMO, PROTO – beyond 2050). That would be purely a scam. You know, funds aren’t infinite for your little physics hobbies. The burden of evidence is completely reversed.

      • April 29, 2016 2:21 am

You will be waiting for others to do it unless you do it yourself.

      • Matthew Cory permalink
        April 30, 2016 5:33 pm

        Biology is way more important than extremist physics.

  6. May 2, 2016 1:43 am

    Following Ken’s line of thought at the end of the post, we can talk about the minimum error-rate r for a single qubit, and there are related parameters r_k the minimum average error-rate per qubit per computation step in best realization of worst case computation for a quantum circuit with k qubits. Another neat parameter is c the minimum correlation between depolarizing errors for the two qubits approximating a quantum cat states. Of course, one need to be fairly careful in formal definition of these quantities. The optimistic hypothesis asserts that all these quantities are zero. The pessimistic hypothesis indeed implies that r>0 and that r_k increases with k as to make it impossible to realize quantum computation on more than a handful of qubits. (I remember that Ady Stern also raised the objection that if r is not zero then what is it? is it a fundamental constant of nature?)

    We can also talk about u_k the best approximation we can have for a random unitary state for k qubits. It is indisputable (both under the pessimistic hypothesis and under the optimistic hypothesis) that say u_{100} is positive and large. The reason is that realizing a random unitary requires an exponential sequence of computing steps. So the nonnegativity of u_{100} represents a computational insights but it may well be related also to physical insights and constants. (I think people did refer to it in the literature, but I dont remember where.). A similar story can be told about our constants.

  7. May 2, 2016 10:17 am

Dear Professor,

I would like to share with you a serious proof of UP = NP. I hope my work can be useful for you.

    Here is the link

    https://hal.archives-ouvertes.fr/hal-01304025v2/document

In addition, I would like to share with you another link if you are interested in this:

    http://pentian.com/book/fund/2384

I invite you to share these works with other people if you wish,

    Best Regards…

  8. May 9, 2016 10:19 am

Just noticed this quote: “Scott Aaronson, an associate professor at MIT, says… a collection of just 50 qubits operated that way will likely be the first computer to demonstrate “quantum supremacy”—the power to solve a computational problem immensely difficult and perhaps practically impossible for conventional machines.” I wonder if the reporter ignored the reference to Preskill or if Aaronson felt that crediting the term was unnecessary now that it has entered the public domain through this blog. Still, fair credit scrupulously given would seem appropriate at this early stage of nomenclature.

    http://scitation.aip.org/content/aip/magazine/physicstoday/news/10.1063/PT.5.8174?utm_source=Physics%20Today&utm_medium=email&utm_campaign=7075074_The%20week%20in%20Physics%202%E2%80%936%20May&dm_i=1Y69,47N5U,E1NXYI,FCCK9,1

    • May 9, 2016 12:07 pm

      RD, when I edited the 2012 post, although I linked Preskill and another source, I had the impression the term was already common parlance then.

      • May 9, 2016 12:15 pm

Ah, that helps – my mistake! It was the first I’d heard of it, so I guess I’m behind the times. Thanks.

  9. aram permalink
    May 15, 2016 6:09 pm

    Not to relitigate our extensive debate, but I will say a few things:
Physicists love investigating possible deviations from the standard assumptions of physics; e.g., magnetic monopoles have been hunted experimentally for almost a century. But the pessimistic and metamistic hypotheses are at present too vague to be tested empirically.

    The challenge remains to QC skeptics: come up with a theoretical model that makes (a) clear predictions in all cases, (b) is consistent with existing observation, (c) is inconsistent with QCs, and for extra credit, (d) can be tested with experiments that are not “build a million-qubit QC.”

    Gil proposes models such as smoothed noise and no-high-degree-polynomial states (also advocated by John Sidles) which I argue have not been shown to satisfy (a) and (b).

    • June 12, 2016 8:18 am

Aram, what does “metamistic” mean? “Anything except optimistic”? I think QC optimists need to promote their hopes clearly as well, especially after the recent serious attention to this area. That may be even more natural than the demand to point to a reason for skepticism about devices that were often considered earlier as a rather hypothetical mathematical model. Why should the theory of quantum computing have privileges in comparison with, e.g., other challenging ideas and theories, such as different approaches to quantum gravity, time travel, etc.?

Maybe such an argument is slightly less relevant to Gil’s idea of using a hypothetical process to manage the hypothetical devices. Yet, currently, it could be quite a reasonable way to steer toward further analysis of the whole area. I would like to know more about a derivation of the possibility or impossibility of scalable quantum computers developed without intensive use of toy models of QM especially adapted for the description of QC.

  10. May 17, 2016 12:51 am

    It can be useful to point out “what is new” since the 2012 debate with Aram. In the debate I essentially raised the “pessimistic hypothesis” as a possibility and elaborated on some of its conjectural consequences, including some implicit properties (related to error-correlation) and modeling of noise which emerges from it.

Since the debate, based on my paper with Guy Kindler, there is now a good argument in support of the pessimistic hypothesis by analyzing small-scale quantum systems. Our work deals with systems of noninteracting bosons (BosonSampling), but the insights should transfer to small quantum circuits. This work also sheds some light on the question raised in Aram’s first debate post regarding “why classical computers are possible?”

    So when Aram writes: “The challenge remains to QC skeptics: come up with a theoretical model,” he misses a crucial point: my basic model is precisely the same as the standard noise models of the last two decades – constant-rate depolarizing noise with the standard statistical independence assumptions. (It is a bit simpler because for the skeptical direction we can consider just depolarizing noise.) More exotic implicit noise-modeling emerges for larger systems because of the failure to reach the starting point for quantum fault-tolerance.

The question is whether the engineering effort to achieve general circuits with a low level of noise grows moderately with the number of qubits, or explodes already for a small number of qubits. (A useful analogy from the paper is the comparison to computing Ramsey numbers R(n,n).) The fact that small-scale quantum circuits represent a very primitive computational device (in terms of computational complexity) supports the possibility that the engineering effort must explode. This means that the scaling behavior of small quantum systems is a basic scientific question and not “merely” an engineering issue, just as computing the Ramsey number R(7,7) is not “merely” a programming issue.

Much of the progress since 2012 relates also to Aram’s last point: “(d) can be tested with experiments that are not ‘build a million-qubit QC.’”

The remarkable progress witnessed during the past two decades in the field of experimental physics of controlled quantum systems places the decision between the pessimistic and optimistic hypotheses within reach. You certainly do not need to build a million-qubit QC for that. Ken mentioned a few avenues: attempts to create small universal quantum circuits with up to “a few tens of qubits,” attempts to create stable logical qubits based on surface codes, and attempts to have Boson Sampling for 10–50 bosons. Each direction (and a few more) represents quite a few experimental groups working in various ways. Based on the optimistic view and the experimental progress of recent years, some people regard a definite demonstration of “quantum supremacy” as imminent. I predict a decisive failure for all of these attempts to demonstrate quantum supremacy, or very stable logical qubits, and also that this failure will be witnessed for small systems.

Perhaps a good summary of my current understanding is what I offer in the paper:

    Understanding quantum computers in the presence of noise requires consideration of behavior at different scales. In the small scale, standard models of noise from the mid-90s are suitable, and quantum evolutions and states described by them manifest a very low-level computational power. This small-scale behavior has far-reaching consequences for the behavior of noisy quantum systems at larger scales. On the one hand, it does not allow reaching the starting points for quantum fault tolerance and quantum supremacy, making them both impossible at all scales. On the other hand, it leads to novel implicit ways for modeling noise at larger scales and to various predictions on the behavior of noisy quantum systems.

    • aram permalink
      June 12, 2016 2:32 pm

      my basic model is precisely the same as the standard noise models of the last two decades – constant-rate depolarizing noise with the standard statistical independence assumptions. (It is a bit simpler because for the skeptical direction we can consider just depolarizing noise.)

      Ok, but in the standard noise model, depolarizing noise isn’t something that just gets pulled out of thin air. e.g. for ion traps, noise comes from spontaneous emission, various laser imperfections, heating or voltage fluctuations from the nearby electrodes, etc. These then give rise to some effective noise process for the qubits, with an associated noise rate.

If I understand your model, it seems that when there are more qubits either this noise rate increases, or there are new Kraus operators that affect multiple qubits. It is this prediction specifically that seems central to the pessimistic (or metamistic or whatever) hypothesis, and it is this prediction which I don’t know how to relate to any proposed physical mechanism.

    • June 13, 2016 11:33 pm

      Please consider this short comment to be the outline of a longer comment, that regards QED as a model for both the universe of “Quantum Supremacy” (of John Preskill, Aram Harrow, and many others) and the universe of “Quantum Comity” (which is a QED universe in which Kalai’s postulates hold).

      Here’s the reasoning by which the reconciliation is achieved. Compared to the dynamical length-and-time scales of our synaptic brain physiology, the speed of light seems plenty fast (light-speed being about three million times faster than neural conduction velocities). Let us therefore consider a “Fast Light” QED-universe in which light-speed is infinitely faster than nerve-conduction speed, and conversely a “Slow Light” universe, in which light-speed is comparable to nerve-conduction speed.

      To anticipate, it will turn out that Quantum Supremacy is natural in Fast Light universes, whereas Quantum Comity is natural in Slow Light universes. Moreover, it will turn out (very surprisingly to me at least) that our physical universe is far more nearly a Slow Light universe than a Fast Light universe.

      The natural units of QED universes (both fast and slow) are called by quantum chemists atomic Units (which Wikipedia discusses). These are units in which the Planck constant, Coulomb constant, electron mass, and electric charge all have numerical value unity. It follows that in all these universes the Hartree energy and Bohr radius also are unity; moreover the Josephson constant, the von Klitzing constant, and the Bohr magneton also are order-unity constants.

      So what’s left to vary? The fine structure constant 𝝰, in terms of which we have the speed of light c = 1/𝝰. Thus the Fast Light universe corresponds to the limit 𝝰 ➝ 0, while the Slow Light universe corresponds to 𝝰 ➝ ∞. Importantly, the magnetic constant 𝛍_0 = 4𝛑𝝰^2, so that magnetic-moment interactions vanish in Fast Light universes.

The upshot is that (AFAICT) Quantum Supremacy survives in fast-light universes, but experimental approaches have to be pretty substantially modified. In fast-light universes there are photon-electron interactions and no magnetic field interactions. Fortunately transistors still work, two-slit interferometers still work, the relativistic Dirac equation reduces to the Galilean Schroedinger equation, molecular biology and neurophysiology still work, and so (with ingenuity) we can design Quantum Supremacy devices. In fact the dark Galilean Fast Light universe turns out to be terrifically advantageous from a noise-reduction point-of-view, in that information no longer leaks into the QED vacuum.

In contrast, any attempt to slow the speed of light appreciably from the value we’re used to (namely c = 137) runs into severe problems. For example, the photon-photon scattering cross section 𝞂_𝝲𝝲 turns out to be (from Liang and Czarnecki, arXiv:1111.6126v2):

        𝞂_𝝲𝝲 = 𝝰^18 973/(10125𝝿).

The quantum chemistry literature and the field theory literature are equally innocent (AFAICT) of any discussion of this remarkable power-dependence. Eighteen powers of inverse light-speed entering into a physically natural quantity … that’s a lot!

It would take an impolitely lengthy comment to discuss all of the implications of Slow Light scaling (that occur to me), but one plausible conclusion is that we humans already live in a universe in which the speed of light is about as slow as it can be, while still maintaining (as convenient fictions) the notions that electrodynamics is linear and that physical systems can be dynamically isolated from one another.

      In other words, we humans evolved and presently live in a Slow Light QED universe. From a practical point of view, this means that we live in a noisy, lossy, relativistic, brightly illuminated QED universe in which conditions are highly favorable for medical research (in that optical and magnetic resonance imaging both work well, and in which high-accuracy quantum chemistry and condensed matter simulations are feasible with classical resources). And from a fundamental mathematical and physical point of view, we live in a universe whose Slow Light QED dynamics plausibly models (in the mathematical sense) Kalai’s postulates.

      Therefore Kalai-compatible Slow Light universes seem pretty good (to me) — which is fortunate, given that we live in one.

      Conversely, if by some means the fine structure constant 𝝰 could be increased without bound, then the physical isolation required for scalable quantum computing could be more readily achieved, and in the resulting dark Galilean universe we would have good reasons — or at least new reasons — to be optimistic regarding the practical prospects for demonstrating Quantum Supremacy.

      Hence Fast Light universes seem pretty good (to me) — because imagining that we live in one inspires us to good mathematics and wonderful experiments.

      Heck, maybe we Slow Light biological beings do live, all of us, in pretty much the best of all possible worlds — if we have the imagination to realize it, and the courage to grasp all of the opportunities that Slow Light universes present. 🙂

    • June 14, 2016 7:15 am

      Here are some further addenda and errata, that are relevant to the above meditations on Fast Light versus Slow Light Universes.

Errata  (1) In the sixth paragraph the crucial word “no” should appear in the phrase “In fast-light universes there are no photon-electron interactions”. (2) In the third-from-last paragraph, “the fine structure constant 𝝰” should read “the speed of light”, so that it begins “Conversely, if by some means the speed of light could be increased without bound […]”.

Addenda  Advanced techniques of universe construction are on display in Harnik, Kribs, and Perez, “A universe without weak interactions” (2006, see arXiv:hep-ph/0604027); this work helped to inspire the construction of Fast Light versus Slow Light alternate universes. Experimental considerations that are relevant to the construction of Fast Light versus Slow Light universes are surveyed in Hansjörg Scherer’s and Benedetta Camarota’s “Quantum metrology triangle experiments: a status review” (2012); in particular it’s by no means trivial to conceive triangular realizations of the Josephson effect, the quantum Hall effect, and the single-electron transport effect that remain feasible in the Fast Light universe. Numerical, dimensional, and conventional considerations related to practical measurements are concisely summarized in “CODATA: recommended values of the fundamental physical constants” (2015, see arXiv:1507.07956v1 [physics.atom-ph]).

In regard to the total photon-photon scattering as expressed in atomic units, the eighteen powers of the fine-structure constant \alpha \simeq 1/137, or equivalently the eighteen powers of the inverse light-speed 1/c = \alpha \simeq 1/137, arise from equation (21) of the Yi Liang and Andrzej Czarnecki review (2012, arXiv:1111.6126 [hep-ph]) for the total photon-photon scattering cross-section:

         \displaystyle\sigma({\gamma\gamma\to\gamma\gamma}) = \frac{973\,\alpha^4\,\omega^6}{10125\,\pi\,m^8}\quad\text{(natural units)}

      Here the photon energy scale \omega is identified to the (chemical) Hartree energy E_{\text{H}} and the mass scale m is identified with the electron mass m_e. Restoring SI units via \omega\to E_{\text{H}}/(\hbar c) and m\to m_e c/\hbar, and then transforming to atomic units with E_{\text{H}}=\hbar=m_e=1 and c=1/\alpha yields

         \displaystyle\sigma({\gamma\gamma\to\gamma\gamma}) = \frac{973\,\alpha^{18}}{10125\,\pi}\simeq 1.053\times 10^{-40}\quad\text{(atomic units)}

Evidently the exponent 18 arises via 18 = (+4)+(+6)-(-8).

      It was immensely surprising (to me) that such a tiny dimensionless number (as 10^{-40}) emerges so naturally from such a relatively slow light-speed (as c\simeq 137). What other small numbers might arise similarly in field theory (gravity maybe)?

      In summary, the Fast Light and Slow Light QED universes constitute (in effect) convex and concave field-theoretic lenses through which Gil Kalai’s postulates can be quantitatively appreciated. Hopefully the above (unpreviewed) \text{\LaTeX} will render OK, and will help the motivations and results of the original comment to be more readily appreciated.

    • June 14, 2016 9:41 am

Aram, the two remarks below, regarding Fast Light versus Slow Light QED universes, are a conscious attempt to specify “a theoretical model that makes (a) clear predictions in all cases, (b) is consistent with existing observation.”

      In a nutshell, that theoretical model is Slow Light QED (c ∼ 137), which (of course) describes the physical world that we live in.

      Proposals for demonstrating Quantum Supremacy have commonly assumed (what amounts to) Fast Light QED dynamics in noise modeling. Everyone appreciates that Fast Light dynamics — in effect, vacuum-decoupled electrodynamics — is wonderful for proving theorems, and for inspiring ingenious experiments, and for pedagogy, but isn’t it open to debate whether or not Fast Light modeling assumptions have generated (to date) any feasible engineering paths for practically and scalably demonstrating Quantum Supremacy?

    • June 19, 2016 2:55 am

      “Ok, but in the standard noise model, depolarizing noise isn’t something that just gets pulled out of thin air.” Excellent point, Aram, we should return to it. “Quantum Supremacy is natural in Fast Light universes, whereas Quantum Comity is natural in Slow Light universes.” Lovely idea, John!

  11. May 27, 2016 8:52 am

Maybe yet another fresh thing is free access to the IBM 5Q cloud-based quantum chip. Recent work http://arxiv.org/abs/1605.04220 tests Mermin inequalities for a few qubits using the IBM 5Q. Even if the reported problem of too-small effects for five qubits is real, it looks deeper than the earlier-discussed issues with noise, because of the strong difference between four and five qubits.

    • May 31, 2016 1:23 am

Dear Alexander, this is a very interesting comment! Let me add that both for BosonSampling and for small quantum computers we can predict not only what will go wrong but also what will go right. Namely, that we will be able to well-approximate noise-stable states (in the sense of my old work with Benjamini and Schramm). For the BosonSampling case those are defined in terms of Fourier-Hermite polynomials (namely, noise-stable states are those well-approximated by their low-degree Fourier-Hermite expansion). For the case of quantum computers, the setting of the paper http://arxiv.org/pdf/0810.2435v5.pdf by Montanaro and Osborne and Pauli-based Fourier analysis is relevant.

In any case, I completely agree that the 3-4-5 qubit IBM quantum computers can give valuable information regarding the question of how small quantum computers scale, and we should look forward to 6- and 7-qubit devices.

      • May 31, 2016 8:16 am

Dear Gil, are you talking about http://arxiv.org/abs/math/9811157 ? Currently, I can only recall the idea that purely classical considerations might not be enough, because they would apply to usual computers as well. The mentioned test with the IBM 5Q is interesting for me because of its relation to scalability. One trouble with a quantum computer is that scaling at some level may cross an (imaginary?) boundary between the quantum and classical worlds. Yet another recent work with the IBM 5Q is https://arxiv.org/abs/1605.05709 Is 5 qubits enough to check your conjectures about right and wrong behavior?
Regarding skepticism: there is a recent short post https://www.facebook.com/groups/qinfo.scientists.unite/permalink/10157003622890338/
(I am not sure if it’s visible without a Facebook account) with some comments on Karl Svozil’s paper https://arxiv.org/abs/1605.08569

      • May 31, 2016 1:33 pm

        “are you talking about http://arxiv.org/abs/math/9811157 ? ”
        Yes! and the relevant paper with Kindler is http://arxiv.org/abs/1409.3093
        (A small leap is needed to move from the BosonSampling model to the quantum circuit model.)

        “Currently, I only may recall the idea, that pure classical considerations could be not enough, because they should concern to usual computers as well. ”

The point is that the majority function, which allows creating immune classical bits, is noise stable. (So you can have higher and higher quality encoded classical bits with a uniformly bounded small-degree function.) Quantum error-correction (with higher and higher quality) cannot be performed (I think) with uniformly bounded depth polynomials.

Thanks for the additional references; it would be interesting to experiment with the IBM machine, e.g., to see even for 5 qubits whether it indeed supports only “low degree” states. I am curious what Table II in http://arxiv.org/abs/1605.04220 means and what its relevance is to the “region of states” the 5-qubit IBM machine can reach.

Trackbacks

  1. Grading yourself in quantum computing | Are You Shura?
  2. The Quantum Computer Puzzle @ Notices of the AMS | Combinatorics and more
  3. Coins on a Chessboard III | Gödel's Lost Letter and P=NP
  4. "Quantum Supremacy" expected at 50 qubits | Physics Forums - The Fusion of Science and Community
  5. "Quantum Supremacy" expected at 50 qubits | Physics Forums - The Fusion of Science and Community
  6. qm computing summer 2017 update | Turing Machine
  7. Quantum computing – Wikipedia – 161-180
  8. Quantum Computing - Maxximux TechBits
  9. How does a Quantum Computer work? - Science and Engineering
