Can You Hear the Shape of a Quantum Computer?
Debate round 3: Computation cannot hide the physics
Mark Kac was a great mathematician who worked mainly in probability theory. Kac is famous for the Erdős–Kac theorem, which is often called “the fundamental theorem of probabilistic number theory.” It asserts that the distribution of the number of distinct prime factors of an integer n behaves like a normal distribution with mean and variance log log n. He is also famous for the Feynman–Kac formula, which links parabolic partial differential equations to stochastic processes.
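The Erdős–Kac theorem is easy to probe numerically. Here is a small sketch (added for illustration, not from the original post) that sieves the number of distinct prime factors ω(n) and standardizes it as the theorem prescribes:

```python
import math

# Sieve-count the number of distinct prime factors omega(n) for n <= N.
N = 200_000
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:  # p has no smaller prime factor, so p is prime
        for m in range(p, N + 1, p):
            omega[m] += 1

# Erdos-Kac: (omega(n) - log log n) / sqrt(log log n) tends to N(0, 1).
samples = []
for n in range(3, N + 1):
    ll = math.log(math.log(n))
    if ll > 0:
        samples.append((omega[n] - ll) / math.sqrt(ll))

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(f"standardized mean ~ {mean:.2f}, variance ~ {var:.2f}")
```

Convergence is famously slow, so at any computationally reachable N the sample mean and variance only roughly approximate 0 and 1; the qualitative bell shape, however, is already visible in a histogram of `samples`.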
Today we present the third round of our debate between Gil Kalai and Aram Harrow on the feasibility of building scalable universal quantum computers. We intend one further post with final summations by both.
Kac is recalled most of all for the following famous question he posed:
Can one hear the shape of a drum?
This sounds evocative but is actually technical. Hearing depends mostly on the spectrum of the drum vibrations. How much does that tell us about the structure of what is being vibrated?
Technically speaking the question is about our ability to “read” the geometry of a domain from the eigenvalues of its Laplacian. Years of study showed that while we can hear a great deal of the shape of the drum, the answer to Kac’s question is no, as there are isospectral drums with substantially different geometry. But the basic question still reverberates, and Gil Kalai here asks about its extent in quantum computation.
Our debate began in late January with a post in which Gil explained his conjectures of impediments to quantum computation, and Aram described the roadmap for his reply. Aram responded in three posts (I, II, III), to which Gil attached short rejoinders. A second round of the debate emerged after Aram co-wrote a draft paper with Steve Flammia that proposed compelling counterexamples to Gil’s original Conjecture C, to which Gil replied with revisions to his conjecture. As we go to “press,” we note another recent paper by John Preskill on related matters.
Gil Again: Taking Stock
We have had a very interesting and wide-ranging discussion on various issues surrounding quantum computers and quantum fault-tolerance. This has been joined by over a dozen researchers besides ourselves in hundreds of comments to the posts, which have continued right up through this month, and from which both I and Aram have learned. I will first summarize by taking a wider look at the subject at hand, and my later summation will reply to the two most important issues that Aram has raised.
Recent posts by Dick and Ken have compared the world with P=NP to the world with P≠NP, also considering various forms of each alternative. Russell Impagliazzo famously compared “five possible worlds” in complexity and cryptography. Here, moving to the quantum domain, I consider two other worlds, likewise mutually incompatible, to one of which we belong. FTQC stands for the world with universal fault-tolerant quantum computers. NFT, for “non-FT evolutions,” refers to the situation we actually have now, without the quantum fault tolerance that would enable universal quantum computers.
Telling Between Two Worlds
Below are six fundamental differences between the world before (or without) universal quantum computers and the world after. Here “noiseless” means for all practical purposes, i.e., that the probability of error is negligible.
1. Can an (essentially) noiseless encoded qubit be constructed? For non-FT evolutions the answer is no. With FTQC: yes.

2. For mixed-state approximations of pure states, is there a systematic relation between the state and the noise? For non-FT evolutions: yes. With FTQC: essentially no.
This distinction means that the same pure states (e.g., Bose–Einstein states) will have different mixed-state approximations in nature compared to their mixed-state approximations in fault-tolerant quantum computers.

3. Are there general-purpose quantum emulators? Or does every type of quantum state/evolution that can be emulated require a special machine? For non-FT evolutions: the latter. With FTQC: the former. We discuss this issue in the last part of my post.
Now we come to the title question of this post, which expresses a fourth fundamental difference and which we discuss further below:
4. Can you hear the shape of a quantum computer? If universal quantum computers can be built, the answer is no. I conjecture that the answer is yes.
5. Are complicated quantum states based on complicated multi-qubit interactions “physical”?
Universal quantum computers (must) allow complicated quantum states that involve multi-qubit interactions. Non-FT quantum evolutions imply severe restrictions on the pure states that can be approximated. This is what Conjecture C is trying to express.
And finally,
6. Can devices based on quantum mechanics lead to superior computational power?
Universal quantum computers allow superior computational power (including an efficient method for factoring). Non-FT quantum evolutions are likely to leave us with BPP as the maximum generally feasible class.
In the original post, Conjectures 1 and 2 address item 1 here, Conjectures 3 and 4 address item 2, and Conjecture C addresses item 5. Now we discuss items 3 and 4. The prospect of ‘yes’ on item 6 is behind both the expectations and hopes of those building quantum computers and the concerns of those who are skeptical.
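Item 2 can be made concrete with a short density-matrix sketch (an illustration added here, not part of the original post): the same pure state, degraded by two different hypothetical noise mechanisms of equal strength, yields two different mixed-state approximations, so the noise leaves a systematic imprint.

```python
import numpy as np

# A pure single-qubit state |psi> = cos(t)|0> + sin(t)|1> and its density matrix.
t = np.pi / 6
psi = np.array([np.cos(t), np.sin(t)])
rho_pure = np.outer(psi, psi.conj())

p = 0.1  # illustrative noise strength

# Depolarizing noise: rho -> (1-p) rho + p I/2 (pushed toward the maximally mixed state).
rho_depol = (1 - p) * rho_pure + p * np.eye(2) / 2

# Dephasing noise: off-diagonal coherences are damped, populations untouched.
rho_dephase = rho_pure.copy()
rho_dephase[0, 1] *= 1 - p
rho_dephase[1, 0] *= 1 - p

def fidelity(rho):
    """Overlap of a mixed state with the original pure state."""
    return float(np.real(psi.conj() @ rho @ psi))

# Two different mixed-state approximations of the same pure state.
print("same mixed state?", np.allclose(rho_depol, rho_dephase))  # False
print("fidelities:", fidelity(rho_depol), fidelity(rho_dephase))
```

Both approximations stay close to the pure state, yet they are distinguishable density matrices; which one nature (or a device) produces depends on the noise mechanism.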
Can You Hear the Shape of a Quantum Computer?
Let me first talk a little about geometry and discuss point number 4. The basic model for quantum computers is combinatorial, and geometry plays no role. Also in my conjectures that propose why (universal) quantum computers fail, geometry plays no role. In fact this was one of my principles in this pursuit. A basic question is:
Can you learn about the geometry/physics from the states/evolutions of your quantum computer?
For classical computers the answer is no. If universal quantum computers can be constructed, the answer for quantum computers must be no as well.
Namely, the quantum evolution run by our universal quantum computer would not give us any hint about its geometry. A universal quantum computer would allow us to simulate a high-dimensional quantum process on an array of qubits with a geometry of our choice. Consider, for example, Alexei Kitaev’s four-dimensional self-correcting memory. This is a remarkable quantum analog of the 2D Ising model, and it is an outstanding open question whether 3D objects with similar properties can be constructed. Once we have a universal quantum computer, we will even be able to realize Kitaev’s 4D model on an array of qubits that looks like this (src):
If my conjectures are correct then there are plenty of quantum evolutions and quantum states that cannot be achieved at all. In addition, I expect that the evolutions and states that can be reached will depend on the geometry of your quantum device and the geometry and physics of your world.
One philosophical reason for why I would tend to think that the geometry can be “read” from the states and evolutions of a quantum device is the following. I see quantum mechanics as the theory where the buck stops. If the universe is a large quantum computer of some sort, and if we cannot hear the shape of a quantum computer, then where does geometry come from?
A somewhat similar story can be told about time. Let’s leave it for another time.
Special-Purpose Quantum Emulators
Scott Aaronson gave a nice description of the central problem for realizing universal quantum computing in his IEEE Tech Talk article of February 7th, backing his words with a $100,000 open wager:
The central problem is decoherence, meaning unwanted interactions between the computer and its external environment, which prematurely “measure” the computer and destroy its fragile quantum state. The more complicated the quantum computation, the worse a problem decoherence can become. So for the past fifteen years, the hope for building scalable quantum computers has rested with a mathematically elegant theory called “quantum fault-tolerance,” which shows how, if decoherence can be kept below a certain critical level, clever error-correction techniques can be used to render its remaining effects insignificant. (Emphasis added)
I emphasized the word “if.” When it comes to general-purpose quantum computers the skeptical explanation is that, very simply, decoherence cannot be kept low; in fact it scales up. Therefore, universal quantum computers cannot be built. My approach is that this should be demonstrated by studying the behavior of special-purpose machines for emulating quantum evolutions. My conjectures apply to special-purpose machines and to natural mechanisms that create quantum evolutions.
The crux of the matter is understanding the behavior of specialpurpose devices for creating quantum evolutions. Realizing this nullifies the sentiment (or argument) that damaging noise models manifest some conspiracy or malice by nature. A bicycle ride around the block requires a different device and hence carries different risks from a manned mission to Mars.
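The threshold phenomenon behind Scott’s “if” has a classical caricature that is easy to compute. The following toy model (added here for illustration; real quantum fault-tolerance must also handle phase errors and the no-cloning theorem, so this captures only the combinatorial flavor) uses a 3-bit repetition code with majority-vote decoding, whose logical error rate P_L(p) = 3p² − 2p³ beats the physical rate p exactly when p < 1/2:

```python
# Toy model of the fault-tolerance threshold: a 3-bit repetition code with
# independent bit-flip probability p, decoded by majority vote.

def logical_error_rate(p: float) -> float:
    # Majority vote fails iff 2 or 3 of the 3 copies flip:
    # 3 p^2 (1-p) + p^3 = 3 p^2 - 2 p^3.
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.4, 0.6):
    p1 = logical_error_rate(p)
    p2 = logical_error_rate(p1)  # concatenate the code once more
    print(f"p={p:.2f}  one level: {p1:.4f}  two levels: {p2:.6f}")
```

Below the critical rate, each level of concatenation squares the error away; above it (p = 0.6 in the run above), encoding makes things worse. That asymmetry is the crude analog of the FTQC threshold theorem.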
Aram’s Third-Round Response
Quantum mechanics has long benefited from critics and skeptics. Erwin Schrödinger himself came up with his famous cat to highlight the seemingly absurd consequences of the new theory of quantum mechanics; now experimentalists boast of creating “cat states” in trapped ions and other implementations. Entanglement, arguably the most radical feature of quantum mechanics, was first described in its modern form by Albert Einstein, Boris Podolsky, and Nathan Rosen in a paper in 1935 that sought to highlight the problems with this new theory. Now EPR pairs play the same role in quantum information that NAND gates do in circuits.
More recently, quantum computing (and specifically Peter Shor’s factoring algorithm) met with criticism in the mid-1990s on the grounds that decoherence would prevent quantum computers from getting off the ground. For example, Bill Unruh argued in 1994 that the maximum number of steps taken by a quantum computer should be bounded in terms of the temperature, which corresponds to a natural noise rate. Shor’s paper introducing fault-tolerant quantum computation (FTQC) credits the role of critics:
I would like to thank Rolf Landauer and Bill Unruh for their skepticism, which in part motivated this work.
Unruh was convinced by this paper, and has dropped his objections. Landauer unfortunately is no longer alive, but his protégé Charlie Bennett has gone on to become a leader in the field of quantum information, as well as an independent discoverer of quantum error-correcting codes.
But not all skeptics are convinced. Despite the provable extension of the FT threshold theorem to ever more general scenarios, and the lack of any theory of noise that is both consistent with observation and inconsistent with FTQC, the idea of quantum computing remains controversial. However, the task of the skeptics is now a difficult one. Attempting to give a theory that would exclude quantum computers while keeping quantum mechanics reminds me of the famous saying of Jesus:
It is easier for a camel to pass through the eye of a needle than for a rich man to enter heaven.
What I like about this quote is that it expresses the idea that something may be in principle possible (a rich man going to heaven), but there are so many difficulties along the way (how one acquired one’s wealth, the fact that it hasn’t been given to those in need, etc.) that we are left with either the vivid impossibility of the camel, or the painful alternate reading of trying to thread rope made of hemp or camel hair.
Similarly, Gil and other modern skeptics need a theory of physics which is compatible with existing quantum mechanics, existing observations of noise processes, existing classical computers, and potential future reductions of noise to sub-threshold (but still constant) rates, all while ruling out large-scale quantum computers. Such a ropy camel-shaped theory would have difficulty passing through the needle of mathematical consistency.
Some Specifics
Let me get down to specifics. Gil suggests that even if FTQC is possible in general, one might be able to define a “sixth world” of NFT, in which quantum computing is implemented without using fault-tolerant methods. But fault-tolerance is not a special switch that can be turned on like the noise-canceling feature of headphones. It is simply a recipe for putting together a series of quantum gates (or in more complicated versions, also measurements, classical side computation, and adaptive modification of the quantum gates being applied). Allowing QC but not fault tolerance is a little like allowing general-purpose classical computing, but forbidding the Fast Fourier Transform (FFT).
Could one define the sixth world of “NFFT” (for “No FFT”)? Clearly not. It is impossible to say whether, lurking somewhere in a complicated computation, the ideas of the FFT have been encoded. Similarly, NFT cannot hope to be meaningfully defined. Finally, what about hearing the shape of things? I don’t see how the analogy quite fits, but in any case, it seems not to go in the way of the skeptics. Even for systems that “compute” only a simple linear response, one cannot always hear the shape. Here is a picture of two isospectral drum surfaces with different shapes (src):
It is not clear to me what the right analogue of this question is for computation. Does the spectrum of the drum become replaced by the function computed by a circuit? If so, certainly we have different classical circuits computing the same function, and as a corollary, we have different quantum circuits computing the same function. After all, classical computing is a special case of quantum computing.
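Aram’s two points here, that the FFT is just a recipe invisible in a computation’s input/output behavior, and that different circuits can compute the same function, can both be illustrated in a few lines (a sketch added for this edit, not from the debate): a naive O(n²) DFT and a recursive Cooley–Tukey FFT produce the same outputs, so nothing in the function computed reveals which “circuit” was used.

```python
import cmath

def naive_dft(x):
    """Direct O(n^2) evaluation of the discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for j in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * j / n) * odd[j]
        out[j] = even[j] + twiddle
        out[j + n // 2] = even[j] - twiddle
    return out

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 3.0, -2.0]
a, b = naive_dft(x), fft(x)
# Effectively zero: same function, radically different "circuits".
print(max(abs(u - v) for u, v in zip(a, b)))
```

An observer who sees only inputs and outputs cannot say whether the FFT’s ideas were used, which is exactly why a world of “NFFT” cannot be meaningfully defined.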
Open Problems
How does the analogy with Mark Kac’s problem play out? Note that the spectrum of an ordinary undirected graph is even further from characterizing its structure than Kac thought might be the case for surfaces. Nevertheless spectra of graphs have been fundamental to understanding expanders, which in turn have driven many important recent results.
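The point about graph spectra can be checked directly. The smallest classic cospectral pair, known since the 1950s, is the star K_{1,4} versus the disjoint union of a 4-cycle and an isolated vertex; a few lines of numpy (added here as an illustration) confirm that they “sound” identical:

```python
import numpy as np

def adjacency(n, edges):
    """Adjacency matrix of an undirected graph on vertices 0..n-1."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A

star = adjacency(5, [(0, 1), (0, 2), (0, 3), (0, 4)])    # K_{1,4}
c4_k1 = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 0)])   # C_4 plus isolated vertex 4

# Both graphs have adjacency spectrum {-2, 0, 0, 0, 2}: the same "sound",
# yet different shapes (the star is connected, C_4 + K_1 is not).
for A in (star, c4_k1):
    print(np.sort(np.linalg.eigvalsh(A)))
```

So for graphs, as for drums, the spectrum underdetermines the shape, and yet, as noted above, graph spectra still drive the theory of expanders.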
For a particular issue, does quantum teleportation already nullify the possibility of a systematic dependence between the quantum evolution and the geometry of the quantum computer?
[fixed link to Unruh paper]
Trackbacks
 Cricket, 400, and Complexity Theory « Gödel’s Lost Letter and P=NP
 Threading the needle of mathematical consistency  The Quantum Pontiff
 Grilling Quantum Circuits « Gödel’s Lost Letter and P=NP
 Some Updates  Combinatorics and more
 Quantum Repetition « Gödel’s Lost Letter and P=NP
 Lucky 13 paper dance!  The Quantum Pontiff
 Quantum Supremacy or Classical Control? « Gödel’s Lost Letter and P=NP
 The Quantum Debate is Over! (and other Updates)  Combinatorics and more
 Black Cat in Black Box  Are You Shura?
 Symplectic Geometry, Quantization, and Quantum Noise  Combinatorics and more
 Meeting with Aram Harrow, and my Lecture on Why Quantum Computers Cannot Work.  Combinatorics and more
 A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers  Combinatorics and more
 My Quantum Debate with Aram III  Combinatorics and more
 Open Collaborative Mathematics over the Internet – Three Examples  Combinatorics and more
Well, from quantum computation one should be able to figure out quantum mechanics, which is the fundamental theory. But why would one expect to be able to figure out the geometry of the system itself from the kind of computation it does?
An analogy: if some entity passes the Turing test, you cannot conclude that it is a biological human.
Where the geometry of the universe comes from is very much an open question. We do not even know how many dimensions there are. But this does not imply that quantum mechanics is false!
When Kac posed his drum question, were isospectral graphs already known?
Yes, as of 1957. The first “Open Problem” was written by me, the second by Gil, except that I added the words “For a particular issue” and “already.” Kac need not have considered isospectral graphs relevant. My point is that even in a context where a Kac-like question is a non-starter (indeed, almost all trees have a cospectral non-isomorphic mate) the subject is still rich.
May as well add that the intro is a mix of me and Gil, while all the intervening sections are by Gil and Aram through several rounds of editing. A few phrases are mine, but almost the only addition is that I supplied the “painful alternate reading” of Mt. 19:24 and added the word “ropy” later; I like the image of some professional and general actions being like trying to force rope through a needle.
Russell Impagliazzo’s “A personal view of average-case complexity” is (IMHO) one of the most wonderful papers ever written, in part because it encourages us to fondly hope that we might live in more than one of Russell’s five worlds.
This fond hope motivates my presently-open question on TCS StackExchange, “Does P contain languages whose existence is independent of PA or ZFC?” The question is purposed toward a Hartmanis-esque 21st-century complexity theory in which a restricted set of gnostic Turing Machines (TMs) provably presents us with Impagliazzo’s Cryptomania, whereas a (larger) set of transcendental TMs presents us with Impagliazzo’s Algorithmica (neither provably nor refutably, hence merely hopefully and/or effectively … which for practical purposes is preferable).
I have discussed with Aram Harrow and Steve Flammia a similarly bipartite compromise resolution of the Harrow/Kalai debate, in which Gil Kalai’s conjectures hold true for fixed, time-independent Hamiltonians on fixed-dimension state-spaces, while Harrow’s conjectures hold true for time-dependent Hamiltonians on state-spaces equipped with arbitrarily many ancilla qubits; eventually I hope to post here about the geometrical simplifications associated with the time-independence assumptions (once I have assimilated the arguments offered in regard to the present post).
Needless to say, in thinking about these questions, getting the definitions right is comparably challenging to proving useful theorems; indeed success at any of these tasks in general requires success at all the others.
In both cases and for Russell Impagliazzo’s article too, an immersing literary metaphor is provided by Kafka’s wonderful story An Imperial Message. Best wishes to all Gödel’s Lost Letter readers who long to receive even a whisper or a rumor of that message! :)
Dear John, I think that the issue of various intermediate possibilities regarding P vs. NP, other complexity classes, and cryptographic possibilities is indeed interesting. We even discussed it here in some early post discussion. (In my post, these computational-complexity questions were only considered as illustrations for my two-worlds description.) Personally, I am more interested in problems in P for which finding the polynomial-time algorithm is hard, and I prefer not to go to the undecidability domain. However, I regard such a study mainly as a tool to understand the difficulties in studying the situation, and I don’t hope fondly at all that such a “compromise” possibility will prevail.
I have a similar sentiment regarding the question of the feasibility of universal quantum computing. In the first post I described various intermediate interpretations of my conjectures, and also, as I said, while I tend to think that universal quantum computers are not feasible, I don’t have a firm belief on this matter. But I don’t hope for a “compromise” at all. If the answer to the rather clear-cut scientific question of universal quantum computers is like a baby, I certainly prefer Aram to have this baby rather than cutting it between us. :)
“Can you hear the shape of a quantum computer? If universal quantum computers can be built the answer is no. I conjecture that the answer is yes. (…)
For classical computers the answer is no.”
Indeed, your line of thought seems to exclude classical computation as well. Oh well. :)
Dear Jiav, for a tensor-product state (which exhibits no interactions) there is indeed no reason to expect that the state carries any information on the geometry.
I’m saying “if this line of thought were true then objects heavier than air could not fly” and, with all due respect, your answer looks like “No, no, we know it can fly.”
Let’s go beyond analogies: if one starts with the quantum state a|0> + b|1> (a and b unknown), then it’s provably impossible to reach a quantum state such as a|000…> + b|111…>. So here’s an example of a quantum evolution that cannot be achieved at all. Of course this does not make universal quantum computation impossible. Please don’t answer “No, no, we know the no-cloning theorem does not prevent quantum computers.” :)
The point is you can’t exclude quantum computers based only on the impossibility of some quantum evolution. You must also show that these evolutions are critical specifically to universal quantum computers. Do you think you can fill this gap?
Dear Gil Kalai, If we can’t reach some quantum state then universal quantum computers can’t be built. If we can’t reach some integers then universal classical computers can’t exist. I conjecture we can’t achieve BB(42).
Jiav, indeed the reality of universal quantum computers implies that every quantum evolution we can imagine within feasible space and time can be realized. This is, of course, the most basic distinction between reality before and after FT quantum computers. It is also the most basic place where we should examine whether “obvious” insights for classical computers extend to the quantum case. Cris Moore and I briefly discussed this in the first post.
So indeed you can exclude universal quantum computers based on the impossibility of one specific evolution. I am not sure I follow your other points. (In particular, I don’t know what BB(42) is.) I should add that (unlike the computational-complexity distinctions) most of my conjectures are not asymptotic in nature.
“So indeed you can exclude universal quantum computers based on the impossibility of one specific evolution. ”
I think one needs to be a bit careful here. With a given finite set of quantum gates there is only a discrete set of gate networks that can be built, so one cannot reach every quantum state in finite time. However, that is not what a universal quantum gate set is supposed to do.
A gate set is universal if a finite gate network can approximate any quantum state efficiently. Since the outcome of a quantum computation is continuous in terms of the final state, it is sufficient to come close to the ideal final state.
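This approximation picture can be sketched with a brute-force search (my illustration, not from the thread; the Solovay–Kitaev theorem is what makes approximation efficient in practice, whereas exhaustive search is exponential): over all products of the standard H and T gates, the best achievable distance to a target rotation can only improve as longer gate words are allowed.

```python
import numpy as np
from itertools import product

# Standard single-qubit gate set {H, T}; products of these are dense in the
# single-qubit unitaries up to global phase.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)

# An arbitrary target rotation, not exactly expressible as a finite H/T word.
theta = 0.3
target = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]], dtype=complex)

def dist(U, V):
    """Distance between 2x2 unitaries, ignoring global phase (0 = equal up to phase)."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

# Exhaustive search over all gate words up to a fixed length.  By construction,
# the best distance found is non-increasing in the allowed word length.
best = float("inf")
for length in range(11):
    for word in product((H, T), repeat=length):
        U = np.eye(2, dtype=complex)
        for g in word:
            U = g @ U
        best = min(best, dist(U, target))
    print(f"words up to length {length:2d}: best distance {best:.5f}")
```

Since the final measurement statistics depend continuously on the state, approximating the ideal unitary this way suffices, which is exactly the robustness point made above.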
This also gives a kind of robustness which digital computation often does not have. I once went to a lecture on quantum computing where the speaker demonstrated that one could delete parts of the network for Shor’s algorithm and still get the correct answer from the algorithm with high probability, since the deleted parts had sufficiently small influence on the final state.
I completely agree, Klas, with your first comment regarding what “we can realize” precisely means. And I am curious about the last one. (It was very nice meeting you at Lund.)
I can’t recall the name of the lecturer right now, but I’ll post it if I remember it when I’m less tired (late evening here now).
It was very nice meeting you too Gil.
Gil, one could perfectly imagine, with feasible space and time, a quantum evolution from a|0>+b|1> to a|00>+b|11>. The existence or non-existence of quantum computers has of course zero impact on the fact that it’s impossible to get a general procedure to do this task.
Would you qualify this as an impossible quantum evolution? If not, how could one know, facing some impossible state just discovered, that it’s not the same or a similar trick in disguise?
PS: BB stands for busy beaver. I guess you suspected it, as you now talk about feasible space and time :)
Surely it’s time for a Gasarch-style poll to put this to bed once and for all.
To follow up on our private discussions, I’ll mention what I said there, which is that I don’t see in principle a difference between time-dependent and time-independent Hamiltonians. A time-independent Hamiltonian can describe a clock, which can control other interactions.
Aram, that is a good point that I am still thinking about.
If we confine ourselves to a finite set of Lindbladians that are strictly time-independent, then it is clear that we can specify oscillators and thermal reservoirs; it is plausible that we can specify (for example) XOR gates and thus circuits, and perhaps even entire FT quantum computing architectures; but it is *not* clear (to me) that the resulting dynamical systems (in the quasi-ergodic long-time limit) have at any given moment a more-than-infinitesimal chance of registering the result of a hard-to-simulate quantum computation.
This argument is perhaps more easily appreciated within the restricted context of linear quantum optics. That is, it might be infeasible to simulate the statistics of the output state associated with (say) a 50-photon coherent input state, and yet if we have to wait exponentially long (even in the real world) for such input states to be generated, then in what sense can a simulation that is restricted to more generic input states be reasonably assessed as deficient?
These are tough issues, and the definitions are similarly hard to get right as the theorems …
Other tough issues arise connected with renormalization. It seems to be surprisingly tricky to specify even a simple (classical) logical inverter using time-independent Lindbladians; one needs to couple the inverter qubits to a thermal bath. A full description of all the elements of an FT computer, realized as a compatible set of strictly Lindbladian, strictly time-independent, strictly Markovian quantum operations, might plausibly be book-length, and even then there would be renormalization effects to worry about.
Moreover, there might well be more than one natural set of operators; see for example John von Neumann’s “Non-Linear Capacitance or Inductance Switching, Amplifying and Memory Organs,” US Patent 2,815,488 (1954), for an explicit “von Neumann architecture” whose physics (both classical and quantum) is very different from today’s computers.
Thus, to make progress, it is necessary to conceive simultaneously both the desired theorem(s) and the required definitions(s); obviously this isn’t easy.
Von Neumann was uncommonly gifted at this theoretical/definitional synthesis; hence Rényi’s aphorism. It is reasonable to ask, “In quantum computing, what is it that we want to prove?” And this question surely has more than one good answer.
Aram, as a follow-up, I am slowly assembling (mainly for my own amusement) a set of quantum gadgets, with each gadget specified by a time-independent Hamiltonian/Lindbladian generator, such that in aggregate the gadget-set suffices to simulate the dynamics of any (non-gravity) experiment, including FT computers, strictly as a stationary Hamiltonian/Lindbladian drift-diffusion process of Carlton Caves type.
Note: the required set of Hamiltonian/Lindbladian gadgets is pretty large, including classical and quantum logic gates, amplifiers, fan-outs of classical logic states, analog-to-digital and digital-to-analog converters, square-wave generators, arbitrary waveform synthesizers, field generators, etc. It is very interesting to work out how each of these functions can be accomplished by time-independent Hamiltonian/Lindbladian generators. In the end, I had to evolve a Hamiltonian/Lindbladian description of (essentially) every box on our spin microscope’s rack!
A complete set of time-independent gadgets enables us to pursue three lines of investigation simultaneously, each with well-posed mathematics:
(1) we can frame Gil Kalai’s postulates in terms of stationary dynamical processes (that is, drifts and diffusions of Itô–Stratonovich type),
(2) we can consider the geometry of the (now-stationary) dynamical flow associated with these processes, and
(3) we can pull back the gadget Hamiltonian/Lindbladian drifts and diffusions onto product states of polynomial dimension, and consider how the correlation functions of the dynamical trajectories are thereby altered.
The resulting framework (aside from its practical utility) encourages us to think about a middle ground for Kalai-type conjectures, in which *generic* physical systems require only polynomial dimensions to simulate accurately, but a *restricted* set of physical systems (namely, FT computers) requires exponentially many dimensions.
And there is also the intriguing possibility of phase transitions, in which for sufficiently large noise levels *any* physical system becomes feasible to simulate with polynomial resources, in consequence of a noise-level-dependent order-to-disorder transition in the level set of unravelled trajectories.
Obviously the preceding ideas are a work in progress.
These considerations are steadily increasing my appreciation, that there is plenty of truth, and plenty of good mathematics and new physics, to be found on both sides of the Kalai/Harrow debate. Thank you both very much for it.
Perhaps a small clarification is required about where the border between universal FT quantum computers and NFT quantum devices lies in the context of geometry. The NFT picture is that, for quantum evolutions that can be emulated, the evolution can impose some constraints on the geometry, and vice versa the geometry will impose some constraints on the evolution. This means that, for general quantum processes, you can learn about the geometry from the evolution. I do not expect that the evolution determines the geometry uniquely. In contrast, with universal quantum computers every quantum evolution can be emulated on every array of qubits. I suppose that the NFT picture better describes the reality of today’s experimental physics.
Gil, please let me express my appreciation and thanks for all that you and Aram Harrow (and John Preskill and Steve Flammia and Dick Lipton and Ken Regan, and many others) have contributed to this fine debate.
With particular regard to geometric considerations, in the caption of Figure 2 of John Preskill’s survey “Quantum computing and the entanglement frontier” (and at greater length in the body of Preskill’s survey) we read:
It is mathematically natural to inquire “What is the shape of this subset of Hilbert space?”
Pursuing this line of inquiry requires that we deal with a great many technical issues that are associated with the unravelling of quantum trajectories. To simplify these issues it is both geometrically and physically natural to restrict our attention to Hilbert spaces of fixed dimension (as contrasted with e.g. an unbounded reservoir of qubits and/or quantum fields), to continuous dynamical processes (as contrasted with e.g. projective measurements), and to Hamiltonian / Lindbladian generators that are time-independent (as contrasted with e.g. time-dependent quantum-logic gates).
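To make this restricted class concrete, here is a minimal numerical sketch (my own illustration, not anything from the survey itself, with all parameter values chosen arbitrarily): a fixed two-dimensional Hilbert space, a time-independent Hamiltonian and a time-independent Lindblad dissipator, integrated by plain Euler steps.

```python
import numpy as np

# A time-independent Lindblad generator on a fixed 2-dimensional Hilbert space:
#   d(rho)/dt = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}
H = np.array([[0, 1], [1, 0]], dtype=complex)                 # constant drive (Pauli X)
L = np.sqrt(0.1) * np.array([[0, 1], [0, 0]], dtype=complex)  # amplitude-damping operator

def lindblad_rhs(rho):
    commutator = -1j * (H @ rho - rho @ H)
    Ld = L.conj().T
    dissipator = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return commutator + dissipator

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
dt = 0.001
for _ in range(5000):                            # simple Euler integration to t = 5
    rho = rho + dt * lindblad_rhs(rho)

# The generator is trace-preserving, and the damping term moves population
# toward the ground state while the drive keeps the evolution nontrivial.
```

Everything here stays inside one fixed Hilbert space, with no projective measurements and no time-dependent gates, which is exactly the restriction described above.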
For essentially geometric reasons, it is by no means evident that simulating this restricted set of dynamical systems is infeasible with classical computational resources … and yet this set of processes is sufficiently large that it is challenging to specify an engineering process, or a biological process, or a chemical process, or a particle physics process, that cannot be reasonably accommodated within it.
Needless to say, scalable quantum computational processes (those known to me, at least) reside outside this restricted set. Yet providentially — for at least the next few decades — these are the quantum processes that we have the least practical interest in simulating.
In consequence, the door stands wide open (AFAICT) to a nearly optimal world of 21st-century STEM enterprises, in which every dynamical process of immediate practical interest can be efficiently simulated (at least in principle), and yet no fundamental barriers obstruct the eventual realization of scalable quantum computation (at least in principle).
What you write sounds reasonable, but also like something that would apply equally to classical computers. For example, signals in a classical computer travel in a manner that is constrained by geometry, so geometry puts _some_ constraints on computation, and computation can _sometimes_ tell us about geometry. Where does quantum mechanics change any aspect of this story?
Indeed Aram, all of these geometric considerations *do* apply equally to classical and quantum dynamical systems.
To see this, first let’s purge the state-space of intrinsic geometry. At the quantum level, this means purging the Schrödinger states from the Hilbert state-space; similarly, at the classical level this means purging the tautological one-form (Google it!).
Of course this means our state-space formalism can no longer support (quantum) path-integrals, and neither does it support a (classical) Lagrangian action. Nonetheless sufficient dynamics remains to support classical and quantum computations. For example, we can model a set of quantum qubits (resp. classical Bloch moments) with arbitrary Hamiltonian couplings, such that the quantum dynamics is Schrödinger as usual (resp. the classical dynamics is Bloch as usual).
In this picture, the topology and dimension of the state-space are encoded in the qubit couplings. In practical engineering calculations it is natural to read out this geometry by simulating transport phenomena; here the point is that (e.g.) Onsager relations reflect spatial geometry and vice versa — this Onsager-style space-emergence mechanism works equally well for classical Bloch moments, for Hilbert-space quantum spins, and for product-spaces that are intermediate between the two.
From this purely dynamical point-of-view, “space” is a coarse-grained notion that emerges naturally / dynamically / computationally / thermodynamically / concretely from a fine-grained Hamiltonian description that can be classical, quantum, or hybrid, but that in any event has no intrinsic spatial elements.
Whether Nature synthesizes “space” by a similarly dynamical mechanism is not known to me (or to anyone, AFAICT). But in engineering simulations, this mechanism for dynamically synthesizing (coarse-grained) spatial geometry from (fine-grained) spin couplings is the most computationally efficient and mathematically natural algorithm that I know.
And so, from this dynamical point of view, it definitely is the case that if we listen carefully to Onsager-type fluctuations, we can “hear” the shape of classical *and* quantum computational processes … and perhaps even the shape of spacetime itself.
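As a toy illustration of this kind of “hearing” (entirely my own sketch, using graph Laplacians as a stand-in for coupling networks): Weyl-type counting of low-lying eigenvalues, $N(\lambda)\sim\lambda^{d/2}$, lets the spectrum alone distinguish a one-dimensional chain of couplings from a two-dimensional grid with the same number of sites.

```python
import numpy as np

def chain_laplacian(n):
    # graph Laplacian of an n-site chain: a 1-dimensional coupling geometry
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1   # free (Neumann-like) ends
    return L

def grid_laplacian(m):
    # Laplacian of an m-by-m grid: the Kronecker sum of two chains
    L1 = chain_laplacian(m)
    return np.kron(L1, np.eye(m)) + np.kron(np.eye(m), L1)

# Both systems have 1024 sites; only the coupling geometry differs.
# The fraction of eigenvalues below a small cutoff scales like lambda^{d/2},
# so the chain (d = 1) has many more low-lying modes than the grid (d = 2).
cutoff = 0.05
fracs = {}
for name, L in [("chain", chain_laplacian(1024)), ("grid", grid_laplacian(32))]:
    eigvals = np.linalg.eigvalsh(L)
    fracs[name] = float(np.mean(eigvals < cutoff))
```

Listening only to the spectrum of the couplings, we can “hear” whether the underlying geometry is one-dimensional or two-dimensional, even though (per Kac) we cannot hear everything.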
Dear John & Aram, certainly the possibility raised in the last paragraph of John’s comment (which is, more or less, the intermediate interpretation from my first post) is very appealing.
I also agree with the comments that you both have made regarding classical processes. (Some of the items in my NFT-world description apply in certain cases to noisy classical processes and may have some heuristic value there.) Certainly, in many classical processes we witness that geometry plays a role, and if we restrict our attention to special classes of classical processes we may “hear” some or all of the geometry from the process.
However, as we know, universal classical computation allows us to simulate any program on memory of arbitrary (available) shape. This is so obvious that we don’t give it a second thought. The question is whether this is possible in the quantum case. A yes answer, which follows from the feasibility of universal quantum computers, would represent a major difference between the reality before and after quantum computers.
Tech. note: link to Unruh paper does not work, should be http://arxiv.org/abs/hep-th/9406058 (not quant-ph)
Thanks—someone else also gave it by private email.
Question about item 2: Why may there not be a relation between state and “noise” for FTQC? E.g., Zurek’s decoherence theory seems accepted in the FTQC world and claims that states commuting with the environment Hamiltonian survive (einselection). Yet, for different Hamiltonians these may be either entangled or non-entangled states, and so an argument arises against the idea of “bigger noise” for entangled states.
Dear Alexander, this is a good point. Can you elaborate a little?
I would prefer not to retell Zurek’s ideas; he wrote a lot of works, so I am pointing to a rather random one “from the top of my desk”, from RMP 2003: http://rmp.aps.org/abstract/RMP/v75/i3/p715_1 (online http://arxiv.org/abs/quant-ph/0105127). It is also an illustration of how actively he uses the language of quantum information theory there (it was not so in his first works, because QIT did not exist yet).
The pointer states survive decoherence (cf. noise) better – that is a relation between state and noise. But pointer states may be entangled or not; it depends on the physical system. So entangled states are not necessarily “more noisy”.
Just reposting…
http://slac.stanford.edu/pubs/slacpubs/13750/slacpub13992.pdf
Aram wrote: “Bill Unruh argued in 1994 that the maximum number of steps taken by a quantum computer should scale like $1/T$, where $T$ is the temperature, which corresponds to a natural noise rate.”
This is very relevant to our discussion last time, where constant-depth quantum computation was offered as a “Shor/Sure separator”. It looks like Unruh’s conclusion may well apply to present-day controlled quantum evolutions (and also quantum evolutions in nature) and can serve as a useful border line between non-FT quantum evolutions and FT quantum evolutions.
I think it’s uncontroversial that Unruh’s (now-renounced) arguments apply if you just let your quantum data sit there without any error correction. (If you look at his paper, this is what he is analyzing.) In this sense, his arguments apply equally well to classical information.
I guess you could say that your conjectures apply to the case when you let your data sit there, with no active or passive error correction. In that case, you can write your $n$-qubit density matrix as $\rho = 2^{-n}\sum_s \alpha_s \sigma_s$, summing over Pauli strings $s \in \{I,X,Y,Z\}^n$, and when you apply depolarizing noise of strength $p$, the coefficient $\alpha_s$ will be multiplied by $(1-p)^{|s|}$, where $|s|$ denotes the number of indices where $s$ is non-identity. In this way, you could say that entangled states decay faster. But this is the same as saying that classical high-order correlations decay faster in the presence of bit-flip noise.
Classically this certainly doesn’t tell us that error-correction fails because it puts bits into correlated states, and it’s equally unjustified to use this to draw conclusions about QECC failing.
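The claim that higher-weight Pauli components (and hence highly entangled states) lose their coefficients faster under local depolarizing noise can be checked directly in a few lines; this is just my own numerical sketch of the standard calculation, for two qubits in a Bell state.

```python
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def kron_all(ops):
    return reduce(np.kron, ops)

def pauli_coeffs(rho, n):
    # rho = 2^{-n} * sum_s alpha_s sigma_s  with  alpha_s = Tr(sigma_s rho)
    return {s: np.trace(kron_all([PAULI[c] for c in s]) @ rho).real
            for s in map("".join, product("IXYZ", repeat=n))}

def depolarize(rho, k, n, p):
    # depolarizing channel of strength p on qubit k, written in Kraus form
    out = (1 - 3 * p / 4) * rho
    for P in (X, Y, Z):
        op = kron_all([P if j == k else I2 for j in range(n)])
        out = out + (p / 4) * (op @ rho @ op)
    return out

n, p = 2, 0.1
bell = np.array([1, 0, 0, 1], dtype=complex).reshape(4, 1) / np.sqrt(2)
rho = bell @ bell.conj().T

before = pauli_coeffs(rho, n)
noisy = rho
for k in range(n):
    noisy = depolarize(noisy, k, n, p)
after = pauli_coeffs(noisy, n)

# each coefficient shrinks by (1-p)^|s|, where |s| counts non-identity factors
for s, a in before.items():
    weight = sum(c != "I" for c in s)
    assert abs(after[s] - a * (1 - p) ** weight) < 1e-12
```

For the Bell state the weight-2 coefficients ($XX$, $YY$, $ZZ$) shrink by $(1-p)^2$ while the trace component is exactly preserved, which is precisely the sense in which the entangled components decay faster.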
By the way, there’s a very nice discussion of this sort of “Fourier analysis of quantum states” in
http://arxiv.org/abs/0810.2435
Right, Aram, I am not talking about Unruh’s argument but about his conclusion.
Let us relate the distinctions made in this post, between the worlds with and without quantum fault-tolerance, to Aram’s very first analogy. Aram opened his “classical fault-tolerance test” as follows:
This is perfectly correct. If we want to prove that 3-SAT requires exponential time, we need an argument that does not apply to 2-SAT and XOR-SAT. Today, after many years, we still do not have such an argument. But not having such an argument does not mean that we should believe that 3-SAT behaves like 2-SAT. Indeed, while a proof is still far off, we have strong reasons to believe that 3-SAT is hard; a world where 3-SAT is easy presents a very different reality than ours.
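For readers who want to see why 2-SAT sits on the easy side of this divide: satisfiability of a 2-CNF reduces to reachability in the implication graph (each clause $a \lor b$ yields $\lnot a \Rightarrow b$ and $\lnot b \Rightarrow a$), decidable in linear time via strongly connected components. Here is a minimal sketch of that standard algorithm (my own illustration, not part of the debate):

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.
    clauses: list of pairs (a, b); a negative integer means a negated literal.
    Uses the standard implication-graph + strongly-connected-components method."""
    node = lambda lit: 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
    fwd, rev = defaultdict(list), defaultdict(list)
    for a, b in clauses:               # clause (a or b) gives ~a -> b and ~b -> a
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            fwd[u].append(v)
            rev[v].append(u)
    N = 2 * n
    order, seen = [], [False] * N
    def dfs_order(u):                  # first pass: record DFS finish order
        seen[u] = True
        for v in fwd[u]:
            if not seen[v]:
                dfs_order(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs_order(u)
    comp = [-1] * N
    def dfs_comp(u, c):                # second pass: label SCCs on the reversed graph
        comp[u] = c
        for v in rev[u]:
            if comp[v] == -1:
                dfs_comp(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs_comp(u, c)
            c += 1
    # unsatisfiable iff some variable shares a component with its own negation
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

No comparable structure is known for 3-SAT, which is exactly the asymmetry at issue here.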
A similar thing can be said about quantum fault-tolerance and classical fault-tolerance. Proving that fault-tolerant quantum computing is impossible will require more experimental effort and theoretical progress. But the fundamental differences, like those described in this post, between a world with fault-tolerant quantum computing and our reality give good reasons to believe, and certainly to explore the possibility, that quantum fault-tolerance is not possible.
I’m not so sure those differences are fundamental. Suppose that our existing world had turbulence, DNA, brains and other complicated phenomena, BUT no electricity or magnetism. Or we lived in a place where we couldn’t find any metal or other conductors. You could even imagine living in the 18th century, but with a 21stcentury understanding of theoretical computer science.
Then classical computers would be nearly impossible to build (assuming we found no other mystery technology), and you might conjecture that large-scale Turing-universal devices didn’t exist for some fundamental reason.
I think the evidence against large-scale quantum computers is about as strong as the evidence against large-scale classical computers in the 18th century. Clearly complicated quantum/classical phenomena are happening in the world, and clearly we can control these a little bit. Why should there be a law of nature saying that computation can happen, but just not the way we want it to?
Aram and Gil are advancing (IMHO) good arguments on both sides of the quantum computing skepticism-versus-confidence question. Perhaps FT quantum dynamics is much like Navier-Stokes dynamics?
That is:
(1) We have both physical reasons and mathematical reasons to suspect that neither framework is physically exact, nor perhaps even mathematically well-posed.
(2) Yet for practical everyday purposes we repose great confidence in both frameworks.
(3) With regard to computational simulation, both frameworks belong to a complexity class that is very difficult to simulate.
(4) Yet in practice, simulation capacity advances at a faster-than-Moore’s-Law pace that has been sustained for decades.
(5) And finally, we have very little understanding of why progress in simulational capacity has been so rapid, or why it has been sustained through so many Moore-style doublings, or how many more doublings we may reasonably foresee … and it would be very desirable to attain such an understanding.
For these reasons, the impression has been growing (in me) during the Kalai/Harrow debate that the thesis at issue — together with all three of the simulation-related Millennium Prize problems (Navier-Stokes, Yang-Mills, and P vs NP) — is surely among the most difficult questions that we can ask regarding computational simulation … but these are not the most interesting questions we can ask, and neither are they the most practically important questions we can ask.
The search for better questions is (it seems to me) just as important as the search for better answers, and so we are all of us very fortunate that forums like TCS StackExchange and MathOverflow now offer an open and well-moderated venue for formally posing such questions, and weblogs like Gödel’s Lost Letter and P=NP and The Quantum Pontiff offer such an enjoyably collegial venue for discussing them.
Finally — and most importantly — on a planet equipped with many tens of millions of mathematicians, scientists, and engineers; a planet that is facing very many, very diverse, very urgent challenges; it is vital that we sustain as many interesting and important open STEM questions as feasible, for the simple reason that only by doing so can we make best use of our planet’s enormous (and growing) pool of STEM talent.
And therefore, we are very fortunate that uniformity of opinion as to “Which questions are most important?” is neither necessary nor feasible nor desirable.
Summary: the vigor of the Kalai/Harrow debate is indicative of the vigor of the 21st century STEM enterprise, and so we have good reasons to rejoice that a definitive resolution of the debate is not likely. :)
I don’t want to belittle the powerful analogy between quantum computers and classical computers, which is a major driving force for quantum information, but there are some important distinctions worth mentioning. It is correct that building large-scale digital computers involved scientific and technological developments that 18th-century people could not anticipate. However, let’s imagine, as Aram proposed, “living in the 18th century, but with a 21st-century understanding of theoretical computer science.” We could have manually run large-scale classical algorithms. As a matter of fact, powerful algorithms were already known and used at that time. We would also have compelling reasons to believe that certain natural computing devices (like our own brains) already come close to the abstract notion of a universal computer.
Dear Aram, I am also not sure that my distinctions are fundamental, but I think these distinctions are interesting and worth exploring. We can discuss them one by one. For example, should we take it for granted that high-dimensional quantum processes can always be simulated in low dimensions?
You wrote: “Clearly complicated quantum/classical phenomena are happening in the world.” This is precisely the issue. There is nothing clear about this “clearly” (as with many other appearances of the expression “clearly”). For example, if we accept Conjecture D from the previous post (which is very close to Unruh’s old conclusion), then approximately pure quantum evolutions are (approximately) bounded-depth. It may well be the case that quantum evolutions in nature which do not exhibit classical error-correction/FT are less complex compared to complicated classical processes.
You correctly said that we can control quantum evolutions a little bit. In my opinion, the issue is not just humans’ ability to control quantum evolutions but rather the nature of feasible approximations in quantum mechanics. (If you wish, it is about nature’s ability to control quantum evolutions.) This reflects on the laws for natural quantum evolutions and also on our ability to describe quantum processes as well as to control them.
Aram, we are in agreement on this matter. In fact, this is a main reason why I find the question fascinating.
Dear John, thank you for your many good comments and for your amazing optimistic spirit (and for often reading my mind). You raised quite a few good issues (some of which are mentioned on our long agenda). Let me mention two that I hope we can discuss further: the issue of classical simulation of natural quantum processes, and the connection to classical noise and denoising; in the end, classical noise may turn out to be a more complex issue than quantum noise. (I wish I could also relate to your comments on quantum-like evolutions on non-flat geometries, but I lack the background, so I will go about it offline.)
While the time for summation is approaching, I do hope that we will still have the opportunity to focus on smaller fragments of the large issue and discuss them separately, one at a time (and delay the summation to the end).
Let me also note that the most recent thread on “The Quantum Pontiff” is, to a large extent, a “meta thread” regarding this debate. Also, let me draw attention again to John Preskill’s survey entitled “Quantum computing and the entanglement frontier” (Rapporteur talk at the 25th Solvay Conference on Physics, “The Theory of the Quantum World,” 19-22 October 2011), which is highly relevant to many issues discussed in our debate (and is also quite humorous).
Gil, when I was a graduate student I somehow had the mistaken impression that noise was all about “the physics of dirt,” as Pauli infamously called it back in the 1920s. Yet in subsequent decades a large community of illustrious mathematicians and physicists proved that what Pauli perceived as dirt in reality contains large quantities of pure math-and-physics gold: Onsager for reciprocity in transport theory, Wilson for renormalization group theory, KAM for KAM theory, Laughlin for exact (!!!) quantization of resistance, Kraus for measurement theory, etc.
Perhaps we may reasonably foresee that among the most interesting and strategically significant implications of FTQC will be “21st-century math-and-physics gold that shines forth from quantum noise.” Thus history suggests that if FTQC skepticism inspires us to a clearer, deeper, broader understanding of quantum noise dynamics, that might be a very good outcome.
Good point about not taking for granted that complicated quantum evolutions exist. Of course we cannot rule out that BQP is in BPP, or (more relevant to your conjectures) that “quantum polynomial time with realistic noise” is in BPP.
On the other hand, a large fraction (perhaps the majority) of DOE supercomputer time is devoted to quantum simulation. John S. is right that using clever math we can improve these simulations a lot in many cases of interest. Nevertheless, simulating the proton is roughly at the limits of our ability, and simulating the 26 electrons of iron is far beyond it. That is, if we want to know properties of the proton or the iron atom, in principle we should be able to just solve Schrödinger’s equation, but in practice we can determine parameters with far greater accuracy experimentally than we can numerically. (Of course quantum chemists don’t simulate molecules from scratch; they use an experimentally-obtained “advice string” containing observed parameters.)
So nature certainly does quantum things that are too complicated for our best minds plus best computers to simulate from first principles. This doesn’t contradict your conjecture that all these complicated quantum calculations in nature might be somehow incomparable. But barring some conceptual breakthrough along the lines of BPP=BQP, nature is certainly full of hard-to-simulate processes.
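A back-of-envelope calculation (my own, with deliberately crude assumptions) shows why “just solving Schrödinger’s equation” for iron’s 26 electrons is hopeless by brute force: even storing a single many-body wavefunction on a laughably coarse grid dwarfs any conceivable memory.

```python
# Storing a 26-electron wavefunction on a coarse real-space grid:
grid_points_per_axis = 10          # an extremely crude 10-point grid per spatial axis
electrons = 26
points_per_electron = grid_points_per_axis ** 3   # 3 spatial dimensions per electron
dim = points_per_electron ** electrons            # configuration-space grid points
bytes_needed = dim * 16                           # one complex128 amplitude each

print(f"amplitudes: {dim:.1e}")                   # about 1e78 amplitudes
print(f"memory:     {bytes_needed:.1e} bytes")    # about 1.6e79 bytes
```

Antisymmetry, clever bases, and approximations like DFT reduce this astronomically, which is exactly why quantum chemistry lives on approximations rather than brute force.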
Aram, a student reading that assertion would be nonplussed by articles like (as one example among thousands) “Phase diagram studies on iron and nickel silicides: high-pressure experiments and ab initio calculations.”
More broadly, in fields as diverse as aeronautics, synthetic biology, and materials science, it has in recent years become the general practice — no longer the rare exception — that experimental coverage of the space of hypotheses and/or design parameters is so sparse relative to simulational coverage that experiments serve mainly to validate/verify/tune the simulational surveys that (nowadays) have become the main engines of STEM progress.
So rapid has been the onset of this simulation-driven paradigm shift, and so confusingly transformational are its consequences for STEM enterprises, that perhaps it is fair to regard faith in FTQC (and Aaronson/Arkhipov linear optics experiments too) as a natural reaction against this paradigm shift. That is, it is natural to hope for a future in which at least some experiments — however few! — are provably not amenable to simulation by classical resources in P.
Confidence in FTQC thus implies the reasonable hope that some physically realizable dynamical systems are not simulable with computational resources in P … and personally, I share this hope. But I also share what is becoming the de facto consensus view, that non-simulable dynamical systems are uncommon, and that few of them (possibly even none at all) are of appreciable practical interest to present-day STEM enterprises.
That is why a binary black-versus-white outcome of the Harrow/Kalai debate is neither necessary, nor feasible, nor even desirable (IMHO): because there is much good in both points-of-view.
I think that what the paper you linked to (and quantum chemistry in general) calls “ab initio” still uses approximations to avoid dealing with the quantum mechanics of the core electrons. This of course yields reasonable approximations, as simulations in practice often do. But the approximations are often much less accurate than what we can obtain with experiments.
Here is a review.
http://rmp.aps.org/abstract/RMP/v75/i3/p863_1
See table 6 for some comparison of numerics (which they call “theory”) and experiment.
Actually, the story with the core electrons is an interesting one. The stability of the core (and of course the nucleus) is what makes numerical quantum chemistry at all possible. But this is not (as far as I can tell) because of correlated decoherence, so if nature avoids a hard computation, it’s not for the reasons that Gil suggests may derail quantum computing.
As for the rise of simulation across science, I agree that this is an important point. But the things we choose to study are not a random sample of the difficult problems in the world. We study what our tools are good at solving. For hundreds of years, we tried to solve differential equations analytically. The probability of a pen-and-paper breakthrough that would solve the problem of turbulence is generally thought to be nil. But computers are a new tool that we haven’t yet found all the uses of, so it’s natural that progress in science is being made today by exploring this new frontier.
Now it may be that quantum computers never solve a useful problem, and that our heuristics, approximations and experiments for quantum chemistry are good enough for all economically-significant applications. However, I believe my point stands that nature is solving problems with quantum mechanics that are harder than we can solve.
Just to return briefly to the paper you linked to, it seems that for the key parameters, the numerical values obtained are 2-5 std. devs. away from the experimental values. The authors describe this as follows:
(DFT = density functional theory, a leading classical approximation algorithm for quantum systems)
So, it is impressive that simulations are so good, but our supercomputers are still no match for 26 electrons, even when 24 of those electrons are sitting in the core not doing very much.
Aram wrote:
I do not think that the evidence against large-scale quantum computers is as strong as that. There is a fair possibility that building large-scale quantum computers will not require further scientific or technological revolutions of the kind Aram talks about. But it is also possible that we are still missing large pieces of the puzzle. In this case, perhaps, as Aram suggests, something as great as understanding electricity and magnetism, or building conductors with no metal around, is needed to build quantum computers; or, alternatively, something as great as understanding that solving 3-SAT is not feasible is needed to realize that they cannot be built.
Aram, without doubt your views are absolutely correct … and (IMHO) so are Gil Kalai’s! :)
As for the year-by-year increase in ab initio simulation accuracy relative to experimental accuracy, isn’t this comparable to the year-by-year increase in computer chess skill relative to human player skill?
We are all of us familiar with the (transformational) end-result of that evolutionary competition.
Oops … the above remark is intended as a response to Aram Harrow’s excellent comment, with whose various points I agree wholeheartedly, while at the same time largely agreeing with the thrust of Gil Kalai’s arguments.
John, you are very agreeable. :)
It’s a good question where the trajectory of quantum simulation will go. I think that we can expect many more speedups in the future from theoretical advances than we can from Moore’s law. We might even be able to calculate the mass of the proton before long. Definitely it’s an extremely interesting and useful area to work on. But I don’t think my fundamental point changes: the computational questions involved are very very hard, and sometimes intractable.
Aram, I agree very sincerely with the above post too, and just to say something semi-controversial, you are absolutely right (IMHO) to foresee that advances from better simulation algorithms are likely to exceed even the advances from faster/larger hardware.
A key broad question (regarding which I am in doubt) is this: For how many more doublings, and at what doubling cadence, can the 21st century feasibly sustain simultaneous STEM advances in (1) computational hardware speed-and-size, (2) Shannon capacity in communication-and-sensing, (3) nanoscale fabrication capacity, (4) algorithmic efficiency, and (5) open-and-curated dataset size?
It is evident that advances in any one of these five strategic areas acts to sustain progress in the other four, and so (as it seems to me) it may be technically feasible to sustain (let us imagine) twenty more overall STEM doublings at a fairly rapid cadence … say, a five-year doubling cadence that extends to the end of the 21st century.
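The arithmetic behind “twenty more doublings at a five-year cadence” is simple but worth seeing spelled out:

```python
doublings = 20
cadence_years = 5
total_years = doublings * cadence_years   # 100 years: roughly the rest of the 21st century
growth_factor = 2 ** doublings            # 1,048,576: about a million-fold overall growth
print(total_years, growth_factor)
```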
But is it socially feasible to sustain such a doubling cadence? That is where I am in doubt. It’s not humanity’s STEM capacities that I doubt, but rather our ability to synthesize social narratives that evolve compatibly, and comparably rapidly, with the doubling cadence.
Soberingly, “go slow” strategies are infeasible, in view of our planet’s burgeoning population, whose ever-greater expectations are exerting ever-greater pressures upon planetary resources that once were naturally abundant, yet that in the future we must learn to synthesize.
Thus the issues associated with the Kalai/Harrow debate are not academic; rather they are strategic in the most vital and sobering sense.
Gee, now I wish that in the above I had written “narratives and enterprises” in place of just “narratives”, because a narrative without an enterprise is just a story, and an enterprise without a narrative is just a business.
Stories and businesses are individually OK (needless to say) and yet the 21st century’s great challenges require the larger, stronger, and more enduring power of great narratives that are realized in great enterprises.
The Kalai/Harrow debate has within it the kernel of great narratives and great enterprises, and that is why it is an important debate.
Aram, can you explain what you mean by simulating “from first principles?”
Good question.
I mean, “first principles” means you start with something like this
http://www.rentalbarn.nl/viewshound/higgs/StandardModelLagrangian.png
and a short description like “lowest energy configuration of 26 protons, 30 neutrons, and 26 electrons” (well, I guess technically, the protons + neutrons aren’t really fundamental).
But in practice, we have to do some phenomenology. So we take protons and neutrons as discrete particles, ignoring their internal structure, and we estimate their parameters (mass, charge, etc.) from experiments.
Then we also have to use our brains a bit. If we just try to discretize everything and diagonalize a huge matrix, it’ll be too hard. But we can solve the nucleus first, and then the core electrons, and then the valence electrons, making some plausible approximations. It gets a little tricky, because some of the approximations come from experimental observations, but they mostly also can be justified from first principles even if they weren’t found that way.
Of course it’s hard to prove upper bounds on “using our brains”. We can always potentially come up with something more clever. All I want to exclude is direct physical measurement of the quantum system in question (like iron atoms).
Aram, in what sense are the following two (exceedingly commonplace) quantum simulation assumptions allowable in a “first principles” calculation?
• the fine-structure constant $\alpha$ takes its experimentally measured value
• inner-shell Fe electrons do not participate in chemical binding
It seems reasonable (to me) — and entirely compatible with common nomenclature — that both assumptions are allowed in a “first principles” simulation.
Also, your remark
is a noble sentiment & hopefully we are far from approaching this upper bound.
Sure, well, first principles change. But we can definitely derive the principle that core electrons don’t interact much with valence electrons, so we shouldn’t make that an assumption. At least right now, we don’t know how to derive alpha from anything more fundamental.
Also, I’m not sure how much error it introduces to assume core electrons are decoupled, but it might be one of the weaknesses of our current numerics.
It is even harder to prove lower bounds.
Another consideration relating to classical and quantum dynamical simulations has a natural analog in computer chess.
Imagine that chess code A is superior at blitz time controls, while chess code B is superior at classical time controls, per the LaTeX table that follows (hopefully!):
Which chess code is better? That depends on circumstances.
Similarly in dynamical simulation, my personal observation has been that in synthetic biology especially, researchers nowadays mainly play “blitz”.
That is, synthetic biology researchers increasingly prefer to run many thousands of moderately accurate dynamical simulations, rather than dozens of highly accurate simulations. The reason is that the verifying/validating experiments of synthetic biology typically are done in parallel by robots, rather than sequentially by humans.
Thus the future of quantum simulation may perhaps focus upon larger numbers of very fast dynamical simulations that each are only moderately accurate, as contrasted with smaller numbers of slow-but-highly-accurate simulations.
Interesting. But I don’t think it happens that strongly in chess—because brute force is king.
It happens a little—e.g. Rybka comes with different customizable “personalities” and they’re said to make a difference. It does worry me that computerized life sciences “plays blitz”. My chess work has both modes…
Nice table!
Yes, in the arms race with influenza viruses, both human researchers and our viral opponents are now deploying massively parallel evolutionary strategies.
In essence, human researchers match millions of blitz-game robotic experiments against the influenza virus’s septillions of mutagenic replication cycles … and year-by-year, the human researcher-plus-computer-plus-algorithm “centaur” plays more strongly against our viral opponents.
Fortunately! :)
Let me relate in some detail to this comment by Aram:
This is an important point. Here is a little thought experiment. Suppose that you write the computer program required for the computation for the 26 electrons of iron. It is hard to tell from your description if there is a definite such program (“from first principles”) based on computation in quantum field theory, but let us suppose that such a computer program exists and is essentially unique, that the running time is the only issue, and indeed that the running time is well above what digital computers can perform. Now suppose that an oracle can give us the outcomes of this program. And suppose also that we can experimentally probe the answers (or that another oracle gives us the correct answer in nature). Here are three possibilities:
a) These computations give accurate results (compared to experiments) which require computationally superior quantum computers.
b) These computations give accurate results but they can be derived by classical computers based on some mathematical shortcuts (and/or based on probing some unknown parameters).
c) The outcomes given by the clever oracle will give an incorrect description of reality.
What would your choice be?
Also, as an aside, what about the following possibility:
d) All the (correct) calculations regarding the 26 electrons of iron can be carried out with a bounded-depth quantum computer controlled by a classical computer.
In particular, do you think that the quantum processes describing the iron atom involve some quantum error-correction?
Walter Kohn's Nobel Lecture "Electronic structure of matter–wave functions and density functionals" (Rev. Mod. Phys., 1999) discusses these matters at considerable length, without however distilling its conclusions down to a readily-quoted sound bite.
This in part reflects the remarkable fact that even though the Kohn–Sham article itself, "Self-Consistent Equations Including Exchange and Correlation Effects" (Phys. Rev., 1965), is the most-cited article in the entire history of physics (14,972 citations), and even though almost 50 years have passed since it was published, we don't really understand its mathematical foundations all that deeply.
In consequence, no one today, whether skeptic or non-skeptic, can confidently assert that the feasibility (or not) of FTQC has no bearing on this great and enduring mystery of mathematics and physics.
(Continued.)
More precisely, the question is about the existence of approximately-pure complicated quantum evolutions.
The possibility that "quantum polynomial time with realistic noise" is in BPP is indeed the top end of my point of view. Certainly, this is something to be seriously discussed. (I suggest we take for granted in this debate that BQP is different from BPP.)
This is a crucial point. When you talk about "Schrödinger's equation" you do not really mean it. We have to restrict our attention to a few degrees of freedom and neglect many others. Here we are facing a problem which is very similar to the issue of noise in controlled systems, except that the noise is in describing and not in controlling. As far as I understand, the art and science of how to deal with the neglected degrees of freedom (which account for the external environment, the internal structure of the particles, and other things) is a crucial part of the relevant physics computations. In any case, this issue is very relevant to our discussion.
Note that our computational task is different from nature's computational task. Nature has a huge program with a huge number of variables which can be computationally quite simple. We want to probe the behavior of a small number of variables. It is not at all clear that this can be done just "from first principles," and it is not at all clear what the computational difficulty is.
Yes, I agree with that!
http://arxiv.org/abs/1207.6131
On Google+, Aram gave a link to an old debate about quantum computing from 2000 entitled "when will quantum computers become practical".
The subtitle given to this post was “Computation cannot hide the physics.” In this context we mainly talked about geometry and here is a little comment also about physics and time.
Over at the Quantum Pontiff, Joe Fitzsimons, in response to a question I asked about fusion, gave a wonderful example:
We can discuss three levels of simulation for mixing two buckets of paint of different colors. As Joe said, if we do it with real paint, reversing the evolution is impossible. If we simulate it on a classical computer, reversing the evolution is possible. And now what about emulating on the quantum level the microscopic process of mixing two real paints? Can we emulate with equal ease the dynamics forward and backward in time? Or perhaps reversing the evolution is not possible? Without quantum fault tolerance it may well be the case that reversing a quantum evolution in time is not always possible. In any case, this is a fascinating question.
BTW, my last WP post dealt with "mixing paints" on classical and quantum computers, with GIF-animated pictures …
Hi Gil,
I've only just seen this now, but let me respond. You should be able to simulate the dynamics both forward and backwards in time for a closed system with the same level of accuracy, independent of fault tolerance. The reason for this is that the evolution will be unitary, so the operators for simulating evolution both forward and backward in time will be related by the Hermitian conjugate. Given a quantum circuit which simulates the evolution forward in time, taking the Hermitian conjugate just corresponds to taking the Hermitian conjugate of each gate and reversing the order of the gates. A gate should be pretty much exactly as hard to perform as its Hermitian conjugate, since you can switch between them by relabeling your basis. So you need to perform effectively the same gates in either case, just with their order reversed. If you assume the Hamiltonian is roughly constant in time (which it is for the paint-mixing example), or even roughly periodic with period much less than the time scale of the simulation, then you not only get the same gates, but also a similar ordering on the gates (exactly the same for the case of a constant Hamiltonian).
I guess the reason that you suggest it may not be equally easy to simulate in both directions is because of coupling to the environment, so that you simulate an open system rather than a closed one. However, this is not really fair. You need to be careful about how you set up the computational problem. In simulating the time-reversed dynamics you need to keep in mind that the unmixed state of the paint is only one of an exponentially large number (in terms of the number of particles) of states which can legitimately evolve into the final state. Thus a noisy simulation is just as good in each direction.
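To make Joe's recipe concrete, here is a minimal NumPy sketch (the gate choices are mine and purely illustrative): dagger each gate and reverse their order, and the composed "backward" circuit exactly inverts the forward evolution.

```python
import numpy as np

# Two standard single-qubit gates (the T gate is deliberately not self-adjoint).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])       # T gate

def circuit_unitary(gates):
    """Compose gates in temporal order: the last gate applied ends up leftmost."""
    U = np.eye(2, dtype=complex)
    for g in gates:
        U = g @ U
    return U

forward = [H, T, H, T]                                # a forward-in-time circuit
backward = [g.conj().T for g in reversed(forward)]    # dagger each gate, reverse order

U_fwd = circuit_unitary(forward)
U_bwd = circuit_unitary(backward)

# The reversed circuit exactly undoes the forward circuit.
assert np.allclose(U_bwd @ U_fwd, np.eye(2))
```

Note that the backward circuit uses the same physical resources as the forward one: each daggered gate is as easy as the original, only the ordering changes.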
Thanks for the answer Joe,
I agree that for noiseless quantum evolutions it seems equally hard to run an evolution backward and forward. Now, suppose that we can identify a class of pure evolutions that can be approximated without quantum fault tolerance (as I am trying to do). Should we expect that this class will be closed under the operation of reversing time? And if so, in what way? For example, the class of quantum evolutions of small depth which we considered in this context is closed under time-reversing. I am fond of the idea that a preferred arrow of time will emerge from the assumption of "no quantum fault tolerance," but in any case this is for me a fascinating question.
Joe and Gil, there are considerable difficulties and subtleties attendant to the notion of time-reversal in open quantum systems, and there is a correspondingly large physics literature upon this subject. In this regard, please let me commend the recent survey article by Jan Dereziński, Wojciech De Roeck, and Christian Maes, "Fluctuations of quantum currents and unravelings of master equations" (arXiv:cond-mat/0703594v2). See especially their article's "Section 4: Quantum Trajectories" and the extensive bibliography provided.
Regrettably, the ideas in the physics literature are not expressed in the naturalized and universalized idioms that mathematicians find respectable, and so these ideas are not appreciated (especially by mathematicians) as widely as their virtues merit.
That is why on Physics.StackExchange, in answer to Scott Aaronson's question "Reversing gravitational decoherence", I have posted a response that translates (provisionally) our existing physics understanding into a more nearly naturalized and universalized mathematical language.
So constructed, a naturalized summary of our present physics understanding amounts to this:
These ideas are by no means new, yet on the other hand, we are far from achieving a fully naturalized and universalized appreciation of them. Good! :)
As an addendum, my colleague Rico Picone and I are spending the night in the lab, wrestling with a quantum physics issue that is deep, fundamental, and yet eminently practical too … localization.
Our present experimental efforts are directed to mitigating a localization problem that apparently is utterly mundane: the unwanted transport of vibrational energy down the length of a radio-frequency cable, from an obviously(?) classical system (namely, a power amplifier) to an obviously(?) quantum system (namely, interacting spins/qubits in a magnetic gradient).
It is evident that the quantum physics of localization processes is only apparently mundane, because whenever we analyze carefully the theoretical issues that are associated to decoherence in scalable quantum computing, or in gravitational physics, or in nonlinear formulations of quantum mechanics, or in quantum spin microscopy … indeed, in pretty much any area of fundamental or applied quantum dynamics … we find that the quantum dynamics of localization processes plays a central role.
There is of course no shortage of articles and textbooks relating to the quantum dynamics of localization, and to the closely associated physics of global conservation and local transport. Quite the reverse! There are tens of thousands of relevant articles and hundreds of textbooks.
Given the vast body of extant literature, how can the Aram Harrow/Gil Kalai debate synthesize a lucid conclusion? One reasonable strategy is to borrow from modern mathematics the ideals of naturality and universality, and to express the extant ideas of quantum physics in this language.
That is why, in between wrapping RF cables with foam vibration-dampers, I have been tuning up a Physics.StackExchange answer to Scott Aaronson's question "Reversing gravitational decoherence". A summary of the answer is straightforward:
This answer illustrates that there are tremendous 21st century opportunities for experimental, theoretical, and mathematical research relating to the substance of the Kalai/Harrow debate, and moreover, shows us also that this research has both fundamental and practical implications for every enterprise that seeks to press against the quantum limits to sensitivity, speed, efficiency, channel capacity, and power density.
All of this is good news for young 21st century researchers, and that is why (IMHO) Gödel’s Lost Letter and P=NP has performed an immensely valuable service in hosting this debate, and why Aram Harrow and Gil Kalai deserve our thanks for so ably illuminating the technical aspects of the debate, and so effectively arousing everyone’s appreciation and enthusiasm for it.
Conclusion It is good news that we don’t know who is right, Gil or Aram. Because in finding out, we will discover how to conceive, create, sustain, and accelerate the great enterprise of the 21st century.
Thank you, Aram and Gil and Dick and Ken … and thank you especially Rico Picone, for your focused efforts this night, to turn these abstract theoretical ideas into practical engineering realities and productive 21st century enterprises! :)
These are very interesting comments, John, thanks a lot. I also found the discussion on Physics.StackExchange you referred to interesting. I could identify with Scott Aaronson's formulation of the crux of the matter, "for which systems IS thermodynamics relevant?", although I don't know formally what he means by "thermodynamics relevant." Also, the Zeilinger et al. buckyball experiment mentioned there seems relevant here, and I would be happy to learn more about it and in which sense it describes a system for which thermodynamics is irrelevant.
Yes, Gil, the question “For which systems is thermodynamics relevant?” has for centuries been an extraordinarily fruitful source of good math, physics, engineering, *and* enterprise. And surely a natural 21st century expression of this fruitful question is
It is not at all clear that those thermodynamical analyses that postulate spatially localizable thermodynamical potentials (which, AFAICT, encompasses pretty much the entire literature of thermodynamics) are well-posed for (1) strictly unitary dynamical flow on (2) exactly flat complex state-spaces of (3) exponentially large dimension. That is why thermodynamic analyses in the literature generally relax, either explicitly or implicitly, one or more of postulates (1)–(3).
Once we notice that mathematicians, physicists, and engineers invariably relax these postulates, we are led to reflect that perhaps Nature does too! :)
John, I am still missing something basic here: in what sense isn't it the case that thermodynamics is always relevant?
Gil, one example of a quantum dynamical system for which thermodynamical questions are not well-posed is the question "What is the heat capacity of the contents of this box?" asked of a box that has some finite amplitude to contain (a macroscopic) Schrödinger's Cat.
Thermodynamical sensibility can be naturally restored to this class of questions by requiring that the answer encompass a description of the physical process by which the heat capacity is determined … these measurement processes necessarily include non-unitary dynamics and/or non-flat/low-dimension state-spaces that (either explicitly or implicitly) decohere the Cat.
Concretely and unambiguously, the localizing decoherence that restores sensibility to thermodynamical predictions (as discussed in the Physics.StackExchange answer to Scott's question "Reversing gravitational decoherence") can be naturally provided by specifying that the fluctuations that enter into Onsager-type derivations of thermodynamic transport coefficients not be operator expectation values, but rather be the data streams that are naturally associated to unravelled Lindbladian processes.
Viewed as non-unitary stochastic dynamical flows, Lindbladian measurement-and-control processes act to shrink and curve the (effective) quantum state-space, and thus "localize the cat", and thus restore sensibility to thermodynamical predictions for quantum dynamical systems. This computational framework proves to be exceedingly useful in practical quantum systems engineering, and moreover it provides a satisfying viewpoint for appreciating the consistency, indeed the natural unity, of the (vast) thermodynamical literature, the (vast) quantum dynamical literature, and the (vast) quantum information theory literature.
Needless to say, all of the above ideas and computational idioms are extant in the literature, but they are not presented in any single, self-contained textbook or survey article (as far as I know). The Kalai/Harrow debate provides a wonderful venue for thinking about them! :)
Two updates: 1) A lovely post with a good discussion over at Shtetl Optimized about the many-worlds interpretation (MWI) has several tangent points to our debate. The post and comments emphasize questions like: is it possible to have two conscious minds in superposition? Does a 'no' answer violate the linearity of quantum computers? Does 'free will' violate quantum mechanics? Etc. The issue of irreversible decoherence, which is central in our debate, is of importance in this discussion over at Scott's blog as well. One outcome of our debate and some offline discussions is that I now better understand the point of view that QC skepticism is actually QM skepticism (though I still disagree with this point of view), and Scott's new post reminded me that I have planned to comment on this matter, probably when our debate resumes.
Another update: Robert Alicki (who spent the year at the Weizmann Institute) uploaded a new version of his paper that discusses quantum computers as perpetual motion machines of the second kind. The new paper contains some new directions, and Robert would love to get feedback from people.
Plenty of Gödel’s Lost Letter readers remain greatly interested in the Kalai/Harrow debate.
The self-styled "quantum engineer" John Bell was fond of recounting Paul Dirac's assessment of Dirac's own The Principles of Quantum Mechanics as "a pretty good book, but the first chapter is missing."
We all hope that the Kalai/Harrow debate will provide content for that still-missing chapter! :)
Let me add another item to the six items on the fundamental differences between the world with and without quantum fault-tolerance.
7. Can you tell the arrow of time from a realistic quantum evolution? If quantum fault-tolerance is possible then the answer is "yes:" for any quantum evolution that can be realized by your quantum computer, the reverse-in-time evolution can also be realized (as easily). If quantum fault-tolerance is impossible there might be unitary evolutions that can be well-approximated while their time-reversed mirrors cannot be approached. (Here is a somewhat related old blog post on quantum computation with "drunken time".)
I meant of course: If quantum fault-tolerance is possible then the answer is "no:"
Hi Gil,
How would you reconcile a difficulty in implementing the time-reversed version of a realistic evolution with the Stinespring dilation? We can always view any quantum evolution as a unitary on a larger space, and it seems extremely likely that you would be able to implement the operator corresponding to that unitary, inverted, with the additional space traced over. It doesn't invert the evolution, but you do get a time-reversal symmetry just with stochastic evolution.
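The Stinespring picture Joe invokes can be checked in a few lines. The sketch below (my own illustrative example, using the amplitude-damping channel with an arbitrary damping probability) builds the dilating isometry W from the Kraus operators, applies it, traces out the environment, and confirms that this reproduces the channel.

```python
import numpy as np

p = 0.3  # damping probability (illustrative choice)

# Kraus operators of the amplitude-damping channel.
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
     np.array([[0, np.sqrt(p)], [0, 0]])]

def channel(rho):
    return sum(Ki @ rho @ Ki.conj().T for Ki in K)

# Stinespring isometry W: |s> -> sum_e (K_e|s>) (x) |e>_env.
W = np.zeros((4, 2))
for s in range(2):
    for e in range(2):
        W[:, s] += np.kron(K[e][:, s], np.eye(2)[:, e])

assert np.allclose(W.conj().T @ W, np.eye(2))  # W really is an isometry

def dilated_channel(rho):
    # Isometric evolution on system + environment, then trace out the environment.
    big = W @ rho @ W.conj().T
    return np.einsum('aebe->ab', big.reshape(2, 2, 2, 2))

plus = np.full((2, 2), 0.5)  # the state |+><+|
assert np.allclose(dilated_channel(plus), channel(plus))
```

The isometry extends to a full unitary on system plus environment by completing its columns to an orthonormal basis, which is the "unitary on a larger space" of Joe's comment.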
Thanks, Joe. "It seems extremely likely that you would be able to implement the operator corresponding to that unitary inverted." Why is that?
Well, if you can do it on a gate-by-gate basis, then you can do it for any circuit. Further, most gates you may want to perform are self-adjoint in the ideal case, and indeed you can pick universal sets where all gates are self-adjoint (i.e. Hadamard + Toffoli gates).
Sure, if you have (or once you will have) universal quantum computers then you will easily be able to reverse every evolution. (And not just reverse: you could easily run every evolution in any reordering of the steps that you desire.)
The six points in the first section of the post and this suggested seventh point are meant to describe the differences between the world without (or before) universal quantum computers and the world with universal quantum computers.
Actually I am not sure, regarding what can be achieved today in experimental quantum physics, whether it is possible to put a finger on what kind of quantum evolutions cannot be reversed in time. (Or, more precisely, what kind of pure quantum evolutions can be approximated while their time-reversed mirrors cannot be approximated, in *today's* experimental physics.)
I think perhaps you have misunderstood me. My comments were meant to apply to non-ideal gates. Actually, we know for a fact that we can reverse many of the actual gates we apply, since time-reversing Hamiltonian evolution is fundamental to spin-echo experiments and decoupling pulses.
To give a concrete example, consider any superoperator which can be expressed with Kraus operators which are all self-adjoint. Then if you can perform that operator, you can also perform the time-reversed version of it (since they are identical).
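Joe's example is easy to verify numerically. In the sketch below (channels and the test state are my own illustrative choices), the bit-flip channel has all-self-adjoint Kraus operators, so daggering every Kraus operator gives back the same map; amplitude damping, by contrast, has non-self-adjoint Kraus operators, and its daggered map is genuinely different (it is not even trace-preserving).

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def apply(kraus, rho):
    """Apply a map given by a list of Kraus operators."""
    return sum(k @ rho @ k.conj().T for k in kraus)

p = 0.2
rho = np.array([[0.7, 0.2], [0.2, 0.3]])  # an arbitrary test state

# Bit-flip channel: both Kraus operators are self-adjoint, so the channel
# coincides with its daggered ("time-reversed") version.
flip = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]
flip_rev = [k.conj().T for k in flip]
assert np.allclose(apply(flip, rho), apply(flip_rev, rho))

# Amplitude damping: the Kraus operators are NOT self-adjoint, and the
# daggered map is a different map (not even trace-preserving).
damp = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
        np.array([[0, np.sqrt(p)], [0, 0]])]
damp_rev = [k.conj().T for k in damp]
assert not np.allclose(apply(damp, rho), apply(damp_rev, rho))
```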
Ok, I see. You say that if we have a quantum system built with gates that are self-adjoint and non-ideal, then for such a system you will be able to implement (non-ideally) an evolution and its time-reversed mirror equally easily.
(Even in today's experimental physics, and without quantum fault tolerance.)
This sounds reasonable.
What I don't understand from your first remark is why you think that (without/before having universal quantum computing) this is the case for general realistic quantum evolutions. Do you think that gate-based quantum devices are able to exhibit every experimentally created quantum evolution in today's physics? Or in a future physics without quantum fault tolerance and universal quantum computing? Also, what is the role in your argument of viewing your imperfect quantum evolution as a unitary on a larger Hilbert space?
Gil and Joe: are you possibly speaking past each other? I think Joe's "time-reversed" operations are analogous to the transpose of a stochastic matrix, rather than its inverse (which is of course unphysical, or even nonexistent, for any stochastic matrix that is not a permutation).
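Aram's transpose-versus-inverse distinction can be seen in a two-state example (the matrix below is my own illustrative choice): the transpose of a mixing stochastic matrix is still nonnegative, but the true inverse has negative entries, so exactly "undoing" the mixing is not a physical stochastic map.

```python
import numpy as np

# A row-stochastic matrix (rows are probability distributions) that mixes.
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
assert np.allclose(M.sum(axis=1), 1) and (M >= 0).all()

# The transpose is still nonnegative (after suitable reweighting it is the
# reversed process), ...
assert (M.T >= 0).all()

# ... but the true inverse has negative entries: it is not a stochastic map,
# so exactly inverting the mixing is unphysical.
Minv = np.linalg.inv(M)
assert (Minv < 0).any()
assert np.allclose(M @ Minv, np.eye(2))
```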
Gil: Can you elaborate on why the arrow of time cannot be discerned if FTQC is possible? Yes, if FTQC is possible, then we can achieve approximately reversible unitary operations on some protected subspace, even while doing other irreversible things elsewhere (to detect and correct the errors, for example). But the same is true classically. I can simulate a reversible classical computation in software on my desktop computer, which is built on irreversible gates. Yes, there is a protected reversible evolution that "exists," and looking only at this protected evolution, the arrow of time would perhaps be hard to discern. But if I look elsewhere, say at my electricity bill for running the computer, then there would be many signs of useful work being turned into unusable heat, and of entropy increasing.
More generally,
(a) “arrow of time cannot be observed” seems like something that should apply to ALL evolutions, whereas “FTQC” even in the optimistic case that proponents like me believe would only be to a protected evolution, analogous perhaps to the java virtual machine running in my browser.
(b) I fail to see why quantum is special. If FTQC is impossible, but FT reversible computing is manifestly possible (remember that we only need simulate the results of the computation, and this can be done with microscopically irreversible gates), then how is it possible for any statement about an “arrow of time” to turn on this distinction?
Hi Aram, yes, that is exactly what I mean.
Well, I said "likely" because I don't have a full argument, but rather am basing the position on a mishmash of things. The argument about self-adjoint Kraus operators is one such thing, but there are others. For example, if you accept that all evolution is Hamiltonian (albeit on an enlarged system), then you need a reason for why evolution under H is possible but −H is not. If I take H and then relabel my Pauli matrices, I can get evolution under −H+2Tr(H), which is equivalent to evolution under −H. This would seem to indicate that evolution under either should be roughly equally hard to achieve.
Further, for any H we do know how to approximate evolution under −H arbitrarily well with decoupling pulses.
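If I read Joe's relabeling argument as mapping H to −H up to a multiple of the identity, a single-qubit sketch checks it numerically. The assumptions here are mine: H is a real single-qubit Hamiltonian (no Y component), so conjugation by Pauli Y does the relabeling, flipping both X and Z; the leftover identity term only contributes a global phase, so the relabeled evolution reproduces the time-reversed evolution under H.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, -1j], [1j, 0]])

def evolve(Ham, t):
    """U(t) = exp(-i Ham t) for Hermitian Ham, via eigendecomposition."""
    lam, V = np.linalg.eigh(Ham)
    return V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T

# A real single-qubit Hamiltonian (no Y component), coefficients illustrative.
H = 0.4 * np.eye(2) + 0.7 * X - 0.3 * Z

# "Relabeling the Paulis": conjugation by Y flips the sign of X and Z.
H_rel = Y @ H @ Y.conj().T
assert np.allclose(H_rel, np.trace(H) * np.eye(2) - H)

# Evolving under the relabeled Hamiltonian reproduces time-REVERSED
# evolution under H, up to the global phase exp(-i tr(H) t).
t = 1.3
U = evolve(H_rel, t)
U_back = np.exp(-1j * np.trace(H) * t) * evolve(H, -t)
assert np.allclose(U, U_back)
```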
Hi Joe and Aram, let me get the quantifiers clear and start with the geometry issue that is discussed in the post itself. If FTQC is possible then you cannot infer from a quantum evolution anything about the geometry of the device that implements it. What I suggested in the post is that in today's physical reality, and in a future reality without FTQC, for some quantum evolutions (and states) we can infer some information on the geometry of the quantum device. (When I say "some information" I don't mean "all the information," so the two isospectral figures from Aram's reply actually demonstrate my proposed picture. For this pair of figures the common spectrum gives a lot of geometric information, albeit not the full geometry.)
And now to the arrow of time: If FTQC is in place then the class of pure quantum evolutions that you can approximate by a universal quantum computer is invariant under reversing time. Here I am talking about approximated pure quantum evolutions, so this necessarily refers to the protected part of your quantum computer, Aram, and does not refer to the other parts reflected in your electricity bill.
If FTQC is not in place, or is even not possible at all, then I raise the possibility that for some quantum evolutions you can infer the arrow of time. I don't propose that for every quantum evolution you can tell the arrow of time. Certainly you cannot tell the arrow of time if the evolution itself is time-symmetric. I only suggest that there might exist pure evolutions that can be approximated while their time-reversed mirrors cannot be approximated.
Ok, that makes sense.
But then the hypothesis of FTQC merely would expand the range of such humancreatable reversible processes from reversible classical computing to reversible quantum computing. So then I don’t see what you can say about the arrow of time from knowing whether the hypotheses of FTQC are valid. Probably I am missing something obvious here.
Aram: “But then the hypothesis of FTQC merely would expand the range of such humancreatable reversible processes from reversible classical computing to reversible quantum computing…Probably I am missing something obvious here.”
Let me see if I understand what you are saying, and perhaps also what you are missing, Aram. On the level of computational complexity, FTQC will "merely" expand the range from universal classical computing to universal quantum computing; but computational complexity is a coarse lens through which to look at quantum evolutions, and it abstracts away much of the physics. The whole point is that FTQC, if possible, has consequences which are not just complexity-theoretic. For example, in the post I proposed that FTQC will enable us to create Kitaev's 4D code on an IBM planar shape of qubits, a task which looks "really really hard" and perhaps even "ridiculously hard" (these are terms coined by John Preskill in his recent paper on the entanglement frontier). From a computational complexity point of view it does not make any difference if you realize Kitaev's 4D code on an IBM-shaped planar domain or in any other way.
The impossibility of FTQC (or even FTQC not being activated) opens the possibility that some quantum evolutions can tell something about the geometry, and similarly that some quantum evolutions can tell the arrow of time. (Again, I refer to approximately pure evolutions, leaving out the "electricity bill.") I suppose that the reality of experimental physics is that not for every (nearly pure) quantum evolution that we can demonstrate can we also demonstrate its time-reversed mirror, but I am not sufficiently knowledgeable in experimental quantum physics to tell. (But perhaps some readers are.)
I should mention that I cannot point out a pure quantum evolution for which my conjectures on the behavior of non-FT noisy evolutions imply or suggest that the evolution is realistic but that when time is reversed the resulting evolution is not. (This is an interesting challenge.)
Dear Joe, regarding your comments, if I understood you correctly your logic is:
1) Circuits with (ideal) selfadjoint gates are universal.
2) Therefore, even without FTQC, every noisy quantum evolution (that can be created at all) can be demonstrated with such a circuit with noisy (non-ideal) gates.
3) But for such noisy circuits, if you can simulate an evolution you can also simulate its time-reverse.
1) and 3) seem reasonable, but 2) is incorrect.
Yes, FTQC says we can simulate the 4d toric code in 2d. But we already know that we can simulate it, with exponential overhead, on a classical computer. The question is only whether we can simulate it with merely polynomial overhead.
Certainly there are things we can’t reverse, like two gases mixing. This is true in both classical and quantum physics. (We might be able to reseparate them, but that would not microscopically reverse the process. So features such as a curl of smoke would not be recreated.) We might simulate the mixing of two gases on a classical computer. If we did so in a microscopically reversible way then we could also simulate the gases “unmixing.” Ditto for simulating them on a QC, assuming a largescale FTQC were to exist. But this would not say anything about our ability to reverse things outside of a computer simulation.
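Aram's point that a microscopically reversible simulation can always be run backwards has a toy illustration (entirely my own construction): "mix" a configuration with a seeded sequence of reversible swaps, then undo each swap in reverse order to recover the original exactly, something no real pair of gases permits.

```python
import random

def mix(state, seed, steps):
    """'Mix' by applying a sequence of reversible swaps from a seeded PRNG."""
    rng = random.Random(seed)
    swaps = [(rng.randrange(len(state)), rng.randrange(len(state)))
             for _ in range(steps)]
    for i, j in swaps:
        state[i], state[j] = state[j], state[i]
    return swaps

def unmix(state, swaps):
    """Undo the mixing exactly: each swap is its own inverse, in reverse order."""
    for i, j in reversed(swaps):
        state[i], state[j] = state[j], state[i]

gas = list(range(20)) + [-k for k in range(1, 21)]  # two "gases" side by side
original = gas[:]
swaps = mix(gas, seed=42, steps=500)
unmix(gas, swaps)
assert gas == original  # the simulation "unmixes" perfectly
```

The point of the sketch is exactly Aram's: reversibility of the simulation says nothing about the reversibility of the physical process being simulated.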
You seem to be saying that by simulating something, we have “created” it. But I can do a 19thcentury simulation of a system just by using pencil and paper. Such simulations are interesting and useful, but I don’t see what their mere existence has to do with physics.
Dear Aram, I think that when you say "simulate" you do not mean the same thing as me. (At some point I used the word "emulate" to make a clear distinction.) FTQC tells you that you can create a quantum state which approximates the 4D toric-code state on a planar array of qubits (in fact, you can even approximate any unitary evolution on the 6 encoded qubits on this array). You can even create it on an IBM-shaped array. This is not a computational complexity issue. You cannot achieve it on a classical computer even with unlimited time, nor can you achieve it with an unlimited amount of paper and pencils and time and people.
I agree with your remark that being able to reverse a universal quantum computer computation does not automatically mean the ability to reverse the simulated physical process. There are views holding that believing in quantum-computing linearity means that, in principle, every quantum process can be time-reversed, but I don't agree with these views. What I am saying is that potentially, studying quantum evolutions without FT may exhibit an inability to time-reverse already on the level of approximately pure evolutions. But I am not sure that this will be the case.
The reason I think that the classical-computer and quantum-computer emulations are not so far apart is that if you use a QC and the desired state to be emulated is $|\psi\rangle$, then the state of the computer will not be, even approximately, $|\psi\rangle$. Rather, it can be described as a noisy version of an encoding of $|\psi\rangle$. That is, you apply an imaginary unitary encoding operation to $|\psi\rangle$, and then apply noise to each qubit.
The same story can be told about the exponential-time classical emulation, except that now the encoding is no longer a unitary, or indeed linear, map.
I agree that the quantum emulation feels closer to the original physics than the classical emulation does. But this seems like a rather imprecise statement, and not something that can be formalized.
Aram (emphasis added): "The same story can be told about the exponential-time classical emulation, except that now the encoding is no longer a unitary, or indeed linear, map.
I agree that the quantum emulation feels closer to the original physics than the classical emulation does. But this seems like a rather imprecise statement, and not something that can be formalized."
This is great, Aram, in the last paragraph you say that the distinction cannot be formalized and in the previous paragraph you offer a way to formalize it! :)
This reminds me that once I had a paper with an incorrect “Theorem 2,” and a counterexample was an (initially rather unmotivated) example from the paper itself, a few sentences earlier.
good point. :)
Actually I also have a remark to make on the first part of Aram's comment: "…if you use a QC and the desired state to be emulated is $|\psi\rangle$, then the state of the computer will not be, even approximately, $|\psi\rangle$. Rather, it can be described as a noisy version of an encoding of $|\psi\rangle$. That is, you apply an imaginary unitary encoding operation to $|\psi\rangle$, and then apply noise to each qubit."
The way I view the situation is similar, and I would only change the last sentence, which assumes that the noise applies only after your imaginary encoding.
Note that this point of view is described with no reference to intentions or desires.
Here is a version of Gil Kalai’s conjecture (with which I largely agree) that has been tuned for thermodynamic naturality:
I'm still working on assimilating the (quantum) Third Law fully into the above schema, and I remain hopeful of (eventually) posting some mathematically well-founded results! :)
Thanks are extended (again) to all who have participated in and/or generously hosted this (literally!) *wonderful* HarrowKalai debate. :)
Dear John, it will be interesting to see what you will come up with. As there are several fruitful avenues for building quantum computers, I suppose that there can be more than one fruitful avenue for unbuilding quantum computers :). Regarding my approach: one of the very basic principles of my papers was not to make any (prior) distinction between the microscopic and the macroscopic, and, more generally, not to make any reference to the geometry of the quantum device. My conjectures should apply to the microscopic and macroscopic worlds alike.
LOL … the American idiom is “There’s more than one way to skin a cat” (i.e., build a practical quantum computer).
Supposing this statement to be a Great Truth helps us to appreciate its Great Dual: “There’s more than one way to regenerate catskin” (i.e., more than one method for simulating dynamics on Exp(n)-dimensional state-spaces with P(n) computational resources).
For me, the Harrow-Kalai debate has been a fun journey toward a catholic embrace of both aspects of this Great Quantum Duality. For which, thank you Gil! :)
“To skin a Schrödinger cat…” this is a ghastly idiom, John.
Hmmm … that phrases like “skin a cat/bear”, “bitter end”, and “tough row to hoe” are centuries-old had not occurred to me … phrases like “worth his salt” are of course millennia-old … and my own father taught me the cattle-call “ho, bos!” that (when I looked it up) turned out to be of Indo-European origin … and thus (possibly) twenty thousand years old and more.
One wonders (it’s off-topic to be sure): are there any still-used expressions in mathematics whose etymological origins date to antiquity? The calculus, perhaps?
In regard to this week’s (very interesting!) exchange of ideas between Gil and Aram and Joe [Fitzsimons], relating to general themes of quantum state-evolution, reversibility, localization, etc., I would like to point to a parallel discussion of the Third Law.
In an enjoyable series of articles, that can be backtraced starting from “Minimum universal quantum heat machine” (arXiv:1209.1190), Robert Alicki and coauthors argue cogently that the Third Law does *not* necessarily apply to qubit cooling schemes that commonly are envisioned for FTQC (and that are essential even to initializing qubit registers, refreshing ancilla qubits, etc.).
Meanwhile, in a comparably enjoyable series of articles, that can be backtraced starting from “Does the Third Law of thermodynamics hold in the quantum regime?” (arXiv:quant-ph/0609159), Robert O’Connell and coauthors argue that the Third Law *does* necessarily apply to qubit cooling schemes that commonly are envisioned for FTQC.
Considered side-by-side, these two bodies of research convey a cautionary message: there is (at present) no consensus regarding the validity (or not) of the Third Law, for the practical reason that (as it seems to me) our present-day quantum theoretical frameworks for grappling with real-world quantum-field-theoretic “dirt effects” (Dreckeffekte) are, by evidence of the recent literature, not adequate even to hypothesize (much less analyze) the fundamental thermodynamical principles of quantum processes.
How likely is it, then, that our present knowledge suffices to analyze the (far richer!) realm of quantum informatic dynamics? Perhaps one lesson to emerge from the Kalai-Harrow debate is that a very considerable measure of humility is in order! :)
Hi John, I remember that there was a similar controversy regarding Onsager’s regression hypothesis, with conflicting views (and conflicting papers) on whether it extends to the quantum case. Very informally, Onsager’s law says that the statistics of the fluctuations obey the same law as the statistics of the signal. This law looked relevant to quantum fault tolerance, not just for talking about nonequilibrium statistical mechanics, but also as expressing the idea that the “laws/symmetries for the noise” follow the “laws/symmetries for the signal,” which my Conjectures 1 and 3 also express. (You also referred to Onsager and Onsager’s law several times in this debate.) Unfortunately, I never truly understood Onsager’s law, in spite of your very nice answer to my Physics Stack Exchange question about it.
Dear Gil. It has been one full century since Walther Nernst formulated his Third Law of thermodynamics (in 1912), and during that century many thousands of articles and books have endeavored to (a) establish the Third Law in full rigor and generality and/or (b) extend the Third Law to a Fourth Law and/or (c) establish that the whole enterprise is a mistake! :)
These attempts have undoubtedly yielded both important insights and scientific glory (including Third-Law-related Nobels for Nernst himself (1920), Albert Einstein (1921), William Giauque (1949), Lars Onsager (1968), and Ilya Prigogine (1977)).
And yet on the other hand, the witty preface to Peter Landsberg’s textbook Thermodynamics and Statistical Mechanics (1990) is commonly quoted today:
It seems to me plausible, that substantial advances in our present (partial and problematic!) understanding regarding the feasibility of FTQC will concomitantly advance our present (partial and problematic!) understanding of the Third Law, and vice versa. Hence arises my own overlapping interest in FTQC and the Third/Fourth Laws (regarding which I hope to post more someday).
Gil, if these questions were easy, we wouldn’t still be struggling with them after one full century of work! :)
LOL … as a follow-on to the above historical remarks, if someone (definitely not me!) were to review the Third Law literature from a quantum-informatic point of view, including in particular a thoroughgoing survey of the literature of quantum computing from a thermodynamic perspective, then many folks (definitely including me!) would read that survey with great interest.
And if that survey were provocatively titled, along the lines “Skepticism of quantum computing considered as the 21st century’s renewed quest for a Fourth Law of Thermodynamics”, this would help sustain what has been (it seems to me) a healthy and enjoyable tension in the Kalai/Harrow debate!
For which mathandscience enjoyment, thank you Gil and Aram! :)
I apologize, but I am still missing something. The main ingredient of a quantum computer is measurement, and nobody discusses measurement per se; it is still a magical theoretical (“armchair”) construction without experimental support. For example, I have not seen any experiment distinguishing a mixed state of a quantum system from a particular state for each incarnation of the same system. I do not even see a possible experiment to distinguish between them. For example, we say that the atoms in a laser are in a particular state, ground or excited to a particular level; we somehow do not talk about a mixed state there, while in QC people do speak about mixed states. Q. How can you be experimentally sure that your particular preparation is a mixture of states, and not a particular state occurring with the probability corresponding to the supposed amplitudes? That would prevent scaling, by making the computation effectively random, subject to the number of particles passing through the system.
Mkatkov, the narrow answer to your question is that density matrices (and Kraus operators, and Lindblad generators) can be unravelled nonuniquely yet experimentally indistinguishably.
Consider for example two buckets of spin-1/2 particles. Bucket A is filled with spins that are individually prepared as either up or down, with equal probability … Bucket B is filled with spins that are prepared pointing in random directions on the sphere S^2, with the directions distributed uniformly … and no experiment can distinguish these two buckets.
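John’s two-bucket example can be checked numerically. The following sketch (my own illustration, not part of the original comment; the variable names and sample size are arbitrary) builds the density matrix of each preparation with NumPy and shows that both equal the maximally mixed state I/2, so no measurement statistics can distinguish them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bucket A: each spin is prepared up or down with probability 1/2.
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
rho_a = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

# Bucket B: each spin points in a uniformly random direction on the sphere.
# A spin along (theta, phi) is cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>;
# uniformity on the sphere means cos(theta) is uniform on [-1, 1].
def random_spin_projector():
    phi = rng.uniform(0.0, 2.0 * np.pi)
    theta = np.arccos(rng.uniform(-1.0, 1.0))
    psi = np.array([np.cos(theta / 2),
                    np.exp(1j * phi) * np.sin(theta / 2)])
    return np.outer(psi, psi.conj())

n = 50_000
rho_b = sum(random_spin_projector() for _ in range(n)) / n

# Both ensembles yield (numerically) the same density matrix I/2,
# so their measurement statistics are identical.
assert np.allclose(rho_a, np.eye(2) / 2)
assert np.allclose(rho_b, np.eye(2) / 2, atol=0.02)
```

This is exactly the non-unique unravelling of a density matrix mentioned above: two very different classical preparations, one quantum state.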
A broader answer to your question is suggested by Ivor Grattan-Guinness’ thoughtful review of Clifford Truesdell’s book The Tragicomical History of Thermodynamics 1822–1854.
Is it plausible, one wonders, that the Kalai/Harrow debate (and the Feynman/Deutsch/Bennett/Shor/Preskill/Aaronson etc. contributions) may someday feature in a book titled The Tragicomical History of Quantum Computing 1982–20XX?
As one path toward appreciating this question, it is instructive to digest Robert Alicki’s recent work relating to quantum thermodynamics (for example “Quantum bath refrigeration towards absolute zero: unattainability principle challenged”) in parallel with Robert O’Connell’s recent work relating to quantum thermodynamics (for example “Quantum thermodynamic functions for an oscillator coupled to a heat bath”).
These two lines of quantum research *both* are terrific, and yet it is (obviously!) mighty tough to evolve a unified appreciation of them.
In regard to unified mathematical appreciations, it is encouraging to reflect upon the greatly diminished incidence of tragicomical articles relating to (classical) Hamiltonian mechanics and (classical) ergodic hypotheses that attended the advent of powerful mathematical theorems (KAM theory) that were framed in natural mathematical formalisms (symplectomorphic flow).
With luck and diligence, quantum computing research perhaps may evolve along a similar trajectory during coming decades, and arrive at a comparably clarified appreciation of the issues that have been the focus of the Kalai/Harrow debate. Thus we reach the following tentative conclusion …
Conclusion The evolution of mathematical tools that are increasingly natural and powerful is a hopeful sign that quantum computing’s tragicomical era may be coming to an end. :)
Let me make a remark from a wider perspective related to our time-reversing discussion over this thread last month. Here are three important conceptual claims that were raised in this debate.
1) Since QM supports universal quantum computation, impossibility, in principle, of universal quantum computation implies a breakdown of QM (or at the very least major alteration of QM).
2) Since QM is invariant under time reversal, every natural process can be, in principle, timereversed, unless there is some breakdown of QM.
3) Quantum computers are possible because classical computers are.
Overall, I regard these three claims as unconvincing and believe that they are ultimately incorrect. Still, these are serious conceptual issues deserving to be studied, and they cannot be easily dismissed. Of course, one of the difficulties is that these claims are informal, and some of the terms, like “in principle” and “breakdown,” are rather vague. (We discussed the third issue amply, we discussed the second over this thread, and, as this debate concentrated on quantum fault tolerance, we did not discuss the first claim much. I think that I understand it better now, and may come back to it sometime.)
I would like to draw attention to two, related but more specific claims that Aram and Joe made, which, I believe, are simply flawed. Here they are:
A) (Aram: this comment) It is likely that the threshold for classical computation in the quantum circuits model is below the threshold for quantum computation. Therefore, the only remaining way for quantum computers to fail while classical computers prevail is for the noise to fall precisely in the interval where classical fault tolerance is possible but quantum fault tolerance is not. Such a possibility is far-fetched!
B) (Joe: this comment and its follow-ups) Since quantum circuits with self-conjugate gates are universal, regardless of the issue of quantum fault-tolerance, quantum processes can be time-reversed.
The main flaw in arguments A) and B) lies in the assumption that quantum circuits are universal quantum devices. This is correct in the abstract scenario of noiseless quantum computation, or in a scenario where quantum circuits that allow FTQC can be built. But there is no reason to carry this insight over to a reality without (or before) quantum fault tolerance. If you do not assume that a qubits/gates architecture will lead to universal quantum computation, then there is no reason to believe that a qubits/gates architecture will even have the ability to emulate every quantum evolution that can be demonstrated by some other device. Both Aram’s and Joe’s arguments contain this as a hidden assumption.
The model of quantum circuits is a hypothetical physical device. When we try to implement it (and indeed there are glorious attempts) there is no guarantee that we will even be able to implement quantum evolutions that we can achieve by other experimental means. For example, it is possible that every attempt to realize a quantum circuit which demonstrates symmetry between Xerrors and Zerrors will not only fail as a universal quantum computer but will also fail as a universal classical computer.
This mistake is related to various other arguments throughout the debate. For example, another (rather good) argument against my Conjectures 3 and 4 is based on the (correct) assertion that there is no mechanism to detect quantum entanglement. This argument, and the picture of the environment of the quantum circuit as “detecting the entanglement,” is a very restricted way to think about noisy quantum devices; it largely takes for granted the ability to realize quantum evolutions using quantum circuits that are quite close to the abstract model.
Gil, are you suggesting quantum processes cannot in principle be time-reversed? Certainly there are field-theory processes which are hard to reverse (spontaneous emission etc.), but in standard nonrelativistic quantum mechanics it is generally possible to time-reverse dynamics. Quite aside from my point about self-adjoint operators, people do this all the time with spin-echo experiments etc.
Further, if you only care about classical input-output pairs, it should be clear that calculating the dynamics in either direction is equally hard. To see this, note that in one direction the time evolution is given by the exponential exp(−iHt), whereas in the other direction it is given by exp(+iHt). If we could perform a universal NOT gate N, then we would have exp(+iHt) = N exp(−iHt) N. Although a universal NOT is not possible, if we only care about classical input-output pairs, then we can simply flip the input and output with X gates to have the same effect. But this is just a change of coordinates, so the dynamics must be equally difficult in both directions.
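Joe’s change-of-coordinates point can be illustrated numerically in the special case where the flip operator anticommutes with the Hamiltonian. The single-qubit sketch below is my own toy example, not Joe’s construction: taking H = Z, for which XZX = −Z, conjugating the forward evolution exp(−iHt) by X gives exactly the backward evolution exp(+iHt).

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

H = Z           # X Z X = -Z, so X anticommutes with this Hamiltonian
t = 0.7
forward = evolve(H, t)    # exp(-iHt)
backward = evolve(H, -t)  # exp(+iHt)

# Flipping with X before and after the forward evolution reverses the dynamics:
# X exp(-iHt) X = exp(+iHt) whenever X H X = -H.
assert np.allclose(X @ forward @ X, backward)
```

The identity N exp(−iHt) N = exp(+iHt) holds precisely when NHN = −H, which is the role the (impossible) universal NOT would play in the general argument.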
The added exception is both logically necessary and physically plausible, eh?
Dear Joe,
Thanks for your comment. As for the question “For which pure quantum evolutions that can be experimentally approximated can their time-reversals also be experimentally approximated?”, my guess is quite close to your description: namely, that this is the case for some quantum evolutions and not for others.
A few more comments: first, I think that we have to be careful not to confuse arguments with conclusions. When I disagree with your argument it does not always mean that I disagree with its conclusion.
Second, several arguments in our discussion took for granted, explicitly or implicitly, quantum circuits as concrete devices capable of emulating every quantum evolution that can be demonstrated by other methods. Making such an assumption is incorrect.
Coming back to reversing time, when you consider timereversal in the context of quantum circuits you can distinguish between three levels.
1) Quantum evolutions described by noiseless circuits manifest universal quantum computation and also can be time-reversed with no difficulty. (For some people this is enough to regard universal quantum computation as possible and time-reversal of every natural process as always possible.)
2) The standard assumptions on decoherence are not invariant under time reversal. They assume that qubits and gates are subject to information leaks (and other forms of noise); decoherence breaks the invariance under reversing time. However, the standard noise assumptions allow, via fault-tolerance, (logical) quantum evolutions which are noiseless and thus allow time-reversal.
3) My conjectures can be thought of as adding some “humps” to the standard models of noise in order to cause quantum fault tolerance to fail. (Or to express that quantum fault tolerance is not activated.) Actually, all these “humps” are invariant under time reversal. Imposing my additional humps on the noise model may open the door to describe pure quantum evolutions that can be well approximated forward in time but not backward in time. But I don’t know how to do it.
Two more remarks which show that the issue of time reversal is quite mysterious.
4) Our best shot at a Sure/Shor separator, emerging from this debate, is that the only approximately pure quantum evolutions that can be realistically approximated are (approximately) bounded-depth. This class of quantum evolutions is invariant under time reversal.
5) FTQC will allow you not only to implement a sequence of quantum steps in reverse order, but will also allow you to implement this sequence in any order. (Similarly, you will be able to realize the toric 4D code on an array of qubits in the shape of an IBM logo, under an arbitrary bijection.) With quantum fault tolerance you can scramble time (and space) at will.
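The closure under time reversal in point 4 has an elementary circuit-level reason: the inverse of a depth-d circuit is obtained by inverting each layer and applying the layers in reverse order, which is again a depth-d circuit. A minimal numerical sketch of my own (with unstructured 4-dimensional unitaries standing in for gate layers, not any construction from the debate):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    """A random d x d unitary via QR decomposition of a complex Gaussian matrix."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(a)
    return q

# A depth-3 "circuit": three layers applied in sequence.
layers = [random_unitary(4) for _ in range(3)]
circuit = layers[2] @ layers[1] @ layers[0]

# Its time-reverse: invert each layer and apply them in reverse order.
# The result is again a depth-3 circuit, and it undoes the original exactly.
rev_layers = [U.conj().T for U in reversed(layers)]
reversed_circuit = rev_layers[2] @ rev_layers[1] @ rev_layers[0]

assert np.allclose(reversed_circuit @ circuit, np.eye(4))
```

Since each inverted layer is realizable whenever the original layer is (in the noiseless abstraction), the bounded-depth class is closed under time reversal, as claimed.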
I did not understand the second part of your comment. What do you mean by classical input/output, and what do you mean by “calculating the dynamics is equally easy”? Do you, again, base your comment on a realization of a quantum circuit and regard it as applying to any natural quantum evolution?
Gil Kalai, your recent MIT seminar “A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers“ surely deserves mention here on Gödel’s Lost Letter. Both the lecture itself and the audience remarks are truly excellent. Thank you, Gil.