Quantum Repetition
Aram Harrow and Gil Kalai debate “Conjecture 1”
William Wootters and Wojciech Zurek were office-mates in 1979 as graduate students at U.T. Austin in John Wheeler’s group. A paper by Nick Herbert of the Fundamental Fysiks Group crossed their desks with an idea toward harnessing quantum mechanics for faster-than-light communication. Fascinated though skeptical, they realized the scheme made an implicit assumption: the ability to make a perfect copy of an arbitrary quantum state, i.e., a qubit (a term coined by Benjamin Schumacher in 1995). Independently of Dennis Dieks in 1982, they proved that only special—not general—qubit states can be cloned. The name “No-Cloning Theorem” was supplied by Wheeler.
Today we continue the debate between Gil Kalai and Aram Harrow on scalable quantum computing, focusing on how ideas about repetition impact Gil’s “Conjecture 1” from the series’ initial post.
The debate was expected to conclude last spring, but ideas and material to sustain it have developed in preprints with refutations, new queries, copious comments sometimes preceded by private exchanges, and further discussions on the participants’ own fora. We note especially that John Preskill has posted a full paper, “Sufficient condition on noise correlations for scalable quantum computing,” extending work we previously referenced. Today’s post was to be the conclusion, but as it passed 20 LaTeX pages we declared this part of it a fourth “round” instead. We apologize that some of it is necessarily “entangled” with the yet-to-appear conclusion, which will follow shortly.
In his first response post, Aram described a classical error-correction test, and then began an approach against Gil’s Conjecture 1 using repetition codes. In this post they discuss these two matters further. We first present three different ways of repeating things, the last a thought experiment by Gil that set the initial terms of Aram’s analysis. Then comes Aram’s detailed modeling and critique of Conjecture 1, followed by a response by Gil. Finally we present Gil’s summation on the issue of why classical computers are possible.
Rep-A, Rep-B, and Rep-C
Given an object $x$, the simplest fault-tolerance idea is to make $n$ independent copies: $x \mapsto (x, x, \ldots, x)$. That way if fewer than $n/2$ of them are corrupted, the majority vote of the others will preserve the original. Let’s call this building-block idea Rep-A.
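As a toy sketch of Rep-A (Python; the function names are our own illustration, not from the post), encoding is just copying and decoding is majority vote:

```python
from collections import Counter

def rep_a_encode(x, n):
    """Rep-A: make n independent copies of the object x."""
    return [x] * n

def rep_a_decode(copies):
    """Majority vote: recovers x as long as fewer than half the copies are corrupted."""
    return Counter(copies).most_common(1)[0][0]

copies = rep_a_encode("0", 7)
copies[0] = "1"
copies[3] = "1"                       # corrupt 2 of the 7 copies
assert rep_a_decode(copies) == "0"    # majority vote still recovers "0"
```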
When the object represents a probability distribution (classical or quantum) then things become more involved. If $x$ is a classical biased coin or a qubit, what does it mean to repeat $x$? What is the precise object we are repeating? Here, the no-cloning theorem (which also has a classical counterpart) is of some relevance.
One can do Rep-B: Suppose $x$ is a qubit with unknown value $\alpha|0\rangle + \beta|1\rangle$. We can use entanglement to create
$$\alpha|00\cdots 0\rangle + \beta|11\cdots 1\rangle.$$
Nature allows this as a general quantum operation. This idea extends to more complicated classical and quantum error-correcting codes.
In Gil’s thought experiment $x$ is a classical biased coin: `1′ with probability $p$ and `0′ with probability $1-p$. The Rep-C idea is to represent this by $n$ independent samples of the coin. The “decoding” of a state of $n$ bits gives to `1′ its expected weight over the $n$ bits. Aram will subsequently discuss the quantum version.
We will see how these ideas play out in arguments over Gil’s Conjecture 1, which was originally stated as follows:
Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some constant $\epsilon > 0$, independently of the number of qubits used for encoding.
Gil: a classical analog of a quantum computer
Aram’s analysis of Conjecture 1 builds on this thought experiment offered by Gil. It is an example to show that had we lived in a classical noisy world where the classical analog of Conjecture 1 holds at the noisy microscopic scale, noiseless classical memory and computation would nevertheless have been possible.
The classical analog of Conjecture 1 (let’s call it Conjecture 1′) asserts that when we use Rep-C to encode a classical random bit of bias $p$, we get, with a substantial probability that is bounded away from 0, states that decode to a bias $p'$ distributed in a neighborhood of $p$. (Conjecture 1′ is not true in general, but what we will see in a minute is that it is not harmful to the repetition code.)
The point is that Rep-C, even under a noise that satisfies Conjecture 1′, allows noiseless classical information and classical computation. For that you represent `0′ by the encoding of a coin whose bias is greater than $1/2$ and `1′ by the encoding of a coin whose bias is smaller than $1/2$.
Aram: Conjecture 1 and the repetition code
A central part of Gil’s skepticism is his conjecture 1, which essentially asserts that quantum error-correcting codes cannot work as advertised. Here is my restatement of that conjecture.
There exists a universal constant $\epsilon > 0$ with the following property. Let $\mathcal{E}$ be an encoding map (specifically, a quantum operation) that maps one qubit to $n$ qubits. Let $|\psi\rangle$ be a pure single-qubit state (the “intended state”) and $P$ a procedure that, in the absence of noise, would prepare the state $\mathcal{E}(|\psi\rangle\langle\psi|)$. Let $\tilde{P}$ be the effect of $P$ in the presence of physically realistic noise. Then there exists a distribution $\mathcal{D}$ over single-qubit states satisfying
$$\mathop{\mathbb{E}}_{\sigma\sim\mathcal{D}}\,\langle\psi|\sigma|\psi\rangle \;\le\; 1-\epsilon,$$
and a noise process $\mathcal{N}$ such that $\tilde{P}$ instead prepares a state close to
$$\mathcal{N}\Bigl(\mathop{\mathbb{E}}_{\sigma\sim\mathcal{D}}\bigl[\mathcal{E}(\sigma)\bigr]\Bigr).$$
There are a number of questions about this conjecture that one might raise:
- Why the noise process $\mathcal{N}$? I added this to make the conjecture more realistic. As we will see in the example, it may be that the encoding map outputs only states that are extremely unlikely to be physically realized (even without any extra skeptical assumptions), so without $\mathcal{N}$, the conjecture would not have a chance.
- What is $\mathcal{N}$? If $\mathcal{N}$ is completely general (and can depend arbitrarily on the encoding map and the intended state) then the statement of the conjecture becomes vacuously true. What I have in mind is that $\mathcal{N}$ will be independent depolarizing noise; that is, each qubit is replaced with the maximally mixed state with probability 1/100 and left alone with probability 99/100. This roughly corresponds to the sort of noise model that optimists (like me) posit, and that standard proofs of the FT Threshold Theorem assume.
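For concreteness, here is a minimal numpy sketch of this single-qubit depolarizing channel acting on a density matrix (our own illustration, not code from the post):

```python
import numpy as np

def depolarize(rho, p=0.01):
    """Depolarizing channel: with probability p replace the qubit by the
    maximally mixed state I/2, otherwise leave it alone."""
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # the state |0><0|
noisy = depolarize(rho0, p=0.01)
# the diagonal becomes (0.995, 0.005): still trace 1, now slightly mixed
```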
- Why is the encoding map a quantum operation? Such codes are the only ones used in proofs of the Threshold Theorem, and it is hard to imagine FTQC without them (although see below for some discussion of relaxed versions of this). Still, this restriction should only make the conjecture easier to prove/harder to refute.
- What right do I have to put words into Gil’s mouth? This is a fair point. :) Gil might prefer to formulate the details of his conjecture differently. Of course, if he does, I’ll respond to that, and round and round we’ll go.
One consequence of the encoding map $\mathcal{E}$ being a quantum operation is that it commutes with taking the expectation over $\sigma$. If we then define the map $\mathcal{M}$ (which itself could plausibly be a quantum operation, such as depolarizing noise with strength $\epsilon$) by
$$\mathcal{M}(|\psi\rangle\langle\psi|) \;=\; \mathop{\mathbb{E}}_{\sigma\sim\mathcal{D}}[\sigma],$$
then the state asserted in Conjecture 1 can be rewritten more simply as
$$\mathcal{N}\bigl(\mathcal{E}(\mathcal{M}(|\psi\rangle\langle\psi|))\bigr).$$
By contrast, the conventional/optimistic view is that states of the form $\mathcal{N}(\mathcal{E}(|\psi\rangle\langle\psi|))$ can be prepared, without the extra averaging map. Thus Conjecture 1 is tantamount to asserting that noise will inevitably strike before a state is safely encoded. Of course this would be true if we followed the naive strategy of preparing $|\psi\rangle$ outside of an error-correcting code, and then performing the encoding map; since we cannot perform this encoding arbitrarily quickly, there will always be a constant amount of noise that occurs before the state is protected. (Note that FT computing, whether classical or quantum, does not use this strategy, a point that is overlooked by Robert Alicki’s paper quant-ph/0411008.) But Conjecture 1 asserts that the same effect will plague any preparation protocol.
Two repetition codes
Let us attempt to apply Conjecture 1 to the simplest code: the repetition code. We have to be careful because in fact there are at least two ways to define the repetition code as a quantum code. The one that I believe makes the most sense is to add $n-1$ qubits in the $|0\rangle$ state, and perform controlled-NOT gates from the input qubit onto each of the new qubits. This will map
$$\alpha|0\rangle + \beta|1\rangle \;\mapsto\; \alpha|\bar{0}\rangle + \beta|\bar{1}\rangle,$$
where $|\bar{0}\rangle = |0\rangle^{\otimes n}$ and $|\bar{1}\rangle = |1\rangle^{\otimes n}$. Extending this to a quantum operation, we obtain
$$\mathcal{E}_B(\rho) \;=\; V \rho V^{\dagger}, \qquad V = |\bar{0}\rangle\langle 0| + |\bar{1}\rangle\langle 1|.$$
This “Rep-B” idea is only one interpretation of the repetition code. It can correct up to $\lfloor (n-1)/2 \rfloor$ bit-flip errors, but is vulnerable to even a single phase error. (A standard theorem of quantum error correction states that it suffices to consider only these two types of errors.)
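To make the contrast concrete, here is a small statevector sketch (numpy; the idealized recovery map is our own construction, not a circuit from the post) showing that majority vote undoes a bit flip on Rep-B but cannot see a phase flip:

```python
import numpy as np

n = 3
def ket(bits):
    """Computational-basis state |b1 b2 b3> as a length-2^n vector."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.array([1.0, 0.0]) if b == 0 else np.array([0.0, 1.0]))
    return v

a, b = 0.6, 0.8
encoded = a * ket([0, 0, 0]) + b * ket([1, 1, 1])   # Rep-B of a|0> + b|1>

X = np.array([[0.0, 1.0], [1.0, 0.0]])
corrupted = np.kron(X, np.eye(4)) @ encoded          # bit flip on the first qubit

def majority_correct(v):
    """Idealized recovery for bit-flip errors: move each basis component
    to the nearest codeword (|000> or |111>) by majority vote."""
    out = np.zeros_like(v)
    for i, amp in enumerate(v):
        out[7 if bin(i).count("1") > n / 2 else 0] += amp
    return out

recovered = majority_correct(corrupted)              # back to a|000> + b|111>
# A phase flip, diag(1, -1) on any single qubit, instead sends the state to
# a|000> - b|111>, which this majority vote cannot detect at all.
```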
The other idea, cloning, is simply to copy the input state $n$ times, viz.:
$$|\psi\rangle \;\mapsto\; |\psi\rangle^{\otimes n}.$$
This cloning map is not used in any FT scheme, classical or quantum, probably because it does not lend itself well to error correction. For any error-correcting code, we would like a procedure to project a state (i.e., density matrix) onto the linear subspace of valid code states. Operationally this corresponds to measuring whether a (detectable) error has occurred. For Rep-B, this corresponds to checking whether our state is in the span of $|0\rangle^{\otimes n}$ and $|1\rangle^{\otimes n}$.
For Rep-C, this operation is trivialized: since the Rep-C encoding of the maximally mixed qubit is the maximally mixed state on $n$ qubits, any state is compatible with being a codeword. Thus, we can never look at an $n$-qubit state and say that it is not a valid encoding of some state under Rep-C. This weakness stems in part from the fact that Rep-C is a nonlinear map, while decoding or error detection is necessarily linear. As a result, I believe that (contrary to Gil’s arguments) Rep-C does not make sense even for classical FT, unless it is redefined until it essentially becomes Rep-B (e.g., the domain is restricted to the states $|0\rangle$ and $|1\rangle$ and the map is extended linearly to all states).
Classical error correction
It seems worth reviewing here how classical FT computing is workable using Rep-B.
- Each logical bit is represented by $n$ physical bits which are ideally in the state $00\cdots0$ or $11\cdots1$, but in practice will merely be close (in Hamming distance) to these states.
- Fresh logical bits are introduced to the computation by initializing blocks of physical bits to the state $00\cdots0$.
- If the computation is randomized, then instead we can initialize a block of physical bits to the uniform mixture of $00\cdots0$ and $11\cdots1$.
- Gates are performed transversally, to minimize the propagation of errors. That is, to store the AND of logical bits $a$ and $b$ in a logical bit $c$, we assume that these bits are stored in physical bits $a_1,\ldots,a_n$, $b_1,\ldots,b_n$, and $c_1,\ldots,c_n$, and we then set $c_i = a_i \wedge b_i$ for each $i$. In this way, errors will be magnified by at most a constant multiple.
- In between each gate, we will do error correction to attempt to restore each bit to a valid code state. Physically, this is achieved by a bistable flip-flop (in RAM) or a ferromagnetic material (in a hard drive). The former is “active” in that it requires a continuous supply of power, while the latter is “passive” in that it will correct errors simply by being in contact with a room-temperature thermal bath (and samples of magnetite from the sea floor have demonstrated that this sort of memory can last for at least millions of years).
(For Rep-C, steps 1, 2 and 4 are the same, but steps 3 and 5 fail.) One not-so-obvious point is that the encoding map itself (taking a single unprotected bit to $n$ copies) is never performed. Rather, we develop schemes to directly prepare states such as $00\cdots0$.
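The whole strategy can be simulated in a few lines (Python; the parameters and function names are our own illustration):

```python
import random

def noisy(bits, p):
    """Independent bit-flip noise of strength p on each physical bit."""
    return [b ^ (random.random() < p) for b in bits]

def correct(bits):
    """Error correction: snap the block back to the nearest codeword 0^n or 1^n."""
    maj = int(sum(bits) > len(bits) / 2)
    return [maj] * len(bits)

def transversal_and(a_bits, b_bits):
    """Bitwise AND: an error propagates only within its own position i."""
    return [x & y for x, y in zip(a_bits, b_bits)]

random.seed(0)
n, p = 101, 0.05
a = correct(noisy([1] * n, p))        # logical 1 survives noise + correction
b = correct(noisy([1] * n, p))
c = correct(transversal_and(a, b))    # logical AND, again protected
# the chance that any of these blocks decodes wrongly is exponentially small in n
```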
FT quantum computing works in essentially the same way as the above strategy, except that the repetition code is replaced by a code that can correct a large number of both bit-flip and phase errors.
Applying conjecture 1 to the repetition code
What happens if we apply Conjecture 1 to Rep-B? First, observe that after encoding, the states become extremely sensitive to phase errors, since a single phase error on any qubit will affect the phase of the encoded qubit. Thus, the outputs of any realistic preparation procedure can effectively be assumed to be classical mixtures of $|0\rangle^{\otimes n}$ and $|1\rangle^{\otimes n}$. Indeed, applying independent depolarizing noise to the output will also ensure this.
But what if we have “pre-encoding” noise that is, say, a depolarizing channel of strength $\epsilon$? Then attempting to prepare $|0\rangle^{\otimes n}$ will inevitably result in a state of the form
$$\Bigl(1-\tfrac{\epsilon}{2}\Bigr)\,|0^n\rangle\langle 0^n| \;+\; \tfrac{\epsilon}{2}\,|1^n\rangle\langle 1^n|.$$
This means that if we attempt to initialize a bit in a RAM or a hard drive to a ’0′ state then there will be a constant probability $\epsilon/2$ that it instead yields ’1′. The situation is even more absurd when one considers other versions of a repetition code. Consider an abacus with beads each consisting of a macroscopic number of atoms, each of which can be either slid to the left or the right. Conjecture 1 would imply that if we attempt to slide a bead on the abacus to the left using our best possible technology, then we will inevitably accidentally place it on the right with constant probability.
This should be enough to dispose of Conjecture 1. After all, it is stated as applying to all codes. If an exception is built in for Rep-B (which is the key code behind classical computing), then it starts to look rather camel-shaped.
Indeed, the conjecture cannot even be repaired by saying that “for all encoding maps $\mathcal{E}$, there exists a state $|\psi\rangle$ for which $\mathcal{E}(|\psi\rangle\langle\psi|)$ is hard to prepare.” For the repetition code, preparing the encoded state is simply a matter of flipping a coin with a very precise bias, and then preparing many qubits in a way controlled by this coin flip. Again this is something that could already have been achieved with an abacus.
Trying Rep-C
What about applying Conjecture 1 to Rep-C? This is the situation that Gil discusses when he says that Conjecture 1 is compatible with classical computing. Indeed for Rep-C, noise before encoding behaves pretty much the same way as noise after encoding, so Conjecture 1 is compatible with Rep-C. Can we say that if Conjecture 1 doesn’t hold for every code, it at least holds for some code? Perhaps, but I don’t view Rep-C as particularly representative of error-correcting codes, because if we use any code other than the repetition code, then only the “-B” framework makes sense. Moreover, given that Rep-C cannot distinguish between pre-encoding noise and i.i.d. post-encoding noise, it seems hard to use it as evidence in favor of Conjecture 1.
I believe that Conjecture 1 is likely to fail for most interesting encoding maps. In general, if a system has a two-dimensional topologically protected subspace, then mixing within this subspace is exponentially suppressed. Like Conjecture C, I believe Conjecture 1 is too broad. But there may well be more specific classes of states that noise effectively prevents us from creating.
Gil: Conjecture 1 and Aram’s classical error-correction test
A review of Conjecture 1
Conjecture 1 asserts that encoded qubits are noisy. What do we mean by a noiseless encoded qubit? If $\rho_\psi$ is an encoding of a qubit state $|\psi\rangle$ with $n$ qubits, we require that $|\psi\rangle$ can be determined from $\rho_\psi$ up to an error that we cannot distinguish from zero, with a probability that we cannot distinguish from 1. Quantum error correction achieves this (for every $|\psi\rangle$) when we let the number of qubits used in the encoding grow. (For this formulation you may need to rely also on the Solovay-Kitaev theorem.) Conjecture 1 asserts that this is not possible.
The case of the repetition code
In the context of quantum fault tolerance and the threshold theorem, the only repetition code which is relevant is Rep-B. I always regarded Conjecture 1 for the repetition code as obviously true. Indeed, it is correct that when $p = 0$ or $p = 1$ the encoded state determines the qubit uniquely, but for any other value of $p$ there is a whole (one-dimensional) “cloud” of states that we cannot distinguish from one another. So the repetition code supports having a stable ground state and a stable excited state but does not support a noiseless superposition of these states. This example indeed shows (and I thank Aram for clarifying it) that when I say “the probability” in Conjecture 1 you have either to consider the worst case or to take the intended state also at random in computing the probability. Aram claims that Conjecture 1 is violated by Rep-B for every intended state. Let’s look at that.
Aram’s formalization of Conjecture 1
A large part of my work, with many backtracks and difficulties, was to translate conjectures from English to mathematics. This is not a simple matter at all. (This is an opportunity to mention Greg Kuperberg, who found various loopholes in some of my early attempts.) So it feels nice to see Aram trying to formalize Conjecture 1, and overall, Aram’s mathematical formulation of Conjecture 1 is reasonable except that there is one term which is left in English:
“… instead prepares a state close to …”
What does “prepares instead” mean? The way I would formalize this English part, which agrees with what Conjecture 1 is all about, is that with probability larger than $\epsilon$ the encoding of the intended state after the noise and the encoding of some other state after the noise cannot be distinguished. This is not Aram’s interpretation. Aram’s interpretation of Conjecture 1 is simply incorrect. If I understand correctly what Aram says, he would regard even the trivial code which encodes every qubit by itself as a counterexample to Conjecture 1. Again, this is incorrect.
Let me respond to Aram’s challenge and try myself to formulate Conjecture 1 (it can either stand alone or complement the missing part in Aram’s formulation).
Conjecture 1 (no quantum error-correction): Denote by $D(\cdot,\cdot)$ the trace distance. There exists $\epsilon > 0$ with the following property. Consider a realistic device which encodes a single qubit with $n$ qubits. Suppose that $\rho_\psi$ denotes the encoded state of the single-qubit state $|\psi\rangle$. Then there are two states $|\psi_1\rangle$ and $|\psi_2\rangle$ so that $D(|\psi_1\rangle\langle\psi_1|,\,|\psi_2\rangle\langle\psi_2|) \ge \epsilon$ and
$$D(\rho_{\psi_1}, \rho_{\psi_2}) \;\le\; D(|\psi_1\rangle\langle\psi_1|,\,|\psi_2\rangle\langle\psi_2|) - \epsilon.$$
Conjecture 2
The strong principle of noise (Conjecture 2), which is more general than Conjecture 1, asserts that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. Both of these conjectures require some formal description but are interesting and lead to fruitful insights even without such a description. Let me mention Aram’s second thought experiment, which was a very original attempt at a counterexample to Conjecture 2. Rather than trying to hide a noiseless evolution inside a large noisy one, Aram suggested enlarging a noisy evolution into a larger noiseless system. Looking carefully at this suggestion, the conclusion (see this comment by Aram concluding a long discussion) was that it cannot work.
Conjectures 1 and 2 do not describe noise models but can be seen as requirements on a yet-hypothetical realistic noise model. In the next post, I will describe a Hamiltonian noise model which is meant to express formally the idea of “no quantum FT” and to support Conjectures 1-4. We will also mention a suggestion regarding the rate of noise which is relevant to both Conjectures 1 and 2. When we talk about “what is a qubit” we assume that we can manipulate the state of the qubit in a fairly general way, and we need to relate the rate of noise to the qubit’s evolution.
Aram’s classical error-correction test
Why do we have (essentially) noiseless classical memory? Why are classical computers possible? This is indeed the heart of the matter. Here is my point of view.
We start with the strong principle of noise, which says that you cannot find a noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. A special case of this general principle is Conjecture 1, which asserts that qubits must be noisy: for any encoded qubit we have a substantial mixture of the intended codeword with a “cloud” of unintended codewords.
How does Conjecture 1 allow for noiseless bits? We now need the following repetition principle:
Let $\mathcal{D}_0$ and $\mathcal{D}_1$ be two distinct probability distributions. Represent ’0′ by a sequence of $n$ independent samples from $\mathcal{D}_0$ and ’1′ by a sequence of $n$ independent draws from $\mathcal{D}_1$. If $n$ is large enough then ’0′ and ’1′ are noiseless bits. (This is true even in the presence of additional noise of various kinds and even if the independence assumption on the samples is relaxed in various ways.)
It follows that a device able to create large samples from two distinct probability distributions enables noiseless bits, and when we (or nature) control such a device then noiseless classical computation is possible.
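A quick numerical check of the repetition principle (Python; the biases 0.3/0.7 and the sample sizes are arbitrary choices of ours, not taken from the post):

```python
import random

def encode(bit, n):
    """Represent a logical bit by n independent samples from one of two
    distinct distributions: coins of bias 0.3 (for '0') or 0.7 (for '1')."""
    bias = 0.7 if bit else 0.3
    return [random.random() < bias for _ in range(n)]

def decode(samples):
    """Compare the empirical frequency of heads to the midpoint 0.5."""
    return int(sum(samples) / len(samples) > 0.5)

random.seed(1)
trials, n = 1000, 200
errors = sum(decode(encode(bit, n)) != bit
             for _ in range(trials) for bit in (0, 1))
# by a Chernoff bound the per-bit error probability decays like exp(-c*n),
# so at n = 200 no decoding errors occur in these trials
```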
Aram’s analogy with 2-SAT and 3-SAT
Aram started his three-post reply last winter with the following analogy: “If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing.”
This is correct. Proving that 3-SAT needs exponential time requires an argument that does not apply to 2-SAT and XOR-SAT. Today, after many years, we still do not have such an argument. But this does not mean that we should believe that 3-SAT behaves like 2-SAT. Indeed, while a proof is still far off, we have strong reasons to believe that 3-SAT is hard; a world where 3-SAT is easy presents a very different reality than ours. Similarly, proving that fault-tolerant quantum computing is impossible will require more experimental effort and theoretical progress. But the fundamental differences, such as those described in the previous debate post, between a world with fault-tolerant quantum computing and our present reality give fair reasons to believe, and certainly to explore the possibility, that quantum fault tolerance is not possible.
The distinction between classical and quantum error-correction is a major issue whether or not you are skeptical about the feasibility of universal quantum computers. In my opinion, the repetition principle in a reality that obeys Conjectures 1 and 2 draws an appealing picture of the distinction between classical error-correction and quantum error-correction. Aram proposed a different explanation, which relies both on the repetition code and on the difficulty of finding passive quantum memories in 3D, and some commentators offered other explanations. I find these explanations less satisfactory.
How to model the memory of digital computers?
Is Rep-B or Rep-C more appropriate?
Here is an interesting question that came up as a by-product of my discussion above with Aram on the repetition code. Using Aram’s terminology the question is, “Is the microscopic structure of digital memory better described by Rep-B or by something like Rep-C?” When we think about the memory of a digital computer as a vast array of electrons with up spins and down spins, where ‘0′ is represented by (say) a 0.01 fraction of up spins and ‘1′ by a 0.99 fraction of up spins, then the Rep-B description suggests a very tight fluctuation for the number of up spins. The Rep-C description is more relaxed and allows the fraction to fluctuate in an entire interval like [0.009, 0.012]. Of course, the digital memory implements a repetition of a specific state and not of a general state, so the question is whether we should consider the state of the digital computer representing ‘0′ as a fixed string subject to small independent noise, or as a repetition of some state near ‘0′ which is itself subject to noise.
There is a more general issue here. I expect that in various stochastic/Hamiltonian models for noise we witness Gaussian concentration of the number of errors. Of course, such concentration is not required for quantum error-correction. I tend to regard this feature as unrealistic, and therefore such models, in my opinion, leave out a crucial aspect of noise which may or may not allow fault tolerance. See this comment. It will be interesting to understand whether Preskill’s recent assumptions on correlations imply concentration for the number of errors.
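The two fluctuation pictures can be simulated directly (numpy; the numbers 0.01 and [0.009, 0.012] come from the discussion above, the rest is our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000_000, 2000

# Rep-B-like memory: a fixed bias 0.01, so the fraction of up spins
# concentrates with standard deviation sqrt(p(1-p)/n), about 1e-4 here.
frac_b = rng.binomial(n, 0.01, size=trials) / n

# Rep-C-like memory: the bias itself first fluctuates over an interval,
# so the fraction keeps a spread of order the interval width for any n.
p = rng.uniform(0.009, 0.012, size=trials)
frac_c = rng.binomial(n, p) / n

spread_b, spread_c = frac_b.std(), frac_c.std()
# spread_b shrinks like 1/sqrt(n); spread_c does not
```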
Open Problems
Do the assumptions on correlated noise in the models in this paper by Preskill in 2006 with Dorit Aharonov and Alexei Kitaev, and by Preskill now, imply concentration of the number of errors?
Is the microscopic structure of digital memory better described by Rep-B or by Rep-C? Do the fluctuations in the number of up spins in digital computers’ memory behave like the square root of the number of spins? Does the no-cloning theorem have any bearing on these questions?
I first read of the Fundamental Fysiks Group in the book How the Hippies Saved Physics (review), which was given to me as a gift. The opening lines are:
“To most residents of Vienna, April 21, 2004, probably seemed like any other spring day in the Austrian capital. Students mulled over books in cafes; tourists delighted in the Hapsburg-era gardens, museums, and opera houses; and businesspeople scurried through their appointments. Amid the bustle, however, something magical happened. The city’s mayor and the director of one of the city’s largest banks collaborated on a breathtaking experiment. Working with physicists from the University of Vienna and a spin-off company, the mayor and banker performed the first electronic bank transfer using quantum cryptography.”
In tribute to both Aram Harrow and Gil Kalai, here is a Great Truth perspective upon their debate, presented solely via quotations and philology, and founded upon the defining principle that the opposite of a Great Truth is another Great Truth.
As a statement of Aram Harrow’s Great Truth, we embrace this passage from Nielsen and Chuang’s Quantum Computation and Quantum Information:
Similarly, our assessment of Gil Kalai’s Great Truth descends from
Now we adapt the medical maxim “Physicians are defined by the tools at their disposal” to the dual maxim “Physicists are defined by the tools at their disposal,” and we consider both the mathematical toolset of Nielsen and Chuang’s Quantum Computation and Quantum Information and its (absent) Heisenberg dual toolset.
Examples are readily discerned: in Nielsen and Chuang we find Lie algebras but not Lie derivatives; operator commutators but not Poisson brackets; unitary rotations but not symplectic flows; decoherence but not localization; communication but not transport; fluctuation but not dissipation; ground state but not vacuum; entropy in its informatic sense but not in its level-set sense; and (remarkably) renormalization is not found in Nielsen and Chuang at all.
These mathematical lacunae help us to appreciate Paul Dirac’s self-criticism of his own 1930 textbook The Principles of Quantum Mechanics:
Even today, 82 years later, we quantum systems engineers are still struggling — quite concretely and publicly, for example on the Physics StackExchange — to conceive Dirac’s missing first chapter.
Perhaps, with regard to quantum mechanics, we are all of us in the same position with respect to Nielsen and Chuang’s textbook that Vladimir Arnold was in with regard to classical mechanics as expounded by Landau and Lifshitz. As (Fields Medalist) Sergei Novikov tells it:
Conclusion It is natural for our rational sympathies to reside with Aram, and our aesthetic sympathies to reside with Gil, on the grounds that if quantum computing proves to be infeasible, we are likely to appreciate this infeasibility via mathematical tools that are not deployed in the Nielsen and Chuang toolset … and everyone likes to learn new mathematical tools.
Summary Aram’s arguments help us to appreciate the Great Truth that the 20th century has blessed us with many Great Textbooks about quantum dynamics that provide us with good reason for confidence that quantum computers are feasible … whereas Gil’s arguments help us to appreciate the dual Great Truth that no Great Textbooks have yet been written about quantum dynamics, which of course is good reason for skepticism that quantum computers are feasible! :)
To the above might be added that the Dutch Genootschap voor Meetkunde en Kwantumtheorie (Fellowship of Geometry and Quantum Theory, GQT) concretely presents (as it seems to me) a creative fusion of Aram’s thesis and Gil’s thesis, one that has been outstandingly productive of mathematical, scientific, and even engineering innovation.
I made a not-very-clear comment about repetition codes and exotic qubits some time ago. Later I wrote a short introductory text about that, and currently it is here. Maybe it still has some relevance in the present context?
Thanks, Alexander. Here is a link to your original ’99 paper Error-correction with Euclidean qubits.
Now let’s try to give a concrete answer to Aram’s question:
It’s not so easy to steer between the Scylla of Gil’s conjecture being vacuously true and the Charybdis of it being vacuously false.
One reasonable and well-known answer is “Noise processes associated to vacuum interactions are problematic for quantum computation.” This class of Scylla-versus-Charybdis answers is illustrated by Scott Aaronson’s Physics StackExchange question Reversing gravitational decoherence and the various answers to it, including especially John Preskill’s highly rated answer.
Before wading deeply into technicalities — which in any case are better covered in the various answers to Scott’s question on Physics StackExchange — it is well to reflect that if Nature *did* want to raise a mathematically subtle obstruction to quantum computation, then she could hardly do better than to incorporate into Hilbert space a continuum of states possessing not mere global Newtonian invariance, but rather locally relativistic invariance, together with bosonic gauge interactions that coherently amplify emission/absorption into that vacuum.
Isidor Rabi famously asked of the muon [a heavy version of the electron] “Who ordered that?” Nowadays in the 21st century, perhaps we ought to be asking of the vacuum state of general relativity, the question that Scott Aaronson (in effect) was asking: “Who ordered that??”
Perhaps the answer may be: “Nature created the gauge-invariant relativistic vacuum as an entropically necessary obstruction to (among other things) scalable quantum computation.”
Now, a very natural aesthetic reaction is that vacuum interactions pollute the 20th century’s beautiful edifice of quantum information theory with (in Pauli’s 1931 phrase) Dreckeffects [dirt physics] … “und in Dreck sol man nicht wöllen” as Pauli famously advised young scientists.
With the benefit of eighty-one years of hindsight, we nowadays appreciate that Pauli’s scorn for quantum Dreckeffects expressed a Great Truth, whose opposite also is a Great Truth. That is why 21st century quantum systems engineers “wöllen in quantum Dreck” with great enthusiasm and considerable practical success, and this happy experience accounts for engineering sympathies with Gil’s thesis!
As a follow-up, the above Pauli quotation apparently derives from a passage in a 1931 letter from Wolfgang Pauli to Rudolf Peierls that more completely and accurately reads “Der Restwiderstand ist ein Dreckeffekt und im Dreck soll man nicht wühlen” (of which my uncertain translation is “Residual resistance is a dirt effect, and one should not dig in dirt”).
To the extent that a full appreciation of Gil Kalai’s conjecture requires that we “dig in dirt,” a natural question is (to borrow Voltaire’s happy phrase) “What quantum garden shall we cultivate?” Here it seems to me that a quantum garden of manageably modest acreage, that is well-bounded and that encompasses the key elements that motivate the Kalai conjecture, is the scalable production/observation of photon number states (as motivated by Aaronson and Arkhipov’s arXiv:1011.3245, for example).
So perhaps in the next few days I will post some vacuum-physics observations relating to the field-theoretic dynamics of this relatively small, and much-studied, yet still-mysterious quantum garden.
This is deep yet fertile humus, eh? Thank you, Aram and Gil, for inspiring us to dig in it! :)
I think the relativistic vacuum is a barrier to our calculations being simple, but I don’t see how it follows that it’s a barrier to scalable quantum computing. If it were, wouldn’t we have seen some kind of anomalous decoherence, or even energy shifts, somewhere along the way? You can’t have a multi-qubit error term that only shows up to decohere quantum computers, and doesn’t have any physical effects when you do other physics experiments.
Aram, in the context even of single-qubit cavity QED experiments, anomalous decoherence and energy shifts are ubiquitously observed.
Reasonable grounds for Kalai-style skepticism arise when we reflect that scalable quantum computation requires the simultaneous optimization of multiple figures-of-merit among multiple interacting qubits — frequency stability, photon-emission timing, photon-detection efficiency, and high-Q cavity dynamics, for example — that historically have been optimized only independently.
Experience teaches that we can individually approach perfect emission timing, and perfect detection efficiency, within QED cavities of arbitrarily low energy losses, for arbitrarily large numbers of photon-sources and photon-sinks … yet it is less evident (to me) that the innumerable field-theoretic Dreckeffekte that are known to be associated with made-of-atoms dielectrics, metals, and semiconductors permit us to optimize all of these figures-of-merit simultaneously … even in principle.
After all, it took many decades to appreciate that we can measure position to arbitrary accuracy, and momentum to arbitrary accuracy, and yet (for deep reasons) we now appreciate that we cannot measure both to arbitrary accuracy. It is far from self-evident (to me) that Nature’s field-theoretic dynamics has no further surprising relations in store; hence my personal sympathy for Kalai-style conjectures.
The “anomalous” magnetic dipole moment is only an anomaly with respect to the classical theory; it fits the standard predictions of QM perfectly, with no extra terms required in the Lagrangian. I was referring to the possibility of new terms in the Lagrangian that would decohere quantum computers, which I find not so likely.
And I agree that simultaneously optimizing many experimental components will be a tremendous engineering challenge, but it is hard to imagine any in-principle difficulty. Take optics: yes it is hard to optimize photon-emission timing as well as photon-detection efficiency. But one of these refers to the photon source, and the other to the detector. It’s hard to imagine how two components, sitting on opposite sides of the lab, could be tunable individually but not jointly.
In the context of cavity QED, this intuition fails generically for high-finesse (that is, low-decoherence) cavities.
AdvLIGO provides a spectacular example: the coherent high-finesse light-beam linking two mirrors, two kilometers apart, exerts an (unstable!) Newtonian spring on the mirrors whose mechanical stiffness exceeds that of an equivalent two-kilometer bar of solid diamond.
More prosaically, a near-unit-efficiency photon-source diode and a near-unit-efficiency photon-detector diode, at opposite ends of a high-finesse single-mode optical fiber, most definitely exhibit strong cross-coupled quantum dynamical effects that greatly complicate the quest to simultaneously optimize source efficiency and detector efficiency.
Conclusion It is far easier to specify mutually decoupled, near-perfect sources, mirrors, cavities, and detectors as abstract mathematical operators, than it is to design physical devices, constructed from real-world periodic-table atoms, that couple to a shared field-theoretic vacuum state.
Historical example In communication theory, it is mathematically convenient to specify ideal bandpass filters (perfectly flat, infinite roll-off, zero phase distortion), and early electrical engineers exhibited great ingenuity in seeking to approach this limit. It was not until the mid-1920s that the Kramers–Kronig relations established that this natural mathematical abstraction of ideality was physically infeasible.
To complete the above cycle of physics quotations relating to the crucial role of Pauli’s “Dreckeffekts” [dirt effects] in quantum computation, let us reflect upon two of Lars Onsager’s maxims:
Here Onsager — who was an avid gardener — in effect is advising us that “cultivating the garden” of quantum computation requires that we “wühlen im quantenfeldtheoretischen Dreck”, and moreover that to efficiently “munch this humus” we must “forge new weapons” (mathematically speaking).
In regard to Onsager-style mathematical weapons-forging, the Netherlands Genootschap Voor Meetkunde En Kwantumtheorie (Fellowship of Geometry and Quantum Theory, GQT) presents (IMHO) a natural extension of the toolset of Nielsen and Chuang’s Quantum Computation and Quantum Information as follows:
• Traditional QM/QIT/QSE toolset Lie algebras, operator commutators, unitary rotations, decoherence, communication, fluctuations, ground states, informatic entropy, vectors, spaces, derivatives, and projections.
• GQT’s extended QM/QIT/QSE toolset Lie derivatives, Poisson brackets, symplectic flows, localization, transport, dissipation, vacuums, level-set entropy, tangents, manifolds, differential forms, pullbacks, isometries and symplectomorphisms, and renormalizations.
We notice in particular that the GQT mathematical toolset does not supplant, but rather naturally extends, the now-traditional Nielsen/Chuang toolset for quantum dynamical analysis.
An appreciation of the extended GQT mathematical toolset ignites within us a similar (very healthy!) dissatisfaction to that which Vladimir Arnold felt for Landau and Lifshitz’ Mechanics. Per Sergei Novikov’s account:
Similarly, as we assimilate GQT’s extended mathematical physics toolsets, we become dissatisfied with the initial chapter of Landau and Lifshitz’ Statistical Physics, which summarizes “The Fundamental Principles of Statistical Physics.” We are dissatisfied because this initial chapter awkwardly treats classical and quantum statistical mechanics separately, whereas the Nielsen/Chuang/GQT mathematical toolset naturally describes classical and quantum dynamics as limiting cases of a unified class of dynamical flows. This naturalizing perspective is exceedingly helpful in optimizing engineering algorithms for efficiently simulating quantum dynamics, and of course it is crucial to the development of string theory … or M-theory … or whatever theory emerges in the 21st century as the successor to 20th century quantum mechanics.
This unifying, clarifying, mathematically naturalizing 21st century perspective motivates an appreciation of Landau and Lifshitz that is similar to Dirac’s appreciation of his own textbook:
Summary The Harrow/Kalai debate helps us to appreciate three reasons why progress toward practical quantum computing has been dismayingly slow compared to the quantum computing roadmaps of a decade ago: (1) we have encountered dauntingly many quantenfeldtheoretische Dreckeffekte, (2) progress toward Nielsen/Chuang/GQT mathematical unification has been slow and there are dismayingly few good textbooks, and perhaps (3) our natural human enthusiasm for “eagle-soaring” quantum breakthroughs has been associated with an underappreciation of the virtues of Onsager-style “humus-munching.”
This enlarged quantum perspective is (IMHO) very encouraging, as it presents to us an unbounded 21st century venue for unprecedented quantum experiments, stronger theories, deeper mathematics, novel technologies, and vast enterprises. In particular, we are authorized and encouraged to write, not appendices to 20th century physics textbooks, but new first chapters. For which, please accept this appreciation and thanks, Gil and Aram and Gödel’s Lost Letter! :)
Great post, John! :)
Wake me up when your quantum computer is ready! :)
Let me make one further remark regarding Aram’s “classical error correction test”. When it comes to actual modeling of noise I leave this issue aside, and most of my conjectures/models are harmless for classical computing to start with. Conjectures 3 and 4 have rigorous formulations (see the paper “When noise accumulates“) which make sure that they don’t have any bearing on classical computation. In the next and last post I will briefly describe my model of smoothed Lindblad evolutions (from the same paper), and, again, it is based on noise which does not harm classical memory and computing from the start. I hope that at a later stage we will be able to show that even when the noise models are enlarged and put classical memory and computation in danger, classical information and computation prevail.
This has some similarity to the situation for the threshold theorem. To the best of my memory, two of the first three papers (by Knill, Laflamme, and Zurek, and by Kitaev) take classical computation for granted. (Of course, we can take classical computing for granted, but letting classical computing emerge from the noisy model of computation speaks for the integrity of the noise model itself.) The proof by Aharonov and Ben-Or had the advantage of getting both classical and quantum computing “from scratch.”
There is an important missing or hidden assumption in Gil’s Conjecture 1: both initial and final states of the encoded qubit must be stable. While the initial state can be a stable thermal-equilibrium one, often very close to a pure ground state, the stability of the final state is the essence of FTC. The final state of an encoded qubit with a lifetime of, say, 10^{-20} sec is pretty useless for any purpose.
Only quite recently I realized that the simplest way to understand the main problem with QFTC is to notice the fundamental conflict between stability and reversibility of information processing (see for details “Quantum memory as a perpetuum mobile? Stability vs. reversibility of information processing,” Robert Alicki, arXiv:0901.0811, last version 2012, to appear in Open Sys. and Inf. Dyn.). (BTW, I think I deserve the 100,000 USD prize of Scott Aaronson.)
The main argument is the following. We have learned from numerous examples that the mechanism of state protection against the most fundamental and inevitable thermal noise is similar for classical and quantum information carriers, and is described by the presence of free-energy barriers separating different states. In the classical case this mechanism can be illustrated by the simplest model of a mean-field Ising ferromagnet; in the quantum domain it is much more involved (the 4D Kitaev model is probably the simplest realization).
A classical Ising bit can be visualized as a fictitious particle moving in a symmetric double-well potential under the influence of friction and a random Langevin force satisfying the fluctuation-dissipation relation. This friction force leads to energy dissipation during a gate which changes the bit state. This amount of dissipated energy (provided by an external work source) is at least of the order kT·N (N = number of spins encoding a bit), much larger and more realistic than the Landauer bound kT ln 2!
Classical, strongly dissipative irreversible gates are, of course, exactly those used in our computers. Reversible classical computers based e.g. on “billiard-ball models” would not work effectively due to lack of stability and the devastating influence of chaos and noise. Adding stabilizing friction would destroy their operating principle. On the other hand, for a protected encoded qubit, the decoherence accompanying energy dissipation destroys the unitarity of quantum gates and hence makes scalable QFTC impossible.
Robert, please let me say that your post raises many good points. And yet, for reasons that arise naturally in the coding of large-scale quantum simulations, no-go arguments based upon free energy have a (potentially) exploitable loophole, associated with the intimate interplay between the notions of free energy, entropy, and spatial localization.
In a nutshell, for strictly computational reasons that are detailed on Physics StackExchange, it can happen — and does happen — that multiple qubits can be encoded in quantum trajectories for which no spatially localized entropy density function is well-defined. In consequence there is no well-defined free energy density function (which is the thermodynamic dual of entropy density), and neither is any other spatially localized thermodynamic potential function (e.g., a chemical potential) well-defined.
For these spatially non-local qubits (and perhaps only for spatially non-local qubits?) it seemingly makes sense to talk of temporally stable qubits. This thermodynamical insight is of course well known to the quantum computing community, and is well described (for example) in John Preskill’s on-line notes Topological quantum computing for beginners.
Needless to say, it is tricky to analyze noise processes for non-local qubits, and neither do we have much experimental experience with them, and so it may be that we don’t (as yet) appreciate the noise mechanisms associated with spatially nonlocal qubits all that well.
Conversely, the noise processes that thermalize classical bits serve also to spatially localize them and to temporally stabilize them; and so in regard to classical bits there is (AFAICT) not much controversy regarding our quantum dynamical understanding of their stability.
Robert, I find your perpetual-motion arguments fascinating and would like to fully understand them sometime.
As for what you write here, I agree that dissipation is necessary for the continued functioning of any computer, quantum or classical, and that the amount of dissipation should be proportional to (# of (qu)bits) * time * temperature. However, prescriptions for doing this are part of the standard FTQC approach, which calls for measuring qubits, and constantly bringing in fresh qubits in the |0> state. And yes, the physical gates are not unitary, but thanks to QECCs, the encoded dynamics *are* approximately unitary.
So it seems to me that dissipation is fully compatible with quantum computing, just as it is with classical computing. This was a surprising new insight in the mid 1990′s when QECCs and the FTQC threshold theorem were discovered, but it seems to me they have now addressed this concern.
“I agree that dissipation is necessary for the continued functioning of any computer, quantum or classical, and that the amount of dissipation should be proportional to (# of (qu)bits) * time * temperature. ”
Yes Aram, but you believe that a quantum gate restricted to the encoded qubit (i.e. the information-carrying degree of freedom) can be made arbitrarily close to unitary, while for classical computers gates on bits are always very far from Hamiltonian ones (you do not have continuous-in-time reversible dynamics on discrete classical spaces).
The illusion of QECC and FTQC is based on:
a) too weak (finite norm assumption) or too classical (exponential decay of correlations) models of noise,
b) the idea of “fresh qubit supply” which was never demonstrated using first-principle models and, I think, is incompatible with Hamiltonian models of open quantum systems.
For both FT classical and quantum computers, the encoded gates are not continuous. I guess this is clear for classical computers; you flip a bit from 0 to 1, and there’s nothing continuous about it. But it’s also true for FTQC; you can do a certain set of encoded gates, like Cliffords and pi/8, and NOT a continuous set of gates.
As for (a) and (b):
(a) If noise correlations are so high, then why isn’t this a problem for classical computers? Also, the infinite norm correlations are a technicality, since we never actually have an infinite number of photons in any mode.
(b) What about optical pumping? T1? In any system where we observe Rabi oscillations, we have to somehow initialize some polarization. The only difference in FTQC is that this happens in the middle of the computation. I don’t understand why it should be incompatible with Hamiltonian models of open quantum systems, and if it is incompatible, then that seems to be a sign that if anything it is the models that are wrong.
For both FT classical and quantum computers, the encoded gates are not continuous. I guess this is clear for classical computers; you flip a bit from 0 to 1, and there’s nothing continuous about it. But it’s also true for FTQC; you can do a certain set of encoded gates, like Cliffords and pi/8, and NOT a continuous set of gates.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The idea of discrete gates is an idealization; both quantum and classical dynamics are continuous in time. This idealization is particularly misleading when thermodynamical arguments are used.
Thermodynamical quantities are PATH DEPENDENT. A bit flip is described in a physical theory by a continuous stochastic evolution during which entropy is produced and heat and work are exchanged between the information-bearing degrees of freedom and the environment.
This is completely hidden in a discrete gate model.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As for (a) and (b):
(a) If noise correlations are so high, then why isn’t this a problem for classical computers?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As I wrote, strong dissipation does not harm classical computation based on irreversible gates. On the contrary, it is necessary to maintain stability and it speeds up computation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Also, the infinite norm correlations are a technicality, since we never actually have an infinite number of photons in any mode.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
You do not need even a single photon to have infinite-amplitude quantum field fluctuations, which cause e.g. spontaneous emission in vacuum. Even if they are not strictly infinite they are cutoff-dependent, i.e. determined by a very high energy scale where quantum electrodynamics breaks down and other interactions must be taken into account. You cannot treat a very large parameter as a small constant in any reasonable physical model.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
(b) What about optical pumping? T1? In any system where we observe Rabi oscillations, we have to somehow initialize some polarization. The only difference in FTQC is that this happens in the middle of the computation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I do not understand the relation between Rabi oscillations and fresh qubit supply.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I don’t understand why it should be incompatible with Hamiltonian models of open quantum systems, and if it is incompatible, then that seems to be a sign that if anything it is the models that are wrong.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Try to construct a Hamiltonian model of fresh almost-pure qubit supply. Firstly, you have to preserve the qubits in a thermal environment using a protecting Hamiltonian which produces a large energy gap (>> kT) between the ground state and the excited one. Secondly, to integrate this almost-pure qubit with the computer you have to switch off the protecting Hamiltonian. Whatever you do, slow or fast process, the qubit becomes dirty.
Perhaps it can be useful to add the link to Robert’s paper: Quantum memory as a perpetuum mobile? Stability vs. reversibility of information processing.
Continuous interferometric observation-with-feedback yields a single-qubit thermal density matrix that approaches an equilibrium temperature of ±0 (that is, perfect ground and excited states) … in the utopian limit of unit photon detection efficiency and optimal feedback gain, that is.
Indeed, regarding a nanoscale cantilever as (effectively) a large-dimensional qudit, this is the process by which we experimentalists dynamically synthesize ground states, and it is routine to achieve qudit noise temperatures that are small fractions of ambient temperatures.
Conclusion The on-demand dynamical synthesis of initialized ancillary qubits/qudits is not (AFAICT) an insurmountable obstacle to large-scale FTQC … provided that the “FT” part works as-designed! :)
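A caricature of the measurement-plus-feedback loop John describes can be written in a few lines. This is purely illustrative (the update rule and the way `eta` enters the readout noise are my own simplifications, not a model of any real interferometric experiment):

```python
import random

def steady_variance(gain, eta, steps=50_000, seed=1):
    """Toy feedback-cooling loop on one noisy mode.

    Each step: thermal heating kick -> noisy readout (efficiency eta sets the
    added readout noise) -> proportional feedback kick toward zero. Returns
    the empirical variance of x, a stand-in for the mode's effective
    temperature.
    """
    rng = random.Random(seed)
    x = 0.0
    acc = 0.0
    meas_noise = (1.0 / eta - 1.0) ** 0.5 if eta < 1.0 else 0.0
    for _ in range(steps):
        x += rng.gauss(0.0, 1.0)                  # thermal heating kick
        y = x + meas_noise * rng.gauss(0.0, 1.0)  # readout, imperfect for eta < 1
        x -= gain * y                             # feedback kick toward zero
        acc += x * x
    return acc / steps

hot = steady_variance(gain=0.0, eta=0.5, steps=2_000)  # no feedback: variance grows
cold = steady_variance(gain=0.9, eta=0.5)              # feedback cools the mode
colder = steady_variance(gain=0.9, eta=1.0)            # unit efficiency cools further
```

The hierarchy `colder < cold < hot` is the toy version of John’s point: the effective temperature falls with feedback, and falls toward its floor only in the utopian unit-efficiency limit.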
In fact, even classical computers achieve this, albeit at high per-bit noise rates.
Yes, in a utopian world!
Here is another interesting issue which is related to the discussion while not having a clear bearing on the debate.
Classical evolutions that do not support computation (or, in other words, how to define classical non-FT evolutions?)
While both Aram and I agree that classical information and computation are possible, a question that I find fascinating is how to formally describe non-fault-tolerant classical evolutions. A related question is: given a class of classical evolutions (perhaps also with certain restrictions on the initial conditions), when can we say that this class supports universal classical computing? Some versions of my conjectures/models may be relevant here, and for this purpose we want to fail the classical error-correcting test! Why could this be important? I would think (ok, speculate is a better word) that when a class of processes allows universal classical computation and supports fault-tolerance then this can potentially be used to demonstrate various types of self-defeating behavior, Maxwell demons, and perhaps be relevant to issues of well-posedness and regularity.
I can imagine error models that do not support FTCC, but it’s hard to imagine a formal definition of “classical computation without fault tolerance.” As I mentioned in an earlier post, we can’t define precisely “classical computation without the FFT” (fast Fourier transform), and the same should go for any technique that we want to exclude.
On the other hand, analog computing is a good intuitive picture of non-FT classical computing. I just doubt whether this can be made rigorous.
Aram, what is the problem with the formal definition? Let’s consider some Turing-complete model represented as a discrete deterministic system with transitions between some set of states. After that, extend the system by adding “erroneous transitions” with some probabilities. Etc., etc.
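One minimal instance of this recipe (a hypothetical toy of my own, not a model from the debate): take the simplest fault-tolerant classical memory, the three-bit repetition code, and add “erroneous transitions” with probability p per bit per step. Whether the model also applies a majority-vote correction each step is then exactly the FT/non-FT distinction:

```python
import random

def run_noisy_memory(p_err, steps, correct, rng):
    """Three-bit repetition code holding logical 0 under i.i.d. bit-flip noise.

    Each step every bit flips with probability p_err (an 'erroneous
    transition'); if `correct` is set, a majority vote is applied each step
    (the classical error-correction / fault-tolerance step). Returns the
    decoded logical bit.
    """
    bits = [0, 0, 0]
    for _ in range(steps):
        bits = [b ^ (rng.random() < p_err) for b in bits]
        if correct:
            maj = int(sum(bits) >= 2)
            bits = [maj, maj, maj]
    return int(sum(bits) >= 2)

rng = random.Random(42)
trials = 2_000
kept = sum(run_noisy_memory(0.01, 200, correct=True, rng=rng) == 0 for _ in range(trials))
lost = sum(run_noisy_memory(0.01, 200, correct=False, rng=rng) == 0 for _ in range(trials))
```

With correction the logical error rate per step drops to order p², so `kept` stays near `trials`; without it each bit decoheres and `lost` drifts toward `trials / 2`.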
When it comes to non-FT quantum evolutions, a reason I am optimistic that a formal description is possible is that I have concrete proposals for such a description (that we have discussed here over the last months). The classical case indeed looks harder, but it also looks possible. There are many cases where we can put various restricted classes of evolutions or computations on formal grounds. We can model “computation without backtracking” via monotone circuits, we can model “quantum computation without cooling” via irreversible noisy circuits, we can model computation without exotic bit operations by arithmetic circuits, and there are many more examples. As for computing without the FFT (and here I am not sure, Aram, if you want to exclude specifically the FFT or any form of DFT), this does not look like a good analogy for our discussion, but it is a cute question otherwise.
To agree with Aram (as I have understood his posts), it is considerably less easy than one might at first imagine to rigorously distinguish (classical) analog memory/computation processes from (classical) digital memory/computation processes.
For example, we have modern flash memories that contrive to store multiple error-corrected binary bits in multiple charge levels on individual capacitors. These flash memories belong to the same technological family as charge-coupled optical sensing arrays that store quasi-analog images as a quasi-continuum of charge levels on similar capacitors. Where in this family of devices is the rigorous dividing line between digital flash bits and analog CCD images? Don’t ask me! :)
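The two readout conventions can at least be made concrete with a toy multi-level cell (the levels and units here are entirely hypothetical):

```python
def mlc_write(bits2):
    """Map a 2-bit value (0..3) to one of four nominal charge levels
    (hypothetical units: one unit of charge per level)."""
    return bits2 * 1.0  # levels 0.0, 1.0, 2.0, 3.0

def mlc_read(charge):
    """Digital read: threshold the (possibly drifted) charge back to the
    nearest nominal level, discarding the analog residue."""
    return max(0, min(3, round(charge)))

# A digital read survives analog drift smaller than half a level spacing...
assert mlc_read(mlc_write(2) + 0.3) == 2
# ...whereas a CCD pixel would report the drifted analog value 2.3 itself.
```

The same physical capacitor is “digital” or “analog” depending only on whether a thresholding read like `mlc_read` is applied, which is one way of seeing why the dividing line is blurry.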
In computation too, this same blurring of analog-versus-digital exists, per the family of computers described in US Patent 2,815,488, Non-Linear Capacitance or Inductance Switching, Amplifying and Memory Organs (1957), whose inventor was … John von Neumann himself! And yes, these quasi-analog, quasi-digital computation devices were for a time commercialized as parametron computers.
Among the seminal articles that discuss parametronic computation are two by Rolf Landauer (“Dissipation and noise immunity in computation and communication”, Nature 1988, and the follow-up article “Dissipation and noise immunity in computation, measurement, and communication”, Journal of Statistical Physics, 1989), which in turn were generalized to the quantum domain by Seth Lloyd’s “Any nonlinear gate, with linear gates, suffices for computation” (Physics Letters A, 1992).
Summary In distinguishing “analog” from “digital”, perhaps it is well to keep in mind the legal maxim:
Both in the classroom and in the courtroom it is convenient to distinguish between analog and digital sensing, switching, memory, and computational technologies … and yet all parties should keep in mind that the mathematical and physical foundations for this distinction are highly imperfect.
I also agree here. I never understood the distinction between “analog” and “digital” even on the intuitive level.
Gil,
I am not sure how useful this idea is in terms of the current discussion, but it might be useful on an intuitive level.
Find an old-fashioned slide rule and use it to solve a couple of problems.
LOL … and yet, a slide-rule in which the physical motions are mediated by gears becomes (in essence) a digital device! :)
This illustrates how distinctions that are algebraically natural and mathematically rigorous, sometimes are neither physically natural nor practically unambiguous.
John,
John,
In general I agree with you. The distinction between analog and digital computing devices is not as simple as a mechanical/electrical dichotomy.
I am not sure you are 100% correct in viewing all the fire control devices as being digital.
If the gears are used as counters for example, then the device is essentially digital.
If the gears transmit mechanical motion to a geometrically designed cam, then I think an argument can be made that the device is essentially analog.
Whatever the case, the amount of ingenuity that went into designing those devices was remarkable.
When you consider that the big guns on those old battleships could fire a shell that weighed as much as a small car and hit a target miles away, those devices are even more amazing.
Sorry about the extra John. Not sure how it got there.
“On the other hand, analog computing is a good intuitive picture of non-FT classical computing. I just doubt whether this can be made rigorous.”
Why? Using the notions of “chaos theory” (Lyapunov exponents, K-S entropy), it should be possible, and very helpful, to understand the analogous problems in FTQC.
BTW, for those who do not like the thermodynamical arguments, there is another signature of the failure of the idea of efficient QC.
It is known that some hard problems are equivalent to finding the lowest eigenvalue of a certain Sturm–Liouville operator (i.e. the ground-state energy of a certain quantum system).
Assume we want to measure the first N digits of this energy (analog quantum computation equivalent to determination of the ground state). This means the accuracy of the measurement is 2^{-N}, and hence, by the Heisenberg relation, we need time 2^{N} for such a measurement. Again, bad scaling!
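Spelled out (my paraphrase of Robert’s estimate, in units where the problem’s natural energy scale is 1), the energy-time uncertainty relation gives

```latex
\Delta E \,\Delta t \;\gtrsim\; \hbar
\quad\Longrightarrow\quad
\Delta t \;\gtrsim\; \frac{\hbar}{\Delta E} \;=\; \hbar\, 2^{N}
\qquad \text{for accuracy } \Delta E = 2^{-N},
```

so the measurement time grows exponentially in the number N of digits demanded.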
I think a lot of these arguments apply to various attempts at analog computing, classical or quantum. And I agree that this setting is unlikely to scale well.
But the reason why I doubt it can be made rigorous is that digital computing can be embedded in analog computing (i.e. I can build a pentium out of transistors). So it is simply not true that analog computers can never do anything FT; they can at the very least perform FT simulations of digital computers.
“But the reason why I doubt it can be made rigorous is that digital computing can be embedded in analog computing (i.e. I can build a pentium out of transistors). So it is simply not true that analog computers can never do anything FT; they can at the very least perform FT simulations of digital computers.”
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The situation is much more interesting and, I am sure, could and should be formalized using the notions of dynamical-systems theory. There are essentially two types of classical dynamical systems: a) conservative, volume- (measure-) preserving, reversible (e.g. Hamiltonian); b) non-conservative, dissipative, irreversible.
a) Conservative systems are pretty useless for FT computation because of chaos (reversible computers must be strongly nonlinear, because for linear systems always 1+1 = 2 while we need the Boolean 1+1 = 0).
b) Dissipative systems possess stable attractors (e.g. fixed points, limit cycles) which define their basins of attraction. Hence the phase space decomposes into a discrete set of basins of attraction, which corresponds to the discrete set of states of a digital computer. Dissipative dynamical systems are physical models for FT digital computation. The price of FT is digitalization.
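This digitalization-by-dissipation can be seen in the smallest possible example (an illustrative sketch of my own, not from Robert’s papers): the noiseless dissipative flow for a symmetric double well sends every initial condition to one of two fixed points, so the continuum carves itself into two discrete states:

```python
def settle(x0, steps=2_000, dt=0.05):
    """Dissipative flow dx/dt = -V'(x) for the double well V(x) = (x^2 - 1)^2 / 4.

    Every initial condition except the unstable point x = 0 flows to one of
    the two attractors x = -1 or x = +1; the two basins of attraction are
    the 'discrete states' of the resulting digital bit.
    """
    x = x0
    for _ in range(steps):
        x += -x * (x * x - 1.0) * dt  # Euler step of the gradient flow
    return round(x)

# Widely different analog initial conditions collapse onto just two outcomes.
assert [settle(x) for x in (-1.7, -0.2, 0.3, 1.4)] == [-1, -1, 1, 1]
```
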
The FTQC optimists want to have their cake and eat it. They want stabilization by dissipation but want also to preserve continuity in terms of superpositions of states.
Maybe I was too quick to mix two issues which are not the same so let me try to discuss them separately and raise a few questions.
Given a class of classical evolutions and perhaps also some restriction on initial conditions is it possible to talk about the computational power expressed by this class?
For example, I know that Conway’s game of life supports universal classical computing. Is it possible to say (or better, was already said) something of this kind for evolutions described by certain dynamical systems?, PDEs?
As I said, it looks (naively, perhaps) that when your class of evolutions is large enough to support computation, then this can be used to describe self-defeating behavior, Maxwell demons, ill-posed examples, etc. (Again, this may be naive as well as a familiar thought/idea/approach, and if so I will be happy to hear about it.)
The second related question, which requires some stochastic ingredient and a notion of noise, is whether it is possible to formally describe a class (or a reasonable class) of noisy classical evolutions that does not allow classical error correction and classical fault tolerance. I agree that this might be hard (perhaps even harder than the quantum case). How can we prevent the creation of a single noiseless bit of information (and still do something interesting)? Aram wrote that analog computing is a good intuitive picture of non-FT classical computing. I will be happy to hear more, on the intuitive level, about what Aram means here.
By analog computing, I don’t mean anything deep. I am thinking about something that might be used in analog audio processing. Say each wire holds a real number: x, y, z, etc., represented by a voltage. Then we can hook them up in ways to set z = x + y, or z = x * y, or z = 2 * x, etc. (The true functions will be more complicated, and maybe time-dependence could be part of it, but let’s ignore this for now.) This isn’t FT, because if x is replaced by x ± eps, then nothing in the circuit can say that this is wrong, and so errors will just build and build (even apart from the fact that nonlinearity can amplify them).
Of course, this model includes the option of setting things up so that the only stable voltages are 0V and 5V, and thus it can simulate digital computation. In this case, it probably will be FT. So that’s why I don’t think there’s any precise way to define “non-FT analog computing.”
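Aram’s picture can be turned into a few lines of code (a toy of my own construction; the 0V/5V levels and all parameters are arbitrary):

```python
import random

LOW, HIGH = 0.0, 5.0  # the two 'stable voltages' of the digital regime

def analog_wire(v, depth, eps, rng):
    """Analog chain: each stage passes the voltage on, plus bounded noise.
    No stage can tell v + eps from v, so the error random-walks and grows."""
    for _ in range(depth):
        v += rng.uniform(-eps, eps)
    return v

def digital_wire(v, depth, eps, rng):
    """Same noisy chain, but each stage snaps the voltage to the nearer of
    0V / 5V -- the restoring nonlinearity that makes digital logic FT."""
    for _ in range(depth):
        v += rng.uniform(-eps, eps)
        v = LOW if v < (LOW + HIGH) / 2 else HIGH
    return v

rng = random.Random(7)
drifted = analog_wire(HIGH, depth=10_000, eps=0.1, rng=rng)
restored = digital_wire(HIGH, depth=10_000, eps=0.1, rng=rng)
assert restored == HIGH  # per-stage restoration keeps the error bounded
```

The analog value drifts without bound (root-mean-square error growing like sqrt(depth)), while the digitally restored value is exactly preserved whenever the per-stage noise stays below half the level separation, which is precisely the “stable voltages” loophole that makes a sharp definition of non-FT analog computing elusive.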
The question of describing the “computation power” of a specific class of evolutions (and initial conditions) seems interesting. I mentioned Conway’s game of life which supports universal Turing machines and this can be taken as some sort of “role model”. However, putting this question on formal grounds does not look easy especially for continuous or noisy evolutions. (Of course, in the study of dynamical systems there are various other notions of complexity which may apply and it is not clear that computational complexity notions are useful.)
Regarding classical non-FT evolutions, one possible suggestion for a non-FT evolution is an evolution which, on any scale, admits a bounded-depth approximation. Another possibility would be to consider the classical analogs of my conjectures about correlation of noise (these analogs do not generally hold for noisy classical evolutions) as a basis for a definition of non-FT classical noisy evolutions. It would be interesting to offer a notion of stochastic non-FT versions of some continuous evolutions based on standard PDEs. Another point of view might be to regard non-FT evolutions as a class of evolutions in which we cannot violate the rules of thermodynamics at any scale; e.g., we cannot create Maxwell demons.
Getting a little wild, and picking up on this comment by John Sidles, we can consider the Navier-Stokes equations. What kind of computation is supported by the NS equations in two dimensions? In three dimensions? (See this post by Tao for a discussion of this notorious problem.)
I am aware of a paper by Andy Yao, “Classical physics and the Church-Turing Thesis,” on somewhat similar ideas. (Like many other authors, Yao’s heart seems to be in the direction of achieving superior computational power rather than exploring and using inferior computational power.)
Not long ago, a universal cellular automaton on Penrose tilings was found: http://arxiv.org/abs/1208.2771. It was a result of recent interest in analogues of Conway’s Life in such an irregular (“noisy”) environment.
That’s interesting, Alexander. I also looked and found several relevant references to the question about computation (and computability) power of PDEs. Here is a MathOverflow question entitled Simulating Turing machines by ODEs and PDEs by Mariano Suárez-Alvarez. The question refers also to an earlier MO question on general theory of PDEs and to a blog post by Terry Tao on the “non self-defeating argument” and especially a discussion there on the comment thread.
Of course, if we consider continuous evolutions we should clearly specify the kind of approximation we allow. One thing that I still did not find is an answer to a question like “What is the computational power of evolutions based on 2D NS equations?”, or whether Turing universality is related to classical questions regarding regularity. Also, I did not find discussions in this context where issues of fault tolerance and error correction come into play.
Maybe the classical analogue, “self-organisation structures,” may be compared with error correction?
Whether large-scale quantum computers are feasible looks very much like the P=NP problem to me: both questions have a definite answer but it’s out of our reach. In other terms, why not replace the questions:
“Is there an efficient algorithm for SAT?”
by
“Can a thinking being find an efficient algorithm for SAT?”
and
“Can quantum computers work on a large scale?”
by
“Can living beings build a large-scale quantum computer?”
I would like to discuss another matter that I raised in the post and to look critically at John Preskill’s paper. A good place to start is this comment by John:
I would like to explain why I regard the correlation assumptions offered by John as unrealistic, and I think that it can be useful to clarify this issue. Ironically, if I am correct it will also show how large the leap that John talks about in the last sentence is, and how much wider the classes of noise that support FTQC are. So this is a technical matter that skeptics and believers can take both ways.
I expect that the mild conditions on correlation in John’s paper (as well as the earlier paper by Aharonov-Kitaev-Preskill) imply a central limit theorem for the number of errors. In particular, the number of errors will decay exponentially above and below their expectation, and the standard deviation of the number of errors behaves like square root of the number of qubits.
I regard this feature as unrealistic. I would expect that for realistic noise the standard deviation of the number of errors is linear in the number of qubits. (I even guess that this is the case for digital memories, but on this point both Aram and Robert Alicki disagree.) As I said, you do not need such square-root-n concentration of the number of errors in order to have FTQC. Quantum fault-tolerance only requires that the probability for the number of errors to go over the threshold decays fast enough. (And polynomial decay is fast enough.) But this also suggests that John’s (as well as earlier) modeling misses the first-order effect regarding the behavior of the number of errors, and that there is something basic about noise missing from John’s model.
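The two scalings at issue here can be illustrated with a toy simulation (a sketch with made-up parameters, not a model of any actual device): when each qubit fails independently, the standard deviation of the error count grows like sqrt(n · p(1−p)); but if the error rate itself fluctuates from run to run (a common cause shared by all qubits), the standard deviation picks up a component linear in n.

```python
import random
import statistics

def independent_errors(n, p, trials):
    """Error counts when each of n qubits fails independently with prob p."""
    return [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

def correlated_errors(n, p, spread, trials):
    """Error counts when the rate itself shifts uniformly in [-spread, spread]
    on each run -- a shared fluctuation affecting all n qubits at once."""
    out = []
    for _ in range(trials):
        q = p + random.uniform(-spread, spread)  # common shift of the rate
        out.append(sum(random.random() < q for _ in range(n)))
    return out

random.seed(1)
n, p, trials = 2000, 0.01, 500
s_ind = statistics.stdev(independent_errors(n, p, trials))
s_cor = statistics.stdev(correlated_errors(n, p, 0.01, trials))
# s_ind ~ sqrt(n p (1-p)) ~ 4.4; s_cor adds an extra term ~ n * stdev(shift)
```

Both models have the same mean error count, yet the shared-rate model shows the much wider (linear-in-n) spread that Gil argues realistic noise should exhibit.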
For digital RAM the error rate has been studied empirically at Google. Here is the abstract, and there is a PDF link as well
http://research.google.com/pubs/pub35162.html
In section 5 they look at the scaling for the number of errors when the RAM size is doubled. For several platforms they find no increase in the number of non-correctable errors. For other platforms they find an increase by more than a factor of 2.
This is probably explained by the latter RAMs being based on a large number of small memory chips and the stable ones being built from a smaller number of large chips.
Thanks for the comment, Klas. I referred in my comment to the microscopic structure of a single digital bit and not to the number of bit errors in RAM.
If a ’0′ bit is represented by N electrons, of which a fraction 0.01 have up spins, Aram expects that the standard deviation of the number of up-spin electrons will be like sqrt(N), and I expect that the standard deviation will behave like a small constant times N. (Referring to up-spin electrons as “errors” is based on Aram’s Rep-B interpretation.)
So you were not talking about the way actual digital memories work when you said that you suspect that their number of errors is linear in the number of bits?
In a standard digital memory, either on a processor or in a memory chip, the bits are not encoded by electron spins. Instead the state of a single bit is given by the amount of charge in a capacitor-transistor pair. A RAM chip is built by simply having many capacitor-transistor pairs implemented on the same piece of semiconductor.
http://en.wikipedia.org/wiki/Dynamic_random_access_memory
I’m sure there are some numbers available on the charge stability of a single capacitor-transistor pair as well, even though I don’t have them at hand.
Dear Klas,
Indeed, I was not talking (in the post and the comment above) about entire digital memories with a large number of bits. I thought that this was clear from the context, but thanks for clarifying it. When we discuss the repetition code in the context of digital memory, the issue is how a single bit is represented.
Aram and I were talking about the microscopic representation of a single bit by a vast number of microscopic elements. I suppose that the charge you mention translates to some microscopic behavior of a vast number of particles representing a single bit, but indeed thinking about electrons and up-spins may well be an abstraction (or a very special implementation).
If the microscopic properties of the physical device representing a single bit depend on the stability of some charge, then this will overall support my point of view.
A conversation with Aram led to the following postulate:
This postulate — of which I am about 85% confident (definitely not 100%) — gives me great respect for John Preskill’s FTQC theorems.
Conversely, this postulate focuses (my own) residual skepticism regarding the feasibility of FTQC onto two topics: (1) the physical limits to the spatial and temporal localizability of Lindblad generators, given that these generators are subject to the quantenfeldtheoretisch Dreckeffekten mentioned above, and (2) the non-symplectomorphic nature of the unravelled Hamiltonian/Lindbladian (stochastic) dynamical flow, which generically acts to compress dynamical trajectories onto effective quantum state-space manifolds of reduced dimensionality and generically negative curvature (relative to the large-dimension flat Hilbert state-spaces that are postulated in the 20th century’s canonical algebra-oriented texts, e.g. Nielsen and Chuang).
Skepticism of quantum computing arises (for me) because the above two considerations render it less-than-obvious that the effective state-space geometry of real-world FTQC computers has sufficiently many dimensions, and sufficient geometric robustness in the presence of the above-mentioned Dreckeffekten, as to support the large-scale FTQC processes that are necessary to complexity classes like BQP and/or QMA.
Provisional conclusion The same Preskill-style FTQC postulates that are plausible and natural in algebraic descriptions of projectively error-corrected computational processes on large-dimension flat Hilbert state-spaces, seem less plausible and less natural when these same computational processes are geometrically unravelled as Hamiltonian/Lindbladian (stochastic) dynamical flows on low-dimension non-flat effective state-spaces.
That is why (IMHO) we won’t really be confident that FTQC is feasible, until we unravel its theoretical foundations both backwards and forwards, both algebraically and geometrically … and then verify this naturalized and universalized quantum dynamical understanding by designing and operating working FTQC devices.
The spin interpretation is more appropriate to hard drives, where my understanding is that the stability is mostly due to ferromagnetism.
As for RAM, what we have been calling the number of bits refers to the microscopic sizes of the memory, so to resolve this point we’d want to know how the voltage fluctuations scale with the size of the wire (i.e. amount of current) or potential differences of the individual electrons (i.e. voltage).
It is always the same: normal fluctuations proportional to the square root of the number of elements (electrons, spins, photons, atoms, etc.). The only exception is the point of a phase transition, where larger, abnormal fluctuations appear. This follows from the additivity of energy, which assures the stability of matter (at a reasonable scale; due to gravity, the energy of the Universe is non-additive and the Universe is unstable).
Another case to check whether Rep-B or Rep-C is more realistic is to consider current devices that can create a large number of cat-states on pairs of qubits. If |C> stands for an ideal cat state, Rep-C will predict that we will get something like N(|C’>), where |C’> itself is a certain noisy version of |C>, and N is some standard independent noise. Rep-B will predict something like N(|C>).
(Aram’s text hints that we should reject the Rep-C description because the cloning map is nonlinear. But this does not sound correct to me.)
Sorry, I probably shouldn’t have brought up the fact that the cloning map is nonlinear. You’re right that it’s not the most relevant objection. And of course, in a sense it’s strange to “object” to a mathematical function. But let me try to say more clearly why I think that Rep-B is a better map to think about in this case.
If you explain textbook FTQC, the story goes something like this
“Let E be a QECC, like concatenated Steane. Prepare a bunch of copies of E(|0>). Then do encoded operations on them. Between each step, do error correction. At the end, do a FT measurement.”
If you replace E with “Rep-B” then this exactly describes classical computing. If you replace E with “Rep-C” then various pieces of this don’t go through (as I mentioned in my post). So in order to discuss FTCC, I think Rep-B is the code to focus on. And FTCC is highly relevant to the discussion because I believe that the empirical demonstration of FTCC is strong evidence in favor of FTQC being possible in principle.
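For what it’s worth, the “textbook FTQC” story above, specialized to classical bits with the repetition code (the Rep-B picture), can be sketched in a few lines: encode, apply independent noise, correct by majority vote, and repeat for many rounds. The parameters here are illustrative only.

```python
import random

def encode(bit, n):
    """Repetition code: E(b) = b repeated n times."""
    return [bit] * n

def noisy_step(codeword, p):
    """Independent bit-flips with probability p on each physical bit."""
    return [b ^ (random.random() < p) for b in codeword]

def correct(codeword):
    """Error correction: majority vote, then re-encode the logical bit."""
    n = len(codeword)
    maj = 1 if sum(codeword) > n // 2 else 0
    return [maj] * n

random.seed(2)
word = encode(1, 101)            # logical '1' in 101 physical bits
for _ in range(1000):            # many rounds of noise followed by correction
    word = correct(noisy_step(word, 0.05))
decoded = 1 if sum(word) > len(word) // 2 else 0
```

With a 5% flip rate per round, an unprotected bit would be scrambled almost immediately, but the majority vote keeps the logical bit intact over a thousand rounds: the per-round failure probability is the chance that more than half of 101 bits flip at once, which is astronomically small.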
Well, I think the nonlinearity of the cloning map is relevant and it is good that you mentioned it, and overall how classical robust information (and computation) emerges from the noisy quantum “mess” is a bit of a mystery.
When you say “prepare a bunch of copies of E(|0>),” the question is what this means. To start with, I suppose that |0> has no special role on the Bloch sphere (or else it already represents a robust classical entity).
When you prepare these copies, do you prepare independent noisy copies of the same state |0>, or independent noisy copies of a noisy version of |0>? The first possibility (which resembles Rep-B) does not look realistic unless your initial |0> is robust to start with. The second possibility, which resembles Rep-C, looks more realistic to me, but still the fact that the cloning map is nonlinear is disturbing in this context.
I think we share the idea that “repetition” is the mechanism we know for how robust classical information (that also enables computation) emerges. But it is not even clear that talking about the repetition “code” is relevant. (Of course, once you have robust classical information, classical ECCs are very relevant and realistic.)
I guess I was again unclear.
By “a bunch” I mean “k”, where k is the # of logical bits your program needs. But then the encoding map E(.) sends one qubit to n qubits.
For example, a 100GB hard drive might have k = 10^12 logical bits, but each logical bit is encoded in n = 10^8 spins, so altogether there are 10^20 spins.
There is nothing particularly special about |0> in this picture: it’s more that FT computing requires three FT ingredients: preparation, manipulation and readout. If a FT NOT gate is included, then it suffices to consider only preparation of E(|0>), and you don’t need to bother with E(|1>), or (in the case of quantum computing) with E(|psi>) for other |psi>.
Also, I’m not sure what you mean about |0> being robust to start with. Of course no single-qubit state is ever robust. As far as I can tell, information is only robust when it’s encoded in an error-correcting code. But we should be able to avoid thinking about the robustness of |0> by attempting to prepare E(|0>) “directly”, i.e. _not_ by first creating |0> and then applying the map E.
OK, let’s talk about 10^12 logical bits, each composed of 10^8 microscopic elements. We can think about these microscopic elements as noisy spins, or we can also think about them as noisy qubits. Now we try to think about the statistical properties of the microscopic elements for a single logical bit which represents ’0′.
Let’s suppose that the microscopic elements are themselves qubits. Ideally, ’0′ is represented by (i.e., E(’0′) is) a whole bunch of qubits in the same specific state |psi_0>. As you said, we don’t need to create a robust |psi_0> but to go ahead and create only E(’0′). So the way you see it is that we have a machine that creates E(’0′), which is 10^8 copies of a specific |psi_0>. You think that this can be achieved up to independent errors on the 10^8 qubits.
The way I look at it is that every such machine will actually create a mixture of copies of various states |psi>, where |psi> is a noisy version of |psi_0> to start with (neither |psi> nor |psi_0> need to be explicitly created). On top of this you also have independent noise. So the machine I am thinking about is much worse than your machine. Nevertheless, my machine enables robust logical bits.
The statistical properties of the microscopic structure of the 10^12 logical bits are more permissive in my machine.
My machine looks to me more realistic than yours. You correctly claim that your description fits more nicely with the theory of classical error-correcting codes, which extends nicely to the theory of quantum error-correcting codes. I agree with that, but so what?
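The statistical difference between the two machines can be sketched numerically (a toy model with hypothetical parameters, using small real-valued deviation angles as a stand-in for the state of each microscopic element): in Rep-B each copy gets independent noise around the ideal state, so the average over the n copies concentrates like eps/sqrt(n); in Rep-C all copies inherit a shared noisy preparation, whose contribution delta does not average away as n grows.

```python
import random
import statistics

def rep_B(n, eps):
    """n independent noisy copies of the same ideal state (deviation angle 0)."""
    return [random.gauss(0.0, eps) for _ in range(n)]

def rep_C(n, eps, delta):
    """First draw one noisy version of the state, then n noisy copies of it."""
    shared = random.gauss(0.0, delta)  # preparation noise inherited by every copy
    return [shared + random.gauss(0.0, eps) for _ in range(n)]

random.seed(3)
n, eps, delta, runs = 400, 0.1, 0.1, 400
mean_B = [statistics.fmean(rep_B(n, eps)) for _ in range(runs)]
mean_C = [statistics.fmean(rep_C(n, eps, delta)) for _ in range(runs)]
s_B = statistics.stdev(mean_B)  # ~ eps / sqrt(n) = 0.005: averages away
s_C = statistics.stdev(mean_C)  # ~ delta = 0.1: does not shrink with n
```

Note that in both machines a majority-style readout still recovers the logical bit when delta is small, which matches Gil’s point that the more permissive Rep-C statistics nevertheless enable robust logical bits.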
Let’s say that E(|0>) means I write down “0″ on a piece of paper, and E(|1>) means I write down “1″ on a piece of paper. I think that yes, I can choose to write down “0″ or “1″ with very low probability of mistakenly writing the wrong number, although there will be small fluctuations in the shape of my pen stroke. (If you don’t want to think about free will, imagine that “I” am a well-trained scribe taking dictation.)
If you think that I’ll accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>), (i.e. E(psi), for psi a noisy version of |0>) then this is essentially the same as me having a 1% chance of accidentally writing down “1″. Worse, since the conjecture applies to EVERY error-correcting code, if I imagine myself encoding 0 as 00000000 (again written in pen) and 1 as 11111111, then there will be a 1% chance that I will want to write 00000000, but instead write 11111111 (after all, you posit that the pre-encoding noise rate is a universal constant, independent of the code used). Additionally, if I imagine that a 1 is encoded as 10101010, then I have a 1% chance of writing this string instead, so the behavior cannot be separated from my intentions.
Put it all together, and you get why I think the conjecture is unworkable.
Aram, what you describe is classical error correction after robust classical information has already emerged. I am not sure what conjecture you refer to at the end. Here we are not talking about Conjecture 1 but about the mechanism by which classical information emerges from the quantum noisy mess. (Regarding Conjecture 1, I agree that Rep-B is relevant, and I explained what the gap in your formulation was and why Conjecture 1 does work for the repetition code.)
> “I never understood the distinction between “analog” and “digital” even on the intuitive level.” (Gil Kalai)
OK, but this has a big consequence: computer science and its algorithms should be replaced by a wider, more general “theory of problem solving” that includes all kinds of methods – whether digital or analog.
“If you think that I’ll accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>), (i.e. E(psi), for psi a noisy version of |0>) then this is essentially the same as me having a 1% chance of accidentally writing down “1″. Worse, since the conjecture applies to EVERY error-correcting code, if I imagine myself encoding 0 as 00000000 (again written in pen) and 1 as 11111111, then there will be a 1% chance that I will want to write 00000000, but instead write 11111111 (after all, you posit that the pre-encoding noise rate is a universal constant, independent of the code used). ”
Huh, I see another potential problem with your argument, Aram. When you say “essentially the same” don’t you implicitly assume that measurements commute with the evolution? When you accidentally prepare E(sqrt{0.99}|0> + sqrt{0.01}|1>), and I measure to detect the string of bits it represents I will not get a string of 1s.
No, my mistake, there is no problem of this kind with Aram’s argument above.
Serge,
Like you, I thought Gil’s remark was strange. I imagined the analog-wave/digital-particle dichotomy was built into quantum mechanics.
That model is a bit counter-intuitive for most folks, and I am not sure how you jump over that conceptual barrier.
On the other hand; it might be useful to escape the surly bonds of that dichotomy on a very abstract, theoretical level. The distinction may not be relevant.
When you google the terms analog and digital, the definitions range from the mundane mechanical/electrical distinction to the more difficult continuum/discrete.
One of the more interesting dichotomies goes something like this:
Smoke signals and symbols are digital with discrete elements. Any computation using symbols is basically digital and requires a language. Analog computing does not require a language.
At the risk of over simplification:
Theoretical guys use deductive logic and manipulate symbols to “prove” a boson exists. Analog guys build super colliders and manipulate things to demonstrate the boson exists.
Einstein is a curious example of a theoretical guy who looked for analog counter examples to quantum mechanics.
Maybe Gil meant that all discrete codes rely on the variation of a continuous magnitude, which is in turn discrete at the quantum level… For technical purposes the distinction digital/analog makes sense but it’s probably less useful in a discussion of quantum computing.
Serge and Blair, what I meant was that I did not understand how to put the distinction between analog and digital on formal grounds. It is often claimed that analog computers cannot work as well as digital computers, and again, I did not understand what precisely this means and how to formalize it.
Thanks for the clarification, Gil.
It might be amusing to try and develop a formal distinction between analog and digital computations by focusing on the reliability of each type of computing.
John Sidles’ metaphor about the scales of justice reminded me of Archimedes’ “Eureka” moment. If the king is worried about being cheated out of gold by the crown maker, you can balance the weight of the crown with x amount of gold and then balance the weight of the water displaced by the crown with the amount of water displaced by the x amount of gold.
An all analog exercise, no digits required.
For the sake of a philosophical argument, it might be interesting to think of the other extreme and view a deductive proof as a digital calculation that assigns only one of two values to a theory; for example, 1 if it can be proven to be true, 0 if it is proven to be false.
Each step of the proof can be verified; 1 if the step is valid, 0 if it is not. A digital exercise, no analog computing required.
View Heisenberg’s Uncertainty Principle as a possible limiting factor on pure analog computing.
View Gödel’s Incompleteness Theorem as a possible limiting factor on pure digital computing.
Do the Red Queen thing and think about some of the possibilities and impossibilities before breakfast or the second cup of coffee, whichever comes first.
Maybe it can be useful to clearly describe the various (related) questions that we discuss in the post itself and the comments.
1) The classical error correction test.
Aram devoted his first post to this issue that we both regard as important. In this post, I gave my response to this point.
2) Conjecture 1.
Aram described a new detailed mathematical description of my conjecture and explained how Conjecture 1 fails even for the repetition code. I explained that Aram’s formulation is incomplete and that his interpretation of the conjecture is incorrect, and offered a formal version of my own.
3) My critique on Preskill’s model: the distribution of the number of errors.
I expect that the kind of weak correlation in Preskill’s recent model (as in earlier ones) implies a Gaussian distribution, and square-root-n fluctuations, for the number of errors. This suggests, in my opinion, that Preskill’s correlation assumptions are not realistic, regardless of the issue of quantum fault tolerance.
4) How to describe the repetition process required for classical robust information to emerge in a quantum noisy world.
I expect that Rep-C is a better description compared to Rep-B and Aram disagrees.
5) The microscopic description of a single bit in digital computers.
I guess that square-root fluctuation for the number of up-spins is not realistic even for description of digital memory. (Here we are talking about the microscopic description of a single bit.) Aram and Robert disagree.
I forgot
6) Classical non-FT evolutions
How can we define and study (if at all) classical evolutions that do not exhibit classical FT, and how can we measure the computational power of a specific class of classical evolutions?
In particular, do 2-dimensional Navier-Stokes evolutions support classical FT? Can such evolutions be approximated (at all scales) by bounded-depth (probabilistic) circuits?
Gil, as a further addendum, please let me commend the recent quantum research by Harvard’s Alán Aspuru-Guzik and colleagues, which (IMHO) substantially extends your classification of FTQC-relevant research issues.
In increasing order of practical specificity, three recent works (all open-source) by this group are:
• “Computational complexity in electronic structure“  (arXiv:1208.3334, 2012) A readable summary of standard computational complexity theory, as viewed through the lens of quantum chemistry.
• “Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation“  (Scientific Reports, 2012) This article presents a novel view of quantum computation as Kohn-Sham dynamics.
• “Finding low-energy conformations of lattice protein models by quantum annealing“  (Scientific Reports, 2012) This article applies the above computational framework to practical problems in protein folding … using D-Wave computers! :)
An appealing aspect of this research (IMHO) is that the same algorithms that are proving to be heuristically efficient also are proving to be geometrically natural. Very plausibly, Nature is thereby sending a message to us … but what is that message? :?:
It was disappointing (to me) that (apparently?) little or none of this groundbreaking work was featured at this year’s QIS Workshop in Computer and Natural Sciences: 2012 … almost certainly this lack will be remedied in years to come, as these physical and mathematical ideas gain traction.
And in particular, this work contains ideas that (as it seems to me) are profoundly relevant to issues that are at the beating heart of the Kalai/Harrow debate.
My first blog discussion about quantum error correction and QC skepticism was on The Quantum Pontiff in 2006. This was my first (belated) encounter with blogs in general, about a year after I got actively interested in quantum fault-tolerance. I was surprised to learn that Aram started his blog debates with QC skeptics even earlier! On his own blog at the time, “Scrofulous” (what does it mean??), Aram criticized a 2004 paper by Robert Alicki in a post entitled “Quantum error correction fails only when you don’t use it.”
Regarding this comment:
5) The microscopic description of a single bit in digital computers.
I guess that square-root fluctuation for the number of up-spins is not realistic even for description of digital memory. (Here we are talking about the microscopic description of a single bit.) Aram and Robert disagree.
Here, I must admit that I am quite curious about how things go, and there are reasons to believe that the outcome could go either way. The description that Aram and I discussed was of a logical bit in a classical memory (a hard drive, say) which is based on many electrons, each having an up or down spin. Note that this is different from the question raised in item 4 about classical information emerging from the noisy quantum world, in that the electrons themselves already represent probabilistic classical information. (I.e., we expect that every electron is represented by spin up or spin down, but not by a noisy superposition of the two.) So this is a scenario where we can believe more easily that normal distribution and square-root fluctuations are the rule.
Nevertheless, I wonder if we cannot expect additional fluctuations which are proportional (with a small constant) to the number of electrons. Note that robust logical classical bits remain possible even under such fluctuations.
Terry Tao published a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper, “Finite time blowup for an averaged three-dimensional Navier-Stokes equation.”
The paper proves finite-time blowup for an averaged Navier-Stokes equation. These equations come quite close in various respects to the NS equation itself. The proof of the finite-time blowup theorem uses a computational-looking gadget, and also some form of fault-tolerance. A plan for implementing similar ideas for the NS equation itself is mentioned.
Gil, please let me commend to Gödel’s Lost Letter readers your own (wonderful) observations regarding the (many) parallels of Navier-Stokes dynamics versus quantum dynamics, which are posted on your website Combinatorics and More as the essay “Navier-Stokes Fluid Computers.”
I agree that Terry Tao’s findings are mathematically rich, scientifically interesting … and even (in the long run) may turn out to be of considerable practical importance to engineers and medical researchers (like me). Plus they are fun!
Aside: Students (especially) who wish to learn about dynamical singularities may enjoy Donald Saari’s and Zhihong “Jeff” Xia’s lively and accessible introduction to this topic, which is available free-as-in-freedom from the Notices of the AMS as the article “Off to Infinity in Finite Time” (1995).
Thanks, John. One of the motivations for some of my comments/questions above about the computational power of various classes of evolutions is that the principle of “no fault-tolerance” can be of importance also in the classical case. (But perhaps it will be harder to put it on formal grounds in the classical case.) Such a principle has a thermodynamical flavour.
Gil, please let me agree with you that remarkably many open problems in quantum mathematics, science, engineering — and medicine especially! — have (in your excellent phrase and spelling) “A thermodynamical flavour.”
Obviously a great many clinical challenges in medicine — regenerative healing especially — are contingent upon a local entropic decrease. And it’s regrettable that practical thermodynamical challenges are not commonly phrased in language congenial to mathematicians. As Vladimir Arnol’d memorably said
It’s regrettable too that a very considerable portion of 20th century work by great scientists (Dirac, von Neumann, von Kármán, Onsager, and even Feynman too) remains unpublished and/or classified and/or held as trade secrets.
For me as a medical researcher, a very great practical advantage of the Kalai/Harrow quantum debate — an advantage too of the class of quantum challenges that includes FTQC and BosonSampling — is that state-secrets and/or trade-secrets do not dominate the mathematical and scientific discourse.
In this regard the 21st century is (so far) proving to be wiser than the 20th. Hopefully this will continue!