Okay, no sex, but a discussion about quantum computers.

Steven Soderbergh directed the famous movie Sex, Lies, and Videotape. It won the Palme d’Or at the 1989 Cannes Film Festival, and is now in the United States Library of Congress’ National Film Registry for being “culturally, historically, or aesthetically significant.”

Today I want to explore claims about quantum computation.

With all due apologies to Soderbergh, his title seems perfect for our discussion. There may be no sex, but there is plenty of sizzle surrounding quantum computation. We just finished a thorough and wonderful debate here on the feasibility of quantum computers; see this for part of the debate. It was ably moderated by Ken, and the two advocates were Gil Kalai, against, and Aram Harrow, for.

While there are still many interesting issues to add to the debate, our discussion is not about whether quantum computers are feasible or not. We will stipulate that they can and will be built, at some future time. The issue is about the present:

What has been proved about them so far?

And sadly we will discuss: what is believed to be proved but has no proof.

Claims

There are many claims in the literature on quantum computers that I would like to address here. Some are right, some are wrong, and some are at best misleading. Let’s start.

Quantum Computers have been proved to be more powerful than classical. Wrong.

This has been repeated many times and is often claimed in the literature. But there is no mathematical proof that a quantum computer that runs in polynomial quantum time cannot be simulated in polynomial classical time. None. Let me repeat that. There is no such proof. It is an open problem whether

$\displaystyle \mathsf{P} = \mathsf{PSPACE}.$

If this is true, then quantum polynomial time equals polynomial time: since ${\mathsf{BQP} \subseteq \mathsf{PSPACE}}$, the equality ${\mathsf{P} = \mathsf{PSPACE}}$ would collapse ${\mathsf{BQP}}$ to ${\mathsf{P}}$. Okay, most do not believe that this is true, but we are talking about what is proved. Nor is there any speedup theorem about improving the exponent ${k}$ in a general ${n^k}$-time algorithm when you have ${n}$ qubits vis-à-vis ${n}$ bits. Thus from a mathematical viewpoint it is clear that there is no proof that quantum is better than classical. None. Zero.

Quantum Computers can harness exponential parallelism, trying every possible solution at once. At best half-true.

Quantum computers can create superpositions of ${2^n}$-many basis vectors, each representing a string in ${\{0,1\}^n}$ that can be a trial solution. However, the best-known quantum algorithm for finding a single solution still has exponential running time, order-of ${2^{n/2}}$, and this is tight in black-box models (see below). The allowed linear algebra operations restrict the way this parallelism can be exploited. Scott Aaronson in particular has expended much effort debunking claims that quantum computers are imminently effective at solving certain NP-hard problems.
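To make the “half-true” part concrete, here is a toy sketch (ours, not from the original discussion) of why a superposition over all ${2^n}$ candidates does not by itself solve search: by the Born rule, measuring a uniform superposition returns each candidate with probability only ${2^{-n}}$, no better than a classical random guess.

```python
import numpy as np

n = 4                      # toy number of qubits
N = 2 ** n

# Uniform superposition over all N basis states: amplitude 1/sqrt(N) each.
# Every n-bit trial solution is "present" in the state at once.
state = np.full(N, 1 / np.sqrt(N))

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2

# Each candidate is observed with probability only 1/N, so naive
# "exponential parallelism" is no better than random guessing;
# amplitude amplification is needed to do better.
assert np.allclose(probs, 1 / N)
assert np.isclose(probs.sum(), 1.0)
```

Amplitude amplification improves this to order-of ${2^{n/2}}$ queries, which is exactly the tight black-box bound mentioned above.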

Quantum Computers can factor integers faster than classical ones are known to. Right.

But misleading, especially when the “are known to” part is sloughed off. There is no proof that factoring cannot be done classically in polynomial time. None. The best known factoring algorithms are indeed super-polynomial, but there is no mathematical proof that they are optimal. So tomorrow, or next week (or secretly already?), there could be a classical polynomial-time factoring algorithm. Peter Sarnak, for example, is on record as believing this. I am too. But beliefs aside, there certainly could be such an algorithm.

Quantum Computers have been proved to be more powerful than classical in the black box model. Right. But this is at best misleading; at worst ${\dots}$

There are proofs that quantum machines can out-perform classical ones in this model. But the model is unfair. The classic machine gets only oracle access to some black-box function, say ${f(x)}$. Its job is to find some ${s}$ so that ${f(s)}$ has some property. As with oracle results in complexity theory, it is often easy to show that this requires a huge exponential search.

What about the quantum machines? They can zip along and solve the problem in polynomial time. So this proves they are more powerful—right? No. The difficulty is that the quantum machine needs to have the function ${f}$ encoded as part of its quantum circuit. So the quantum computation gets the circuit representation of the black box, or the box is not black. The box is a white box—we can see the wires and gates that implement it.

This seems unfair, and is at best misleading. The fair comparison is to allow the classic machine the same ability to see the circuit representation of the box. The trouble now is the proof disappears. Without the black box restriction there is no mathematical proof that a classic machine cannot unravel what the box does and cheat in some way. So this is at best misleading. We also tried to see what happens if we open the box for the classical algorithm, here and here.

Quantum Computers cannot be efficiently simulated by classical, even for fifty qubits. Wrong.

I heard this the other day, and it is also stated on the web. For instance, Neil Gershenfeld was quoted by Julian Brown in his 2002 book Minds, Machines, and the Multiverse: The Quest for the Quantum Computer as saying,

“Round about fifty qubits is where you begin to beat classical computers. What that means is that with custom hardware tuned for computation with spectroscopy, you could just begin to graze the point where classical computers run out.”

Yes this was over a decade ago, but petabyte-scale computing was on the horizon then. Note that the interest in this question is quite reasonable, even though fifty qubits are way too few to implement any interesting quantum algorithm, certainly with the overhead of current fault-tolerant coding. The thought goes that fifty qubits may, however, be sufficient to do something interesting. Perhaps they can solve some physical question about a real system? A very interesting question. Let’s turn to discuss this question and scale in more detail.

The Fifty Bit Problem

The challenge is to figure out how we can simulate on a classical computer a quantum computation that uses at most fifty qubits. This is not nearly enough qubits to do anything interesting for cryptography, but it makes for a nice question. The obvious way to simulate such a quantum computation is not impossible for a classical machine, but it is still not a simple computation. Thus the challenge: can we do this classically? Can we run a fifty-qubit quantum computation on a laptop? Or on a server? Or in the cloud?

The obvious solution is to write down the initial vector of size ${N=2^{50}}$ and start applying the quantum gates to the vector. This immediately hits a problem, since such a vector is really large. But it is not that large. At one byte per entry the size is “just” a petabyte (at one bit per entry, 1/8 of a petabyte), though full double-precision complex amplitudes would need sixteen times more. The MapReduce cloud framework has already been used to carry out some basic algorithms on petabytes of data.

Quantum operations are notionally sparse, each affecting only a small constant number of qubits, generally up to three. Under the standard encoding, each qubit splits the long vector into a measurement set for a ${1}$ value and a set for the ${0}$ value, each of size ${N/2}$. However, under different encoding schemes the operations could be local. In particular, many important quantum states, before and after some common quantum circuit operations, are succinct. There is scope for doing simulations analogous to what happens with homomorphic encryption, whereby operators might work directly on the succinct functional representations.
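As a sketch of this locality (our own illustration; the function names are ours), here is the “obvious solution” in miniature: once the state vector is reshaped into an ${n}$-dimensional ${2\times\cdots\times 2}$ array, a single-qubit gate touches only one axis. For ${n=50}$ the flat vector has ${2^{50}}$ entries, exactly the petabyte-scale object discussed above; for small ${n}$ this runs on a laptop.

```python
import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit state vector.

    Reshaping exposes each qubit as its own length-2 axis, so the gate
    multiplies along a single axis -- the sparsity/locality noted above.
    """
    psi = state.reshape([2] * n)           # one axis per qubit
    psi = np.moveaxis(psi, target, 0)      # bring the target axis to front
    psi = np.tensordot(gate, psi, axes=1)  # act on that axis only
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                                # start in |000>
for q in range(n):                            # apply H to every qubit
    state = apply_1q_gate(state, H, q, n)

# Result: the uniform superposition, amplitude 1/sqrt(2^n) everywhere.
assert np.allclose(state, 1 / np.sqrt(2 ** n))
```

Each gate application here costs ${O(2^n)}$ work, which is why the naive method, while conceptually simple, becomes a genuine large-data problem at fifty qubits.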

It strikes us that simulating a fifty-qubit algorithm classically is a concrete large-data kind of problem, one that may be interesting for its own sake. This stands apart from recent work on general “de-quantization” of circuits and algorithms, for which Maarten van den Nest’s arXiv page is a good place to start.

Open Problems

What is really going on with the information structures that are acted on by quantum computers? We have come to doubt that they are ${2^n}$-long bit-vectors in any meaningful sense. Is there a better general way to represent them?

Even with the ${2^n}$-vector representation, for ${n = 50}$ qubits, can we make classical simulations concretely efficient?

1. April 27, 2013 10:22 am

Don’t know much about quantum computation, but my ventures in graphical syntaxes for propositional calculus did turn up a logical operator whose evaluation process reminded me a little of the themes involved in the collapse of the wave function.

Will post a few links below, as it sometimes takes me several tries …

• April 27, 2013 10:48 am

Here is the essential information on minimal negation operators.

• April 27, 2013 2:16 pm

Boolean formulas constructed from minimal negation operators can be given graph-theoretic representation as “decorated” or “painted” versions of rooted cactus graphs.

Here is a place where you can see some pictures and a description of the Fundamental Evaluation Rule for cactus expressions of propositional formulas.

2. April 27, 2013 10:24 am

There is a serious misunderstanding in your explanation of the claim “Quantum Computers have been proved to be more powerful than classical in the black box model.”

You say, “The difficulty is that the quantum machine needs to have the function {f} encoded as part of its quantum circuit. So the quantum computation gets the circuit representation of the black box, or the box is not black. The box is a white box—we can see the wires and gates that implement it.”

Where do you get this idea from? The quantum machine also has black box access to the function f, just like the classical algorithm. A classical machine would have access to a black box which performs x → f(x). Since this is not reversible, a reversible classical machine would get access to something like (x,b) → (x, f(x)+b), where b is a bit.

A quantum algorithm gets access to exactly this oracle. However, it is allowed to make queries in superposition. If you take away this ability, then you would make the classical and quantum query complexity models the same, and of course no proof of a speedup is possible.
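The reversible wrapper described above can be sketched in a few lines (a toy illustration; the function names are ours). The classical version answers one (x, b) pair per query; the quantum version applies the same map linearly to a superposition of queries.

```python
def make_reversible_oracle(f):
    """Wrap a Boolean function f as the reversible map (x, b) -> (x, f(x) XOR b).

    XOR-ing the answer into the extra bit b makes the map its own inverse,
    so it is reversible even when f itself throws information away.
    """
    def oracle(x, b):
        return x, f(x) ^ b
    return oracle

# A toy f with a single marked input; in the black-box model the
# algorithm may call the oracle but may not look inside it.
f = lambda x: 1 if x == 5 else 0
O = make_reversible_oracle(f)

assert O(5, 0) == (5, 1)      # marked input flips the answer bit
assert O(3, 0) == (3, 0)      # unmarked input leaves it alone
assert O(*O(7, 1)) == (7, 1)  # applying O twice is the identity
```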

• April 27, 2013 11:02 am

What we (especially I) feel is “white” in the quantum algorithm is its ability to use the underlying linear algebra for interpolation. That’s the thrust of the “Quantum Chocolate Boxes” posts referenced in that section, which try to level the playing field by allowing the classical algorithms access to multilinear extensions. Doing so at least seems to create open problems in place of some proved separations.

You are right that the U_f operator is not pieced apart in the quantum circuits, but its matrix-or-gates have to be involved physically in the circuit for the linear-algebra relationships to hold.

• May 1, 2013 5:17 pm

For me, I see the oracle models as reflecting the fact that even if we have a description of a (classical) circuit, we often don’t know of anything better to do with it than to simply run it and look at the output. In that interpretation, the opacity of the “black box” comes only because of our inability to solve the halting problem, or more generally to extract information from circuits beyond their input-output relations.

The factoring / order-finding connection is a good example. The function mapping x → A^x mod N is of course not a random function. But for all we know about number theory, it tells us no more about the order of A than a random function with the same period would. So it is a fruitful exercise to view it as an oracle with a periodicity promise. This perspective is what led Shor to come up with his algorithm.
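The period structure invoked here is easy to see directly. Below is a brute-force order computation (our sketch; it assumes gcd(A, N) = 1 and takes exponentially many steps in the bit length of N, which is exactly the step that Shor’s quantum period-finding replaces).

```python
from math import gcd

def order(A, N):
    """Multiplicative order of A mod N: least r > 0 with A^r = 1 (mod N).

    Assumes gcd(A, N) = 1; brute force, so exponential in the size of N.
    """
    r, y = 1, A % N
    while y != 1:
        y = (y * A) % N
        r += 1
    return r

A, N = 2, 15
r = order(A, N)
assert r == 4                          # 2^4 = 16 = 1 (mod 15)

# The map x -> A^x mod N is periodic with period r:
assert all(pow(A, x, N) == pow(A, x + r, N) for x in range(25))

# Shor's classical post-processing: an even order yields a factor.
assert gcd(pow(A, r // 2, N) - 1, N) == 3   # 3 divides 15
```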

4. April 27, 2013 5:39 pm

This post reminds me of a post by Māris Ozols earlier this year: http://marozols.wordpress.com/2012/05/21/three-myths-about-quantum-computing/

• April 27, 2013 7:21 pm

The post by Maris Ozols reminds me of why I think computer scientists are so dumb.

Ozols points out in hundreds of words that the classical analogue of teleportation, no-cloning, and dense coding is a classical Bayesian network. Duh. Last time I checked, Bayesian networks are very popular among certain computer scientists (the useful kind, not the complexity-theory doofuses).

• April 27, 2013 7:24 pm

Notice that Ozols seems to be totally ignorant of Bayesian networks

5. April 27, 2013 11:40 pm

You think maybe, perhaps, it is possible that with the most powerful supercomputers now available you might conceivably be barely able to simulate a quantum computer with 50 qubits. OMG, what about 60 qubits?!?

• April 28, 2013 4:16 pm

We hint well enough IMHO that we really mean classically simulating “O(50)” qubits.

6. April 28, 2013 9:01 am

One thing that quantum computers can do (possibly even with just log depth quantum circuits) is to approximate-sample the Fourier transform of a Boolean function in P.

Actually it looks to me that this can be regarded as an interesting complexity class that can be called P-FOURIER-SAMPLING. Come to think of it, it seems interesting given a class X of Boolean functions what is the computational complexity of approximating sampling the Fourier transform of functions in X.

And is BQP closed under this? If you consider all Boolean functions that can be described by a quantum circuit, can you sample their Fourier transform also in BQP? (I hope this is not silly.)

• May 1, 2013 5:09 pm

Yes, you can do this. Amplify the BQP circuit so its error is much less than exp(-n). Use it (*) to create the superposition over each n-bit string x, where ket(x) has phase (-1)^f(x). (Sorry, not using funny symbols because you never know when they will break your post.) Then FT that.

The (*) step involves running the amplified BQP circuit on input x, applying a phase to the answer bit, and then running the amplified circuit backwards to erase the garbage it produces along the way.
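For tiny circuits one can check this construction classically (our sketch, exponential-time by design): build the ±1 vector (-1)^f(x), apply the n-fold Hadamard (Walsh–Hadamard) transform that the quantum circuit performs with n gates, and square the amplitudes to get the sampling distribution.

```python
import numpy as np

def fourier_sample_dist(f, n):
    """Distribution produced by Fourier-sampling f, computed classically.

    Builds the normalized vector of (-1)^f(x), then applies a Hadamard to
    each qubit axis in turn -- exponential work here, n gates quantumly.
    """
    N = 2 ** n
    v = np.array([(-1) ** f(x) for x in range(N)], dtype=float) / np.sqrt(N)
    for q in range(n):
        psi = np.moveaxis(v.reshape([2] * n), q, 0)
        psi = np.stack([psi[0] + psi[1], psi[0] - psi[1]]) / np.sqrt(2)
        v = np.moveaxis(psi, 0, q).reshape(-1)
    return v ** 2

# Parity is a single Fourier character, so all the sampling mass
# lands on the all-ones index.
parity = lambda x: bin(x).count("1") % 2
p = fourier_sample_dist(parity, 3)
assert np.isclose(p[0b111], 1.0)
assert np.isclose(p.sum(), 1.0)
```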

• May 3, 2013 2:46 am

That’s interesting. Now suppose that you start with P-SAMPLING (distributions that can be approximated by a probabilistic classical machine), and suppose that you close your class of distributions under taking Fourier transforms. What class will you get? Let’s call this class X. X is a subclass of QUANTUM-SAMPLING. Maybe it is also a subclass of polylog-depth quantum sampling (but I am not sure about it). Do we know that the ability to sample the Fourier transform already implies some quantum power? Is it the case that X contains distributions that can be approximately sampled by a log-depth quantum computer (or polylog depth, etc.)?

7. April 28, 2013 12:37 pm

I still do not see what is different between classical computers and quantum ones. The only difference I see is the non-analyticity of the measurement; e.g., z z* is not analytic. In general it seems that physics in general, and QM in particular, is arranged around a non-analytic singularity, with the gravitational, electromagnetic, and strong forces being the epsilon approximation of this singularity.

8. April 28, 2013 3:14 pm

If quantum computers are indeed able to better simulate reality, then they might also have their say in the sex business…

9. April 28, 2013 3:44 pm

In communication complexity there are some exponential quantum-classical separations that are both proven and physically realistic (i.e., no white-box/black-box fudge).

10. April 29, 2013 1:49 pm

Thus from a mathematical viewpoint it is clear that there is no proof that quantum is better than classical. None. Zero.

enjoy your blog esp on subjs like this but your writing sometimes veers a bit over what is proven, what is not proven, what is conjectured by others [the majority], what is widely believed, what is conjectured by you, & seems sometimes to be missing fine print etc… eg you tend to be a little too optimistic compared to the majority, eg on stuff like P=NP. the above quote does not seem to be phrased too strictly. what is clear is that no proof exists today either way. it’s important to underline that you are not asserting a no-go theorem, whereas the above statement sounds like it could refer/allude to one. you had a long blog about “no-go” theorems and the subject can be very confusing without very strict qualifications. basically much of what you are talking about in this post are “huge open questions” at the forefront of research. in other words the best statement on many is we don’t know “yet” and then there could be some discussion of [despite that] what is widely believed….

• May 1, 2013 5:53 pm

Agreed. There is no proof that NEXP doesn’t have log-depth circuits. None. Zero. Cryptographers could be totally wasting their time.

• May 1, 2013 6:22 pm

joke? yes, as far as the open questions in complexity theory wrt complexity class separations etc, they are remarkably wide-ranging; some have small, others have huge implications, as Lipton pointed out eloquently in his blog: “we believe a lot but can prove little”. as for the speed of quantum vs classical computation, it is quite respectable & judicious of Aaronson & Lipton et al. (& Harrow) to try to debunk popular misconceptions that QM could be something akin to a computational “free energy device”, with (in Aaronson’s case) near outright ridicule of such a belief/conjecture. however, suppose QM is/turns out to be “somewhat” but not “dramatically” [theoretically] stronger than classical computing. that is arguably not at all a radical position.

• May 2, 2013 11:51 pm

Yes, my point was a joke, meant to agree with your point.

11. April 29, 2013 1:55 pm

as for the relationship between parallelism and QM agreed so-called “exponential parallelism” you refer to seems unlikely and is widely rejected by conventional wisdom, but imho the connection between parallelism & QM is still basically an open question & being addressed by active research….

12. April 29, 2013 2:09 pm

heres another somewhat closely related studied question that seems basically open. feynman asserted that QM computers would be inherently good at QM physics simulations, which seems intuitively plausible/correct, but imho its more an open question. What is the proof that quantum computers can efficiently simulate arbitrary quantum mechanical systems?

• April 29, 2013 3:54 pm

A simulation is useful as a computation only if it runs faster than the system being modeled, otherwise we could just wait around for the original system to “compute” its result. Looks like we have another Zeno-type problem here.

13. April 29, 2013 4:18 pm

I don’t read much news about quantum computing. But can somebody enlighten me on the ETA for an 8-qubit computer?

14. May 1, 2013 6:36 pm

ps rjl your movie refs are always fun but boy are they verging on non sequiturs at times, with relevance to the topic hanging by a thread. so! am taking license to be equally off topic as follows. other way cool soderbergh movies, highly recommended [caveat! like your own ref, defn not for the faint of heart]:

15. June 20, 2014 9:14 am

Well, in theory they could change the atom nucleus position, so in the future it could get to 3^n in quantum computing power.
I guess you are mistaken: quantum really is the future, and you said it yourself, even if now the power is somewhere around 2^(n/2), because n is the number of atoms.
If you think of the atoms as computer threads, for sure they will be slower than a normal computer thread, but that’s not that important, because you can add plenty of them. Imagine having over 50k atoms inside a quantum processor; for sure it will be much smaller than a normal PC and way faster. Right now with D-Wave technology they only compare a quantum PC with just about 512 atoms (I may be mistaken), and they see performance similar to a PC (not bad in my opinion, considering that they are using just a few atoms).

Indeed it may not get faster than classical computers, but you are missing one BIG thing. You can increase the number of atoms, to have a greater number of threads than any normal PC.
With just 20000 atoms you would be able to reach somewhere around 2^10000 (2^(20000/2) at the current evaluations) “threads”, which is just a simple case. In the future they will be able to add more atoms for sure.
So a quantum computer may not calculate faster if you compare single atoms, but the advantage is that you can add plenty of atoms, and in that case the parallelism power of “quantum” > average PC.