# Is Factoring Really In BQP? Really?

*Is the factoring problem in the complexity class BQP?*

Peter Shor is the author of **the** paper on quantum computing—the paper that helped create an entirely new research agenda. It was not the first paper on quantum computing, and it certainly is not the last, but it probably is the most exciting one. Peter won the Nevanlinna Prize in 1998, among many other well-deserved awards for this seminal work. The paper is known as the one that shows factoring is in BQP.

Today I have an embarrassingly simple question:

Is factoring in BQP?

This is one of those discussions that perhaps will show how wrong I can be. I try to get things right, but we are all human. Take a look at this Sunday New York Times’ chess column to see examples of chess blindness: one example is Magnus Carlsen, the top rated player in the world, making a “colossal blunder” that basically lost a Knight in one move. In my case the blunder may even be in a position where I made the right move 17 moves ago. Back then I was involved with Umesh Vazirani and Ethan Bernstein over measurement and precision issues in their famous seminal paper on quantum Turing machine complexity, which formally defined BQP. Perhaps my question today is just silly, but I am going to press forward anyway. Even if I am wrong, perhaps this will yield a nice intuitive answer to the other way of putting my question:

Is there a simple way of reducing search problems to decision problems in quantum computations, especially those that involve sampling?

Ken Regan and I have discussed this issue quite a bit and the comments to follow are joint. If this turns out to be just wrong or not even wrong, then I am probably the one to blame the most—if it’s a good question then thanks go to Ken.

** The Class BQP **

If you know the quantum complexity class BQP, then you can skip ahead. But we thought it important to be sure we are all together. Recall BQP is the quantum counterpart of BPP—does that help? The complexity class BPP is the set of languages that are accepted by a *probabilistic* machine that runs in polynomial time. Further, when the machine accepts it does so with probability at least 2/3, and when it rejects it also does so with probability at least 2/3. The consequence of forcing accept/reject to be *bounded* away from 1/2 is that this makes the class of languages “nice.” Note that if a language L is in BPP, then it is possible to determine whether a string is in L with high probability: just run the machine repeatedly with different random bits and take the majority answer.
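The majority-vote amplification is worth seeing concretely. Here is a minimal classical sketch in Python; `noisy_decider` is a hypothetical stand-in for a bounded-error machine (deciding the toy language of even numbers), not any real BPP algorithm.

```python
import random

def noisy_decider(x, rng):
    # Hypothetical stand-in for a BPP machine: it answers the
    # (toy) question "is x even?" correctly with probability 2/3.
    correct = (x % 2 == 0)
    return correct if rng.random() < 2 / 3 else (not correct)

def amplified(x, runs=201, seed=0):
    # Run the bounded-error machine many times with fresh random
    # bits and take the majority vote; by a Chernoff bound the
    # error probability drops exponentially in the number of runs.
    rng = random.Random(seed)
    votes = sum(noisy_decider(x, rng) for _ in range(runs))
    return votes > runs // 2
```

With 201 runs a Chernoff bound pushes the error probability well below 10^-4; this amplification is why the particular constant 2/3 in the definitions of BPP and BQP is only a convention.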

The class BQP is the same except that now the probabilistic machine is replaced by a polynomial time quantum one. The machine also must have the bounded-error property: it accepts or rejects correctly with probability at least 2/3. This again is the reason for the *bounded* in the name.

Let’s turn to factoring, one of our favorite problems.

** What Shor Proved **

Shor showed that there is a polynomial time quantum computation that can find a factor of a number. It succeeds with a reasonable probability, and can be repeated, if needed, to find all the factors of the number.

This is a brilliant result, a wonderful result, a game changing result.

** What Shor Did Not Prove **

Peter’s paper does not directly prove that factoring is in BQP. Yet everyone knows that factoring is now in BQP. Right? The claim is everywhere: it’s on wiki pages, quantum sites, in lecture notes, and elsewhere. But we do not understand the proof—actually one of us, guess who, thinks there is a problem with the “proofs.”

Let’s look at the issue more carefully.

** Factoring **

The factoring problem is about decomposing an integer into its prime factors. Thus given N as the input, a factoring algorithm must output a list of primes

p_1 ≤ p_2 ≤ ⋯ ≤ p_m so that p_1 p_2 ⋯ p_m = N. Note if we insist that the list is ordered, then the answer is unique, since the integers have unique factorization. Technically this is a *function*: given an input it returns a unique output. Thus if F denotes the factoring function, some examples are F(12) = (2, 2, 3) and F(97) = (97).

Since factoring is a function we cannot say that it is in a complexity class of languages—that makes no sense. It is a type error, like trying to take the square root of a character string: languages give only one bit, while the factoring function returns many bits. This has nothing to do with the power of the complexity classes; it is a straightforward consequence of their definitions. Thus it is meaningless to say that F is in BQP, or even in P.
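To make the type distinction concrete, here is a toy classical implementation of the factoring function F (trial division, purely for illustration); note that its output is a whole list of primes, not a single accept/reject bit.

```python
def factor(n):
    # Reference implementation of the factoring *function* F by
    # trial division: returns the ordered list of prime factors
    # (with multiplicity), so that their product is n.  The point
    # is that the output is many bits, not one bit.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```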

Shor’s paper itself speaks of BQP only once, and then as a function class, but of course it is a language class. Thus we need to speak of factoring as a decision problem.

** How To Encode Factoring As A Language **

There are, as you probably know, many ways to encode a function into a language. This is, therefore, what people mean when they say “factoring is in BQP.” Sounds easy—we just have to create a language L that encodes F and then show that L is in BQP. Some sites define the following language as their encoding of factoring. The language is:

L = { (x, k) : x has a nontrivial factor smaller than k }.

Is this a reasonable language to encode factoring? It must satisfy, we claim, two properties:

- Given an x there must be a polynomial time algorithm that computes F(x) if allowed access to L as an oracle.
- Given an (x, k) there must be a polynomial time algorithm that determines whether or not (x, k) is in L if allowed access to F.

What these conditions really say is “the language captures factoring, but no more.”

The statement (1) is easily seen to be true: use binary search on k to find in polynomial time a factor of x. Divide x by this factor, and then continue until all the factors are found. Finally sort them and output the value of F(x). Perfect—so far so good. The statement (2) is also easy to check.
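Here is a sketch of statement (1), with the oracle for L simulated classically; the names `in_L`, `smallest_factor`, and `factor_with_oracle` are ours for illustration, and the binary search uses polynomially many oracle queries in the bit-length of x.

```python
def in_L(x, k):
    # Oracle for L = {(x, k) : x has a nontrivial factor < k}.
    # Simulated classically here; in the argument above it would
    # be the (quantum) decision procedure.
    return any(x % d == 0 for d in range(2, min(k, x)))

def smallest_factor(x):
    # Binary search for the least k with (x, k) in L; the smallest
    # nontrivial factor of x is then k - 1.  Returns None if x is
    # prime (no nontrivial factor exists).
    if not in_L(x, x):
        return None
    lo, hi = 2, x  # invariant: in_L(x, lo) is False, in_L(x, hi) is True
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if in_L(x, mid):
            hi = mid
        else:
            lo = mid
    return hi - 1

def factor_with_oracle(x):
    # Statement (1): recover the full factorization F(x) with
    # polynomially many oracle queries -- peel off the smallest
    # factor, divide it out, repeat, and finally sort.
    factors = []
    while True:
        p = smallest_factor(x)
        if p is None:
            if x > 1:
                factors.append(x)
            return sorted(factors)
        factors.append(p)
        x //= p
```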

** The “Proof” **

Here is a proof that L is in BQP. It is from a **site** on quantum computing—we know you can probably find it, but we thought we would not name it directly.

The decision problem for factoring is in BQP: given x and k, is there a non-trivial factor of x smaller than k? This is easily seen to be equivalent to having a polynomial-time quantum algorithm to find a non-trivial factor of any composite number x, with probability of outputting such a factor of at least 2/3. Shor’s efficient quantum algorithm for factoring can be used to find factors in this way, and so it follows that the decision problem for factoring is in BQP.

Color us confused, since we, especially one of us, do not follow this argument.

Let’s consider the argument carefully with BQP replaced by polynomial time. In this case the argument is rock solid. Suppose that there is a new super factoring algorithm that given x finds all its factors. Then here is the proof that factoring, as the language L, is in P:

- Let (x, k) be the input.
- Run the polynomial time algorithm and factor x.
- Check if any prime factor p_i satisfies p_i < k.
- If yes, then output accept; otherwise, output reject.

This proves that factoring is in P provided there is a polynomial time factoring algorithm. Rock solid. No problem, if the algorithms are classical.
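The four steps above, written as code with a classical trial-division routine standing in for the hypothetical super factoring algorithm:

```python
def trial_factor(n):
    # Stand-in for the "super factoring algorithm": returns the
    # list of prime factors of n with multiplicity.
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def decide_L(x, k, factoring_alg=trial_factor):
    # The argument above: factor x, then accept iff some
    # nontrivial factor is below k.  Every nontrivial factor is at
    # least the smallest prime factor, so scanning the prime
    # factors (excluding x itself, which is trivial) suffices.
    return any(p < k for p in factoring_alg(x) if p != x)
```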

If the factoring algorithm is Shor’s, then we do not see why this will be in BQP. There are several reasons that we are confused, but the main one is simple: a quantum algorithm makes one measurement and gets one bit. The above algorithm uses Shor as a subroutine, and his algorithm makes a measurement of many qubits. *This seems to be a problem.* It says to us that the proof is not correct, that it does not prove that factoring as a language is in BQP.

** What Is Going On? **

Now we hasten to mention that the aspect of Shor’s algorithm needing to measure many qubits is the basis of Leonid Levin’s critique. This has been viewed as an issue of whether the quantum states created in Shor’s algorithm are really physically feasible, see Scott Aaronson here. We have a more facile query: what if you can only measure one qubit and must then start over?

Here is an abstract description of the problem we see that does not mention quantum. Suppose in the functional (i.e., search-problem) case we have a “black box” B that when we press a button at time t gives us a random function value f(x, t). This is a “multi-valued function,” really a *sampling* procedure. We also have an efficient deterministic procedure E such that with high probability over a sufficient number of samples, E applied to those samples gives us the correct answer. Shor’s algorithm can be viewed this way, and so far that’s fine.

Now, however, suppose that in place of B we have a black-box B′ that gives only one bit according to a language representation of the multi-valued function. Thus instead of getting f(x, t), we get only one bit pertaining to f(x, t). The problem is that if we press the button again at time t′, we may get a bit that pertains to some other value f(x, t′) that is not the same as f(x, t). If we could guarantee that repeated taps on B′ would stay focused on f(x, t) then we could apply search-to-decision and recover the function value f(x, t). Our problem in a nutshell is that we do not see how Shor’s algorithm carries over to this setting.

We are confused. The whole idea of quantum computing, the whole reason it is more powerful, the whole source of excitement, is due to new rules. Old simple tricks from classical computing do not work anymore. That you cannot erase—computations must be reversible—is one good example.

We think there are three possible situations:

- We are completely wrong. The proof is correct and we are just confused. *Quite possible—would not be the first time.*
- We are right, but. The above “proof” is the one written down in many places, but all quantum experts are aware of how to fix it. The real proof is so trivial that no one has ever bothered to write it down. *Again quite possible.* But we would then suggest that it might be nice to write down a correct proof—no?
- We are right. There really is a missing argument here. *Possible?*

What references seem to be telling us is that the answer is deep within closure properties of BQP rather than on the surface. But even here, the details cannot be found in the Wikipedia article on BQP—rather the article on PostBQP (a different class, one that equals PP) proves such properties for PostBQP and alludes that the proofs for BQP are similar. Can we make this clearer?

** Open Problems **

Is factoring really in BQP? We believe that it is, and we believe that there is a proper proof of this fact. But we do not think the “proof” that is out there is correct. It would be great if we have simply overlooked a well-known proof. If you know one, please tell us—somehow we think if this is the case we will be told.

I don’t really understand quantum computing all that well…but isn’t it the case that, given x and k, you could run Shor’s algorithm repeatedly until you have found all of the factors…and then accept if one of those factors is less than k, and reject otherwise? Doesn’t this solve the decision problem?

Sorry, just re-read the article more carefully and noted that you addressed what I said. I’m still confused, though…BPP is contained in BQP. Surely it’s possible to do the reduction if BPP is a subset of BQP…right?

Philip,

This is just fine, but does not show factoring is in BQP. It shows that factoring can be done by a larger class of quantum computations.

LOL … Dick and Ken’s question reminds me of the reluctant penguin phenomenon … who’s going to be the first to dive into these waters?

Plunging in fearlessly, the orthodox explanations of why Shor’s algorithm establishes that factoring is in BQP have always satisfied me, via the following reasoning. Let us suppose we are given as input (x, k), and after a lot of classical pre-processing, quantum register initialization, quantum processing, quantum error correction, measurement of as many ancilla qubits as we like, etc.—all precisely as described by Shor’s original article and elaborated by subsequent QIT researchers—when all the (polynomially many) quantum steps are completed, we end up with classical registers that store the quantities (x, k, F(x)).

As the final step, the connection to BQP is established by classically computing a single (k-dependent) classical decision bit, which is quantum-encoded in one final output qubit … and we are done. Is this right? Or am I about to be devoured by a leopard seal? :)

Again being very facile, our beef is that BQP as perceived by John Q. Public says at the end you measure 1 qubit and with 2/3 probability it gives the right decision answer—but in the process you describe, you’ve already measured many qubits. And per Levin’s critique, you’ve measured many at a time—though I realize there are steps around the further objection that this requires insane precision.

The positive query, which other events kept me from asking selected people beforehand, is has anyone formalized and studied what we might call “Bottleneck BQP”? This is where you measure 1 qubit and then everything’s zapped and you have to re-run, though based on results so far (i.e. “adaptively”) you can change the re-start configuration in some limited manner.

My (engineering-level) understanding is that every concrete design for a quantum computer, that has ever been seriously proposed, entails ancilla qubit measurements at intermediate stages of the calculation, as a necessary ingredient of quantum error correction. It is not necessary to show the results of these ancilla measurements to the outside world … but it *is* necessary to make them.

As one of Dick and Ken’s greatest Gödel’s Lost Letter columns once asked, “Are they allowed to do that?” … meaning in the present case … “At intermediate stages of a calculation, are quantum computers allowed to perform measurements on ancilla qubits, and perform classical computations on these measured values, and adapt subsequent quantum computations to the results of these measurements?” … and the consensus definition of quantum computing (AFAIK) is that the answer is “yes”.

Indeed I agree with intermediate measurements being realistic, but then do we need to clarify the definition of “BQP” to state that? The nub is in the Anonymous 8:42pm part of the comments—to reference the proverb, can one always “cut twice and measure once”?

As far as I understand, all the intermediate measurements (and operations conditioned on their results) could be replaced by (quantum-)conditional gates with only polynomial overhead – i.e. all the classical post-processing could be done coherently on a quantum computer. Then the only measurement would be the one deciding if (one of) the factor(s) found is smaller than k. What am I missing?

Lurker asks: “What am I missing?”

Ken and Dick’s questions have caused me to realize, you are missing the case in which the intermediate classical processing is accomplished by a Turing machine that is running any of the algorithms in P for which there is no proof (even in principle) that the algorithm runs in P. How *exactly* can one emulate an unprovably-in-P classical computation via polynomially many reversible logic gates? Hey … don’t ask me! :) The proof machinery associated to this kind of analysis might well be illuminating in itself … as well as being essential to a rigorous definition of BQP.

I think this is case (2). I will try to write more later, but a good place to look is at the so-called Principle of Deferred Measurement. You can defer all measurements of a hybrid classical/quantum algorithm until the very end, when you only need to measure one qubit.

Ah—the “only need to measure one” is the part we’re missing…!?

This would be nice to see. But the given proofs do not use any deferred measurements? No?

Anonymous’ post is one of several that have raised profound issues, which in aggregate have led me to agree with Dick and Ken. A good starting reference is Michael Nielsen and Isaac Chuang’s admirably concise and clear discussion of what they call the *Principle of Deferred Measurement* and the *Principle of Implicit Measurement* (Section 4.4 of *Quantum Computation and Quantum Information*).

When we translate the algorithmic postulates of Nielsen and Chuang’s text into the geometric postulates of category theory—as engineers find convenient for practical computations—then we find that the Nielsen and Chuang measurement principles are natural in both pictures (algebraic and geometric) … and yet their respective algebraic and geometric representations “sit in our brains in very different ways” (to borrow Bill Thurston’s wonderful phrase).

Thus it now seems (IMHO) that a discussion of these measurement-related issues in the context of a rigorous definition of BQP would be very interesting and valuable, precisely to the extent that it helps us “feel these ideas in our whole brain” (again to borrow a phrase from Bill Thurston).

It’s been many years since I’ve thought about this, but unless I’m misremembering something, or misunderstanding the problem (certainly possible!) this is not difficult to fix.

The key point is that if you want to use Shor’s algorithm as a subroutine in a classical computation, then you can actually convert all the parts in the classical computation to quantum, eliminating the measurement steps along the way. Formally, the procedure you apply to do this is: (1) convert the classical parts of the computation to a reversible classical computation using gates such as the Toffoli and Fredkin gates; (2) eliminate all the measurement steps, and replace bits by qubits, and classical Toffoli etc. gates by their quantum equivalents. At the end, you need to measure a single qubit: the qubit which corresponds to the classical bit you would have inspected. All the other qubits can safely be ignored.

That last paragraph is a summary. It’s instructive to work through an example which illustrates the general principle.

Suppose we have a unitary operation U which acts on (say) three qubits. At the end we measure those three qubits, and then postprocess those measurement results by doing a classical computation C, which we’ll assume is done using just Toffoli gates (say), and gives just a single bit as output. In fact, this procedure is equivalent to not doing the measurement of the three qubits, and replacing the classical Toffoli gates by their quantum equivalents, followed by measuring the appropriate qubit.

This isn’t difficult to prove, with a bit of playing around, although I won’t try it in a comment.

I hope I’ve understood the question correctly, and addressed it.

I am, by the way, assuming that you agree with the following: you can use Shor’s (quantum) algorithm as a subroutine in a classical algorithm that will, with high probability, tell you if a number n has a factor smaller than k. The total complexity is polynomial in the size of n.

That’s the starting assumption behind my comment: if we’re not agreed on this, then I have misunderstood your objection.

Incidentally, a little Googling suggests that you’ve taken the quote about BQP from the short stub article about BQP on the Polymath wiki (http://michaelnielsen.org/polymath1/index.php?title=The_complexity_class_BQP ). I wrote that stub, although I had no idea the text was mine until I Googled just now! I doubt there’s much else that is related to quantum computing on that wiki, and it’s certainly not a site on quantum computing.

Here’s an example that illustrates what’s going on. It’s not intended as any kind of formal proof, but rather as an example to explain the intuition behind what’s going on.

(1) Suppose we perform a quantum computation that results in a state \sum_x \psi_x |x>

(2) We measure the state, getting result x with probability |\psi_x|^2.

(3) We now do a (reversible) classical computation, getting result C(x) with probability |\psi_x|^2, where C(.) is some classical permutation.

(4) We consider the first bit only, C_1(x). The result is 0 with probability \sum^0_x |\psi_x|^2, where the ^0 indicates that we only sum over those x for which C_1(x) = 0. Similar considerations hold for the first bit being 1.

Now consider the following alternate series of steps:

(1′) Suppose we perform a quantum computation that results in a state \sum_x \psi_x |x>

(2′) [This step intentionally left blank, i.e., no measurement is performed]

(3′) We take the reversible classical computation from step (3), above, and replace all the gates by their quantum equivalents, e.g., classical Toffoli by quantum Toffoli, etc. We do the resulting quantum computation, with the final state being \sum_x \psi_x |C(x)>.

(4′) We measure the first qubit only, and leave the other qubits alone. The result is 0 with probability \sum_x^0 |\psi_x|^2, by the standard rules for quantum measurement.

Note that the probability for a 0 outcome in step (4′) is exactly the same as in step (4).
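For the skeptical reader, the agreement between step (4) and step (4′) can be checked numerically. The following sketch (assuming NumPy is available) draws a random 3-qubit state and uses a random permutation C of the basis states to play the role of the reversible classical computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic 3-qubit state \sum_x \psi_x |x>, as produced by step (1).
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)

# A reversible classical post-processing step C: any reversible
# circuit acts as a permutation of the 8 basis states.
C = rng.permutation(8)

# Steps (2)-(4): measure all qubits, apply C classically, read the
# first bit of C(x).  Probability that the first bit is 0:
p_measure_then_C = sum(abs(psi[x]) ** 2
                       for x in range(8) if (C[x] >> 2) & 1 == 0)

# Steps (2')-(4'): skip the measurement, run C coherently (it just
# permutes the amplitudes), then measure only the first qubit.
phi = np.zeros(8, dtype=complex)
phi[C] = psi  # phi[C[x]] = psi[x]
p_defer_then_measure = sum(abs(phi[y]) ** 2
                           for y in range(8) if (y >> 2) & 1 == 0)

# The two probabilities agree, as steps (4) and (4') claim.
assert np.isclose(p_measure_then_C, p_defer_then_measure)
```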

As Michael and others have pointed out, measuring only one bit does not limit the power of BQP, since BQP contains BPP. For example, BQP can be amplified, just like BPP, using similar techniques.

Measuring a single bit at the end is really just a convention, due to the fact that languages demand one bit answers (accept/reject). In this sense, BQP is just like BPP.

However, “Bottleneck BQP” probably is weaker. One way to formalize this is to consider BPP^{BQNC1}, which still contains factoring, but is weaker than BQP relative to an oracle.

By the way, the reason no one talks about solving factoring in this way is that no one other than complexity theorists cares about it. If a quantum computer is built for factoring, then it’d be insane to perform the expensive continued fraction expansion on it, when that part of Shor’s algorithm works perfectly fine on a classical computer. Ditto for performing the complete factorization and outputting a single bit to answer whether there exists a factor of N less than some threshold. So most discussions of quantum factoring will present it as a hybrid classical-quantum algorithm.

aram,

Of course in practice you are right. But the statement that factoring as a language L is in BQP is a mathematical statement. As such it deserves a proof—no?

It seems like this objection extends beyond factoring to the decision form of many functions, and so it is definitely important to clear up any misunderstanding about this. Michael’s approach works in almost all circumstances (and possibly all, though I’m just worried that there is a possibility of destructive interference for some problems if you input the output of one computation into the other without performing measurements along the way). Either way, here is a slightly different way of achieving the same thing for any function being converted to a decision problem:

Whenever you should measure a qubit in the Z basis, instead CNOT it with an ancilla prepared in the |0> state, making a pseudo copy of the state of the qubit. Note that measuring in the Z basis commutes with the control of the CNOT, and so whether you measure before or after, you obtain the same result. This means that this operation is equivalent to simply making a copy of the classical result of the measurement.

Next you simply perform your next bit of computation on this pseudo copy, as if it were the output bit, while not performing any operations on the measured qubit. As Michael mentions reversible classical computation can be achieved through Toffoli gates, but you can also perform more quantum operations if necessary (depending on how you have formulated the decision problem).

Clearly, the measurements now all come at the end of the computation, and only one of them corresponds to the output of the decision problem. As such, you can simply ignore the other outputs, and never bother to measure them, meaning you only use one measurement.

Hope this answers the question.

Bennett, Bernstein, Brassard and Vazirani (http://arxiv.org/abs/quant-ph/9701001, Section 4) show that BQP with BQP subroutines still sits in BQP. Combined with Shor that shows that language versions for Factoring also sit in BQP.

All,

Let me say this simply. One: the “proofs” out there are not complete. Two: where is the pointer to a paper that proves this important result?

I am a huge fan of your blog, but this kind of question does not seem well-suited for the format. The proof uses well-known background material, but that does not make it incomplete.

Anonymous, the physicist Lowell Brown described such proof technology (sardonically) as “well-known to those who know it well.”

Lance,

I still do not see this. Okay do lots of BQP subroutines, but Shor gives a different value each time. How do you put them together?

Actually I’m quite confused by this kind of entanglement: suppose |\psi> = (|00> + |11>)/\sqrt{2}; when you have measured the first qubit and the result is 0, the measurement result for the second one will automatically come to be 0, even before you actually measure it.

I don’t know what the physical foundation of it is: why does Nature do things in this way (or does it)? But this simple example shows the intrinsic nature of quantum computing. Thanks to the Principle of Deferred Measurement, just one qubit (the decision qubit) needs to be measured. You can turn the whole classical decision framework into an equivalent quantum one and just measure the final decision qubit. After you have done all the computation except measuring that qubit, its state will be a proper superposition of all possible results. If the classical method ensures a greater share of the results are correct, then by the way a measurement works, the quantum method also ensures that.

In a sentence, superposition accumulates, while entanglement differs. That’s where the power of quantum computing lies.

I think this is a really good question for http://cstheory.stackexchange.com/, how about you post it there? For instance, Peter Shor himself uses the site quite regularly, so if anyone knows the answer, chances are someone there will.

Alex has made a really good suggestion IMHO … certainly the comments (so far) on this topic have raised just as many questions as they have answered. For example, Michael Nielsen’s procedure for classical-to-quantum conversion of intermediate steps makes perfect sense when the intermediate computation is carried out by circuits … but does this argument go through when the intermediate classical computation is a Turing computation?

So I have come to agree with Ken Regan and Dick Lipton, that a thorough exposition of these issues would be welcomed by many folks for at least three reasons: (1) Questions like “Is factoring in BQP?” deserve a rigorous answer, (2) The proof machinery associated to that answer is of independent interest, and (3) The communal response to this class of question illuminates the Reinhard Selten-esque proposition “Quantum computing is for proving theorems, not for computing.”

It seems that your opposition comes from the definition of BQP: are we restricted to measuring exactly one qubit? If not, if we’re allowed to measure other qubits during computation, then Lance’s comment and link seem like a clear confirmation of the canonical “proof” which you’re questioning.

I particularly don’t understand the issue of “Shor’s giving a different value each time”. As usual we can just use trial division to verify the outputs of Shor’s algorithm and reduce the problem size. Polynomially many repetitions will result in an exponentially small chance of not finding any factors.

Huck,

Shor’s algorithm, the quantum part, does not give the same answer. It gives different information each time.

Dick: Do you, or do you not, agree that Michael, Aram, Joe, and Lance have answered your question? If you agree, then I think it would be a useful public service to add a note to the original post, to clear up any possible confusion:

“Yes, factoring is really in BQP. Really! And in other breaking news, the Karp-Lipton Theorem really holds, c^2 really equals a^2+b^2 for a right triangle…”

Provided you agree that your question has been answered, your complaint is really about pedagogy: you think that Shor’s original paper (as well as later expositions) should have directly showed that some decision problem related to factoring has polynomial-size quantum circuits, rather than giving poly-size circuits for the factoring *search* problem and then appealing to well-known closure properties of BQP (as shown, for example, in the BBBV paper) to get that the decision version is also in BQP.

For whatever it’s worth, I strongly disagree with you about pedagogy. By similar logic, one could assert that *nothing* in theoretical computer science has “really” been proved, so long as the authors have merely described their reductions in English, and not given explicit state transition diagrams for the appropriate Turing machines!

I think a fairer standard is that something is “proved” when the gaps are small enough that any expert (or enterprising student), working in good faith, can fill them without difficulty. And that standard is more than met by Shor’s algorithm, as evidenced (for example) by the thread above.

Scott,

I am not for being pedantic nor for excessive formalism. I have written papers arguing against that for years. The reason for the post was to make a couple of points: (I) that factoring is in BQP is a very important theorem; (II) as such it should have a proof that is on the web and available to all. The answer that no one made was: “here is a url to the proof in X’s text/paper/wiki etc.” The answer that I got is: yes, the simple argument that is sometimes made is not the real one, but here is the correct one.

I think that there is a big difference between being pedantic and simply wanting to see a sketch of what they have said in the comments. I do agree they have answered the issue, but would suggest that someone write down a short proof so all can see.

Scott you are the expert of experts in quantum computing, and I meant no disrespect to you or anyone else. I just wanted to see how this fundamental result is proved. So yes it is true.

I think perhaps the issue now stems from how big a step deferred measurement is. When I read the proof in your post, I immediately considered it a full proof because deferring measurements seems to me a small step (falling into one of the gaps Scott mentions). Obviously, however, if this has caused the two of you some trouble in understanding the proofs, it is not as obvious a step as it might seem to those writing up the proof.

So, with that said, here is a short proof (excuse the latex, I’m not sure if you support it in comments or not):

One way to decide the problem is simply to run Shor’s algorithm $\log(N)$ times (where $N$ is the input number), and then check whether any of these factors is less than $k$. Clearly $N$ has at most $\log(N)$ factors, and so the error probability is bounded.

Each run can be written as a circuit composed of a quantum part $Q$, a measurement in the computational basis $M$, and a classical part $C$. So, the input state evolves as $C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger$. As this is done $m$ times in parallel, we have $(C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m}$. To this we then need to apply another round of classical computation $S$ to check whether one of these circuits outputted a factor $<k$. So the full operation is $S (C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m} S^\dagger$. Lastly, we explicitly measure the output bit for this decision problem (again in the computational basis), which we denote $M_D$.

Note, both $C$ and $S$ need to be performed reversibly for this particular proof.

So the entire process can be written as $M_D(S (C M(Q |input\rangle \langle input| Q^\dagger) C^\dagger)^{\otimes m} S^\dagger)$.

The step here which has led to the confusion comes next: we note not only that $M_D$ and $M^{\otimes m}$ are simultaneously diagonalizable, but that this is also true for $K M^{\otimes m} K^\dagger$, for any reversible classical computation $K$.

Taking $K = S C^{\otimes m}$ allows us to rewrite the operation as $M(S^\dagger (C^\dagger)^{\otimes m} M_D( S(C Q |input\rangle \langle input| Q^\dagger C^\dagger)^{\otimes m} S^\dagger) C^{\otimes m} S)$ by commuting the measurements.

So, we still have the same number of measurements, but now $M_D$ is measured first. Since we only care about the result of $M_D$ (it is the output of the decision problem, after all), we can simply ignore the subsequent computation and choose not to perform it, since it will not alter the value returned for $M_D$. This yields $M_D( S(C Q |input\rangle \langle input| Q^\dagger C^\dagger)^{\otimes m} S^\dagger)$.

Note that this now involves only a measurement of one qubit, performed at the end of the computation and giving the output of the decision problem, and so the result is proved.
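The commuting-measurements step above can be checked numerically on a toy single-qubit instance. Here a computational-basis measurement is modeled as the dephasing channel (zeroing the off-diagonal density-matrix entries), and the classical gate $C$ as a permutation matrix; the specific unitary $Q$ is an arbitrary choice for illustration. This is a sketch of the deferred-measurement principle only, not of Shor’s algorithm itself.

```python
import numpy as np

theta = 0.7
# Q: an arbitrary single-qubit unitary (a real rotation), chosen so
# that Q|0> has nonzero off-diagonal density-matrix entries.
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def M(rho):
    # Computational-basis measurement viewed as a channel: keep only
    # the diagonal of the density matrix (dephasing).
    return np.diag(np.diag(rho))

# C: a reversible classical gate, i.e. a permutation matrix (here NOT).
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

rho = Q @ np.array([[1.0, 0.0], [0.0, 0.0]]) @ Q.conj().T  # Q|0><0|Q†

# Measure mid-circuit, then apply C, then read out at the end:
p_mid = np.real(np.diag(C @ M(rho) @ C.conj().T))

# Defer all measurement: apply C to the coherent state, read out once:
p_deferred = np.real(np.diag(C @ rho @ C.conj().T))

print(np.allclose(p_mid, p_deferred))  # True: the two orders agree
```

Since $C$ only permutes the computational basis, it permutes the diagonal of the density matrix regardless of whether the off-diagonals were zeroed first, which is exactly why the intermediate measurement can be deferred.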

Dick: No offense taken! On the one hand, I knew that you personally were seeking knowledge in complete good faith; but on the other hand, I also knew from experience that the Quantum Confuseniks would seize on your post as proof that even well-known computer scientists doubt the mathematical correctness of Shor’s algorithm.

I disagree with your repeated contention that the proof of factoring in BQP that appears in textbooks is “not the real proof.” It IS the real proof—it just assumes a bit of background knowledge that needs to be filled in if you don’t have it. Which, fortunately, is exactly what people did here—and in my experience, also what sites like MathOverflow and CS Theory StackExchange are perfectly designed for.

Well, I actually support a “pedantic” attitude here. It’s not just “pedagogy”, it’s science. A mathematical result should have a written proof. Nothing more or less.

Dick and Ken’s questions, and the comments upon their questions, certainly show us that quantum dynamics can “sit in our brains” in many different ways (in Bill Thurston’s wonderful phrase). And this is a wonderful thing … because as Thurston again has put it:

The preceding quotes are from Thurston’s foreword to Daina Taimina’s *Crocheting Adventures with Hyperbolic Planes* (2009), and from Thurston’s AMS essay *On Proof and Progress in Mathematics* (1994) … both essays are well worth reading (for students especially).

Perhaps no element of math and physics sits in our brains in more delightfully diverse ways than what Nielsen and Chuang’s *Quantum Computation and Quantum Information* calls the “Principles of Deferred and Implicit Measurement” (Section 4.4), or more formally, their “Theorem 8.2: Unitary Freedom in the Operator-Sum Representation”. Nielsen and Chuang have clearly set forth the algebraic and informatic aspects of these principles … and yet it is necessary to appreciate that these same ideas appear in the literature in innumerable other guises: Mensky’s textbook *Continuous Quantum Measurements and Path Integrals* (1993) embraces a path-integral point of view; Carlton Caves’ on-line notes *Completely Positive Maps, Positive Maps, and the Lindblad Form* (revised 2008) translate these ideas into the language of stochastic calculus; Ashtekar and Schilling’s *Geometrical Formulation of Quantum Mechanics* (1999) makes at least a start toward translating these ideas into geometric language; and van Holten’s recent review *Aspects of BRST Quantization* (2005) continues this program of linking algebra and geometry in the context of quantum dynamics. Literally hundreds more such references could be cited, and the diversity of their points of view is both incredible and wonderful.

In consequence of these diverse ways that quantum dynamics can “sit in our brains”—none of which are mathematically mature at present, and whose connections and technological implications especially are mysterious to everyone—the collective process of understanding quantum dynamics can seem slow & immensely confusing … to student and expert alike.

So it seems to me that, in raising these issues so plainly and explicitly, and in such a friendly, student-accessible style, Dick and Ken have done us all a service that is very much in the spirit of Feynman:

Hence, my thanks and appreciation are extended, for the wonderful physical and mathematical questions that Ken and Dick have raised, and for their sustained hard work in hosting this outstanding weblog.

Dear all, this interesting post is related to several things I am interested in, so let me mention three. The first is about scientific blogging and other Internet scientific resources and discussions. I always hoped that these could serve as a useful way to get answers to basic questions and to explain basic and simple proofs/issues. (This was one of the motivations for my own blog and Internet activities.) And since we have no shortage of space, we can give the arguments in full detail (or add details iteratively), and there is no special reason to be cryptic. (Even if for experts just a hint or a shorter argument will suffice.)

Second, when we talk about “P” we mean both “what classical computers (Turing machines) can do in polynomial time,” and also the more restricted definition about decision problems. I thought these two meanings lead to no confusion and it is safe to regard them as more or less the same. (But I am a novice computational complexity theorist; do I miss something?) The most important thing about Shor’s theorem is that it shows that factoring can be done by a quantum computer in polynomial time. The question of whether BQP captures a decision version of factoring is also important and interesting (but not as spectacular), so I think it is a good service to ask it. In the quantum case the gap between “what a quantum computer can do in polynomial time” and “BQP” is wider and more subtle than in the classical case. Let’s call “what quantum computers can do in polynomial time” QUANTUM-P. It is very interesting whether QUANTUM-P and BQP express more or less similar computational power.

Third, QUANTUM-P includes BPP^QSAMPLE (what a classical probabilistic computer can do with the additional ability to sample the state of a quantum computer with n qubits). (OK, maybe I ignore a non-uniformity issue, but take the “word” definition as the binding one.)

There were interesting discussions recently, in connection with a paper by Aaronson and Arkhipov, about whether BQP captures more or less the same power as BPP^QSAMPLE. (Certainly a quantum computer can in polynomial time perform tasks requiring probabilistic classical computation with access to sampling states that a quantum computer can reach in polynomial time.) Maybe QUANTUM-P is even stronger than that?

I think Scott leaned more than I did toward believing that QUANTUM-P (as expressed by BPP^QSAMPLE) is substantially stronger than BQP.

The relative power of BQP and BPP^QSAMPLE is discussed in my post http://gilkalai.wordpress.com/2010/11/17/aaronson-and-arkhipovs-result-on-hierarchy-collapse/ and also on Scott’s blog http://www.scottaaronson.com/blog/?p=473

Gil, it’s easy to see that BQP = BPP^QSAMPLE. So as far as the definition of BQP is concerned, this is a complete non-issue.

Now, if you want a class different from BQP, of course you can consider QSAMPLE itself … but then you’re talking about sampling problems, not decision problems!

Or you can consider a class like BPP^NP^QSAMPLE, which my and Alex’s work indeed shows might be stronger than BPP^NP^BQP.

Right Scott, I was confused. I meant to compare BQP with QSAMPLE, or more precisely with what classical computers can do (not just decision problems) when equipped with QSAMPLE subroutines, not with BPP^QSAMPLE. (I got it right in my quoted post.) In any case, one point I wanted to mention is that when you prove “quantum computers can do X,” it may reflect something computationally stronger than (or something which does not follow from) “quantum computers can solve decision problems in BQP.” So “quantum computers can factor,” which is a theorem nobody can doubt, does not automatically answer what Dick asked about. As seems clear from the comments, the required implication is known to those in the know, and it uses some familiar steps (with which we should all be familiar).

Another comment is that for factoring we only need, on top of BQP, log-depth quantum computation (this is a remarkable, quite recent result, but I will not attempt to spell the names of the discoverers from memory), so in a sense factoring does not require the full power of quantum computers. I wonder if this log-depth result can also be translated to the decision-problem setting. (But again this may be obvious and routine to experts.)

I meant “on top of BPP” or “on top of classical computers”

Hi Gil,

In response to your question about log depth being enough for factoring (I’m not sure how to quote here), the result is due to Cleve and Watrous:

http://arxiv.org/abs/quant-ph/0006004

Technically, only the quantum part is log depth and so classical poly-time postprocessing is still necessary (see my comment above).

So for the decision problem, Thm 3 of that paper states that it’s contained in ZPP^{BQNC}.

i promise that this is not meant out of pedantry, but the distinction between what you mean by “quantum computers” and any particular complexity class (for instance, BQP) is exactly why complexity classes have careful definitions and mean particular things. since we do not currently have (useful, real, imaginary) “quantum computers” to play with, the phrase must mean some particular model of what they could or might be. and capturing distinctions in those different ideas of models is why we have different classes. fair enough? so asking about membership in a class for a problem really does matter, since it is really the only careful way to ask the question.

s.

Let’s define פ (PE) as everything a classical computer can do in polynomial time, ר (RESH) as anything a classical computer with an unlimited supply of random bits can do in polynomial time, and ק (KUF) as anything a quantum computer can do in polynomial time. So פ (PE) contains P, and its computational power seems to be expressed by P. So we can talk about decision problems, sampling, optimization, and whatever else we imagine. ר (RESH) contains RP and BPP. ק (KUF) contains BQP and also QSAMPLE, and also it contains ר^QSAMPLE (RESH^QSAMPLE). (QSAMPLE is sampling a distribution expressed by a quantum computer after running a polynomial number of steps.) My intuition was always that the computational power of ק (KUF) is essentially expressed by BQP. There are some indications that QSAMPLE might be genuinely stronger than BQP. (But those are far from being definite.) Maybe ק (KUF) is even stronger than ר^QSAMPLE (RESH^QSAMPLE), so there are further powerful things we can do with quantum computers.

Can we formalize פ, ר, ק (PE, RESH, and KUF)? I don’t see why not. I think we simply did not have much motivation to do it, since it looked like decision problems capture computational power very well (and it still looks so for P and BPP, and may well be true also for BQP).

The result I mentioned is by R. Cleve and J. Watrous. (I see no problem in translating it to the decision version.)

Factoring in log-depth (BPP^BQNC) was done by Cleve and Watrous in 2000 — it’s not that “recent” anymore! And yes, the decision problem of (say) whether there’s a prime factor ending in 7 is also in BPP^BQNC — no additional difficulty whatsoever.

Scott Aaronson suggests:

Background knowledge that needs to be filled … is exactly what sites like MathOverflow and CS Theory StackExchange are perfectly designed for.

What Scott says is plain common sense, and so, guided by the many fine comments on this topic, I have plunged into the mathematical waters by posting a generalization of the question (an imperfect generalization, perhaps) on MathOverflow as “Does BQP^P = BQP? … and what proof machinery is available?” Comments, outright answers, and alternative versions of this question are all very welcome.

Of course I agree that we need careful definitions of complexity classes! But I am still confused about how this question got rolling. As Lance (and the Zookeeper) pointed out, we’ve known since 1997 that BQP^BQP = BQP, and this includes BPP^BQP.

So there’s no problem using Shor’s algorithm as part of a classical randomized algorithm, and we can solve the function problem Factoring with high probability in (quantum) polynomial time. We can then sort the factors and see if there is a small one, or one ending in 7, or whatever, thus solving decision versions of Factoring as well.
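The “sort the factors and test a predicate” step here is purely classical. As a minimal sketch (using classical trial division as a stand-in for the quantum factoring subroutine; the helper names are mine, not from the post):

```python
def prime_factors(N):
    # Classical trial division as a stand-in for the quantum factoring
    # subroutine; returns the prime factors of N with multiplicity,
    # in increasing order (trial division finds them in order).
    factors, d = [], 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

def has_small_factor(N, k):
    # Decision version: is the smallest prime factor of N below k?
    fs = prime_factors(N)
    return len(fs) > 0 and fs[0] < k

def has_factor_ending_in_7(N):
    # Another decision version: does N have a prime factor ending in 7?
    return any(p % 10 == 7 for p in prime_factors(N))
```

For instance, `prime_factors(84)` returns `[2, 2, 3, 7]`, so both decision versions answer `True` on 84. Replacing `prime_factors` by a call to the quantum factoring routine gives exactly the reduction of the decision problems to the function problem described above.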

The original question seems to have stemmed from whether we can reduce the quantum part of the entire algorithm to measuring a single qubit. Again, I think it’s nice that we can. But this seems more to me like showing that

“measuring one qubit BQP” = BQP.

Perhaps my confusion stems from the fact that no model of quantum computation currently in use in the community is limited to measuring a single qubit… and I don’t see this limitation in e.g. the definition of BQP in the Complexity Zoo.

What am I missing?

- Cris

Cris, over on TCS StackExchange, Luca Trevisan has given an ingenious construction that answers the two-part question “Do runtimes for P require EXP resources to upper-bound? … are concrete examples known?” with “yes” and “yes for all practical purposes” (FAPP).

The point is not that anyone thinks BQP^P ≠ BQP, but rather that the obstructions to P-uniform reduction of algorithms → circuits are more formidable than is generally appreciated … this is the good point behind Ken and Dick’s question (as I understand it).

What does this discussion make of Leonid Levin’s critique? Is it well-founded or not?