
Can Quantum Machines Do It All?

September 17, 2011


What keeps SAT out of BQP?

Dan Boneh is an expert on cryptography, especially the use of advanced number theory, such as the Weil pairing. He is famous for his seminal work on identity-based encryption with Matt Franklin, which uses the Weil pairing.

Today I want to recall an old result Dan and I proved years ago and mix in a new result.

When Peter Shor announced in 1994 his instantly famous result that there is a quantum factoring algorithm that runs in polynomial time, many of us were amazed. We quickly read his paper, tried to understand why it worked, and followed the proofs step-by-step. It was—is still—a beautiful result.

Yet there was something about the result that troubled me. Perhaps no one else was troubled, but I did not really understand why the theorem was true. Yes I could follow the steps, yes I could explain the result to my students, but I did not really understand it.

I have discussed this phenomenon before: proofs are not just a mechanical means to move a statement from the conjecture pile to the proved pile. No, proofs are much more—they need to convey why the result is really true. They should not seem magical, as Peter's proof did to me. Proofs need to explain what is really happening.

Now I will say that Scott Aaronson’s well-noted explanation does this for the central idea, but back then Dan and I had no Scott to guide us. So we did something else.

In order to understand what was going on, Dan and I decided the best way would be to try and generalize Peter’s theorem. This is a standard method in mathematics: one way to really see what is happening is to generalize a theorem. Doing this can often shed new light on why the original theorem is true, and also can be interesting in its own right.

I could give countless examples of this method; it is used so often in math. Here is an example from number theory that is, I believe, best explained by a classic result from group theory. Consider Fermat's Little Theorem:

Theorem:
If {p} is a prime and {a} is an integer not divisible by {p}, then

\displaystyle  a^{p-1} \equiv 1 \bmod p.

There are many proofs of this fundamental theorem. See our friends at Wikipedia here for several: they include proofs based on counting necklaces, modular arithmetic, the binomial theorem, and dynamical systems.

Each proof gives a different insight into why {p} must be prime, why {p} cannot divide {a}, and why the theorem is true.

The proof that I personally feel best explains what is happening is the one based on Lagrange’s Theorem:

Theorem: In any finite group, the order of an element divides the order of the group.

As an aside, Joseph Lagrange did not prove his theorem, but only discussed special cases in 1771. It was finally proved by Pietro Abbati and published in 1803. The proof of Fermat's Little Theorem now follows since when {p} is prime, the set of numbers

\displaystyle  \{ 1,2,3,\dots,p-1\}

forms a group of order {p-1} under multiplication modulo {p}. By Lagrange's Theorem the order of {a} divides {p-1}, and therefore {a^{p-1} \equiv 1 \bmod p}.
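For the concretely minded, here is a tiny Python check of this argument for one prime (a sketch, nothing more): each element's order divides the group order {p-1}, and so {a^{p-1} \equiv 1 \bmod p}.

    # Sketch: check Lagrange's Theorem => Fermat's Little Theorem for one small prime.
    def element_order(a, p):
        # order of a in the multiplicative group modulo p (p prime, p does not divide a)
        x, k = a % p, 1
        while x != 1:
            x = (x * a) % p
            k += 1
        return k

    p = 13
    for a in range(1, p):
        d = element_order(a, p)
        assert (p - 1) % d == 0          # Lagrange: the order of a divides p - 1 ...
        assert pow(a, p - 1, p) == 1     # ... hence a^(p-1) = (a^d)^((p-1)/d) = 1 mod p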

Hidden Linear Functions

Dan and I began to look at what we called the hidden linear problem. A function {h: \mathbb{Z} \rightarrow S} has period {q} provided

\displaystyle  h(x + q) = h(x)

for all {x}. We say that {h} is m-to-one provided at most {m} integers in the set {0,1,2,\dots,q-1} map to the same value in {S}. Note, if {h} is one-to-one, then it is m-to-one with {m=1}.

Let {f(x_{1},\dots,x_{k})} be a function from the integers {\mathbb{Z}^{k}} to some arbitrary set {S}. Say that {f} has hidden linear structure over {q} provided there are integers {\alpha_{2},\dots,\alpha_{k}} and some function {h} of period {q} so that

\displaystyle  f(x_{1},\dots,x_{k}) = h(x_{1} + \alpha_{2}x_{2} + \cdots + \alpha_{k}x_{k})

for all integers {x_{1},\dots,x_{k}}. Further we say that {f} is m-to-one provided {h} is.
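Here is a tiny concrete example of these definitions (a toy illustration only, not from our paper): a 3-to-one periodic {h} and a function {f} built from hidden coefficients, meeting the conditions of the theorem below with {m=3} and {k=3}.

    # Sketch: a toy function f with hidden linear structure over q = 101.
    from collections import Counter

    q = 101                        # prime, so its smallest prime divisor is 101 > m = 3
    a2, a3 = 7, 4                  # the hidden coefficients alpha_2, alpha_3

    def h(x):
        return (x % q) // 3        # period q, at most 3-to-one on {0, ..., q-1}

    def f(x1, x2, x3):
        return h(x1 + a2 * x2 + a3 * x3)

    # h has period q and is at most 3-to-one, so f meets the conditions below with m = 3.
    assert all(h(x + q) == h(x) for x in range(-200, 200))
    assert max(Counter(h(x) for x in range(q)).values()) <= 3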

Our conjecture, which eventually became a theorem, is:

Theorem:
Suppose that {f(x_{1},\dots,x_{k})} is a function that has a hidden linear structure over {q} which is at most m-to-one. Assume the conditions:

  1. Let {n = \log q}, so that {m} and {k} are at most polynomial in {n}.

  2. Let {p} be the smallest prime divisor of {q}, so that {m<p}.

For such a function {f}, in quantum polynomial time in {n} we can recover the values of {\alpha_{2},\dots,\alpha_{k}} modulo {q} from an oracle for {f}.

Note that the added conditions are critical. If the function {h} were constant, then there would be no way to reconstruct the {\alpha}’s: there must be some limit on how much {h} collapses. The second condition ensures that the values of the {\alpha}’s make sense modulo {q}.

Look at our paper for details. The above theorem is strong enough to prove that both factoring and discrete logarithm are in quantum polynomial time. This already indicated that we had made some progress. In Shor's paper he uses different but similar arguments to handle factoring and the discrete logarithm. I personally felt that our proof gave me a greater understanding of Shor's brilliant work.
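To see concretely why the theorem covers the discrete logarithm, here is a small Python sketch of the standard Shor-style encoding (toy numbers of my own, not the exact reduction from the paper): with {h(x) = g^{x}} modulo a prime, the function {f(x_{1},x_{2}) = g^{x_{1}}a^{x_{2}}} has hidden linear structure whose coefficient {\alpha_{2}} is exactly the discrete log of {a}, and the hidden period {q} is the order of {g}.

    # Sketch: the discrete logarithm as a hidden linear structure (toy numbers).
    mod = 1019                     # a small prime modulus
    g = 2                          # base; the hidden period q is the multiplicative order of g
    alpha = 377                    # the secret discrete logarithm
    a = pow(g, alpha, mod)

    def h(x):                      # one-to-one on each period of length q = ord(g)
        return pow(g, x, mod)

    def f(x1, x2):                 # the oracle: f(x1, x2) = h(x1 + alpha * x2)
        return (pow(g, x1, mod) * pow(a, x2, mod)) % mod

    for x1 in range(25):
        for x2 in range(25):
            assert f(x1, x2) == h(x1 + alpha * x2)
    # Recovering alpha_2 = alpha modulo q from oracle access to f is exactly computing
    # the discrete log of a to the base g.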

By the way, proving this theorem was hard. We used the same type of quantum algorithm that Peter did. But the fact that {h} was not one-to-one created serious difficulties for us. Sometimes relaxing from a constant to `polynomial’ is a walk in the park, but not this time… I recall many times that one of us wanted to give up, but eventually we found the key trick. In the one-to-one case—the case that occurs in Peter’s theorems—there is an exponential sum that must be estimated. This sum is quite easy to handle. However, in the poly-to-one case the sum that arose was much harder to bound. We finally found a trick: when trying to bound a sum, change the sum. More on this another time.

Quantum Detection of Periods

Dan and I could also prove the following theorem:

Theorem:
Suppose that {h:\mathbb{Z} \rightarrow S} has period {q}, and no smaller period. Also let {h} be m-to-one. Add the conditions:

  1. Let {n = \log q}, so that {m} is at most polynomial in {n}.

  2. Let {p} be the smallest prime divisor of {q}, so that {m<p}.

For such a function {h}, in quantum polynomial time in {n} we can recover the period {q}.

In Shor’s paper this theorem is essentially proved with {m=1}: the so-called bijective case. Obviously our result is a modest generalization to the case where the function can fail to be one-to-one, but a value in the range can only have a polynomial number of pre-images. Since then there have been substantial improvements on the period-finding problem. The current best results can recover the period provided the function {h} has small auto-correlation—see this for details. Indeed, for quantum algorithms that only use {h} as an oracle, the current results are essentially best possible.
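For contrast, here is the trivial classical baseline for the oracle version of the problem (a sketch only, unrelated to the improvements just mentioned): scanning candidate periods takes time on the order of {q = 2^{n}}, while the quantum algorithm runs in time polynomial in {n}.

    # Sketch: the brute-force classical baseline for oracle period finding.
    def classical_period(h, bound):
        # smallest q <= bound that is a period of h on the sampled range
        for q in range(1, bound + 1):
            if all(h(x + q) == h(x) for x in range(2 * bound)):
                return q
        return None

    h = lambda x: (x % 21) // 2    # a 2-to-one function with period 21
    assert classical_period(h, 50) == 21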

SAT

There is another reason to think there may be no quantum algorithm for the general period-finding problem: that is, given a small circuit—not an oracle—for the function {h}, find its period. The reason is the following theorem:

Theorem: The following two problems are equivalent via a random reduction:

  1. Finding an assignment to a {\mathsf{SAT}} instance.

  2. Finding the period of a Boolean valued function given by a small circuit.

Note, the lower bounds of quantum complexity theory are mostly based on a black-box argument—they assume that only oracle access is available to the function whose period you want. There is no known way to show that they apply when the function {h} is given by a circuit. This follows since if {\mathsf{P} = \mathsf{NP}} one could find the period classically in polynomial time, so no black-box lower bound can carry over unconditionally to the circuit case.

This will be an example of “evolving” a theorem from a trivial beginning to more-meaningful forms. At issue first are: how small is `small’ and what is the time on the reduction? Then the main issue will become, how close are the resulting cases of period-finding to ones that quantum algorithms can handle?

First we prove a version where `small’ means polynomial, the reduction is deterministic polynomial time, and the cases of periodicity involved are trivial. Namely, given an instance formula {\phi} for {\mathsf{SAT}}, let {C} be the circuit that on input an assignment {x} outputs {\phi(x)}. If {\phi} is not satisfiable then the function {h} computed by {C} is identically zero and so has period {1}. If {\phi} is true on the all-{0} assignment then we say yes. Otherwise {h(0^n) = 0}, and if {\phi} is satisfiable then {h(x) = 1} for some other {x}, so {h} does not have period {1}. It may not have any period at all, which is what makes this trivial. But in the remaining case we certainly know that {\phi \notin \mathsf{SAT} \iff h} has period {1}.
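Here is the trivial reduction in miniature (a sketch, with a toy 3-variable formula of my own standing in for {\phi}):

    # Sketch of the trivial reduction, with a toy 3-variable formula standing in for phi.
    n = 3

    def phi(bits):                 # phi = (x1 or x2) and ((not x1) or x3)
        x1, x2, x3 = bits
        return (x1 or x2) and ((not x1) or x3)

    def h(x):                      # the circuit C applied to the n low-order bits of x
        bits = [(x >> i) & 1 for i in range(n)]
        return int(bool(phi(bits)))

    if h(0) == 1:
        answer = True              # phi is true on the all-0 assignment: say yes
    else:
        # phi is satisfiable iff h is not identically 0, i.e. h does not have period 1
        answer = any(h(x) == 1 for x in range(2 ** n))
    assert answer == any(h(x) for x in range(2 ** n))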

Amplifying Periods

Now we want {h} always to have a period in the case where {\phi \in \mathsf{SAT}}. This can be done by a simple padding trick. Let us use some number {n' > n} of variables defining a value {y}, and define the circuit {C'} by {C'(y) = C(y \bmod{2^n})}. Then the function {h} computed by {C'} always has a period dividing {2^n}—the question is whether that period is {1} or something larger.
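And here is the padding trick in the same style (again just a sketch, with a stand-in for the circuit {C}):

    # Sketch of the padding trick: C'(y) = C(y mod 2^n), so 2^n is always a period.
    n = 3
    h = lambda x: int(x == 5)      # stand-in circuit C: accepts only the assignment 101
    n_prime = n + 2                # n' > n padded input bits defining the value y

    def h_padded(y):
        return h(y % (2 ** n))     # the function computed by C'

    # 2^n is a period of h_padded no matter what; the question is whether its smallest
    # period is 1 (h identically 0) or something larger.
    assert all(h_padded(y + 2 ** n) == h_padded(y) for y in range(2 ** n_prime))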

Thus finding the period of a periodic function {h} given by poly-size circuits is {\mathsf{NP}}-hard. Of course this {h} is far from bijective—it is just 0-1 valued. If we could make {h} bijective on this period, at least when the formula is satisfiable, then we would put {\mathsf{SAT}} into {\mathsf{BQP}}. How close can we come?

We start by arranging that when {\phi \in \mathsf{SAT}}, with high probability at least one {h} we run across has a period that is some prime {q}, where finding {q} for this {h} is enough to say {\phi \in \mathsf{SAT}}. Take the above circuit {C}; we wish to see if it has an input {x} so that {C(x)=1}. We can assume that {x} is unique, if it exists, by using the famous result of Leslie Valiant and Vijay Vazirani.

A further simple random argument shows that we can also assume that the solution {x} is a prime number. Here's why: consider flipping each input bit of {C} independently at random, which is the same as XOR-ing the input with a uniformly random mask. Since the proportion of primes up to {2^n} is on the order of {1/n}, this maps {x} to a prime number with probability about {1/n}, and we only need our test to succeed once. In summary, we can assume that {C} is either unsatisfiable or has some prime {q} as its unique satisfying input.
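A quick empirical check of this prime-shift step (a sketch with made-up toy values):

    # Sketch: estimate how often a random XOR mask moves the unique solution to a prime.
    import random

    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    n = 20
    x0 = 0b10110011100011110000    # the (unknown) unique satisfying input, as an n-bit value
    trials = 10000
    hits = sum(is_prime(x0 ^ random.getrandbits(n)) for _ in range(trials))
    print(hits / trials)           # on the order of 1/n: empirically about 0.08 for n = 20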

Now let {h(x)} be the function

\displaystyle  \bigvee_{p | x} C(p),

where {p} ranges over all prime divisors of {x}. Then it is easy to see the following: If {C} is unsatisfiable, then {h} is identically {0} and has period {1}. If there is a unique prime {q} so that {C(q)=1}, then {h(x)=1} exactly when {q} divides {x}, so {h} has period {q}. Therefore we have reduced solving {\mathsf{SAT}} to period finding—where we can restrict attention to positive cases where the period has prime length and the function is {0} except for a {1} at the start of each period.
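Here is the construction in miniature, for a toy {C} that accepts exactly one prime {q} (a sketch; the brute-force factoring inside it is precisely the BIG point discussed next):

    # Sketch of the construction for a toy C accepting exactly one prime q.
    q = 13

    def C(p):
        return int(p == q)         # C accepts exactly the prime q

    def prime_divisors(x):         # brute-force factoring: fine for toy sizes only
        ps, d = set(), 2
        while d * d <= x:
            while x % d == 0:
                ps.add(d)
                x //= d
            d += 1
        if x > 1:
            ps.add(x)
        return ps

    def h(x):                      # h(x) = OR over prime divisors p of x of C(p)
        return int(any(C(p) for p in prime_divisors(x)))

    # h(x) = 1 exactly when q divides x, so h has period q.
    assert all(h(x) == int(x % q == 0) for x in range(1, 300))
    assert all(h(x + q) == h(x) for x in range(1, 300))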

There is a small point, really a BIG point: the function {h(x)} does not have a poly-size circuit. In order to compute it we must factor {x} into its prime factors and then check each one, {p}, to see whether it makes {C(p)=1}. However, factoring currently has algorithms that run in time roughly {\exp(n^\epsilon)} for {\epsilon = 1/3}. Thus {h} will have circuits of this order of sub-exponential size, where the circuits themselves are also succinct. So long as `small’ means size {\exp(n^\epsilon)} for some known {\epsilon}, we are OK.


With that point on hold, we can finally try to work on the period structure we have achieved. For arguments {w} where {h(w) = 0}, we would like to assign values in {\{1,2,\dots,q-1\}}, or at least give different {w} different values. Not knowing {q} in advance may be a problem, but possibly inside the details of reducing to factoring in the previous paragraph, we may be able to assume that {q} is known.

Here is where we pause and say that we have at least posed a non-trivial research idea. We may work on it tomorrow and find it doesn't go any further. We might not put it in a paper yet. But on a blog we can at least see how ideas get their birth and first steps, and how hard theorems raised successfully in the past may give some fostering.

Open Problems

Is {\mathsf{SAT}} in {\mathsf{BQP}}? This is the motivating question.

What other ideas for creating and modifying periods might be relevant?

12 Comments
  1. Dave Bacon permalink
    September 17, 2011 5:43 pm

    Reminds me a bit of an approach that didn’t quite work: http://arxiv.org/abs/quant-ph/0105005 cool that the arXiv keeps all versions :)

  2. aram permalink
    September 17, 2011 5:54 pm

    h(x) does have a poly-size quantum circuit.

    But making it injective on Z/q is still going to be hard.

  3. September 18, 2011 5:00 pm

    This is a more rigorous version of the theorem. The rest depends on your free will to understand.

    Theorem
    For the polynomial p(x)= \sum x_i^4 - 2 x^T Q x, x \in \mathbb{R}^n, Q \in \mathbb{R}^n \times \mathbb{R}^n, p^*(x)= p(x)-\min p(x)= \sum g_i^2(x), where the g_i(x) are polynomials of degree at most 2.

    Proof

    p(x) is bounded from below and attains its minimum.
    Let x^* be one of the global minima of p^*(x), then {x^*}^T \nabla p^*(x)= 4 \sum {x^*_i}^4 - 4 {x^*}^T Q x^*= {x^*}^T( {x^*}{x^*}^T-Q) x^* = 0, from which \min p(x)= -\sum {x^*_i}^4 , and p^*(x)= \sum x_i^4 - 2 x^T Q x + \sum {x^*_i}^4.
    Polynomial \hat p(x)= \sum (x_i^2- {x_i^*}^2)^2 has 2^n zeros, including x^*
    p(x)= \sum (x_i^2- {x_i^*}^2)^2+ 2x^T({x^*}{x^*}^T-Q)x. Matrix \Delta= {x^*}{x^*}^T-Q is positive semi-definite with singular vector x^*, since for otherwise there exists \hat x| \hat x - x^* parallel to negative eigenvector of \Delta, that p(\hat x) < p(x^*).
    By Cholesky decomposition \Delta= LL^T, and p(x)= \sum (x_i^2- {x_i^*}^2)^2+ 2x^T L L^T x is a sum of squares. \square

    p(x) can encode the partition problem, max-cut, and maybe some more. In decision problems there is a polynomial gap between satisfying and non-satisfying instances. Therefore a semi-definite program decides in polynomial time. A similar thing can be done for minimization problems like max-cut, where there is a polynomial gap between the value of the maximal cut and the next one. This can be accepted only if some of the people working on sum-of-squares (SOS) programs are willing to consider it seriously. Lazy skepticism will not work here. My experience with posting is that experts catch the idea almost instantly; the most sensible comments I got were from people in Parrilo’s group (see, for example, Scott Aaronson’s post about “why philosophers … “, around comment #140 or so). The rest are lazily skeptical.

    Now, why did the small fraction of people working on sum-of-squares programs not go here? First, people realized that quartic polynomials of a certain form encode an NP-complete problem and did not look further. For example, Monique Laurent in her review of SOS states the quartic polynomial encoding, but does not show anywhere that it cannot be represented as a sum of squares of other polynomials. Second, the major direction there is the Positivstellensatz arising from Hilbert’s 17th problem, i.e. every positive polynomial can be represented as a sum of squared polynomial fractions. If someone found an efficient way to solve that, then there would be an algorithm to find the minimum of an arbitrary polynomial. This is a much more general case (and therefore much more difficult). For establishing P=NP it is sufficient to find a class of polynomials that can be solved by SOS and that encodes all instances of some NP-complete problem; that is within the framework of N.Z. Shor 1987 and Parrilo’s thesis 2000, another relevant reference being Parrilo-Sturmfels 2003.

    The asymmetry arising from the enormous bias toward P \ne NP, multiplied by the attraction of the Clay prize and the all-corners advertisement of the importance of the problem, renders a collaborative approach to solving this problem almost impossible. The polymath atmosphere, where everyone can throw out a random crazy idea, is immediately kicked back by the guards of the field. Therefore, the intention of drawing huge attention to the problem with a relatively small monetary prize, given the commonly assumed risks, and the absence of a “polymath” atmosphere, seems to me unethical. Therefore, I consciously will not write any formal paper on P=NP, so as to be at least formally ineligible for the prize. Even if the proof above is not rigorous enough, from this point I think it is easy to patch it. This is cranky, and therefore,

    Sincerely yours,
    Crackpot Trisector

  4. khalik sheikh permalink
    September 20, 2011 1:19 pm

    this is a great one………………..

  5. September 21, 2011 2:02 am

    Thanks for patching. The mental illness is not when you talk to yourself, but when no one understands you. Relax for a moment, the proof is wrong. E.g. \Delta can be negative, since the vectors [\pm x_k] are not orthogonal. By subtracting more and more of the matrix \Delta it is possible to bring p^*(x) to another global minimum, then repeat that with all the axes that are still negative definite, until there are 2^n zeros with the \vec x^*_(1) orthogonal to each other. Then one needs to construct a sum-of-squares representation of that minimal polynomial (could we say extreme? I think they are extreme in a narrower sense than the Choi and Lam definition), or find a counterexample. I do not have that construction right now. You can help me find a counterexample.

    • September 29, 2011 2:33 pm

      People, this is my last post. Besides the emotions, I was trying to be as constructive as my limited ability to think allows. There is no place to discuss these issues, and I’m too stupid to run a polymath. The question is whether p^*(x)= p(x)-\min p(x) is a sum of squares for polynomials p(x)= \sum x_i^4 - 2 x^T Q x, x \in \mathbb{R}^n, with Q \in \mathbb{R}^n \times \mathbb{R}^n a real symmetric matrix. This is easy to formulate, should not require very high-end math to solve, and is also very concrete. A positive answer settles the P vs NP question. To my limited knowledge, the problem is still open. I’m offering $200 to the first one who provides a counterexample. I’m giving you some hints to make your life easier. If you ever prove (and are not lazy) that there are no counterexamples (i.e. no p^* that is not a sum of squares), you do not have my permission to use my name in connection with this and related problems (laugh here). I removed all the posts from my blog; in a couple of months the search engine cache will be gone, and you are free to go.

      For arbitrary set of linearly independent vectors \vec y_k \in \mathbb{R}^n, \vec y_i^T \vec y_k= 0, \forall i\neq k \sum_{i=1}^n x_i^4 = \sum_{k=1}^n (\alpha_k x^T \vec y_k \vec y_k^T x)^2+ \sum_m (\beta_m x^T B_m x)^2 and \vec y_k^T B_m \vec y_k =0, \forall k,m, for the set of orthogonal vectors it is easy to find \alpha_k

      (x^Ty)^2-t= ( x^T y y^T x)^2-t= 0 describes a pair of hyperplanes orthogonal to y and distance \sqrt{t} from origin. A set of n vectors defines parallelotop \sum_k \left(( x^T y_k y_k^T x)^2-t_k\right)^2.

      p(x)^2+g(x)^2= ( p(x) \cos(\phi) + g(x) \sin(\phi))^2 + ( - p(x) \sin(\phi) + g(x) \cos(\phi))^2

      \sum_k \vec x_k \vec x_k^T= [x_1, x_2, ..., x_n] [x_1, x_2, ..., x_n]^T= [x_1, x_2, ..., x_n] R^T R [x_1, x_2, ..., x_n]^T, R^T R= I.

  6. Marc Smith permalink
    September 22, 2011 6:27 pm

    A friend of mine said that Sakai and Kasahara invented Identity-based Cryptography based on the Weil Pairing. They presented their work at a Japanese conference called SCIS, I guess.

    I just think that it is really sad that these guys didn’t have their work recognized. Someone should try to invalidate Boneh’s patent.

    That’s just my opinion, the opinion of a student.

    • September 23, 2011 11:50 pm

      In the references of this paper by Sakai and Kasahara, citation [4] is Sakai-Ohgishi-Kasahara, “Cryptosystems based on pairing over Elliptic Curve”, SCIS Jan 2001, while citation [5] is Boneh-Franklin, CRYPTO August 2001. Citation [2] is Sakai-Ohgishi-Kasahara, “Cryptosystems Based on Pairing” from SCIS 2000, while reference [1] is a TR version from 1999 that mentions elliptic curves.

      The Boneh-Franklin paper (2003 journal version) cites the 2000 paper as using the pairing for key-exchange. So does this paper by Boneh and Boyen, in its first paragraph which shows key-exchange as one possible application.

      This 2010 paper by Aniket Kate and Ian Goldberg says in its abstract, “In this paper, we design distributed PKG setup and private key extraction protocols for three important IBE schemes; namely, Boneh and Franklin’s BF-IBE, Sakai and Kasahara’s SK-IBE, and Boneh and Boyen’s BB1-IBE.” Perhaps they are regarded today as intrinsically different schemes.

      Anyway, this shows the authors recognizing each other and their work being recognized separately by a third party. We have discoursed on independent discovery several times in this blog, where the community has reached equitable conclusions different from your opinion.

      • September 24, 2011 8:30 am

        Dear KW Regan.

        Thanks for the enlightenment. It made me feel better about academic ethics.

Trackbacks

  1. Twelfth Linkfest
  2. Being Fearless And Doing Theory « Gödel’s Lost Letter and P=NP
  3. Sex, Lies, And Quantum Computers | Gödel's Lost Letter and P=NP
