
News on Intermediate Problems

March 5, 2015


The Minimum Circuit Size Problem goes front and center

Eric Allender, Bireswar Das, Ryan Williams, and Cody Murray

Eric Allender, Bireswar Das, Cody Murray, and Ryan Williams have proved new results about problems in the range between {\mathsf{P}} and {\mathsf{NP}}-complete. According to the wide majority view of complexity, the range is vast, but it is populated by scant few natural computational problems. Only Factoring, Discrete Logarithm, Graph Isomorphism (GI), and the Minimum Circuit Size Problem (MCSP) regularly get prominent mention. There are related problems like group isomorphism and others in subjects such as lattice-based cryptosystems. We covered many of them some years back.

Today we are delighted to report recent progress on these problems.

MCSP is the problem: given a string {x} of length {n = 2^k} and a number {s}, is there a Boolean circuit {C} with {s} or fewer wires such that

\displaystyle  C(0^k)\cdot C(0^{k-1} 1)\cdot C(0^{k-2}10) \cdots C(1^{k-1} 0)\cdot C(1^k) = x?

For {x} of other lengths {m}, {2^{k-1} < m < 2^k}, we catenate the values of {C} for the first {m} strings in {\{0,1\}^k} under the standard order. Since every {k}-ary Boolean function has circuits of size {O(\frac{2^k}{k}) = O(\frac{n}{\log n})} which are encodable in {O(n)} bits, MCSP belongs to {\mathsf{NP}} with linear witness size.
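To make the instance format concrete, here is a minimal Python sketch (our own illustration, not from the papers; the circuit is represented simply as a Python function on {k} bits rather than as a gate-level encoding). It evaluates {C} on all {2^k} inputs in the standard order and concatenates the output bits into the string {x}:

    def truth_table(C, k):
        """Return the string C(0^k) C(0^{k-1}1) ... C(1^k) of length n = 2^k."""
        bits = []
        for i in range(2 ** k):
            # the i-th input in standard order, most significant bit first
            u = [(i >> (k - 1 - j)) & 1 for j in range(k)]
            bits.append(str(C(u)))
        return "".join(bits)

    # Example: parity on k = 3 inputs gives the length-8 string "01101001";
    # MCSP then asks whether this x is the truth table of some circuit
    # with at most s wires.
    print(truth_table(lambda u: sum(u) % 2, 3))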

Several Soviet mathematicians studied MCSP in the late 1950s and 1960s. Leonid Levin is said to have desired to prove it {\mathsf{NP}}-complete before publishing his work on {\mathsf{NP}}-completeness. MCSP seemed to stand aloof until Valentine Kabanets and Jin-Yi Cai connected it to Factoring and Discrete Log via the “Natural Proofs” theory of Alexander Razborov and Steven Rudich. Eric, together with Harry Buhrman, Michal Koucký, Dieter van Melkebeek, and Detlef Ronneburger, improved their results in a 2006 paper to read:

Theorem 1 Discrete Log is in {\mathsf{BPP}^{\mathrm{MCSP}}} and Factoring is in {\mathsf{ZPP}^{\mathrm{MCSP}}}.

Now Eric and Bireswar complete the triad of relations to the other intermediate problems:

Theorem 2 Graph Isomorphism is in {\mathsf{RP}^{\mathrm{MCSP}}}. Moreover, every promise problem in {\mathsf{SZK}} belongs to {\mathsf{BPP}^{\mathrm{MCSP}}} as defined for promise problems.

Cody and Ryan show on the other hand that proving {\mathsf{NP}}-hardness of MCSP under various reductions would entail proving breakthrough lower bounds:

Theorem 3

  • If {\mathrm{SAT} \leq_m^p \mathrm{MCSP}} then {\mathsf{EXP} \not\subseteq \mathsf{NP} \cap \mathsf{P/poly}}, so {\mathsf{EXP \neq ZPP}}.

  • If {\mathrm{SAT} \leq_m^{\log} \mathrm{MCSP}} then {\mathsf{PSPACE \neq ZPP}}.

  • If {\mathrm{SAT} \leq_m^{\mathsf{AC}^0} \mathrm{MCSP}} then {\mathsf{NP} \not\subset \mathsf{P/poly}} (so {\mathsf{NP \neq P}}), and also {\mathsf{E}} has circuit lower bounds high enough to de-randomize {\mathsf{BPP}}.

  • In any many-one reduction {f} from {\mathrm{Parity}} (let alone {\mathrm{SAT}}) to {\mathrm{MCSP}}, no random-access machine can compute any desired bit {j} of {f(x)} in {|x|^{1/2-\epsilon}} time.

The last result is significant because it is unconditional, and because most familiar {\mathsf{NP}}-completeness reductions {f} are local in the sense that one can compute any desired bit {j} of {f(x)} in only {(\log |x|)^{O(1)}} time (with random access to {x}).
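To illustrate the locality, here is a small Python sketch (our own example, using the textbook reduction from {\mathrm{3SAT}} to Independent Set rather than a reduction to MCSP): the output graph has one vertex per literal occurrence, and whether two vertices are joined depends only on the two clauses involved, so any single bit of {f(\phi)} is computable in {(\log |x|)^{O(1)}} time given random access to {\phi}.

    def edge_bit(clauses, u, v):
        """One adjacency bit of the 3SAT -> Independent Set reduction.

        clauses: random-access list of 3-literal clauses (nonzero ints, negation = minus sign).
        u, v:    vertices, each a pair (clause index, position within that clause).
        Only two clauses are read, so this bit is computed in polylog time."""
        (i, p), (j, q) = u, v
        if i == j:
            return 1 if p != q else 0                       # triangle inside each clause gadget
        return 1 if clauses[i][p] == -clauses[j][q] else 0  # complementary literals clash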

Why MCSP is Hard to Harden

The genius of MCSP is that it connects two levels of scaling—input lengths {k} and {n}—in the briefest way. The circuits {C} can have exponential size from the standpoint of {k}. This interplay of scaling is basic to the theory of pseudorandom generators, in terms of conditions under which they can stretch a seed of {\mathsf{poly}(k)} bits into {n} bits, and to generators of pseudorandom functions {g: \{0,1\}^k \longrightarrow \{0,1\}^k}.

An issue articulated especially by Cody and Ryan is that reductions {f} to MCSP carry seeds of being self-defeating. The ones we know best how to design involve “gadgets” whose size scales as {k} not {n}. For instance, in a reduction from {\mathrm{3SAT}} we tend to design gadgets for individual clauses in the given 3CNF formula {\phi}—each of which has constant-many variables and {O(\log n) = O(k)} encoded size. But if {f} involves only {\mathsf{poly}(k)}-sized gadgets and the connections between gadgets need only {\mathsf{poly}(k)} lookup, then when the reduction outputs {f(\phi) = (y,s)}, the string {y} will be the graph of a {\mathsf{poly}(k)}-sized circuit. This means that:

  • if {s > \mathsf{poly}(k)} then the answer is trivially “yes”;

  • if {s \leq \mathsf{poly}(k)} then the answer can be found in {\mathsf{poly}(n)} time—or at worst quasipolynomial in {n} time—by exhaustively trying all circuits of size {s}.

The two horns of this dilemma leave little room to make a non-trivial reduction to MCSP. Log-space and {\mathsf{AC^0}} reductions are (to different degrees) unable to avoid the problem. The kind of reduction that could avoid it might involve, say, {n^{1/2}}-many clauses per gadget in an indivisible manner. But doing this would seem to require obtaining substantial non-local knowledge about {\phi} in the first place.
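To see the second horn quantitatively (a back-of-the-envelope count on our part, not taken from the papers): a circuit with at most {s} gates over {k} inputs is specified by choosing, for each gate, its type and its two inputs from among the at most {k+s} earlier nodes, so the number of candidate circuits is at most

\displaystyle  \left(O(1)\cdot(k+s)^2\right)^s = 2^{O(s \log s)} \qquad (s \geq k).

For {s \leq \mathsf{poly}(k) = \mathsf{poly}(\log n)} this is {2^{\mathsf{polylog}(n)}}, quasipolynomial in {n}, and each candidate can be checked against {y} by evaluating it on all {2^k} inputs in {\tilde{O}(n)} time.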

Stronger still, if the reduction is from a polynomially sparse language {A \in \mathsf{NP}} in place of {\mathrm{SAT}}, then even this last option becomes unavailable. Certain relations among exponential-time classes imply the existence of hard sparse sets in {\mathsf{NP}}. The hypothesis that MCSP is hard for these sets impacts these relations, for instance yielding the {\mathsf{EXP \neq ZPP}} conclusion.

A paradox that at first sight seems stranger emerges when the circuits {C} are allowed oracle gates. Such gates may have any arity {m} and output 1 if and only if the string {u_1 u_2\cdots u_m} formed by the inputs belongs to the associated oracle set {A}. For any {A} we can define {\mathrm{MCSP}^A} to be the minimum size problem for such circuits relative to {A}. It might seem axiomatic that when {A} is a powerful oracle such as {\mathrm{QBF}}, {\mathrm{MCSP}^{\mathrm{QBF}}} should likewise be {\mathsf{PSPACE}}-complete. However, giving {C} such an oracle makes it easier to have small circuits for meaningful problems. This compresses the above dilemma even more. In a companion paper, Eric, Dhiraj Holden, and Kabanets show that {\mathrm{MCSP}^{\mathrm{QBF}}} is not complete under logspace reductions, nor even hard for {\mathsf{TC}^0} under uniform {\mathsf{AC}^0} reductions. More strikingly, they show that if it is hard for {\mathsf{P}} under logspace reductions, then {\mathsf{EXP = PSPACE}}.

Nevertheless, when it comes to various flavors of bounded-error randomized Turing reductions, MCSP packs enough hardness to solve Factoring and Discrete Log and GI. We say some more about how this works.

Randomized Reductions to MCSP

What MCSP does well is efficiently distinguish strings {x \in \{0,1\}^n} having {n^{\alpha}}-sized circuits from the vast majority having no {n^{\beta}}-sized circuits, where {0 < \alpha < \beta < 1}. The latter set {B} is dense, and it is a good distinguisher between pseudorandom and uniform distributions on {x \in \{0,1\}^n}. Since one-way functions suffice to construct pseudorandom generators, MCSP turns into an oracle for inverting functions to an extent codified in Eric’s 2006 joint paper:

Theorem 4 Let {B} be a dense language of strings having no {n^{\beta}}-sized circuits, and let {f(x,y) = z} be computable in polynomial time with {x,y,z} of polynomially-related lengths. Then we can find a polynomial-time probabilistic oracle TM {M} and {c > 0} such that for all {n} and {y},

\displaystyle  \Pr_{x,r}[M^B(y,f(x,y),r) = w \text{ such that } f(w,y) = f(x,y)] \geq \frac{1}{n^c}.

Here {x} is selected uniformly from {\{0,1\}^n} and {r} is uniform over the random bits of the machine. We have restricted {f} and {B} more than their result requires for ease of discussion.
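Concretely, the set {B} can be supplied by thresholding MCSP queries. Here is a minimal sketch (our own illustration; mcsp stands for the assumed oracle answering whether {x} has a circuit with at most {s} wires):

    def in_B(x, mcsp, beta=0.5):
        """Membership in B: x has NO circuit of size n**beta."""
        n = len(x)
        return not mcsp(x, int(n ** beta))

    # A uniformly random x of length n lies in B with overwhelming probability
    # (by counting small circuits), while a string produced from a short seed by a
    # poly(k)-size circuit never does -- so in_B distinguishes pseudorandom from
    # uniform, which is what the inverter behind Theorem 4 exploits.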

To attack GI we set things up so that “{x}” and “{y}” represent a permutation {\pi} and a graph {G} on whose vertices it acts, respectively. More precisely “{G}” means a particular adjacency matrix, and we define {f(\pi,G) = G'} to mean the adjacency matrix {G'} obtained by permuting {G} according to {\pi}. By Theorem 4, using the MCSP oracle to supply {B}, one obtains {M} and {c} such that for all {n} and {n}-vertex graphs {G},

\displaystyle  \Pr_{\pi,r}[M^{\mathrm{MCSP}}(G,f(\pi,G),r) = \rho \text{ such that } f(\rho,G) = f(\pi,G)] \geq \frac{1}{n^c}.

Since {f} is 1-to-1 we can simplify this while also tying “{G'}” symbolically to {f(\pi,G)}:

\displaystyle  \Pr_{\pi,r}[M^{\mathrm{MCSP}}(G,G',r) = \pi] \geq \frac{1}{n^c}. \ \ \ \ \ (1)

Now given an instance {(G,H)} of GI via adjacency matrices, do the following for some constant times {n^c} independent trials:

  1. Pick {\pi} and {r} uniformly at random and put {G' = f(\pi,G)}.

  2. Run {M^{\mathrm{MCSP}}(H,G',r)} to obtain a permutation {\rho}.

  3. Accept if {\rho(H) = G'}, which means {H = \rho^{-1}\pi(G)}.

This algorithm has one-sided error since it will never accept if {G} and {H} are not isomorphic. If they are isomorphic, then {G'} arises as {\rho(H)} under the same distribution over permutations as it arises as {\pi(G)}, so Equation (1) applies equally well with {H} in place of {G}. Hence {M^{\mathrm{MCSP}}(H,G',r)} finds the correct {\rho} with probability at least {\frac{1}{n^c}} on each trial, yielding the theorem {\mathrm{GI} \in \mathsf{RP}^{\mathrm{MCSP}}}.
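For concreteness, here is a minimal Python sketch of the trial loop (our own illustration; invert_with_mcsp stands for the probabilistic inverter {M^{\mathrm{MCSP}}} supplied by Theorem 4, with its random bits {r} kept internal, and is assumed rather than implemented):

    import random

    def permute(G, pi):
        """Apply permutation pi to adjacency matrix G, i.e. compute f(pi, G)."""
        n = len(G)
        return [[G[pi[i]][pi[j]] for j in range(n)] for i in range(n)]

    def gi_trial(G, H, invert_with_mcsp):
        n = len(G)
        pi = list(range(n))
        random.shuffle(pi)                       # step 1: pick pi at random
        G_prime = permute(G, pi)                 #         and set G' = f(pi, G)
        rho = invert_with_mcsp(H, G_prime)       # step 2: run M^MCSP(H, G', r)
        return rho is not None and permute(H, rho) == G_prime   # step 3: rho(H) = G'?

    def graph_iso(G, H, invert_with_mcsp, c=2):
        """Accept iff one of O(n^c) independent trials accepts; error is one-sided."""
        return any(gi_trial(G, H, invert_with_mcsp) for _ in range(4 * len(G) ** c))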

The proof for {\mathsf{SZK \subseteq BPP}^{\mathrm{MCSP}}} is more detailed but similar in using the above idea. There are many further results in the paper by Cody and Ryan and in the oracle-circuit paper.

Open Problems

These papers also leave a lot of open problems. Perhaps more importantly, they attest that these open problems are attackable. Can any kind of many-one reducibility stricter than {\leq_m^p} reduce every language in {\mathsf{P}} to MCSP? Can we simply get {\mathsf{EXP} \not\subset \mathsf{P/poly}} from the assumption {\mathrm{SAT} \leq_m^p \mathrm{MCSP}}? The most interesting holistic aspect is that we know new lower bounds follow if MCSP is easy, and now we know that new lower bounds follow if MCSP is hard. If we assume that MCSP stays intermediate, can we prove lower bounds that combine with the others to yield some non-trivial unconditional result?

[added paper links]

28 Comments
  1. March 6, 2015 2:54 am

    Where can I find the paper you are reporting on?

  2. Serge permalink
    March 6, 2015 12:38 pm

    Is it reasonable to hope for a proof that some well-known, natural problem is NP-intermediate? Or is it as hopeless as to prove that some meaningful, important arithmetic statement is undecidable?

    • March 8, 2015 11:09 am

      You need first to prove P ≠ NP…

      • Serge permalink
        March 8, 2015 2:36 pm

        Well, I mean a natural problem that’s NP-intermediate under the assumption that P!=NP. I guess intermediateness makes sense regardless of whether P=NP…

        By the way, intermediate problems look like objects flying within the Lagrangian point between the two large bodies P and NP-complete. If a galactic polynomial algorithm were found to factor integers, maybe the probability of finding a more feasible algorithm would increase – because the original algorithm could be improved. But as long as none has been found, we can’t say anything.

      • Serge permalink
        March 11, 2015 8:31 am

        In fact, the large body represented by the NP-complete problems is a black hole that swallows any attempt to understand it. The problems in it are so compressed that they’re all reducible to one another. But if planet P managed to attract a single intermediate one, then it would swallow all the NP problems as well, turning into a black hole of easiness. We might end up with such a geometric model if the probability of finding an algorithm was seriously studied.

  3. March 7, 2015 10:28 am

    Regarding MCSP^A, it seems like a more natural generalization of MCSP to use oracle gates might be the following MCSP(A,t), where one has an additional input t and is allowed at most t uses of the oracle gate. This problem then includes MCSP in its usual fashion (when t=0). This seems to give rise to two obvious questions: 1) Is there an obvious example of a choice of A such that NP = P^MCSP(A,t)? 2) Does the assumption that MCSP(MCSP, t) is NP-hard imply that MCSP is itself NP-hard?

    • March 10, 2015 12:31 am

      Regarding the “more natural generalization”: MCSP^QBF is complete for PSPACE (under ZPP-reductions), and more generally, if A is a “standard” complete set for some large class C (including most of the interesting and well-studied complexity classes C that are bigger than PSPACE), then MCSP^A is complete for C under P/poly reductions. So the definition that we’ve been using has some attractive properties.

      On the other hand, Theorem 14 in this paper http://ftp.cs.rutgers.edu/pub/allender/KT.pdf can be used to show that, for many oracles A that might be of interest, there isn’t very much difference between your definition of MCSP(A,1) and MCSP^A. That is, if we’re able to prove (with our current bag of techniques) that MCSP^A is complete for some class, then the same bag of tricks will show that MCSP(A,1) is complete for the same class. So it really boils down to two cases:
      t=0 gives you MCSP
      t>0 gives you something pretty similar to MCSP^A.

      With regard to your questions:
      1) There is not an obvious example of an A such that NP = P^MCSP(A,t) (for any parameter t). If A is not in NP intersect coNP, then it’s not clear that MCSP(A,t) is going to be in NP. But even if A is more powerful (say, A is anywhere in the polynomial hierarchy), then we have no idea how to reduce SAT to MCSP(A,t).
      2) I don’t see how to show the implication “MCSP^MCSP is NP-hard” implies “MCSP is NP-hard”. In fact, it’s even worse than that. I don’t see how to prove “MCSP^MCSP is NP-hard” implies “MCSP^SAT is NP-hard” (say, under poly-time many-one reductions). More generally, even if A is much easier than B, I don’t see how to show a reduction from MCSP^A to MCSP^B.

      It’s all a bit strange.

      — Eric

    • March 10, 2015 2:27 pm

      Is there sense in restricting the oracle to be a string y of length polynomially related to |x|? Then oracle gates would have width only log|y| proportional to k = log|x|. Can this be tied to some relative Kolmogorov complexity notion, say KT(x|y)?

  4. Emi permalink
    March 23, 2015 4:18 am

    Reblogged this on Pathological Handwaving.

  5. FranceA permalink
    April 7, 2015 9:27 am

    Dear Sirs,

    I’m not a “professional” mathematician.

    I have [almost] written an algorithm running in polynomial time that, for any given integer N, creates a set S of integers strictly less than N such that N has at least one factor in common with at least one element of S. S has cardinality less than (log N)^2.

    Does it imply that FACTORIZATION is in P?

    Thanks

    FranceA

  6. May 9, 2015 6:05 pm

    Reblogged this on Rupei Xu.

  7. 18268126 permalink
    February 9, 2021 8:24 am

    I read this blog page for the 100th time. I understand nothing. Please let Lipton simplify what you are writing. Unless the reader is an expert there is no way to understand Regan’s writing. Please simplify.

    • February 9, 2021 9:06 am

      Some articles are intended to be more technical than others. This one presumes from the get-go that you know how to define relativized BPP and ZPP. It is for graduate students aspiring to be experts, as a front-end to engaging with the papers covered. These posts are often more work than the simpler ones.

      • 18268126 permalink
        February 9, 2021 11:58 am

        I am a graduate student too. Things which are supposed to be simple are exaggerated, and things which have to be explained simply are hidden using jargon and impenetrable sentences. Graduate students who already understand the technicalities already know the results (for instance, the very definition of MCSP is toasted beyond recognition, or the intended simplicity is so non-canonical that either way it adds no context to the whole posting).

      • 18268126 permalink
        February 12, 2021 8:15 pm

        For example, you say the ‘genius’ of MCSP is its two scalings, but nothing after it is clear. Why are the two scalings a mark of genius?

