News on Intermediate Problems
The Minimum Circuit Size Problem goes front and center
Eric Allender, Bireswar Das, Cody Murray, and Ryan Williams have proved new results about problems in the range between $\mathsf{P}$ and
$\mathsf{NP}$-complete. According to the wide majority view of complexity the range is vast, but it is populated by scant few natural computational problems. Only Factoring, Discrete Logarithm, Graph Isomorphism (GI), and the Minimum Circuit Size Problem (MCSP) regularly get prominent mention. There are related problems like group isomorphism and others in subjects such as lattice-based cryptosystems. We covered many of them some years back.
Today we are delighted to report recent progress on these problems.
MCSP is the problem: given a string $x$ of length $N = 2^n$ and a number $s$, is there a Boolean circuit $C$ with $s$ or fewer wires such that

$$x = C(w_1)C(w_2)\cdots C(w_N),$$

the concatenation of the values of $C$ on the strings $w \in \{0,1\}^n$ in the standard order? For $x$ of other lengths $N$, $2^{n-1} < N < 2^n$, we catenate the values of $C$ for the first $N$ strings in $\{0,1\}^n$ under the standard order. Since every $n$-ary Boolean function has circuits of size $O(2^n/n)$, which are encodable in $O(2^n)$ bits, MCSP belongs to $\mathsf{NP}$ with linear witness size.
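To make the definition concrete, here is a small brute-force decider for MCSP. It is our own illustration, not anything from the papers: it counts NAND gates rather than wires, adds a constant-1 wire, and is of course exponentially slow, but on tiny truth tables it exhibits exactly the search that the $\mathsf{NP}$ witness encodes:

```python
from itertools import combinations_with_replacement

def var_tables(n):
    """Truth tables of the n input variables, as bit-tuples over all 2^n
    inputs w = 0, ..., 2^n - 1 in standard order."""
    N = 1 << n
    return [tuple((w >> i) & 1 for w in range(N)) for i in range(n)]

def mcsp_brute(x, s):
    """Decide a tiny MCSP instance by exhaustive search: is there a circuit
    of at most s NAND gates (a simplification of counting wires) whose truth
    table equals the bit-string x, |x| = 2^n?  Exponential time; only for
    illustration on very small n and s."""
    N = len(x)
    n = (N - 1).bit_length()
    assert N == 1 << n, "truth-table length must be a power of two"
    target = tuple(int(b) for b in x)

    def search(wires, gates_left):
        if target in wires:
            return True
        if gates_left == 0:
            return False
        for a, b in combinations_with_replacement(range(len(wires)), 2):
            g = tuple(1 - (u & v) for u, v in zip(wires[a], wires[b]))  # NAND
            if g not in wires and search(wires + [g], gates_left - 1):
                return True
        return False

    # Start from the input variables plus a constant-1 wire.
    return search(var_tables(n) + [tuple([1] * (1 << n))], s)
```

For example, the 2-variable AND (truth table 0001) needs two NAND gates, while XOR (0110) needs four.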
Several Soviet mathematicians studied MCSP in the late 1950s and 1960s. Leonid Levin is said to have desired to prove it $\mathsf{NP}$-complete before publishing his work on $\mathsf{NP}$-completeness. MCSP seemed to stand aloof until Valentine Kabanets and Jin-Yi Cai connected it to Factoring and Discrete Log via the “Natural Proofs” theory of Alexander Razborov and Steven Rudich. Eric and Harry Buhrman and Michal Koucký and Dieter van Melkebeek and Detlef Ronneburger improved their results in a 2006 paper to read:
Theorem 1 Discrete Log is in $\mathsf{BPP}^{\mathrm{MCSP}}$ and Factoring is in $\mathsf{ZPP}^{\mathrm{MCSP}}$.
Now Eric and Bireswar complete the triad of relations to the other intermediate problems:

Theorem 2 Graph Isomorphism is in $\mathsf{RP}^{\mathrm{MCSP}}$. Moreover, every promise problem in $\mathsf{SZK}$ belongs to $\mathsf{BPP}^{\mathrm{MCSP}}$ as defined for promise problems.
Cody and Ryan show on the other hand that proving $\mathsf{NP}$-hardness of MCSP under various reductions would entail proving breakthrough lower bounds:

Theorem 3

- If MCSP is $\mathsf{NP}$-hard under polynomial-time many-one reductions, then $\mathsf{EXP} \neq \mathsf{NP} \cap \mathsf{P/poly}$, so $\mathsf{EXP} \neq \mathsf{ZPP}$.
- If MCSP is $\mathsf{NP}$-hard under logspace many-one reductions, then $\mathsf{PSPACE} \neq \mathsf{ZPP}$.
- If MCSP is $\mathsf{NP}$-hard under logtime-uniform $\mathsf{AC}^0$ many-one reductions, then $\mathsf{NP} \not\subseteq \mathsf{P/poly}$ (so $\mathsf{P} \neq \mathsf{NP}$), and also $\mathsf{E}$ has circuit lower bounds high enough to de-randomize $\mathsf{BPP}$.
- In any many-one reduction $f$ from $\mathrm{PARITY}$ (let alone $\mathrm{SAT}$) to MCSP, no random-access machine can compute any desired bit $f(x)_i$ of $f(x)$ in $O(n^{1/2-\epsilon})$ time.
The last result is significant because it is unconditional, and because most familiar $\mathsf{NP}$-completeness reductions $f$ are local in the sense that one can compute any desired bit $f(x)_i$ of $f(x)$ in only $\mathrm{polylog}(n)$ time (with random access to $x$).
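For contrast, locality is easy to see in a classical reduction. The following toy sketch (ours, with an illustrative encoding) computes any single adjacency bit of the textbook 3SAT-to-CLIQUE reduction from just two literals of the input formula, i.e., in constant time with random access:

```python
# Toy illustration of a "local" reduction: the textbook 3SAT-to-CLIQUE
# reduction. Vertex u = 3*c + k stands for literal k of clause c; two
# vertices are adjacent iff they lie in different clauses and their literals
# are not negations of each other. Any single adjacency bit depends on just
# two input literals, so it is computable in O(1) time given random access
# to the formula.

def clique_edge_bit(cnf, u, v):
    """Bit (u, v) of the adjacency matrix of the output graph. cnf is a list
    of 3-literal clauses; literals are nonzero ints with -x meaning NOT x."""
    cu, ku = divmod(u, 3)
    cv, kv = divmod(v, 3)
    if cu == cv:
        return 0                  # no edges within a clause's triple
    lu, lv = cnf[cu][ku], cnf[cv][kv]
    return 0 if lu == -lv else 1  # conflicting literals stay non-adjacent
```

Theorem 3's last item says that no reduction of remotely this kind, not even one spending $O(n^{1/2-\epsilon})$ time per bit, can take PARITY to MCSP.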
Why MCSP is Hard to Harden
The genius of MCSP is that it connects two levels of scaling, input lengths $n$ and $N = 2^n$, in the briefest way. The circuits $C$ can have exponential size from the standpoint of $n$. This interplay of scaling is basic to the theory of pseudorandom generators, in terms of conditions under which they can stretch a seed of $\ell$ bits into $n \gg \ell$ bits, and to generators of pseudorandom functions $f_s : \{0,1\}^n \to \{0,1\}$.
An issue articulated especially by Cody and Ryan is that reductions to MCSP carry seeds of being self-defeating. The ones we know best how to design involve “gadgets” whose size scales as $O(\log n)$, not $n^{\Omega(1)}$. For instance, in a reduction from $\mathrm{3SAT}$ we tend to design gadgets for individual clauses in the given 3CNF formula $\phi$, each of which has constant-many variables and $O(\log n)$ encoded size. But if $f$ involves only $O(\log n)$-sized gadgets and the connections between gadgets need only $O(\log n)$ lookup, then when the reduction outputs $f(\phi) = (y,s)$, the string $y$ will be the graph of a $\mathrm{polylog}(|y|)$-sized circuit. This means that:

- if $s \geq \mathrm{polylog}(|y|)$ then the answer is trivially “yes”;
- if $s < \mathrm{polylog}(|y|)$ then the answer can be found in $|y|^{O(1)}$ time, or at worst time quasipolynomial in $|y|$, by exhaustively trying all circuits of size at most $s$.
The two horns of this dilemma leave little room to make a non-trivial reduction to MCSP. Log-space and $\mathsf{AC}^0$ reductions are (to different degrees) unable to avoid the problem. The kind of reduction that could avoid it might involve, say, $n^{\epsilon}$-many clauses per gadget in an indivisible manner. But doing this would seem to require obtaining substantial non-local knowledge about $\phi$ in the first place.
Stronger still, if the reduction is from a polynomially sparse language in place of $\mathrm{SAT}$, then even this last option becomes unavailable. Certain relations among exponential-time classes imply the existence of hard sparse sets in $\mathsf{NP}$. The hypothesis that MCSP is hard for these sets impacts these relations, for instance yielding the $\mathsf{EXP} \neq \mathsf{ZPP}$ conclusion.
A paradox that at first sight seems stranger emerges when the circuits are allowed oracle gates. Such gates may have any arity $m$ and output 1 if and only if the string $z$ formed by the inputs belongs to the associated oracle set $A$. For any $A$ we can define $\mathrm{MCSP}^A$ to be the minimum size problem for such circuits relative to $A$. It might seem axiomatic that when $A$ is a powerful oracle such as the $\mathsf{PSPACE}$-complete set $\mathrm{QBF}$, then $\mathrm{MCSP}^A$ should likewise be $\mathsf{PSPACE}$-complete. However, giving $C$ such an oracle makes it easier to have small circuits for meaningful problems. This compresses the above dilemma even more. In a companion paper by Eric with Kabanets and Dhiraj Holden they show that $\mathrm{MCSP}^{\mathrm{QBF}}$ is not complete under logspace reductions, nor even hard for $\mathsf{TC}^0$ under uniform $\mathsf{AC}^0$ reductions. More strikingly, they show that if it is hard for $\mathsf{TC}^0$ under logspace reductions, then complexity classes collapse in ways believed unlikely.
Nevertheless, when it comes to various flavors of bounded-error randomized Turing reductions, MCSP packs enough hardness to solve Factoring and Discrete Log and GI. We say some more about how this works.
Randomized Reductions to MCSP
What MCSP does well is efficiently distinguish strings having $\sqrt{N}$-sized circuits from the vast majority having no $\sqrt{N}$-sized circuits, where $N$ is the length of the string. The dense latter set $R$ is a good distinguisher between pseudorandom and uniform distributions on $\{0,1\}^N$. Since one-way functions suffice to construct pseudorandom generators, MCSP turns into an oracle for inverting functions to an extent codified in Eric’s 2006 joint paper:

Theorem 4 Let $R$ be a dense language of strings having no $\sqrt{N}$-sized circuits, and let $f$ be computable in polynomial time with $x$ and $f(x)$ of polynomially-related lengths. Then we can find a polynomial-time probabilistic oracle TM $M$ and a constant $c$ such that for all sufficiently large $n$,

$$\Pr_{x,r}\big[f(M^R(f(x),r)) = f(x)\big] \;\geq\; \frac{1}{n^c}.$$

Here $x$ is selected uniformly from $\{0,1\}^n$ and $r$ is uniform over the random bits of the machine. We have restricted $R$ and $f$ more than their result requires for ease of discussion.
To attack GI we set things up so that “$f$” and “$x$” represent a graph $G$ and a permutation $\pi$ of its vertices, respectively. More precisely “$G$” means a particular adjacency matrix, and we define $f_G(\pi)$ to mean the adjacency matrix $\pi(G)$ obtained by permuting $G$ according to $\pi$. By Theorem 4, using the MCSP oracle to supply $R$, one obtains $M$ and $c$ such that for all $n$ and $n$-vertex graphs $G$,

$$\Pr_{\pi,r}\big[f_G(M^R(\pi(G),r)) = \pi(G)\big] \;\geq\; \frac{1}{n^c}.$$

Since $f_G$ is 1-to-1 we can simplify this while also tying “$H$” symbolically to $\pi(G)$:

$$\Pr_{\pi,r}\big[M^R(H,r) = \pi\big] \;\geq\; \frac{1}{n^c}, \quad \text{where } H = \pi(G). \qquad (1)$$
Now given an instance $(G_1,G_2)$ of GI via adjacency matrices, do the following for some constant times $n^c$ independent trials:

- Pick $\pi$ and $r$ uniformly at random and put $H = \pi(G_1)$.
- Run $M^R(H,r)$ to obtain a permutation $\sigma$.
- Accept if $\sigma(G_2) = H$, which means $G_1 \cong G_2$.
This algorithm has one-sided error since it will never accept if $G_1$ and $G_2$ are not isomorphic. If they are isomorphic, then $H$ arises as $\pi(G_1)$ with the same distribution over permutations that it arises as $\sigma(G_2)$, so Equation (1) applies equally well with $G_2$ in place of $G_1$. Hence $M$ finds the correct $\sigma$ with probability at least $1/n^c$ on each trial, yielding the theorem $\mathrm{GI} \in \mathsf{RP}^{\mathrm{MCSP}}$.
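The trial loop is simple enough to sketch. In the following sketch the oracle machine $M$ is replaced by a brute-force stand-in, since we cannot run an MCSP oracle here; only the shape of the algorithm and its one-sided acceptance test are faithful:

```python
import random
from itertools import permutations

def apply_perm(G, pi):
    """The adjacency matrix pi(G): relabel graph G (tuple of tuples) by pi."""
    n = len(G)
    return tuple(tuple(G[pi[i]][pi[j]] for j in range(n)) for i in range(n))

def mock_inverter(H, G2):
    """Stand-in for the oracle machine M of Theorem 4: find sigma with
    sigma(G2) = H if one exists. Here we brute-force over permutations;
    the real M instead uses the MCSP oracle and succeeds with probability
    at least 1/n^c per trial."""
    for sigma in permutations(range(len(H))):
        if apply_perm(G2, sigma) == H:
            return sigma
    return None

def gi_trials(G1, G2, trials=50):
    """The trial loop above: accept only if some trial exhibits a sigma
    with sigma(G2) = pi(G1), so a 'yes' answer is never wrong."""
    n = len(G1)
    for _ in range(trials):
        pi = list(range(n))
        random.shuffle(pi)
        H = apply_perm(G1, tuple(pi))
        sigma = mock_inverter(H, G2)
        if sigma is not None and apply_perm(G2, sigma) == H:
            return True
    return False
```

Because the stand-in inverter is exhaustive, this toy version always answers correctly; the point of Theorem 4 is that $M$ achieves inverse-polynomial success probability per trial with only polynomial work plus MCSP queries.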
The proof for $\mathsf{SZK}$ is more detailed but similar in using the above idea. There are many further results in the paper by Cody and Ryan and in the oracle-circuit paper.
Open Problems
These papers also leave a lot of open problems. Perhaps more importantly, they attest that these open problems are attackable. Can any kind of many-one reducibility stricter than the randomized Turing reducibilities above reduce every language in $\mathsf{NP}$ to MCSP? Can we simply get $\mathsf{EXP} \neq \mathsf{ZPP}$ from the assumption $\mathsf{P} \neq \mathsf{NP}$? The most interesting holistic aspect is that we know new lower bounds follow if MCSP is easy, and now we know that new lower bounds follow if MCSP is hard. If we assume that MCSP stays intermediate, can we prove lower bounds that combine with the others to yield some non-trivial unconditional result?
[added paper links]
Trackbacks
- A Big Result On Graph Isomorphism | Gödel's Lost Letter and P=NP
- A Matter of Agreement | Gödel's Lost Letter and P=NP
- A Panel On P vs. NP | Gödel's Lost Letter and P=NP
- Lost in Complexity | Gödel's Lost Letter and P=NP
- Princeton Is Invariant | Gödel's Lost Letter and P=NP
- A Reason Why Circuit Lower Bounds Are Hard | Gödel's Lost Letter and P=NP
Where can I find the paper you are reporting on?
Allender and Das:
http://eccc.hpi-web.de/report/2014/068/
Murray and Williams:
http://eccc.hpi-web.de/report/2014/164/
Thank you!
Thanks also. It took me a while to realize I’d left the two links out of the post.
You might also want to find the [Allender, Holden, Kabanets] STACS 2015 paper that was mentioned. It’s available here:
Is it reasonable to hope for a proof that some well-known, natural problem is NP-intermediate? Or is it as hopeless as to prove that some meaningful, important arithmetic statement is undecidable?
You need first to prove P ≠ NP…
Well, I mean a natural problem that’s NP-intermediate under the assumption that P!=NP. I guess intermediateness makes sense regardless of whether P=NP…
By the way, intermediate problems look like objects flying within the Lagrangian point between the two large bodies P and NP-complete. If a galactic polynomial algorithm were found to factor integers, maybe the probability of finding a more feasible algorithm would increase – because the original algorithm could be improved. But as long as none has been found, we can’t say anything.
In fact, the large body represented by the NP-complete problems is a black hole that swallows any attempt to understand it. The problems in it are so compressed that they’re all reducible to one another. But if planet P managed to attract a single intermediate one, then it would swallow all the NP problems as well, turning into a black hole of easiness. We might end up with such a geometric model if the probability of finding an algorithm was seriously studied.
Regarding MCSP^A, it seems like a more natural generalization of MCSP to use oracle gates might be the following MCSP(A,t), where one has an additional input t and one is allowed at most t uses of the oracle gate. This problem then includes MCSP in its usual fashion (when t=0). This seems to give rise to two obvious questions: 1) Is there an obvious example of a choice of A such that NP=P^MCSP(A,t)? 2) Does the assumption that MCSP(MCSP, t) is NP-hard imply that MCSP is itself NP-hard?
Regarding the “more natural generalization”: MCSP^QBF is complete for PSPACE (under ZPP-reductions), and more generally, if A is a “standard” complete set for some large class C (including most of the interesting and well-studied complexity classes C that are bigger than PSPACE), then MCSP^A is complete for C under P/poly reductions. So the definition that we’ve been using has some attractive properties.
On the other hand, Theorem 14 in this paper http://ftp.cs.rutgers.edu/pub/allender/KT.pdf can be used to show that, for many oracles A that might be of interest, there isn’t very much difference between your definition of MCSP(A,1) and MCSP^A. That is, if we’re able to prove (with our current bag of techniques) that MCSP^A is complete for some class, then the same bag of tricks will show that MCSP(A,1) is complete for the same class. So it really boils down to two cases:
t=0 gives you MCSP
t>0 gives you something pretty similar to MCSP^A.
With regard to your questions:
1) There is not an obvious example of an A such that NP = P^MCSP(A,t) (for any parameter t). If A is not in NP intersect coNP, then it’s not clear that MCSP(A,t) is going to be in NP. But even if A is more powerful (say, A is anywhere in the polynomial hierarchy), then we have no idea how to reduce SAT to MCSP(A,t).
2) I don’t see how to show the implication “MCSP^MCSP is NP-hard” implies “MCSP is NP-hard”. In fact, it’s even worse than that. I don’t see how to prove “MCSP^MCSP is NP-hard” implies “MCSP^SAT is NP-hard” (say, under poly-time many-one reductions). More generally, even if A is much easier than B, I don’t see how to show a reduction from MCSP^A to MCSP^B.
It’s all a bit strange.
— Eric
Thanks! That’s very enlightening.
Is there sense in restricting the oracle to be a string y of length polynomially related to |x|? Then oracle gates would have width only log|y| proportional to k = log|x|. Can this be tied to some relative Kolmogorov complexity notion, say KT(x|y)?
Reblogged this on Pathological Handwaving.
Dear Sirs,
I’m not a “professional” mathematician.
I have [almost] written an algorithm running in polynomial time that, for any given integer N creates a set S made of integers strictly less than N and such that N has at least one factor in common with at least one of the elements of S. S has cardinality less than (log N)^2
Does it imply that FACTORIZATION is in P ?
Thanks
FranceA
Reblogged this on Rupei Xu.