
How to Solve P=NP?

April 27, 2009


An approach to the solution of P=NP?


Landon Clay is the Boston businessman who helped fund the Clay Mathematics Institute and its seven associated Millennium Problems, one of which is P=NP. The person(s) who solves P=NP will receive, once certain rules are fulfilled, a cool one million dollars. That is a lot of money, but juxtaposed against current bank bailouts and other loans measured in the trillions of dollars, it does seem a bit less impressive. But, unlike Grigori Perelman, I would take the money, for sure.

Today I want to talk about how I would approach P=NP, if I were forced to work on it. I think of the quote from Paul Erdős on Ramsey numbers:

He asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5, 5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6, 6). In that case, he believes, we should attempt to destroy the aliens.

See our friends at Wikipedia for this quote and the definition of the Ramsey function {R(n,m)}.

My version: suppose the alien force asks us to prove that P=NP. I would say get our best theorists and mathematicians together and look for an algorithm. If, on the other hand, they ask for a proof that P{\neq}NP, I would agree with Erdős and suggest we attempt to destroy them.

You know why I believe this. Even if the conventional wisdom is that P=NP is false, I think we have a better chance of finding an algorithm than a lower bound proof. I guess also that if the aliens are “fair,” then by asking for a proof that P=NP, one could conclude that it was true. To ask for a proof otherwise would make the aliens pretty mean. If they were that evil, they might just destroy us whether we solved P=NP or not.

There is a history of people asking for solutions to problems that they know are unsolvable. A famous example is the 15 puzzle, which you probably have played with at one time or another. The goal is to slide the 15 squares around a {4 \times 4} grid until they are in order.
[Figure: the 15 puzzle in its solved arrangement]

I was always told that the great puzzle maker of the late 1880s, Sam Loyd, created this puzzle. The current understanding is that he did not, and that the puzzle was created by Noyes Chapman, the Postmaster of Canastota, New York. My sister-in-law lives in Canastota, but that has nothing to do with this post.

The fun part of the puzzle is that a classic parity argument shows that the usual initial arrangement cannot be moved to the required final arrangement. The puzzle is unsolvable. The puzzle caused a “craze” during the late 1880s. There were even people who thought they had solved the puzzle, but could not remember how. The best guess is that pieces fell out of the puzzle while they tried to solve it.
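
The parity argument is easy to make concrete. Below is a minimal Python sketch of the standard solvability test for the 4×4 puzzle; the function name and the two test positions are my own illustrations, not anything from the original puzzle literature.

```python
def solvable_15_puzzle(tiles):
    """Standard parity test for the 4x4 fifteen puzzle.

    `tiles` lists the 16 cells row by row, with 0 for the blank.  Count the
    inversions among the numbered tiles and the row of the blank counted
    from the bottom (1-indexed); the position can reach the usual solved
    arrangement exactly when that sum is odd.
    """
    nums = [t for t in tiles if t != 0]
    inversions = sum(1 for i in range(len(nums))
                     for j in range(i + 1, len(nums))
                     if nums[i] > nums[j])
    blank_row_from_bottom = 4 - tiles.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

# The solved position passes the test ...
solved = list(range(1, 16)) + [0]
assert solvable_15_puzzle(solved)

# ... but swapping the 14 and 15 tiles, the classic prize challenge,
# flips the inversion parity and makes the position unreachable.
swapped = solved[:]
swapped[13], swapped[14] = swapped[14], swapped[13]
assert not solvable_15_puzzle(swapped)
```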

Role of Prizes

In the earlier history of mathematics, prizes for solving problems played a larger role than today. I have already talked about Henri Poincaré’s prize-winning paper on the three-body problem. Erdős often offered money for the solution to open questions. He sometimes made the prize money quite asymmetric: a counter-example was worth $50 and a proof was worth $500. I recall that one lucky solver won a prize of $1,000 for a very hard problem of Erdős and was asked whether it was worth it: he answered that he calculated that he made about 50¢ per hour.

Ron Graham used to manage Erdős’s money, including paying people who solved his open problems. Ron told me that at first most people kept the checks, probably framing them, which saved Ron the money. Then people got color copier machines, so they could make a perfect copy, frame that, and then cash the check. Ron said that was okay, but what really got him was when some people started to cash the checks, and then asked for the cancelled checks back.

A final story about prizes concerns Jean-Paul Sartre, who won the Nobel Prize for Literature in 1964. Apparently he refused the prize, saying: “It is not the same thing if I sign Jean-Paul Sartre or if I sign Jean-Paul Sartre, Nobel Prize winner. A writer must refuse to allow himself to be transformed into an institution, even if it takes place in the most honorable form.” However, he later wrote a letter to the Nobel committee asking for just the prize money; he was politely turned down.

My Approach

Okay, the aliens have thrown down the challenge to find an algorithm for SAT. Here is my best guess today for an approach. I would first use the powerful results from PCP, which show that I “only” need to find an algorithm for 3-SAT that beats the trivial bound of satisfying a {7/8} fraction of the clauses. The exact theorem that we are using is:

Theorem: Let {\epsilon>0}. Suppose there is a polynomial-time algorithm that, given any satisfiable set of 3-SAT clauses, satisfies at least a {7/8+\epsilon} fraction of them. Then P=NP.
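
To see why {7/8} is the trivial bound: a uniformly random assignment satisfies a clause on three distinct variables unless all three literals come out false, so each clause is satisfied with probability {7/8}, and the expected fraction of satisfied clauses is {7/8}. Here is a minimal Monte Carlo sketch in Python; the toy instance and the function name are hypothetical, just for illustration.

```python
import random

def random_assignment_fraction(clauses, num_vars, trials=5000, seed=0):
    """Estimate the fraction of 3-clauses satisfied by a uniformly random
    truth assignment.  Each clause is a tuple of three nonzero ints:
    literal v means variable v, literal -v means its negation.  A clause on
    three distinct variables fails only when all three literals are false
    (probability 1/8), so the expected fraction is 7/8."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        sat = sum(1 for c in clauses
                  if any(assign[abs(lit)] == (lit > 0) for lit in c))
        total += sat / len(clauses)
    return total / trials

# A toy satisfiable instance, made up for illustration.
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, 3, 4), (-1, -2, -4)]
print(random_assignment_fraction(clauses, num_vars=4))  # ~ 0.875 = 7/8
```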

Let {\cal C} be a set of 3-clauses that are satisfiable. I would then divide sets of clauses into two cases:

{\bullet} In this case I will assume that the clauses {\cal C} are “random.” This means that the clauses have enough “randomness,” in some sense, that the known algorithms that work well on random SAT can work on these clauses; a small sketch of such an algorithm follows this list. I have no exact idea how to make this precise, but I hope you see the idea.

{\bullet} In this case I will assume that the clauses are not “random.” This means that there is some inherent structure to the clauses. I will hope that I can exploit this structure to show that there is a fast method for solving the SAT problem on structured clauses.
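
For the first case, here is a minimal sketch, in Python, of the kind of local-search heuristic, in the spirit of WalkSAT, that does well on random 3-SAT instances below the satisfiability threshold. The function name, the noise parameter, and the flip budget are my own illustrative choices; this is not a general-purpose SAT algorithm, only the flavor of procedure the “random” case would lean on.

```python
import random

def walksat(clauses, num_vars, max_flips=100_000, noise=0.5, seed=0):
    """Minimal WalkSAT-style local search for 3-SAT.

    `clauses` is a list of tuples of nonzero ints (v = variable v is true,
    -v = variable v is false).  Returns a satisfying assignment as a dict,
    or None if none is found within the flip budget.  Heuristics of this
    flavor handle random 3-SAT well below the satisfiability threshold;
    they are not a general SAT algorithm.
    """
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                      # all clauses satisfied
        clause = rng.choice(unsat)             # focus on a violated clause
        if rng.random() < noise:
            var = abs(rng.choice(clause))      # random-walk step
        else:
            # Greedy step: flip the variable in the clause that leaves
            # the fewest clauses unsatisfied afterward.
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not satisfied(c))
                assign[v] = not assign[v]
                return count
            var = min({abs(lit) for lit in clause}, key=unsat_after_flip)
        assign[var] = not assign[var]
    return None
```

On small satisfiable instances a procedure like this should succeed almost immediately; the interesting question is which instances make it stall, since those would be candidates for the “non-random” second case.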

This randomness-versus-structure dichotomy is the idea that was used so brilliantly by Ben Green and Terry Tao in their famous proof that the primes contain arbitrarily long arithmetic progressions. Perhaps we can use it on SAT.

The fundamental issue here is exactly what I mean by “random” and “non-random.” The simple answer is that I do not yet have a precise notion. One way to approach this could be to try to characterize those clauses that the best SAT solvers fail on. If we could somehow understand what causes them to fail, then we would understand what “non-random” means in this context.

Then, we could attempt to show that non-random clauses have some additional properties that allow us to design an algorithm for such clauses.

Open Problems

The obvious question is whether we can make any progress on P=NP using this or some related approach. There are two things I like about it. First, it uses one of the deepest results known in computational complexity, the PCP theorem. Second, it is looking for an algorithm, which we are good at finding. Just about every breakthrough in theory can be viewed as a new clever algorithm. This even holds for lower bounds. For example, the bounds on constant-depth circuits are based on clever algorithms for approximating certain kinds of circuits by low-degree polynomials.

Finally, if you get this to work, when you accept your Clay award and your million dollars please at least mention the blog.

10 Comments
  1. Alex Nikolov
    April 27, 2009 3:39 pm

    Maybe your point about the 15 puzzle being unsolvable from some initial arrangements would be clearer if you had a picture showing such an arrangement. There is one on Wikipedia with 14 and 15 swapped, for example. I think it would be a more useful illustration than showing a solved puzzle; the end goal is pretty clear without a picture.

    I love your idea for a fast SAT algorithm.

  2. April 28, 2009 2:17 am

    It is possible to look at the approximation schemes for dense instances of Max 3SAT as an instantiation of this idea.

    I think that before hoping to do better than 7/8 for general Max 3SAT instances, one should tackle the problem of distinguishing random 3SAT instances with, say, n log n clauses and n variables from satisfiable formulas. And to do so in the way you describe, one should have a checkable definition of “quasirandomness” for 3-uniform sparse hypergraphs, such that a random 3-uniform hypergraph with n variables and n log n hyperedges satisfies the definition with high probability. Unfortunately, we know very, very little about the quasirandomness of sparse hypergraphs. I have written a guest post on this subject on Terry Tao’s blog.

    • rjlipton
      April 28, 2009 8:09 am

      I agree. I think of this as a long-term project. But your idea is obviously a great one.

  3. May 24, 2009 6:17 am

    I’ve just spotted this post and I find it very interesting, since it relates the area I feel most comfortable with to an area that I have always found fascinating. Here are a couple of thoughts about your proposal to tackle 3-SAT.

    You draw a very interesting analogy with the use of structure and quasirandomness in additive combinatorics. But the arguments in additive combinatorics do not tend to say of a given object that either it is quasirandom and you are done or it is structured and you are done for a different reason. Rather, they do one of two things. Either they say that if you are not quasirandom then you can pass to some subset where some parameter (e.g. density or energy) goes up, or they decompose a structure into a quasirandom part and a structured part (and perhaps a small part as well).

    What does that tell us about your putative SAT algorithm? I think the decomposition idea is probably more promising: one might be able to prove something like the following, that the set of clauses looks like a random subset of a structured hypergraph. Then to use this one would have to devise an algorithm that generalized the algorithms that work in the random case to some algorithm that worked for random perturbations of the structured case.

    It looks as though the sort of structure one would be looking for might be related to influence of variables: in the random case one would expect variables to have low influence, so perhaps the structured case would involve variables of high influence. Unfortunately, there seem to be a lot of intermediate cases …

    One other remark is that your first step, the project of identifying why random 3-SAT can be solved and attempting to replace “random” by “quasirandom”, seems like something that could in principle be tackled in the short term, even if it is not clear how one would get from there to a proof that P=NP. I’m even tempted to ask what the best references are for algorithms that tackle random 3-SAT.

    I’m not trying to suggest by these remarks that I find it at all likely that P=NP. But I’ll have to think about whether I need to change my prior distribution as a result of your very nice idea of using the PCP theorem.

    • Timothy Chow
      September 8, 2009 5:41 pm

      One difficulty I foresee with this particular structure/randomness dichotomy is that the “structured” cases here are essentially the “hard” cases. For example, if one follows Cook and Mitchell’s suggestion of transforming factoring problems to SAT instances, then one gets “structured” examples that totally stump the algorithms that are good at random 3-SAT. It’s not clear that we have gained anything by isolating “structured” problems in this sense.

      Regarding references for algorithms that are good at random k-SAT, I would start by studying the survey propagation algorithm. It performs extremely well, and there is some very appealing intuition from statistical physics behind it. There has been enough published about survey propagation by now that you may be better off looking for your own favorite reference than taking suggestions from others, but you could always start by looking at this fundamental paper by the inventors of the method.

  4. Carsten Milkau
    September 22, 2010 5:10 am

    It surprises me that attempts on the P=NP question focus so much on NP problems. It is a well-known fact that P=NP implies P=PH. Schaefer and Umans (SIGACT 2002) give a nice little compendium of second- and third-level complete problems. And of course, there are the original canonical \Sigma_k- and \Pi_k-complete generalizations of SAT. Showing any of these “harder” problems to be outside P would do as well.

    I also have an interesting question:

    Is there a PH-complete problem? (under deterministic polynomial-time reductions)

    I am not aware of anyone having found such a problem. It is easy to find PH-hard ones, but I do not see why any of them should be in PH. If the answer is no, then \Sigma_i \subsetneq \Sigma_{i+1} for infinitely many i, and \Sigma_i \neq \Pi_i for all i (i.e. PH does not collapse). In particular, we would have co-NP != NP and thus P != NP. If the answer is yes, an explicit PH-complete problem might be a great candidate for studying membership in P. So in either case, we win.

    • Ørjan Johansen
      September 22, 2010 8:26 am

      Isn’t the question of whether there is a PH-complete problem x equivalent to the question of whether the hierarchy collapses? Because that problem would itself have to be at some level of the hierarchy, and the reduction means everything else is at most P^x.

      • Carsten Milkau
        September 22, 2010 9:01 am

        Yes it is. What I’m pointing at with the positive answer is essentially the following path:

        * someone proves that PH = \Sigma_k for some k
        * maybe k gets improved if it is not already very small (maybe even to k=1)
        * eventually, someone realizes that k cannot be improved further,
        thus essentially showing \Sigma_{k-1} \neq \Pi_{k-1}

        The point is that looking at “hard problems in PH” may help us move to the next step in the list.

Trackbacks

  1. Big Red Bits » Prisioners and boxes
  2. A Random Forest» Blog Archive » P ≠ NP
