
Some further comments on the recently claimed proof that P is not equal to NP

Yuri Manin is a great mathematician, who has worked in various areas and also is well known for his many excellent expository works. He has won numerous prizes, including the Nemmers Prize in Mathematics in 1994, and the Cantor Medal in 2002. One of his quotes is:

A proof only becomes a proof after the social act of “accepting it as a proof”.

Today is a follow up to Sunday’s post on Vinay Deolalikar’s claimed P ${\neq}$ NP proof. The community is working on deciding whether or not to accept his proof as a “proof.” No one has yet had a chance to study his 102-page draft paper in full depth. (Note—this is the author’s update today, but page references up through page 77 seem to be identical with the version linked before.) We do not wish to be part of a “rush to judgment”—the author has advanced serious and refreshingly new ideas of definite value, and deserves time and space to develop them more fully.

We—Ken Regan and I—do wish to be part of the discussion of the paper, and help in any way we can to facilitate the resolution of whether it is a proof or not. Hence we have jointly written this update.

We think that Vinay Deolalikar should be thanked for thinking about this difficult problem, and for sharing his ideas with the community. Whether he got it right or not, he has tried to add to our understanding of this great problem. We need more people working on hard problems. If no one does, then they never will be solved.

Possible Issues With The Proof

Even after only about a day, many serious comments have been raised about the proof. Here we collect and summarize what seem to be the four main objections—specific objections that need to be answered in any subsequent development. The objections are related in that they all question the author’s deduction that P = NP implies the existence of a polynomial-time algorithm of a certain specific kind, which he then argues leads to a contradiction involving rigorously-known statistical properties of random ${k}$-SAT instances, for large enough ${k}$. Some of the objections converge on the same point in the paper, notably Section 7.2. They seem, however, to be mathematically separate—insofar as patching one might not patch the others.

Each has been voiced by more than one reader, in comments in this and other weblogs. We’ve ordered them for ease of discussion—this is not an opinion on which is most important. This post was first drafted with the comments we saw as of 4pm EDT Monday, and apologies in advance for ones missed in the meantime. Moreover, we expect that some of the objections can be answered topically—this is already evinced by some of our referenced comments, and is a goal of the conversation we are trying to promote.

1. The ordered-structure issue. This is raised by James Gate here, by Arthur Milchior here, and by David Barrington (also quoting Lance Fortnow) here. Note, however, that Barrington here and Milchoir in reply may have patched their part of this point.

For background, logical formulas can be classified according to whether they reference a presumed total order ${<}$ on the elements of the structures they describe. For certain logical vocabularies, such an ordering can be defined even if there is not an explicit symbol for it, so the issue doesn’t matter. This is the case for formulas characterizing NP. The class P is known to have characterizing formulas over the FO(LFP) vocabulary when the explicit order is present, but whether this is true when the order is absent is a major open question, which we addressed in this blog last April 5. It is also called a “major question” in the Ebbinghaus-Flum book referenced on page 67 of Deolalikar’s paper. The paper attempts to compensate by adding a pre-defined relation symbol “${R_E}$” to the vocabulary, and asserts that past this point the synthesized order relation will not be needed, but there is doubt over whether the details in the paper suffice to accomplish this.

2. The paper may not handle tupling correctly. This is raised by Paul Christiano here, and is part of Barrington’s comment here.

For background, the paper requires that a certain predicate—let’s call it ${Z(\cdot)}$—in the FO(LFP) formula be unary. The formula needs, however, to express properties of ${k}$-tuples of elements of the underlying structure (where it might be possible to make ${k}$ the same as the “${k}$” in the ${k}$-SAT problem being analyzed). Standardly one passes to a new structure whose elements are ${k}$-tuples over the old structure. However, this can greatly distort adjacency and distance-${d}$ neighborhood relations from the original structure. The author provides results with counting arguments that aim to limit the distortion, asymptotically as the size ${n}$ of the (original) universe goes to infinity. However, there is fundamental doubt about whether these pieces fit together. (It is possible that the objectors are overlooking material early in section 3 of the paper that can make them fit—again we have not gone to full detail here.)

3. The LFP characterization of P may not accomplish what the author needs. This is also raised by Christiano, paragraph with “More simply”, and was the cautionary point of our initial reaction here.

For background, consider the problem of determining whether a given context-free grammar ${G}$ derives the empty string, which is complete for P under logspace reductions, and in that sense captures polynomial-time complexity. Initialize a set ${N}$ to be the empty set. Then iterate the following process:

For every grammar rule ${A \longrightarrow X}$ with ${X \in N^*}$, add the variable ${A}$ to ${N}$.

Since ${\emptyset^*}$ matches the empty string, this initially picks up all variables that derive the empty string directly. Further iterations may add more variables. The process stops and outputs the final ${N}$ when the last iteration goes through all the rules and finds no new variables to add. This must happen within ${O(n)}$ iterations since each adds at least one new variable to ${N}$. Then ${N}$ is the Least Fixed Point (LFP) of the iteration operation, and the problem answer is “yes” provided the start symbol ${S}$ of the grammar belongs to this ${N}$.
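The iteration just described can be sketched in a few lines of Python (our own toy encoding, not from the paper: `rules` is a list of pairs `(A, rhs)` where `rhs` is a tuple of right-hand-side symbols):

```python
def derives_empty(rules, start):
    """Least-fixed-point iteration: N grows until no rule adds a new
    variable; the grammar derives the empty string iff start is in N."""
    N = set()
    changed = True
    while changed:
        changed = False
        for A, rhs in rules:
            # A -> X with X in N*: every right-hand-side symbol is
            # already known to derive the empty string
            if A not in N and all(s in N for s in rhs):
                N.add(A)
                changed = True
    return start in N

# S -> AB, A -> eps, B -> eps: the start symbol derives the empty string
rules = [("S", ("A", "B")), ("A", ()), ("B", ())]
print(derives_empty(rules, "S"))  # True
```

Each pass through the rules is one application of the LFP operator; termination within $O(n)$ passes is exactly the monotonicity argument in the text.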

One can write an FO(LFP) formula that yields this algorithm. The meta-question on which this objection is based is, must every FO(LFP) formula (for this query) yield an algorithm that is similarly tractable to analyze? Going further, does Deolalikar’s logical vocabulary in section 7.2 even have enough richness to guarantee such an algorithm at all? The paper does directly address this matter in section 7.2, but the references therein do not yet seem to cinch the argument. The two objections above may be contributing factors to the lack perceived here, but this is raised as a separate conceptual matter.

4. The paper may target the wrong hardness phase of randomized ${k}$-SAT. This is raised by Cris Moore here and is corroborated by Alif Wahid in reply here.

This seems to be the most mathematically detailed objection. It might show that the author’s proof strategy is on the wrong track, in advance of patching any of the other objections. It might alternately show the best road to getting surprising and concrete valuable upper-bound results about (randomized) ${k}$-SAT from Deolalikar’s work, even if the lower-bound consequences are elusive. Certainly we should note the interesting succession of rigorously-proved results referenced both by Deolalikar and by these commenters.

There have been various comments critiquing other aspects of the paper, and some other long ones on the mathematical content, but these seem to be the most focused objections to now.

Suresh Venkatasubramanian has linked from his wonderful GeomBlog a “Google Doc” that similarly aims to collect evaluations of the paper. One must be a registered member of Google and log in (or be logged in) when clicking his link.

Open Problems

Still there remains the key question: is the proof correct? In one sense the present paper almost surely has mistakes—not just from the above objections but what one could expect of any first-draft in a breakthrough situation. The real questions are, is the proof strategy correct, and are the perceived gaps fixable?

129 Comments
• August 14, 2010 8:01 am

On 10 Aug, 06:50, Niels Fröhling wrote:

> Up to date reactions, comments of the community/researchers (summary):

Deolalikar may possibly have proven the lesser significant of either P!=NP (not the more ‘unthinkable’ P=NP) …

it appears New Generation Lossless Data Representations likely point the way forward to prove the ‘converse’ P=NP feasible!

http://lofi.forum.physorg.com/New-Generation-Lossless-Data-Representations_28214.html

1. August 9, 2010 8:53 pm

Some updates: Another look at the “Google Doc” turned up a comment by Ryan Williams on Twitter as “rrwilliams”. (Perhaps Deolalikar’s assertions about the particular form of the for-contradiction LFP-based algorithm would head off that argument, if they are correct.)

The original Slashdot item has a comment by syygates along similar lines about FO(LFP).

Commenter vloodin has elaborated on his impressions in yesterday’s original item, first with a separate query about the structure of constraint-satisfaction problems involved, and then with a point about unary relations that seems to fall under heading 2. above.

• harrison permalink
August 10, 2010 11:39 am

Because this is a silly point to argue 140 characters at a time, and the wiki isn’t terribly conducive to raising objections, I’ll leave a comment here.

I don’t think Ryan Williams’ Valiant-Vazirani objection is relevant, unless I’m missing something crucial. Deolalikar defines a measure of “complexity” on solution spaces and asserts, essentially, that “complex solution spaces are hard” — that there’s some moral lower bound to computational complexity in terms of solution-space complexity. But there’s no reason to believe that “simple solution spaces should be easy,” that there’s an upper bound, and indeed this isn’t necessary to prove P != NP. So unless the Valiant-Vazirani reduction preserves Deolalikar’s notion of complexity, and I don’t know of any reason that it should, the fact that UNIQUE-SAT has simple solution spaces is a non-issue.

• Ryan Williams permalink
August 10, 2010 12:28 pm

My objections are not rigorous, in the sense that I am not directly pointing to a place in his proof where it breaks. I am merely pointing out that the “complexity” of a solution space is neither necessary nor sufficient for NP-hardness. So a proof that is supposed to rely on properties of the solution space of random k-SAT (and then argue from the NP-hardness of k-SAT) seems very strange, and quite fragile.

I am not sure what complexity measure you are referring to, but let me point out that the Valiant-Vazirani reduction can be viewed as simply adding uniform random XOR clauses to a formula.
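Ryan’s description can be illustrated with a toy sketch (our own, not the full Valiant-Vazirani reduction): each added clause is a uniformly random parity condition, each surviving assignment passes it with probability 1/2, so roughly log2 of the number of solutions suffices to isolate one with constant probability.

```python
import random

def random_xor_clause(n):
    """A uniformly random XOR clause on n Boolean variables: a random
    subset S of the variables and a target bit b, requiring the XOR of
    x_i over i in S to equal b."""
    subset = [i for i in range(n) if random.random() < 0.5]
    return subset, random.randrange(2)

def satisfies(x, clause):
    subset, b = clause
    return sum(x[i] for i in subset) % 2 == b

# Adding random XOR clauses thins out a solution set:
random.seed(1)
n = 12
assignments = list({tuple(random.randrange(2) for _ in range(n))
                    for _ in range(256)})
for _ in range(8):
    c = random_xor_clause(n)
    assignments = [x for x in assignments if satisfies(x, c)]
# each clause keeps each assignment with probability 1/2, so typically
# only a few of the original assignments remain
print(len(assignments))
```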

• harrison permalink
August 10, 2010 12:38 pm

I am merely pointing out that the “complexity” of a solution space is neither necessary nor sufficient for NP-hardness.

Okay, fair enough. I think my point was simply that one doesn’t need to have a necessary condition for problem A to lie outside of P in order to separate P from NP, just a sufficient one. But I suppose the point about Valiant-Vazirani can be considered “circumstantial evidence” against the idea that solution-space complexity should be closely tied to computational hardness, whether or not it can be made into a rigorous objection against the proof.

• Daniel permalink
August 10, 2010 4:31 pm

Regarding known algorithms, Unique-k-SAT turned out to be easier than (general) k-SAT in terms of worst case running bounds.

Remember that Valiant-Vazirani increases clause width.

• Ryan Williams permalink
August 10, 2010 5:05 pm

Unique-k-SAT turned out to be easier than (general) k-SAT in terms of worst case running bounds.

Daniel: As far as I know that is yet to be proved. Yes, the current algorithms for Unique-k-SAT are faster, but there is opposing evidence as well. There are “subexponential” reductions from k-SAT to Unique-k-SAT (as k becomes arbitrarily large), which are basically a more careful Valiant-Vazirani. See http://cseweb.ucsd.edu/~ccalabro/cikp.pdf

• Daniel permalink
August 10, 2010 5:19 pm

Ryan: Yes, I remember the isolation lemma, and I believe that one day someone will prove that k-SAT is subexp reducible to Unique-k-SAT for every k. Would this break the proof?

So at the end, the structure of the solution space would not matter anyway?

2. harrison permalink
August 9, 2010 9:14 pm

To my untrained eyes, the most serious part of Cris Moore’s objection is the observation that random k-XORSAT seems to exhibit the same kind of clustering behavior as random k-SAT — if this could be made rigorous, it would seem to preclude any proof along the lines of Deolalikar’s argument, or at least any proof that didn’t use some other fundamental property of k-SAT. But I’m still not totally clear on exactly what statistical properties of random k-SAT he’s using, so it’s possible that random k-XORSAT does avoid them.

3. August 10, 2010 12:15 am

As is the way of the web these days, I’ve created an “update to one ongoing question” domain:

http://doespequalnp.com

to try and keep track of discussions and keep the rest of the world (or at least, CS grads who never did much with theory after schooling was over) interested. Oddly enough, an article on complexity has already shown up on AOL’s news site because of this, so while I don’t think we can plan on seeing Cook and Deolalikar gracing the red carpet at this year’s Oscars, there might be some mainstream hook that it’d be worth trying to jump on to keep people interested.

And if not, well, all I’m out is a domain name registration.

4. bob permalink
August 10, 2010 12:41 am

As an amateur who has tinkered with P vs. NP in the past, I just wanted to suggest a possibility.

In the Travelling Salesman Problem (symmetric path version) it seems likely that it is possible to construct a path from a random set of points in the 2-D plane in such a way that at all times the subset path is a minimum length path. It also seems likely that the construction time is polynomial for almost all (Poisson) random configurations of points in the plane. All subprocesses in the construction are known (in the peer reviewed literature) to have polynomial time.

What if proving P vs NP one way or another actually hinges on a small sub-class of configurations that is small enough to make P = NP almost surely. Would the community accept this as a proof of P = NP, if say, countable subsets of zero measure were overlooked in some integration (Lebesgue-style)?

• August 10, 2010 7:49 am

The fact that almost all instances of an NP-hard problem are easy doesn’t give you any argument for P=NP. And practically, once you start reducing another hard problem to the one you think is almost always easy, you’ll find that “magically” you get only hard instances.

5. Cristopher Moore permalink
August 10, 2010 1:01 am

Indeed, random k-XORSAT has very similar statistical properties to random k-SAT. There is a clustering transition at which an exponential number of clusters appear, with a Hamming distance Theta(n) between clusters. These things are known rigorously, and the density at which the clustering transition occurs is known exactly since it corresponds to the appearance of the 2-core in the hypergraph of clauses. The situation for k-SAT is much more complicated, but we can obtain bounds on the clustering transition using clever applications of the first and second moment methods [AR-T and AC-O], and the resulting picture seems similar.

Deolalikar dismisses k-XORSAT in his Remark 7.12, saying that the issue is “the number of independent parameters”, but I don’t understand what he is getting at. Following the tupling issue (#2 above), it seems that the whole point is that polynomial-time algorithms are not limited to local interactions between the original variables of a problem. When we solve an XORSAT instance using Gaussian elimination, we end up coupling the bits of the problem globally. As Deolalikar says, in XORSAT the clusters can be given a succinct description in terms of a linear basis; but isn’t the difficulty proving that we can’t do something similar for k-SAT?

I am inclined to believe, along with physicists like Zdeborova, Krzakala, Mezard, Zecchina, and others, that k-SAT is hard in the region where w.h.p. all clusters have frozen variables. In the CS literature this is often confused with the clustering transition, perhaps because the rigorous results of Achlioptas, Ricci-Tersenghi, and Coja-Oghlan apply in the large-k limit where the “clustering” and “freezing” transitions take place at roughly the same density. k-SAT and k-coloring for small k turn out to be somewhat special, and as I mentioned earlier there is ample evidence that clustering alone doesn’t make them hard.

As you say, I may be missing precisely what statistical property of the 1RSB phase Deolalikar is relying on, but it would have to be something that SAT has but XORSAT lacks. At the moment I am quite skeptical that this works.
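For readers who want to see the global coupling Moore mentions concretely, here is a minimal GF(2) Gaussian-elimination solver for XORSAT (our own sketch; each equation is a bitmask of participating variables plus a right-hand-side bit):

```python
def xorsat_solve(equations, n):
    """Solve a XORSAT system over GF(2). Each equation is a pair
    (mask, b): the XOR of the variables whose bits are set in mask must
    equal b. Returns a satisfying assignment (free variables set to 0)
    or None if the system is inconsistent."""
    eqs = [list(e) for e in equations]
    pivots = []          # (pivot column, pivot row)
    row = 0
    for col in range(n):
        piv = next((i for i in range(row, len(eqs))
                    if eqs[i][0] >> col & 1), None)
        if piv is None:
            continue     # no pivot here: this variable is free
        eqs[row], eqs[piv] = eqs[piv], eqs[row]
        for i in range(len(eqs)):
            # eliminate this column from every other equation --
            # note how each step couples variables globally
            if i != row and eqs[i][0] >> col & 1:
                eqs[i][0] ^= eqs[row][0]
                eqs[i][1] ^= eqs[row][1]
        pivots.append((col, row))
        row += 1
    if any(mask == 0 and b == 1 for mask, b in eqs):
        return None      # a row reading 0 = 1: unsatisfiable
    x = [0] * n
    for col, i in pivots:
        x[col] = eqs[i][1]   # free variables are 0, so the rhs bit decides
    return x

# x0^x1 = 1, x1^x2 = 1, x0^x2 = 0 has the solution x = [0, 1, 0]
print(xorsat_solve([(0b011, 1), (0b110, 1), (0b101, 0)], 3))
```

The elimination step is exactly the non-local operation that a purely neighborhood-based analysis of the variables would miss.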

• harrison permalink
August 10, 2010 8:56 am

XORSAT is complete for Parity-L, right? So it seems like an argument proving, say, NL != NP might avoid this problem, at least, unless I’m misremembering what we know about space inclusions (very possible.) Do you know of any problems in NL (or L) that exhibit this sort of clustering?

• August 11, 2010 3:51 pm

2-SAT is NL-complete (and O(n)). I’ve heard that it has similar statistical properties to k-SAT, though I don’t know the details.

• August 10, 2010 9:42 am

Dear Cris,
Would Deolalikar’s argument show that random k-SAT is hard in some region? Somehow this always looked to me like a much harder statement than k-SAT being hard in the worst case.

• Cristopher Moore permalink
August 10, 2010 12:03 pm

I think so, but I can’t pretend to understand the logic part of his proof — I’m just trying to parse which statistical properties of k-SAT he’s relying on.

• August 10, 2010 1:46 pm

Dimitris Achlioptas has given me some notes on k-SAT and k-XORSAT which I have transcribed on the wiki at

http://michaelnielsen.org/polymath1/index.php?title=Random_k-SAT

He points out that there is at least one distinction between random k-SAT and random k-XORSAT, namely that the codewords in k-XORSAT are always linearly related, whereas for k-SAT they should (intuitively, at least) be independent. A large part of Deolalikar’s argument seems to be focused on quantifying this independence property. It still does not seem clear exactly how this is done yet, but it does seem to indicate that XORSAT is not a completely fatal counterexample to the method.

• August 10, 2010 2:18 pm

About XOR-SAT, that is precisely the point Cris talked about above: how does the proof show that there is no such symmetry or relation between the solutions in k-SAT? Imagine for a moment that it is forbidden to think about linear spaces and Gaussian elimination. Then to my knowledge XOR-SAT is as hard as k-SAT.

Remarks about Dimitris’s notes:

* Dimitris’s definition of the d1RSB is not the one used in statistical physics either. In physics, d1RSB is defined via divergence of the Monte Carlo equilibration time, and is equivalent to extremality of the Gibbs measure and to reconstruction on trees. And there are a couple of rigorous results about those issues for SAT and coloring as well.

* But independently of the definition of what is called d1RSB: Deolalikar uses the frozen phase and defines it via non-trivial cores, which agrees with the definitions in physics. And several works in physics (in particular http://arxiv.org/abs/0704.1269) conjectured that it is indeed this phase that makes problems hard for a large class of algorithms (not including algorithms that exploit linear symmetry, like Gaussian elimination).

• Cristopher Moore permalink
August 10, 2010 2:33 pm

The upshot of all this is that Deolalikar has to make a distinction between SAT and XORSAT based on something other than their statistical properties. I guess it’s up to the logicians to tell if he has done so.

6. Pied permalink
August 10, 2010 1:57 am

My very high level in complexity theory allowed me to notice that you wrote “Milchoir” and “Milchior”, and that almost certainly one of those two spellings is wrong.
I wish I could understand more of the discussion about the paper (or the paper itself), but it is still a great pleasure to read your blog and its comments.

Cheers,

P!

• Ryan permalink
August 11, 2010 10:07 am

Unfortunately your argument that “Milchoir” != “Milchior” (I’ll call this problem OI != IO) is insufficient to prove that one of those spellings is wrong. I am going to have to see a much more comprehensive writeup with more complex wordage.

Other than that, I agree… the topic is very interesting even if I can’t understand most of the discussions.

Ryan

7. August 10, 2010 2:46 am

Great Article.

8. Thomas Schwentick permalink
August 10, 2010 3:36 am

There is another issue with the locality in remark 3 of Section 4.3. Moving from singletons to tuples destroys locality: this is because the distance between two tuples is defined on the basis of their participating elements. For example, if two tuples have a common element then their distance is (<=) 1. Thus, even if in the "meta-structure" two tuples are far apart, they can be neighbors because of their individual elements.
I must admit that I have not read the rest and cannot judge whether this issue is crucial in the following.
(Sorry for double-commenting. I missed that there is this newer post on the matter)
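Schwentick’s observation is easy to see concretely. Under the simplified convention that two tuples are adjacent in the tuple structure whenever they share an element (our stand-in here, not necessarily the paper’s exact distance notion), elements that are far apart in the original structure can end up next to each other:

```python
def tuples_adjacent(t1, t2):
    """Two k-tuples are neighbors if they share at least one element
    (a simplified version of the tuple-distance convention above)."""
    return bool(set(t1) & set(t2))

# Take a path 0 - 1 - ... - 99, so elements 0 and 99 are at distance 99
# in the original structure. Yet the 2-tuples (0, 50) and (50, 99) are
# immediate neighbors, because they share the element 50.
print(tuples_adjacent((0, 50), (50, 99)))  # True
print(tuples_adjacent((0, 1), (98, 99)))   # False
```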

9. vloodin permalink
August 10, 2010 3:49 am

Nice to see a separate blog discussing the mathematics of the proof.

1. I do not get the order objection that many are raising. Certainly an order relation is assumed in the proof. What is the problem then?

2. (and also 3.) This seems to be the problem: the reduction to unary relations, and also the form of computation.

The key point in the argument is that in the graph that is considered in chapter 7 (by considering model of computation) every vertex has a degree that is quasi-bounded, O(poly(log(n))) as n goes to infinity. However, this seems to fail in his reduction.

• August 10, 2010 4:58 am

There are two different problems with the order. The successor relation is assumed in chapter 7, but when you read chapter 4 you do not see it. So if you read chapter 4 without reading chapter 7, you think his statement of the theorem is false, since he does not speak of any successor relation.

The second problem with the order is similar to the second remark of this post. It is that, if you have an order relation, the Gaifman graph becomes total (he states it). But since you can define the order relation in this formalism thanks to succ and lfp, it seems that there is no fundamental reason to reject order. To state it another way: if this part of the proof is correct, it seems it should also be correct with an order relation. (Or, more generally, with any relation that you can define with an LFP, which may not be unary and would then change the Gaifman graph; this seems to be the second remark of this blog post.)

• August 10, 2010 11:19 am

Which gives me an idea; I hope it may be interesting for solving this potential problem:

Abiteboul, Vianu, and Vardi defined relational machines (http://portal.acm.org/citation.cfm?id=256295) and a nondeterministic fixpoint. P_r and NP_r are the equivalents of P and NP on relational machines; they proved that P_r = FO(LFP) and NP_r = FO(NLFP) (without order this time).

And as an extension of the well-known Abiteboul-Vianu theorem, they proved that P = NP iff P_r = NP_r.

Hence if it were true that k-SAT is in NP_r, it would be enough to prove that k-SAT is not in FO(LFP), without having to wonder whether there is an order or not. So it seems that the order/successor relation could be removed from the proof, which would remove one potential problem!
(But I do not know whether k-SAT is in NP_r, and if it is, it does not seem trivial.)

10. August 10, 2010 4:49 am

Could you please correct the typo in the second occurrence of my last name? It’s “Milchior” and not “Milchoir”.

11. Daniel permalink
August 10, 2010 6:09 am

The author writes “The Hamming distance between a solution that lies in one cluster and that in another is O(n).”. Isn’t it always like that?

12. Robert permalink
August 10, 2010 7:14 am

Just to step back for a minute; Ken Regan initially described the proof as consisting of 5 steps:

“Deolalikar has constructed a vocabulary V which apparently obeys the following properties:

1. Satisfiability of a k-CNF formula can be expressed by NP-queries over V—in particular, by an NP-query Q over V that ties in to algorithmic properties.
2. All P-queries over V can be expressed by FO+LFP formulas over V.
3. NP = P implies Q is expressible by an LFP+FO formula over V.
4. If Q is expressible by an LFP formula over V, then by the algorithmic tie-in, we get a certain kind of polynomial-time LFP-based algorithm.
5. Such an algorithm, however, contradicts known statistical properties of randomized k-SAT when k >= 9.”

Is it correct to say that all the doubts concern step 3? (i.e whether P=NP really implies that an NP query over V can be expressed by an LFP+FO formula over V)

• August 10, 2010 11:41 am

Basically yes, with your “i.e.” statement in place of my 2. & 3. On second look I mis-wrote 2.—“P-query” was meant to correspond intensionally to “P(EQ)” in my analogy which followed, but it’s really extensional. In general, issues 1. and 2. show that the whole thing has to be looked at more expansively than my first reaction. (This is also partly why we did not include my outline in the new post—and we cut other things so as to minimize interposing ourselves between the paper and the comments.) One can also interpret “issue 4.” as a question about step 5.

Replying to Daniel just above, indeed that’s why we put “O(n)” in quotes in the first post; of course Theta(n), or at least Omega(n), is meant. And to Arthur, sorry about the typo—I didn’t proofread as carefully as with other posts. It’s not easy to change.

13. proaonuiq permalink
August 10, 2010 7:55 am

I’ve now completed a first, very superficial look at the paper. As was said in the previous thread, those familiar with the preliminary material can skip chapters 1 to 6, except the fourth.

In the seventh, the author summarizes what IMHO are the key high-level strategic points of the proof:

1) “If LFP were able to compute solutions to the d1RSB phase of random k-SAT, then the distribution of the entire space of solutions would have a substantially simpler parametrization than we know it does”.

That’s the modus tollens (I assume that “than we know it does” refers to a theorem).

2) “The framework we have constructed allows us to analyze the set of polynomial time algorithms simultaneously, since they can all be captured by
some LFP, instead of dealing with each individual algorithm separately”.

So that non-experts can understand the whole issue, I will try to rephrase these two propositions in a very simplified way:

1. The d1RSB phase of random k-SAT = hard problems in NP.
2. LFP = P.
3. If P computes hard problems in NP, a contradiction is derived.
Therefore P != NP.

If 1 and 2 are correct, 3 is clearly correct. If I’m not wrong, your points 1 and 3 raise doubts about proposition 2, and your point 4 raises doubts about proposition 1. It is not clear to me where your objection 2 fits in the above argument.

IMHO the approach is strategically acceptable. It remains to be confirmed by experts whether it is also valid tactically.

P.S.
1. Thanks!
2. Come on, guys! The first person who confirms the proof or finds the flaw in it will be as famous as the author of the proof!

• Scott Schulz permalink
August 10, 2010 11:04 am

As a stochastics specialist, twenty years out of grad school, I found this comment extremely helpful. Thanks for framing the proof at this level.

14. Jason permalink
August 10, 2010 8:01 am

It seems to me one of the key claims is that, essentially, all polynomial-time algorithms depend on exploiting locality, in the form of conditional independence and the sum-product algorithm. I don’t quite understand the logic parts of his proof of this claim. But like Cris I wonder how Gaussian elimination and similar cancellation-based algorithms (like Valiant’s holographic algorithms) can be understood to derive their efficiency ultimately from the sum-product algorithm.

15. August 10, 2010 8:07 am

Just a note that Suresh’s Google Docs file on the paper has now been moved to the polymath wiki, where it has been broadened to serve as an aggregator for all the news and information about the paper that is coming in:

http://michaelnielsen.org/polymath1/index.php?title=Deolalikar's_P!%3DNP_paper

Of course, contributions are very welcome.

16. August 10, 2010 9:50 am

Serious (unfixable) problem.

Vinay uses FO(LFP) as his main tool. However, I think he fails to realize that FO(LFP), although defined using least fixed points, also contains greatest fixed points.

Thus, if \mu x f(y,x) is the formula denoting the least fixpoint of f(y,x) under x,
and \nu x f(y,x) is the formula denoting the greatest fixpoint of f(y,x) under x,

then there is this triple negation rule:

\neg \mu x \neg f(y, \neg x) = \nu x f(y,x). (See Kozen’s paper on axiomatizing the Mu-Calculus, or my paper with Allen Emerson in FOCS 1991 titled “Mu-Calculus, Tree Automata and Determinacy”, or just the original Vardi and Immerman papers.) Note that the left-hand side is well defined in LFP, as x is under an even number of negations from the operator \mu.

Now, his whole theory of least fixed points only increasing the size of the structures falls apart. BTW, this is a usual mistake that people new to fixed-point logics fall prey to.

For example, now he has to deal with formulas of the kind
\nu x (f(y, x) \and g(y, x)).

His section 4.2 deals with just one least fixed point operator…where his idea is correct.

But in the next section, 4.3, where he deals with complex fixed point iterations, he is just hand-waving, and possibly way off. Given that he does not even mention greatest fixed points, I am led to think that this is a huge (unfixable) gap.

Charanjit Jutla
IBM T.J. Watson

• August 10, 2010 12:21 pm

Nice comment. It causes me to wonder, though, whether your objection can be met by using transitive closure in place of LFP, thus getting “only” a proof(-strategy) for NP != NL. Or NP != L if one must use deterministic transitive closure.

• Charanjit Jutla permalink
August 10, 2010 1:28 pm

Hi Ken,

The problem with fixed point logics and their fragments is that the hierarchies, even for propositional fixed point calculi, are not well understood. For example, it is not known whether propositional fixed point logics (commonly called the Mu-Calculus) have a strict hierarchy of expressibility when considering alternation of \mu and \nu operators. For FO, I would think the problem is even more serious.

This is nothing surprising, as fixed point operators are so powerful, giving us P = FO(LFP).

For years, people in the Mu-calculus field have suspected that the mu-calculus is a great formulation, which can lead to deep results. For example, Emerson and I used it to prove Martin’s theorem for determinacy of infinite games up to level 3 of the Borel hierarchy.

However, it requires many simpler (and still hard to prove) results first, rather than jumping straight to complexity separations.

Charanjit Jutla

• Nimmy permalink
August 10, 2010 1:40 pm

(Disclaimer: I haven’t read the article)

What really needs fixing seems to be the handling of k-tuples.
The Gaifman approach doesn’t seem to work at all with k-tuples, as e.g. Thomas Schwentick pointed out, and I don’t see how using only transitive closure instead of LFP would help; this problem will still persist.

• August 10, 2010 2:23 pm

LFP on finite structures with order reduces to a single least fixed point. (That's why the LFP = P proof works.) That's why the question of what he's doing with order is important. I don't understand what he's trying to do, but people I trust assure me the paper is obvious nonsense, so I've only skimmed it.

And I don’t understand your comment about it not being known whether mu-calculus has a hierarchy of expressibility in alternation. It’s 15 years since I (and independently Lenzi) proved that it did!
Are you thinking of a different mu-calculus from the one we know and love (the modal one)?

• August 10, 2010 11:34 pm

OK, I revisited Immerman's paper, and indeed one outer least fixed point suffices with FO. However, there is a trick to that characterization. Given an ${n^k}$-time machine M, it creates a fixed-point formula, say ${\mu R \, \phi(R)}$, and then shows that M reaches its final state on a given structure s iff ${\mu R \, \phi(R)(n^k)}$ holds. While that gives one P = FO(LFP), note that this LFP may not be bounded as it stands; its fixed-point calculation can go on forever, as the universe is now unbounded. This is in contrast to the proof that a fixed-point FO formula on a finite structure can be computed in polynomial time.

So, it is possible that Vinay is just showing that SAT (or whatever hard problem he uses) cannot have a fixed-point FO characterization over the original universe of the structure, or even, say, over a bounded universe.

• August 12, 2010 11:24 am

OK, that "unbounded computation" concern is not valid either… go figure! Basically, given a linear order on the structure, the last node can be detected by a first-order formula.

Maybe there is something to this approach after all. His use of locality theorems for FO formulas, along with the single-least-fixed-point characterization of P, is interesting…
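[For concreteness, the "last node" observation is just the first-order definability of the maximum element of a linear order, which can be written as:]

```latex
\mathrm{last}(x) \;\equiv\; \forall y \, (y \leq x)
```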

17. harrison permalink
August 10, 2010 10:24 am

Another issue: If I understand his argument correctly, Deolalikar claims that the polylog-conditional independence means that the solution space of a poly-time computation can't have Hamming distance O(n) [presumably he means ${\Theta(n)}$], as long as there are "sufficiently many solution clusters." This would preclude the existence of efficiently decodable codes at anything near the Gilbert-Varshamov bound when the minimum Hamming distance is large enough. I don't know for sure, but my gut tells me that such codes absolutely should exist — can AG codes have these sorts of parameters?

18. August 10, 2010 12:28 pm

Just curious: if statistical physics were powerful enough to attack P != NP, then there might be successful efforts to give alternate proofs of simpler complexity results. Is that the case?

• Steve Nuchia permalink
August 10, 2010 2:43 pm

I was working in that direction when my adviser ran out of patience. Not thinking of it as statistical physics specifically, but the same sort of ensemble space reasoning.

I think there is an opening to develop a predictive complexity theory using some of the notions in this work, something that can do for arbitrary problems what we can already do for, say, searching and sorting: put tight asymptotic bounds on the complexity of the best possible algorithm.

19. anon permalink
August 10, 2010 12:51 pm

Well, the link to the updated P vs NP paper is now broken, and the link to it from Deolalikar’s homepage is gone, along with any other mention of a proof of P != NP.
But the link to the original version (from August 6th) is still working…

20. August 10, 2010 1:31 pm

We added a prediction market for the next winner of the Clay Prize:

http://smarkets.com/current-affairs/clay-prize/next-winner

Jason / Smarkets

21. Daniel permalink
August 10, 2010 3:25 pm

Poor guy; it seems that he withdrew the "proof"…

15 minutes of fame.

• rjlipton permalink*
August 10, 2010 3:46 pm

He has removed the paper. Is there a definitive answer yet?

• Daniel permalink
August 10, 2010 4:27 pm

Well, I also got the mail with Cook's famous quote, "This appears to be a relatively serious claim to have solved P vs NP," fuelling my hope, and I guess also many others' hopes, that we finally got it.

That's why I feel for this poor guy: he sent the paper to some experts for comments, and I fear that Cook's comment catapulted the paper onto the web, unleashing the web monster.

Google for “This appears to be a relatively serious claim to have solved P vs NP”. 207 hits already right now.

• Ryan Williams permalink
August 10, 2010 5:08 pm

Dick, he didn’t remove the paper, he just renamed it. According to the above comments, there are multiple versions accessible from his website at the moment.
See

http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/

• rjlipton permalink*
August 10, 2010 5:55 pm

Ryan,

Thanks for the comment.

• Conan B. permalink
August 10, 2010 6:11 pm

He sent the paper to some experts for comments,

This is inaccurate. He has announced the solution of the problem explicitly in the mail, saying: “I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts.”

This is a public announcement.

• rjlipton permalink*
August 10, 2010 6:16 pm

Sunday I got that version too. But I asked him for permission to talk about and to link to his paper. I would not have posted the paper if he had not wished me to.

• August 10, 2010 6:22 pm

Conan, it's clear to anyone who's been following this that the first draft was not intended to be published. I'm not sure if you're just confused or intentionally trying to smear Deolalikar, but it would be nice if you could stop posting misinformation.

• August 10, 2010 7:51 pm

The wiki gives at least three different versions of the paper: 1, 2, and 2+epsilon (2 was removed). Sadly, Deolalikar did not give us a list of the changes. Hence, using pdf2txt and diff, I created one and wrote it into the wiki here: http://michaelnielsen.org/polymath1/index.php?title=Update

There are a lot of false positives, mostly because a new definition changed a lot of the theorem numbers. I removed the false positives before writing the list into the wiki; I really hope I did not remove any real differences, but I cannot be a hundred percent sure.

• Peter d. permalink
August 11, 2010 3:16 am

"Me", I think you are wrong. The email was titled "Proof announcement: P is not equal to NP" and explicitly stated that he was announcing the result. I sincerely think the author was not against the circulation of the paper at that time.

• Anonymous permalink
August 11, 2010 5:34 pm

It is hard to say who leaked the paper to the public, because it got forwarded several times. My guess is that Cook forwarded it to some other colleagues (either after receiving permission from Vinay or by assuming that it was an announcement, given the title and contents of the first email Vinay sent to the experts on Aug. 6, which I have copied below so you can judge for yourself), and then the forwarding chain continued until someone leaked it to the public.

The first public mention of the paper that I have seen is http://gregbaker.ca/blog/2010/08/07/p-n-np/ and the first link to the paper I have seen was in one of the comments on complexity blog.

I guess that Cook's now-famous *one-line* comment that "This appears to be a relatively serious claim to have solved P vs NP" caused people to become too excited, and they didn't pay much attention to the "appears" and "relatively" in his remark. The rest of the story is well known.

Subject of the email:
“Proof announcement: P is not equal to NP”

Contents of the email:

I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts.

The proof required the piecing together of principles from multiple areas within mathematics. The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens. Second to this were the technical hurdles faced at each stage in the proof.

This work builds upon fundamental contributions many esteemed researchers have made to their fields. In the presentation of this paper, it was my intention to provide the reader with an understanding of the global framework for this proof. Technical and computational details within chapters were minimized as much as possible.

This work was pursued independently of my duties as a HP Labs researcher, and without the knowledge of others. I made several unsuccessful attempts these past two years trying other combinations of ideas before I began this work.

Comments and suggestions for improvements to the paper are highly welcomed.

Sincerely,
Vinay Deolalikar
Principal Research Scientist
HP Labs
http://www.hpl.hp.com/personal/Vinay_Deolalikar/

22. August 10, 2010 6:12 pm

except that it’s not linked off his web page any more, by the look of it.

• August 10, 2010 6:26 pm

Yeah, he removed the blurb from his HP page when he removed the updated version. I guess someone will have to check with him directly if this is a full retraction or just a return to stealth mode while he’s attempting to address the various concerns…

23. Anonymous permalink
August 10, 2010 6:36 pm

“attached in 10pt and 12pt fonts.”

That part of the "announcement" sounded funny.

24. Anonymous permalink
August 10, 2010 7:12 pm

I have been trying to read the initial document for the last two days. Of course the barriers question lingers on, along with many other issues, from random k-SAT, to fixed points, to the GV-bound-related ones.

But there's something nice about the way he builds up the connection between model theory and statistical physics.

For all the flaws for which the proof will become "toast", I think the community should raise a toast to his attempt at building a new connection. It might prove worthwhile in the future.

Prof. Lipton's and Regan's efforts are marvellous in that respect.

25. August 10, 2010 9:07 pm

Looking forward to the latest results and research.

26. Vijay G. permalink
August 10, 2010 10:13 pm

A potential bug in Deolalikar’s proof
=========================

I believe Vinay's proof has at least one bug. I believe he is considering only those algorithms that determine satisfiability by constructing assignments to the variables in the input k-SAT formula. He then purportedly shows that if these algorithms run in polynomial time, we have a contradiction.

THE BUG IS THE FOLLOWING:

It is not necessary for an algorithm to assign variables in order to determine satisfiability. It is entirely possible that there is some as-yet-undiscovered algorithm that determines k-SAT satisfiability simply by syntactic manipulation of the input k-SAT formulas.

Any proof of P!=NP (that is based on k-SAT), must show that there is no polynomial time algorithm for k-SAT. It shouldn’t matter how satisfiability is determined.

The following is an example of an algorithm that determines satisfiability of a simpler kind of SAT formula without generating a partial assignment. I call these formulas 'implication clauses' with only 2 variables (implication clauses with two variables are essentially formulas of the form x1 => x2, x3 => negation(x4), …):

1) Chain the implications in the input appropriately (i.e., if the input contains x1 => x2 and x2 => x3, then conclude x1 => x3), deriving all possible such formulas (by transitive closure).

2) If we have derived at least one formula of the form x1 => negation(x1), then return UNSAT.

3) Else, return SAT.
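[A minimal Python sketch of this idea, with two caveats that are editorial additions rather than the commenter's: contrapositives are included before taking the closure (a => b also yields negation(b) => negation(a)), and, following the standard 2-SAT criterion, UNSAT is reported only when *both* x => negation(x) and negation(x) => x are derivable, since x => negation(x) alone merely forces x to be false:]

```python
from itertools import product

def implication_sat(implications, num_vars):
    """Decide a conjunction of implication clauses (literal => literal)
    by transitive closure, without constructing an assignment.
    Literals are nonzero ints: variable i is i, its negation is -i."""
    # Each implication a => b also gives its contrapositive -b => -a.
    edges = set()
    for a, b in implications:
        edges.add((a, b))
        edges.add((-b, -a))
    # Step 1: transitive closure over literals (Floyd-Warshall style,
    # with the intermediate node k as the outermost loop).
    lits = sorted({l for e in edges for l in e})
    closure = set(edges)
    for k, i, j in product(lits, repeat=3):
        if (i, k) in closure and (k, j) in closure:
            closure.add((i, j))
    # Step 2: UNSAT iff some variable implies its own negation AND
    # vice versa (the standard 2-SAT criterion).
    for x in range(1, num_vars + 1):
        if (x, -x) in closure and (-x, x) in closure:
            return "UNSAT"
    return "SAT"
```

For example, `implication_sat([(1, -1), (-1, 1)], 1)` reports UNSAT, while `[(1, -1)]` alone is satisfiable by setting x1 false.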

• Ernie Cohen permalink
August 11, 2010 9:14 am

If SAT is in P, then constructing a satisfying assignment is also in P: repeatedly choose a variable x, substitute true for x in your formula, and check SAT on the new formula (changing the substitution for x to false if the SAT test fails).
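[A sketch of this search-to-decision reduction, using a brute-force decider as a stand-in for the hypothetical polynomial-time one; the CNF encoding (clauses as lists of signed-integer literals) is an assumption made for illustration:]

```python
from itertools import product

def sat_decide(cnf):
    """Brute-force decision procedure; stands in for the hypothetical
    polynomial-time SAT decider the argument assumes."""
    vars_ = sorted({abs(l) for c in cnf for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        a = dict(zip(vars_, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in cnf):
            return True
    return False

def substitute(cnf, var, value):
    """Fix var := value: drop satisfied clauses, delete falsified literals."""
    sat_lit = var if value else -var
    return [[l for l in c if abs(l) != var]
            for c in cnf if sat_lit not in c]

def search_from_decision(cnf):
    """Build a satisfying assignment using only the decision procedure."""
    if not sat_decide(cnf):
        return None
    assignment = {}
    for x in sorted({abs(l) for c in cnf for l in c}):
        trial = substitute(cnf, x, True)
        if sat_decide(trial):
            assignment[x], cnf = True, trial
        else:
            assignment[x], cnf = False, substitute(cnf, x, False)
    return assignment
```

For instance, `search_from_decision([[1, 2], [-1, 2], [-2, 3]])` returns the all-true assignment, while an unsatisfiable input such as `[[1], [-1]]` yields `None`.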

• Vijay permalink
August 11, 2010 10:54 am

Yep! You are right :-(.

27. Endre Varga permalink
August 11, 2010 2:50 am

Well, I do not know the details of the paper, but it seems to me that more than half of all SAT problems are solvable in O(P(n) log(n)) by simple brute force.

My sketchy proof:
– Let's encode the input formula in simple postfix Polish notation for convenience. The length of the input is n. Then we can evaluate the formula under a given assignment of the variables in O(P(n)), where P is some polynomial.
– Every formula has an (unknown) truth table, which is a bit-string of length 2^v, where v is the number of variables in the formula. Conversely, every bit-string of some length 2^v has at least one corresponding formula.
– Now, at least half of the bit-strings of any length (including length-2^v strings) are incompressible. Since the formula corresponding to a truth table suffices to reproduce the table, a shorter formula would be a compression of it; so for half of the truth tables there can be no formula shorter than the table itself. This means that if the table has length k and is incompressible, we can replace O(k) with O(n), because k ≤ n.
– In an incompressible string, runs of zeros cannot be longer than O(log(n)), as longer runs would allow compression. This means that if we evaluate the formula for every possible variable assignment, as if building the truth table, then within O(log(n)) evaluations we will find a "1" (a solution).
– As evaluating the formula takes O(P(n)), the whole search must finish in O(P(n) log(n)) for at least half of all SAT problems, namely those with incompressible truth tables.
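[The brute-force procedure described above can be sketched as follows; the token format (variable names plus '&', '|', '!' operators) is an assumption made for illustration, not something from the comment:]

```python
from itertools import islice, product

def eval_postfix(tokens, assignment):
    """Evaluate a Boolean formula given in postfix (reverse Polish)
    notation; any token that is not an operator is a variable name."""
    stack = []
    for t in tokens:
        if t == '!':
            stack.append(not stack.pop())
        elif t in ('&', '|'):
            b, a = stack.pop(), stack.pop()
            stack.append((a and b) if t == '&' else (a or b))
        else:
            stack.append(assignment[t])
    return stack.pop()

def first_satisfying(tokens, variables, limit):
    """Enumerate assignments in a fixed order, stopping at the first
    satisfying one. The comment's claim is that for formulas with
    incompressible truth tables, O(log n) tries suffice."""
    for bits in islice(product([False, True], repeat=len(variables)), limit):
        a = dict(zip(variables, bits))
        if eval_postfix(tokens, a):
            return a
    return None
```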

28. xuehui869 permalink
August 11, 2010 3:03 am

If P = NP, a lot of things would become much easier.

29. August 11, 2010 7:12 am

It’s interesting!

30. Ember Nickel permalink
August 11, 2010 10:56 pm

Congratulations on being a “top post” and thank you for helping keep the world up-to-date. This is the first I’d heard of the proof despite browsing several newspapers over the last few days–it’s nice to know there are people out there who’re interested in this news!

31. August 14, 2010 11:44 am

Thank you for the article, and thanks to everyone for submitting comments, because they provided additional clarification for me. This information has been extremely helpful to me.

Thanks
Md.Alamin Khan

32. R Q permalink
August 14, 2010 10:45 pm

What I don't understand about this proof is simply this: if we are relying on sampling or clustering to solve random k-SAT at the point when the sample/cluster size becomes exponential, then wouldn't we require an algorithm with exponential space complexity? Since P is contained in PSPACE, that should be enough to prove NP != P.

No need for statistics, physics, metaphysics, etc.

My understanding was that either we are not restricted to that type of solution, or the reducibility of random k-SAT within NP does not provably extend to that point. If that's incorrect, and I've stumbled upon a simple proof that NP != P, I'll be happy to provide a billing address to the Clay Foundation.

33. August 21, 2010 7:09 am

You cannot believe how long I've been looking for something like this. I browsed through 10 pages of Yahoo results without finding anything; one search on Bing, and there this is… I really should start using it more often.

Thanks
Md.Alamin Khan

34. Otto E. Rossler permalink
October 13, 2010 3:41 am

Conjecture: "Gödel = lim NP for n -> infinity" (O.E. Rossler and G. Andreeva).