
Could We Have Felt Evidence For SDP ≠ P?

March 15, 2014


Evaluating mathematical beliefs

Khachiyan (Rutgers in memoriam source)

Leonid Khachiyan in 1979 caused arguably the most sudden surprise to the West’s popular scientific understanding since the successful launch of Sputnik in 1957. His ellipsoid method gave the first algorithm for linear programming whose running time was verified to be polynomial. Thanks largely to Narendra Karmarkar it has been superseded by faster interior-point methods, while older algorithms have since been shown to run in polynomial time, but the breakthrough and inspiration came from Khachiyan. Could something like it happen again?

Today Ken and I want to ask whether a recent argument over beliefs about {\mathsf{P=NP}} can be evaluated in light of this shock.

Khachiyan’s “rocket” had actually left the hangar ten months before, in a January 1979 Doklady Akademii Nauk paper whose title translates into English as, “A polynomial algorithm for linear programming.” As recounted by Berkeley’s Eugene Lawler, it was sighted at a May 1979 meeting Lawler attended in Oberwolfach, and after Peter Gács and László Lovász supplied proofs missing in the paper, it was pronounced correct at a conference in Montreal. The discovery was picked up in October by Science News, and then by the magazine Science. An allusion by the latter to the NP-complete Traveling Salesman problem was moved to the headline of a Nov. 4 story in England’s Guardian newspaper, and reflected three days later in the New York Times’s front-page screamer, “A Soviet Discovery Rocks World of Mathematics.”

Our point is not to say that linear programming being in {\mathsf{P}} was a surprise. To those who knew the reality behind the headlines, it wasn’t. As Lawler relates, the great George Dantzig had tried to set Times reporter Malcolm Browne straight on this and points related to what LP’s can and (ostensibly) cannot solve. The simplex algorithm already solved the vast majority of {n\!\times\!n} LP cases in {O(n^3)} expected time, so there was no feeling of practical intractability. Rather our point draws on something perhaps less widely known and appreciated: that Khachiyan’s ideas extend to solve a much wider class than linear programs, including so-called semi-definite programs or SDP’s, exactly or with high approximation. Thus it can be said to show that a complexity class {\mathsf{SDP}} defined by “approximation-robust” reductions to these programs equals {\mathsf{P}}.
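
For concreteness, a semi-definite program in one standard primal form optimizes a linear function of a symmetric matrix variable subject to linear constraints and positive semi-definiteness; this form is standard textbook material rather than anything specific to Khachiyan’s paper:

\displaystyle  \min_{X \succeq 0} \ \langle C, X \rangle \quad \text{subject to} \quad \langle A_i, X \rangle = b_i, \quad i = 1,\dots,m,

where {\langle A, B \rangle = \mathrm{trace}(A^{\top}B)}. Linear programs are exactly the special case in which {C} and all of the {A_i} are diagonal.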

We raise this with regard to the main technical argument in a recent post by Scott Aaronson titled “The Scientific Case for {\mathsf{P} \neq \mathsf{NP}}.” We wonder whether a similar argument might have seemed on the side of {\mathsf{SDP \neq P}} in the years before Khachiyan. Even more speculatively, we wonder whether a kind of “Reverse Oracle Result” can be formulated to make any of this more concrete. But first let’s review Scott’s comments in the wider context of belief about {\mathsf{P}} vs. {\mathsf{NP}} and about open problems that were resolved in surprising ways.

Scott’s Comments and Our Positions

Essentially Scott gave a quite reasonable argument for {\mathsf{P \neq NP}}, in his usual elegant and convincing style. Bill Gasarch expanded on it. But. But mathematics is not something we argue about like who was the best hockey player of all time, or what is the right philosophy. The simple fact is that no one has proved that {\mathsf{P \neq NP}}.

Our purpose with our recent post on a 13-GB certificate of unsatisfiability was not to start a discussion about {\mathsf{P \neq NP}}, but rather to witness that {\mathsf{NP}}-hardness is not so great a practical obstacle as we may think. The Gröbner basis algorithm is run all the time, despite the problem it solves being complete for exponential space. Notably it runs in singly-exponential time on a generic set of cases. If we can shift down a level, this is like having “smoothed polynomial-time behavior” of an algorithm for a {\mathsf{PSPACE}}-complete problem. Solving nontrivial cases of {\mathsf{NP}}-hard problems is addictive.
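
As a quick illustration of that point, here is a tiny computation using SymPy’s groebner routine (our example; the polynomials are arbitrary, and SymPy stands in for dedicated implementations such as F5). Worst-case complexity notwithstanding, runs like this finish in moments:

from sympy import groebner, symbols

x, y, z = symbols('x y z')
# The general ideal-membership problem is EXPSPACE-complete, yet a small
# generic system like this one is handled quickly under a lexicographic
# term order.
G = groebner([x**2 + y*z - 2, x*z - y**2 + 1, x*y + z**2 - 1], x, y, z, order='lex')
print(G)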

Almost the entire business model of this company is to solve {\mathsf{NP}}-hard optimization problems, using non-quantum computers. As is evident from examples on their website, they are not just gristing easy approximation for run-of-the-mill instances. To quote one of their blog entries (their emphasis):

According to academic research on NP-hard problems, it’s impossible to guarantee that optimal solutions to difficult problems will be found in a reasonable time frame. However, with almost all real-life planning puzzles, you can get excellent results very quickly.

Hence our initial reaction was, who cares about discussions on {\mathsf{P \neq NP}} being true or not, aside from progress on this great question? A full solution would be wonderful, but just having small steps would be great, even a possible program for a solution would be welcome. So that was what we thought we should just say, nothing more, except noting our split answers to Bill G’s reprised poll three years ago.

But. But Dick couldn’t resist adding some more sections, while Ken made some effort to counter Scott’s facts, counterfactually.

Dick Speaks

I feel compelled to explain why I am open-minded on this question perhaps more than anyone else. I have several reasons that I feel are important to remind all of us:

  1. In mathematics people have guessed wrong before on famous open questions.

  2. The actual theorems about why {\mathsf{P}} is weaker than {\mathsf{NP}} are extremely hard to prove and yet very weak in what they say.

  3. If {\mathsf{P} = \mathsf{NP}} it could still be that {\mathsf{SAT}} is computable in time {n^{2^{1000}}}.

  4. If {\mathsf{P \neq NP}} it could still be that {\mathsf{SAT}} is computable in time {n^{\log n}}—while the evidence cited by Scott is properly for an exponential lower bound.

We’ll address the full {\mathsf{P}} versus {\mathsf{NP}} question, and not situations where say the algorithm generating {\mathsf{SAT}} instances is restricted to {r(n)} random bits—a case in which we’ve noted that in the limit one can solve them all in something like time {\mathsf{poly}(n)2^{r(n)}}.

Bad Guesses

I have discussed guesses in mathematics many times before on this blog. One of the biggest issues in guessing wrong is that people do not take seriously the other possibility. Researchers tend not to work on showing {\mathsf{P} = \mathsf{NP}} anymore. Research support does not go there, since we all “know” that it would be a waste of time, and there are other consequences to the field.

Here are some famous guesses that were essentially off by exponentials. For each I will list the time gap between the initial problem being raised and being solved.

  • The Borsuk Conjecture [60 years]. It was claimed that {d+1} was the answer to this geometric conjecture. Many top people worked on it for years and partial results were proved, showing {d+1} is correct in many important cases. Then Gil Kalai and Jeff Kahn proved that {d+1} was slightly off; the correct answer is exponential—at least {c^{\sqrt{d}}} for some {c>1}, and possibly with {d} in the exponent.

  • Barrington’s Theorem [26 years]. I worked on lower bounds to show that bounded width computation could not compute the majority function. This was joint work with Ashok Chandra and Merrick Furst, which introduced multi-party communication. Then David Barrington came along and proved that bounded width could do all of {\mathsf{NC}^{1}}. Hmmmm.

  • Law of Small Numbers [118 years].
    It was noticed by Gauss that

    \displaystyle  \pi(x) < li(x),

    for all {x} of reasonable size. Here {li(x)} is the logarithmic integral

    \displaystyle  \int_{0}^{x}\frac{dt}{\ln t},

    which is an asymptotic approximation to {\pi(x)}, the number of primes less than {x}. It was conjectured that this would always hold and was widely believed for over a century. Then John Littlewood proved that the lead changes between {\pi(x)} and {li(x)} infinitely often, although the first switch is upper bounded by an immense number. Richard Guy wrote a wonderful article on what he called “The Strong Law of Small Numbers”: cases when evident phenomena held for small numbers but eventually would fail. Here is a table with other examples, followed by a quick numerical check of the inequality:

[Table of further examples of the Strong Law of Small Numbers]
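
That check, done here with SymPy (our illustration; primepi and li are standard SymPy functions), confirms Gauss’s observed inequality at every scale we can feasibly compute:

from sympy import primepi, li

# pi(x) stays below li(x) for every x we can feasibly check, even though
# Littlewood proved that the lead changes infinitely often.
for x in [10**3, 10**4, 10**5, 10**6]:
    print(x, int(primepi(x)), float(li(x)))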

By the way, the corresponding clock on {\mathsf{P \neq NP}} stands at 43 years.

Weak Theorems and Galactic Algorithms

We do have the theorem that {\mathsf{DTIME}(n)} is not equal to {\mathsf{NTIME}(n)}, which we have discussed before and which is particular to the multitape Turing machine model—make the tapes planes or trees and it goes away. We cannot even deduce from it that {\mathsf{DTIME}(n^{2}) \neq \mathsf{NTIME}(n^{2})}. That’s pretty weak. Remember that {\mathsf{P} \neq \mathsf{NP}} means that {\mathsf{DTIME}(n^{1000})} does not contain {\mathsf{NTIME}(n)}. And more, of course.

The {\mathsf{P}} versus {\mathsf{NP}} statement still allows outcomes that cut across its generally understood significance. That is:

  • {\mathsf{P = NP}} still allows that the best algorithm for {\mathsf{SAT}} requires {n^{1,000}} time.

  • {\mathsf{P \neq NP}} still allows {\mathsf{SAT}} to have an {n^{\log^{*} n}}-time algorithm.

To be sure, some evidence cited by Scott is really for an exponential lower bound on {\mathsf{SAT}}; we have discussed this before too. But what we are saying still cuts against the usual argument that “many people have worked on looking for algorithms for {\mathsf{SAT}}.” Yes many have looked for algorithms, but most were interested in “real” practical algorithms. For this kind of quest there is not much difference between {n^{20}} and {2^{n}}.
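
To make that last remark concrete, here is a throwaway comparison (our illustration): the crossover where {2^{n}} finally overtakes {n^{20}} does not happen until {n = 144} or so, by which point both counts are hopeless anyway.

# n^20 dominates 2^n until the crossover near n = 144, so at realistic input
# sizes a "polynomial" n^20 algorithm is no more usable than a 2^n one.
for n in [10, 50, 100, 143, 144, 200]:
    print(n, n**20 > 2**n)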

Aram Harrow communicates to us a more-concrete version of this point, which also serves as a bridge to Ken’s musings.

Aram’s Bridging Thoughts

Quoting Aram, with light editing: “One of the stronger reasons for {\mathsf{P \neq NP}} is the Bayesian one—it is easier to find algorithms than lower bounds, so our failure to find a subexponential-time algorithm for {\mathsf{3SAT}} speaks louder than our failure to find a super-linear lower bound for {\mathsf{3SAT}}. A related way of expressing this is that before {\mathsf{NP}}-completeness was understood, thousands of researchers in disparate fields were unwittingly all trying to put {\mathsf{3SAT}} into {\mathsf{P}}.

But a counter-argument is that all of those seemingly independent researchers would always come up with algorithms that relied on a few kinds of structure—a lot is covered by just two kinds:

  • Convexity/absence of local minima—cases where greedy algorithms work.

  • Tree structure—cases where dynamic programming, divide-and-conquer, and related ideas work.

This paucity of algorithmic variety can be viewed in (at least) two ways:

  1. Algorithms, like oil deposits, are a finite resource. We’ve found all the important ones, and {\mathsf{3SAT}} is always going to require exponential time.

  2. The paucity is a sociological phenomenon, due to the fact that all researchers have learned essentially similar math backgrounds.

On the latter, Terry Tao’s recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how much more quickly progress can be made by cross-applying ideas rather than coming up with radically new ones. Going from Erwin Schrödinger’s equation to Peter Shor’s quantum factoring algorithm is a 60-minute lecture, but it took over 60 years (and a change in perspective coming from the computer revolution) to discover. Our lack of algorithms reveals only our lack of creativity, and it is arrogant to posit fundamental limits to mathematics just because we can’t solve a problem. Either way, the central question isn’t so much about {\mathsf{NP}} but rather a “throwback” of a question now being asked about quantum computers:

Where does the power of deterministic algorithms come from?

A related question is the power of the Lasserre hierarchy. It has been shown to be effective for a large number of problems, but with a surprisingly small number of truly different techniques. I would love to know whether further work will increase or decrease the number of ways in which we know how to use it; that is, either by discovering new methods or by unifying apparently different methods.”
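
As a small illustration of the second kind of structure on Aram’s list, here is the textbook dynamic program for maximum independent set on a tree; the instance at the end is ours, chosen only to show the tree-structure paradigm in a few lines.

# Maximum independent set on a tree: each vertex reports the best achievable
# size with itself excluded and with itself included.
def tree_mis(adj, root=0):
    def solve(v, parent):
        excl, incl = 0, 1
        for w in adj[v]:
            if w != parent:
                e, i = solve(w, v)
                excl += max(e, i)   # child is free to be in or out
                incl += e           # child must be left out if v is in
        return excl, incl
    return max(solve(root, -1))

# Tree with edges 0-1, 0-2, 1-3, 1-4; the optimum {2, 3, 4} has size 3.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(tree_mis(adj))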

Ken’s Sixth World

The Lasserre hierarchy builds upon LP’s and SDP’s, and this brings us back to the intro. I (still Dick) remember many people in the 1970’s trying to prove that certain linear/convex programming problems were {\mathsf{NP}}-hard, despite all our confidence in the simplex algorithm for daily use. This makes Ken wonder:

What if SDP’s really were hard?

Russell Impagliazzo famously categorized five worlds that are consistent with current knowledge of complexity. Is there room to analyze any more, ones that are inconsistent now, but might have been meaningfully consistent had our field taken a different path?

All of Impagliazzo’s worlds—including ones with {\mathsf{P \neq NP}} and with {\mathsf{P = NP}}—have been instantiated via oracle results. All oracle results involve pretending that some ostensibly hard problem is easy. For instance, the world with {\mathsf{P = NP}} involves pretending {\mathsf{PSPACE}}-complete problems are easy, while known ones for {\mathsf{P \neq NP}} involve granting free access to a set coded so that the free access itself activates a diagonalization. What I (Ken) wonder is whether there is a sensible formal way to do “Reverse Oracle Results,” which pretend that some easy problem {X} is hard.

One known way to get this effect is to narrow the definition of “easy” so that {X} still has easy reductions to {X} from other problems {W}. For example, linear programming problems are P-complete under logspace (and even easier) reductions, as are problems of approximation by SDP’s. But here I mean something more structural—a sense in which {X} is the only route to solving a whole class of problems {Y}. Then we can segregate this entire class and pretend it all is hard. It might suffice to give “easy” reductions from {X} to all these {Y}. In particular, a lot of {Y}‘s are solved via the interior-point paradigm for (LP’s and) SDP’s. It could also employ ideas in reverse mathematics.

Countering Factuals with Counterfactuals

Scott replied to my comment about a possible {n^{20}} algorithm for {\mathsf{SAT}} in his post by referring to his earlier comment that:

Since it would be inelegant and unnatural for the class {\mathsf{P}} to be “severed into two” in this way, I’d say the much likelier possibility is simply that {\mathsf{P \neq NP}}.

Our point is that perhaps {\mathsf{P}} is already manifestly “severed into two” along Khachiyan’s and the interior-point fault lines. In particular, we wonder how the following consequences would have looked as conditional results had they been proved in the late 1970’s:

Theorem 1 If {\mathsf{SDP = P}}, then it is possible to compute in polynomial time close approximations to a function {\vartheta(G)} of undirected graphs {G} that is sandwiched between the {\mathsf{NP}}-complete clique number {\omega(G)} and chromatic number {\chi(G)}, which all coincide whenever neither {G} nor its complement has an induced odd cycle of length at least {5}.

Theorem 2 If {\mathsf{SDP = P}}, then {\mathsf{MAX CUT}} is polynomial-time approximable within {0.87856\dots}, even though it is {\mathsf{NP}}-hard to approximate within {0.941\dots}.

In the first theorem we have also inverted time by embracing the Strong Perfect Graph Theorem, which was proved in 2002 (and published in final form in 2006), but the conjecture was strongly believed and makes it transparent that many important families of graphs are perfect. Hence the sweeping tractability of major {\mathsf{NP}}-complete problems on these cases could have come as a surprise. On the second theorem, why should the difference between {0.878} and {0.9412} matter to such a simple problem as {\mathsf{MAX CUT}}?

Of course in the light of knowledge we understand how these two famous theorems work. On the latter the Unique Games Conjecture already helps explain how {0.87856\dots} may be special. But the present exercise is about how we reason when we don’t (yet) have the light of knowledge.
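
For readers who want to see the {\vartheta} of Theorem 1 in action, here is a minimal sketch using the cvxpy modeling package (our choice of tool; any SDP solver would do). It runs one standard SDP formulation of the Lovász theta function on the 5-cycle, which is isomorphic to its complement, so the computed value {\sqrt{5} \approx 2.236} indeed lands between {\omega = 2} and {\chi = 3}.

import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle C5

# One standard SDP for the Lovasz theta function: maximize the total sum of
# the entries of a PSD matrix B with trace 1 whose entries vanish on edges.
B = cp.Variable((n, n), symmetric=True)
constraints = [B >> 0, cp.trace(B) == 1]
constraints += [B[i, j] == 0 for (i, j) in edges]
prob = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
prob.solve()
print(prob.value)   # about 2.2360, i.e. sqrt(5)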

Open Problems

Can we make some formal sense of a world where Khachiyan’s breakthrough never happens?


Update 3/18/14: The comments section has some excellent replies, including ones on the argument of the last two sections by Scott Aaronson and by Timothy Gowers.


Update 3/17/14: The Lovász theta-function example replaced the post’s original quoting of Leslie Valiant’s first “Accidental Algorithm” under a mis-memory that LP’s were involved (and SDP’s in later developments). Snipping the irrelevant qualifier, it reads:

Theorem 3 Counting assignments to a certain class of planar formulas is deterministic polynomial-time computable modulo {7}, or modulo any Mersenne prime, even though it is {\mathsf{NP}}-hard modulo {3} or {5}.

Aside from the fact that this computation belongs to a class called {\mathsf{GapL}} which is commonly believed to be a proper subclass of {\mathsf{P}}, the intent was to argue: “What do Mersenne primes have to do with matchings and convex programming really? Surely counting must be equally hard modulo any odd prime—after all there’s no exception for Mersenne in modular circuit lower bounds—so {\mathsf{SDP \neq P}} must be an ‘invisible fence’ around this kind of ridiculousness.” The intro and Borsuk statements were also amended.

66 Comments
  1. March 15, 2014 12:14 pm

    I wonder how much of this post applies to P not equal to PSPACE?

    • March 15, 2014 12:21 pm

      That stayed on the cutting-room floor, leaving just the Gröbner-basis algorithm (technically, the “F5” algorithm) example to raise that mention. It is true IMHO that any P=NP claim should clarify whether it applies to P=PSPACE.

      • March 15, 2014 12:29 pm

        Professor Regan: Could you clarify why P=NP should clarify P=PSPACE?

      • March 15, 2014 1:00 pm

        If P=NP, then the entire polynomial hierarchy collapses to P. This means that for all fixed k, the algorithm giving P=NP extends to answer questions of the form

        (\exists Y)(\forall Z)(\exists W)….(Q_k A) R(x,Y,Z,W,…,A)

        with k-many alternating quantifiers, and with R(x,Y,…,A) computable in time poly(|x|). Now the question is, how does it extend when k too is variable? If the time scales nicely enough with k, then P=PSPACE follows.
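
        To sketch the scaling (roughly, with constants suppressed): if SAT is decidable in time n^c, then peeling off the quantifiers one at a time decides a k-alternation predicate in time about

        n^{c^k},

        which is polynomial for each fixed k (that is the collapse of the hierarchy), but useless when k grows with n as it does for TQBF. Hence P=PSPACE follows only if the time scales much more tamely in k than this crude bound suggests.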

      • March 18, 2014 3:28 pm

        Professor Regan: I still don’t get it. I did not background research. It seems that P=NP just means P=PH and it can be that P!=PSPACE. Why should P=NP should explain P=PSPACE?

      • March 18, 2014 3:29 pm

        Should be ” I did some background research….”.

  2. March 15, 2014 12:18 pm

    Interesting take that P already remains severed. I wonder what Scott would say.

  3. Anonymous permalink
    March 15, 2014 12:33 pm

    Wasn’t convex programming (modulo the epsilon additive error which is necessary) known to be in NP intersection co-NP just as LP was? Why is it surprising that it was in P?

    • March 15, 2014 1:02 pm

      We say in the intro that LP etc. in P was not a surprise. Factoring is an example of a problem in NP intersect co-NP whose membership in P would cause surprise.

      • Anonymous permalink
        March 15, 2014 1:37 pm

        Factoring is different in that it is a search problem rather than a decision problem. Same issue holds for problems such as those in classes such as PPAD. I still don’t get the point why you say that SDP being in P was the big deal about Khachiyan’s result rather than LP.

    • Adam L permalink
      May 25, 2019 4:12 pm

      Not until this paper:
      Ramana, M. V. (1997). An exact duality theory for semidefinite programming and its complexity implications. Mathematical Programming, 77(1), 129-162.

  4. March 15, 2014 2:24 pm

    Your “electric fence” argument is not analogous to Scott’s. If you made it more analogous to Scott’s, it would not be an argument for P != SDP, but rather for SDP != NP.

    Notice how you’ve stated the first part of each of your two theorems as “If P = SDP, then X is in P”, rather than the simpler “X is in SDP”. This is awkward and has no analogue in Scott’s argument and should suggest to you that you are doing something wrong.

    Restating, then, theorem 1 becomes “Counting matchings in a certain sort of graph mod 7 or mod any Mersenne prime is in SDP, but mod 3 or mod 5 is NP-hard”. And Theorem 2 becomes “Approximating MAXCUT within 0.941 is NP-hard, but approximating it within .87856 is in SDP”.

    Neither of these are really good “electric fence” arguments, but they’re now closer to what Scott was saying. Theorem 2 is the closer one — we’ve got a real-valued parameter (call it r), and if r is too low, one argument shows the problem’s in SDP, but if r is too high, an entirely different argument shows the problem’s NP-hard! It’s not quite the electric fence, though, because it’s not so near a miss; there’s a nonzero gap between the two regimes. Still, it is reminiscent of the electric fence argument, and if we go along with the analogy, it suggests SDP != NP.

    Theorem 1 is a little further away, because there’s not even the notion of ordering and distance there with the parameter. Perhaps instead of a real electric fence, there’s some sort of 2-adic electric fence? 🙂 (Note: I don’t think that actually works.)

    • March 15, 2014 4:53 pm

      A different weakness in my argument is that proving those as conditional theorems would still have exposed the trick of how they work: where mod-7 comes in, and for MAX-CUT, why 0.878… vs. 16/17—though not yet the Unique-Games reason for the neighborhood of .878… to be tight. Anyway this was the best I could do for a “counterfactual fence” by which to try to test Scott’s argument. The most portentous point IMHO is actually the one about the polynomial ideal membership problem being EXPSPACE-complete even though an algorithm solving it (on the natural domain extending the complete cases) is generically-EXPTIME, though this doesn’t fall along the lines of testing Scott’s argument.

  5. March 15, 2014 5:42 pm

    I believe Theorem 1 is misquoting the literature. Counting matchings in planar graphs is doable in polynomial time (and thus counting modulo any number is as well) – this is the FKT algorithm powering Valiant’s Holographic algorithms.

    The counting modulo 5 hardness result versus counting modulo 7 easiness result is for certain versions of planar-SAT (as seen in the Wikipedia entry linked to in the blog post).

    • March 15, 2014 8:45 pm

      Fixed, thanks. I considered mentioning the FKT algorithm. I may however be wrong to think that SDP’s are involved in what’s been built atop Valiant’s algorithm since.

  6. March 15, 2014 5:50 pm

    You know, the class of problems solved in polynomial time by the ellipsoid method is known as the class of “convex optimization problems.” The way this post is worded, in particular insinuating that SDP is the be-all and end-all of convex optimization problems (which are much more general), seems to ignore that fact. Care to elaborate?

    • March 15, 2014 9:02 pm

      I picked “SDP” as the lowest thing that captured the “surprise” examples and is solved by Khachiyan. The main editorial issue as the intro went through revisions was that ellipsoid is beaten by interior-point on SDP’s (for most purposes) but there seems to be no common “umbrella” name for the two. Meanwhile for the more-general convex problems I couldn’t find greater simplicity or surprise than for “SDP”, which is > LP and popular in theory right now. My other impression is that the Lasserre hierarchy is in some sense universal for approximations of convex optimization—?

      Added: I changed the intro’s “…a much wider class than linear programs, called semi-definite programs or SDP’s” to “…a much wider class than linear programs, including so-called semi-definite programs or SDP’s.”

  7. March 15, 2014 6:53 pm

    re P vs NP recently the PCP thm popped up on tcs.se … find it amazing nobody cites PCP theorem as possible hint that P=NP. overall this theorem seems to show that NP testing can be “compressed” to an amazing degree. have you covered this anywhere in your blog? havent seen it. ❓

    since you cite him…. to me scott aaronson blogs a bit too much on P vs NP opinions vs real facts. 🙄 did anyone notice how few refs he used in his latest blog? re yellow & green frogs or whatever its a colorful and appealing analogy for the layman, but as a serious scientist, wish he would ground it with more rigorous citations in TCS. eg turn the overall essay into a arxiv preprint. alas it is written by someone who maybe spends a bit too much time arguing over which sports team is gonna win. he says he doesnt care about sports in his blog but maybe he doth protest too much. actually maybe if he did adopt a team maybe he could do less “inside baseball” on TCS instead…. 😛

    would write this on his blog but he’s blocked me for a 1yr sentence, yeeks! & kinda doubt he will ever let up on that. 😥 find it hard to take him totally seriously as a professional scientist at times because of all his hard-core over-the-top pontificating on various subjects eg Dwave in particular…. oh yeah and his resistance to open science on his blog. just announced blocking lubos motl for 3years! guess should feel honored. 😎

    see also disagreeing with scott aaronson re QM physics

    anyway, favorite topic, great stuff, keep up the good work as always 😀

    • March 19, 2014 10:44 am

      Mathematicians seem to be striving to keep this question inside the mathematical field. Nevertheless, the yellow & green frogs, that was a biological argument… and the electric fence, isn’t it a physics argument? Though I’m convinced PvsNP has nothing to do with mathematics, I’d never expected a brilliant mathematician like Scott to illustrate this viewpoint – even subconsciously. Just kidding – well, almost.

      • March 19, 2014 2:09 pm

        re mathematicians vs P vs NP. shor has stated publicly he feels mathematicians didnt think it was a serious question until it remained open for a few decades. now there is serious mathematical interest after about 3 decades or so & claymath prizes (selected largely by mathematicians) helped “legitimize” it. gowers is doing a lot to bring mathematical interest. so, not exactly sure what you meant “mathematicans seem to be striving to keep this question inside the mathematical field”.

      • March 19, 2014 5:06 pm

        Because it’s a mathematics statement which can’t be answered for reasons that belong to physics. It so happens that math can be considered to be part of physics – proofs are algorithms and algorithms are meant to be run on hardware. So that this alleged mathematical undecidability is at the bottom line a physics phenomenon that’s been amplified by means of software. Yes, math is nothing but software… but in the present case it acts as an electronic microscope!

  8. March 15, 2014 10:16 pm

    Prof. Lipton, I would like to point out that much care needs to be exercised when making claims about SDP’s complexity.

    In particular, we do not even know if the Semidefinite Feasibility Problem (SDFP := the decision problem of whether a Linear Matrix Inequality (LMI) A(x) ⪰ 0 with rational/real coefficients has a real solution) is in NP in the Turing machine model (the difficulty here comes from the fact that LMIs with integer coefficients could have only irrational and/or strictly doubly exponential rational solutions, as shown in the paper cited below), much less whether it is in P.

    The polynomial size gap-free dual (the standard SDP dual can exhibit duality gaps) that I developed back in 1995 (citation given below; see subsection 4.3, in particular) gives us that its membership in NP and Co-NP would be simultaneous.

    In the real number model (of Blum, Shub, Smale), the problem does belong to NP, and hence to Co-NP by the duality result above.

    The ellipsoid method of Khachian, as expanded upon by Grotschel, Lovasz and Schrijver in their celebrated book, gives us polynomial time algorithms for various APPROXIMATE versions of membership and optimization problems related to convex sets equipped with separation oracles (SDP solution sets, which are called spectrahedra, being a special case of this). Those methods, however, do not work in addressing exact questions (eg, SDFP as stated above.)

    Interior point methods (IPM) as advanced by Karmarkar, Nesterov, Nemirovski and many others do give polynomial arithmetic complexity, again for approximate versions such as finding a pair of primal and dual solution estimates that are within a given epsilon of being feasible for their respective problems (approximate solutions can be turned into exact solutions for LP and convex QP, but not so for SDP), and are within epsilon of each other in terms of objective values.

    One crucial factor to note here is that bit-length analyses of the numbers of the computation are lacking in the case of IPMs (for anything beyond convex QP), and so anyone invoking them needs to do their own analyses in order to be able to make definitive complexity assertions. Most researchers have been ignoring this part. In the case of GW relaxation for MAXCUT, I and Leo Khachian (rest in peace, my friend) had at some point discussed and felt fairly confident that one could, if they sat down to do it, establish bit-length complexity bounds for that application by, if necessary, slightly dropping down the quality of solution being sought.

    Long story short, as of now, it is entirely possible that the general SDFP is NP-Hard (although, I’d compare the likelihood of that being the case to P being equal to NP.)

    Although the work mentioned above is a bit dated, apparently no significant developments have taken place re. the complexity of SDFP over the last 15+ years.

    Citation to my duality/complexity paper:

    M. V. Ramana. An exact duality theory for semidefinite programming and its complexity implications. Math. Programming, 77 (2, Ser. B):129–162, 1997 (the paper should still be available on citeseer.)

    Thanks.
    MVR

    ps: Thanks for your excellent blog. As a regular visitor, I enjoy your selection of topics and the lively discussion that the posts generate.

    • March 15, 2014 11:26 pm

      Dear Professor Ramana,
      Thank you for this great elucidation. The “SDP” part is mine—looking for a shortcut way to name a class that includes LP and the MAX-CUT approximation and is solved by Khachiyan’s methods. I guess it’s not so easily named as a class.
      —Ken R.

  9. March 16, 2014 12:38 pm

    Dear Dick and Ken,

    Khachiyan and LP:  It is great to read about Leonid Khachiyan and about linear programming which provides a steady source of surprises. Indeed LP in P was a wonderful breakthrough. I remember that Paul Erdos who was generally uninterested in CS (and applied math) mentioned it as a great breakthrough in a lecture shortly after it was discovered. In this blog post of mine you can find a nice picture of Dancis and Khachiyan. We can expect that LP and related things will continue to supply great surprises.

    Three possible general approaches

    Generally speaking we can consider three points of view:

    1. Miracles happen

    2. We cannot expect miracles but understanding happens

    3. We can expect neither miracles nor understanding

    Overall, I am a type-2 person. I expect understanding but not miracles. My view regarding understanding is that

    4. we should grab any piece of understanding that we can get (along with skepticism and nudging,) and not be overly picky about formats and formalities.

    NP=!P.  Given that I am a type-2 person, getting to the NP=!P question, I firmly think that P=!NP.

    In the case of NP=!P, a proof that NP is not P will also be a miracle of a sort, so perhaps (in accordance with 4. ) a tentative but rather detailed understanding of the kind we have is roughly what we can expect.

    I remember meditating over here (perhaps a little under the influence of narcotics) about various formal ways that the difficulty of proving NP =!P (or rather a non-uniform version) may itself reflect a difficulty of the NP!=P nature.

    The evidence for NP!=P.  In my opinion the two main (related) pieces of evidence for NP=!P are first the huge failed implicit and explicit effort to find efficient algorithms for intractable tasks, and second the major coherent mathematical structure based on the NP=!P conjecture and related conjectures.

    Surprises. I don’t think we have witnessed any mathematical surprise of a magnitude like P=NP, or even one order of magnitude smaller. So while bringing up surprises is clearly relevant, looking at it carefully does not support NP=P.

    Discussing mathematical statements. For me an exciting question is how (and if at all) mathematical statements (like the NP=!P) conjecture can be discussed and to what extent people can give scientific/academic  evidence and discuss such evidence. (I also care a lot about applied math statements.) I tend to believe that this is possible (and certainly interesting to try) to some extent. (In fact I am trying, on my and other blogs and related platforms.) But we can be skeptical about this issue as well.

    • John Sidles permalink
      March 16, 2014 9:50 pm

      Gil, thank you for this civil, well-reasoned post!

      Dick and Ken ask “Leonid Khachiyan’s ellipsoid method came as a shock [to mathematicians] in 1979 … could something like it happen again?”

      Remarkably enough, a comparably shocking mathematical discovery DID happen again, only six years later: Dan Shechtman’s discovery of quasicrystals in 1984.

      Following Scott Aaronson’s excellent advice — that concrete mathematical ideas/questions/answers are best posted to StackExchanges — I have posted a hyperlinked summary of Shechtman’s quasicrystal work to MathOverflow as “Lessons from Crystallographic Classification,” which is offered in answer to Scott’s original question “Analogues of P vs. NP in the history of mathematics”.

      Shechtman’s Burning Arrow  Shechtman’s work assailed not the theorems of crystallographic classification (which were mathematically unassailable), but rather the starting postulates; thus Shechtman’s work was received by established crystallographers as a transgressive GLL-style “Burning Arrow” (as Khachiyan’s work was not).

      Possibly for this reason, Shechtman’s work aroused a storm of sardonic rancor from eminent crystallographers (while Khachiyan’s was received quiescently); in retrospect this sardonic rancor reflected poorly upon the researchers who indulged in it.

      Conclusion  Shechtman’s quasicrystal work (arguably) teaches broad lessons in regard to PvsNP and in particular suggests natural questions. Do complexity-theoretic “quasi-algorithms” reside within the gap between P and NP? Do the postulates that define the boundaries of the Complexity Zoo therefore require Shechtman-style revisions to accommodate not just new species of algorithms, but new phylums?

      No one knows the answers to these tough questions.

      Summary  Quasicrystal history teaches three plain lessons: (1) mathematical postulates sometimes require transformational revisions; (2) it is neither necessary, nor feasible, nor desirable that everyone think alike in regard to such revisions; and (3) sardonic rancor contributes little to mathematical discourse … whereas Shechtman-quality Burning Arrows contribute much!

      • March 17, 2014 12:19 pm

        Dear John, the example regarding quasicrystals is very interesting! (As always it is not 1 to 1 but there are interesting similarities.)

      • John Sidles permalink
        March 17, 2014 1:23 pm

        Yes Gil … moreover, there is a natural overlap in regard to Subhash Khot’s Unique Games Conjecture (UGC) that I have posted as a MathOverflow UGC/quasicrystal comment.

        It is very enjoyable to be alive in a century in which the same problems appear over-and-over — in ever-surprising guises — to mathematicians, scientists, engineers … and even medical researchers!

    • Javaid Aslam permalink
      March 19, 2014 1:56 am

      Can the human limitations be considered any evidence of a scientific theory?
      The conjectures about elegance or miracles attempting to support any scientific theory are strictly man-made; they remain within the boundaries of human imaginations.

      The biological systems are themselves miracles, but they are taken for granted. The navigation used by whales, the thousands-of-miles migrations of the butterflies, are they not worth noting? It is only the man-made “miracles” that are rare. Most of the great “inventions” are imitations of the biological systems; whether it is an airplane or a camera, they are at best an imitation, developed over a long period—long after the discovery of the founding principles. So what does not exist in a natural biological system, in an explicit form, could be much harder to “invent”.

      The NP v. P issue is an artificial phenomenon, far removed from the known biological processes. That might be the key reason that the majority has resorted to a more comfortable conjecture, that is, NP != P.

      Sometimes there is a faint line between discovery and invention; sometimes it could be a matter of interpretation. While an NP=P event is not quite a black swan, it just might be only a slight mutation of a black swan, an already discovered “miracle”.

      • March 19, 2014 10:02 am

        Elegance versus miracles… interesting metaphor! Thus if you want an elegant theory, you claim in the first place that miracles don’t happen… but when you’re actually expecting them, you’d better leave out elegance forever! And we – poor humans – are bound to always stand in between…

  10. gasarch permalink
    March 16, 2014 3:53 pm

    on a lower level- i wonder if one can have a world where barringtons theorem is false,
    your post reminds me of the post you did where you stated your answers to gasarch’s poll and also gave an account of a poll on NC1 and Bwbp in 1983 where most people thought them not equal. (the poll was fictional but I fell for it and commented bravo).

  11. Javaid Aslam permalink
    March 17, 2014 1:41 am

    Let’s hope we all live to see the fate of the scientific arrogance about NP!=P.

  12. March 17, 2014 6:42 am

    I firmly believe that P\ne NP, but I would like to try to play devil’s advocate by guessing what Dick’s answer (and maybe Ken — I’m not sure where he stands on this question) would be to the point that there is a huge and delicate structure of complexity classes, all of which would collapse if P=NP. Surely, one might think, the fact that we have all those NP-complete problems, all of which appear to be intractable, is evidence that something objective is going on? Is all that structure an illusion?

    In a P=NP world, the answer might be something like this. No, it’s not an illusion. It’s just that our notion of reduction needs to be changed from polynomial-time reducibility to reducibility using some subset of P that can perhaps be formally defined and that corresponds to the kinds of polynomial-time algorithms we are familiar with today. So it seems to me that our feeling that “something must be going on” could be justified even if P=NP.

    I agree with pretty well everything Gil writes above, but let me attack that too. (So I’m attacking myself in the process.) Gil says that surprises of the magnitude of P=NP just don’t happen in mathematics. But one might argue that that is begging the question, since Dick’s claim could be taken to be that it would not be a massive surprise if P=NP. Or rather, it ought not to be a massive surprise — clearly, given what people have said, it would in practice be a huge surprise. Somehow, we have to explain why it would be a huge surprise if P=NP, so the argument that surprises of that magnitude don’t happen is circular.

    On the other hand, I find some of the examples of earlier surprises unconvincing. For example, the relationship between \pi(x) and li(x) seems to me to be one where the evidence in favour of the conjecture was purely observational. There weren’t any theoretical reasons for one always to be bigger than the other. Contrast that with the massive surprise there would be if the twin-prime conjecture were false. That would be a surprise because it would show that a heuristic principle — that in a very precise sense the primes behave randomly — which has been confirmed in many many instances, is in fact wrong. I think the main point of Scott Aaronson’s post was to give similar reasons for believing that P\ne NP — that it has been subjected to tests (and not just attempts to find algorithms for NP-complete problems) that, had it been false in general, it might well have failed. So even if we start out open-minded, with a 50-50 prior distribution, say, subsequent observations should result in a posterior distribution that is more like 99.9 to 0.1.

    In the end, I think what convinces me that P\ne NP is that huge complexity can come from iterating a few very simple operations. There are many many examples of this phenomenon. If those operations are chosen as suitable operations on n-bit sequences, then we can rapidly produce Boolean functions that appear to behave completely randomly. The assertion that they are genuinely pseudorandom is of course stronger than the statement that P\ne NP, but I see no reason to disbelieve it. (I’m not talking about pseudorandom generators based on the hardness of factoring — the evidence for that seems to me to be much weaker, though I’d still guess that it is hard.)
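
    As a toy illustration of the point about iterating a few simple operations (our example; the xorshift-and-multiply round and its constants are arbitrary choices), a handful of iterations already produces thorough avalanche, with one flipped input bit changing about half of the 32 output bits:

import random

def mix(x, rounds=8):
    # a toy 32-bit round: two xorshifts and an odd-constant multiply
    for _ in range(rounds):
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x = (x * 0x9E3779B1) & 0xFFFFFFFF
    return x

flips = []
for _ in range(2000):
    x = random.getrandbits(32)
    y = x ^ (1 << random.randrange(32))           # flip one input bit
    flips.append(bin(mix(x) ^ mix(y)).count("1"))
print(sum(flips) / len(flips))                    # typically close to 16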

    There’s a parallel here with the primes. My belief in the twin-primes conjecture is strengthened by the fact that it can be massively strengthened — roughly speaking to the statement that all configurations appear in the primes with the frequency you would expect after accounting for small divisors — and still appears to be true.

    • John Sidles permalink
      March 17, 2014 9:44 am

      Tim Gowers avers (speaking hypothetically): “No, it’s [\text{P}\ne\text{NP}] not an illusion. It’s just that our notion of reduction needs to be changed from polynomial-time reducibility to reducibility using some subset of P that can perhaps be formally defined and that corresponds to the kinds of polynomial-time algorithms we are familiar with today.”

      Prof. Gowers, the point-of-view that you express is exceedingly congenial to simulationists and software engineers, for the following pragmatic reasons.

      In the ordinary course of software/simulation verification and validation (V&V) it generically happens that engineers are confronted with algorithms whose claimed runtimes we are unable to verify. Naturally we wonder — and even formally postulate — that the runtime classification problem is not only empirically hard, and not only NP-hard, but even is undecidably hard (in both mathematical senses of “undecidable”).

      Supposing this hypothesis to be true — as is reasonable, given that it’s consistent with all of our practical experience and with all of our present theorems — then what effective resolution(s) to \text{P}\overset{?}{=}\text{NP} can engineers hope for?

      The least-radical resolution is definitional adjustment(s) to complexity theory, along comparable lines as the definitional adjustment(s) that quasicrystals induced in the International Union of Crystallography (IUC):

      Definition of a crystal (as IUC-revised)  “A material is a crystal iff it has essentially a sharp diffraction pattern. The word essentially means that most of the intensity of the diffraction is concentrated in relatively sharp Bragg peaks.”

      Definition of a P-algorithm (as V&V-revised)  “An algorithm is in P iff it is associated to a Turing Machine (TM) whose runtime is verifiably \mathcal{O}(n^k)”. The word verifiably means that the TM’s listing is computable and its runtime exponent is gnostic.

      As everybody appreciates — but nobody really understands, least of all me — the collegial discussion of such revisions is historically associated to regrettably high levels of incivility, sardonicism, and rancor, that in retrospect contribute insubstantially to progress in any research discipline.

      Summary  Complexity theorists like Juris Hartmanis, Doron Zeilberger, and Charles Bennett have prepared a reasonably plain path forward (in literature surveyed on TCS StackExchange), and yet complexity theory has not yet exhibited its “Burning Arrows” (of cryptic P-algorithms) and/or found its Dan Shechtman (to shoot those cryptic P-arrows).

      That is a distinguished role, to which young complexity theorists may reasonably aspire, and we can all wish them every success in it!

      • John Sidles permalink
        March 17, 2014 10:23 am

        For the benefit of readers new to Gödel’s Lost Letter (GLL), the phrase “burning arrow” and/or “flaming arrow” is a recurring GLL trope that is associated to Gary Larson’s immortal Far Side cartoon — which regrettably Google Image does not find — in which a cowboy gazes in dismay at a newly-fired arrow and exclaims “Burning arrows! Are they allowed to do that?”

        In GLL-usage, the phrase “Burning Arrow” idiomatically refers to the natural uneasiness that we all feel whenever mathematical postulates alter … an uneasiness that is hilariously remediated by that The Far Side cartoon.

    • March 17, 2014 1:40 pm

      Tim, thanks for the substantial comment. My own position is P ≠ NP, indeed circuit lower bounds of exp(Omega(n/log n)) for the most “compact” NP-complete languages. This stops short of signing on to the exp(Omega(n)) circuit lower bound which de-randomizes BPP and lots of etc. The log n divisor is separate from issues like the encoding size of an m-clause formula or m-edge graph being ~m log(m).

      Our intent was to look for things that might have been said to have passed tests of the kind you say, but failed. Regarding twin primes, do the expectations you mention include that there should be arbitrarily large k such that some twin-prime pair flanks a multiple of 2^k?

      • March 18, 2014 4:46 am

        On that last question, if we naively assume that the probability of a number of the form 2^k\pm 1 being prime is around 1/k and that these probabilities are independent, then the probability of a twin-prime pair of that form is around 1/k^2, so the probability that there exists such a pair with k\geq K tends to zero. So I would not expect arbitrarily large such pairs.

        However, that doesn’t take into account special properties of numbers next to powers of 2, which I know are important since they’ve been used to find very large primes. I don’t know enough about this to know how my naive heuristic estimates would need to be adjusted.

      • John Sidles permalink
        March 18, 2014 8:29 am

        As an aside, numbers of the form 2^k\pm 1 are called Cunningham numbers. These numbers, with their algebraic and Aurifeuillian factors, are historically notable in that they are the subject of the Cunningham Project, which (to my knowledge) is the longest-running (and still on-going) computational project in the history of mathematics.

        References and links regarding the remarkable properties of Cunningham number factorizations, and the practical relevance of these factorizations to problems in quantum simulation can be found in the concluding “Open Problems” section of the MathOverflow question “What is the ‘Tangle’ at the Heart of Quantum Simulation?” In brief, Cunningham number factorizations control the dimensionality of secant varieties of Segre/Veronese/Grassmannian varieties, which are the natural venues for computational simulation via Lindblad/Carmichael stochastic automorphisms (aka “quantum trajectory unravelling”).

        The Soldier Healing Seminar’s (SHS) Green Sheet Notes for 2013 survey the present-day engineering understanding of varietal dynamics, and the (in-preparation) “SHS Red Sheet Notes for 2014” will include material concretely relating to Cunningham numbers, along the lines that the MathOverflow question foreshadows.

        Observation  Nowadays, few mathematical topics are so arcane that engineers (and medical researchers) cannot find practical applications of them, and few mathematical formalisms are so abstract that they cannot help in understanding computational simulation. Our present capabilities in observation, simulation, and regeneration fall far short of the fundamental limits that physics and mathematics impose … and WONDERFUL opportunities for young researchers and entrepreneurs are created by this shortfall!

    • March 18, 2014 12:03 am

      I agree with Tim that without a mathematical proof our arguments are tentative and somewhat circular.  Here is a question: Can we think  about a hypothetical world where the common belief is NP=P based just on the very same evidence that we have today?

      In the 60s: People believe more or less that computers can solve efficiently every natural finite combinatorial problem like LP, IP, assignment, scheduling and many others, and they expect that the algorithms will gradually be discovered.

      In the 70s: The Cook and Karp revolution: A major surprise! Wonderful! Now, people think “all we need is to find a single algorithm and this will enable us to solve tons of problems.” More and more such problems are found in the decades to come. People realize though that there are problems higher up in the hierarchy that will require even stronger algorithms.

      This is compared to a situation of many different appealing conjectures in number theory that suddenly are discovered to be equivalent. Certainly strong evidence that they are all true, right?

      In the 80s: The common interpretation of AC^0 results and other lower bounds: Duhh!! we cannot compute parity with bounded depth circuits, so what? Parity is easy and this just shows that bounded depth circuits are unreasonable. Three decades later people think: NP=P’s skeptics, what further progress can you report on now??

      In the 90s: The interpretation of PCP: Wonderful!! Now it is enough to find an algorithm just for approximation (to many problems) and this will give us the desirable goal!

      Practically good SAT solvers: People gained confidence that an efficient universal SAT solver will be found.

      How can we convince people in this other world that they are wrong? Do we have any common grounds with them?

      • John Sidles permalink
        March 18, 2014 9:15 am

        LOL … Gil, can we be entirely sure that you yourself are not a \text{\sffamily leprechaun}\rule[-0.2ex]{0pt}{2ex}?

        It is fair to say that present-day systems engineers already reside — effectively and even rationally — in your delightful fantasy-world of \text{\sffamily P}\sim\text{\sffamily NP}\rule[-0.2ex]{0pt}{2ex}. As evidence of which:

        (1)  \text{\sffamily P}\rule[-0.2ex]{0pt}{2ex}-simulation capability has doubled every two years in the past, ever since 1940 (or earlier). Sixty-four years and 32 doublings.

        (2)  No fundamental obstructions are known to prevent \text{\sffamily P}\rule[-0.2ex]{0pt}{2ex}-simulation capability from doubling every two years in the future, until 2075 (or later). Sixty-four more years and 32 more doublings (we hope).

        (3)  Pascal’s Wager — when transcribed from theology to mathematics — provides a cogent reason to regard \text{\sffamily P}\overset{?}{=}\text{\sffamily NP}\rule[-0.2ex]{0pt}{2ex} as undecidable … on the grounds that belief in PvsNP undecidability is least likely to imperil the sustainment of progress.

        “We have of late seen too many symptoms of declining prosperity not to feel an anxious wish for better times. I keep up my spirits and those of my family as well as I am able.”

        Sir Joseph Banks to John Hunter,
        30 March 1797

        Questions  Can belief that \text{\sffamily P}\ne\text{\sffamily NP}\rule[-0.2ex]{0pt}{2ex} credibly promise us a better world? What complexity-theoretic beliefs most effectively assist us to escape (for example) the dystopian realities of Clay Shirky’s recent essay (Google finds it) “The End of Higher Education’s Golden Age” (2014) … a dystopian reality with which young researchers especially are all-too-familiar?

        Gil, your essays perform a great service in inspiring us to contemplate these tough questions.

    • Javaid Aslam permalink
      March 22, 2014 12:44 pm

      Prof. Gowers,
      … ” I think what convinces me that P\ne NP is that huge complexity can come from iterating a few very simple operations.”

      This is a fairly intuitive notion. How would you distinguish the growth of n^100 from 100^n?
      At the least we would need some very intelligent tools to do an objective analysis.

      I suspect the research has gone a lot more into the depth of the analysis as opposed to looking at some more fundamental issues at the breadth-first level.

  13. March 17, 2014 8:16 am

    One may have overlooked an important property in the solution of the problem. The famous result of Larry Stockmeyer says “PLANAR 3-COLORABILITY IS POLYNOMIAL COMPLETE”.
    The advice from this result: don’t try to give an efficient algorithm for this problem. On the other hand I have a theorem: “a planar graph has chromatic number 3 iff it is crash-free”. Just have a look at Stockmeyer’s well-known gadget, which has been left as an exercise for the reader:

    http://www.flickr.com/photos/49058045@N00/13217670565/

  14. March 17, 2014 4:43 pm

    Sorry, just saw this post today! It’s certainly a much meatier response to my argument than Lubos’s. 🙂 I have two main disagreements with what’s written here:

    (1) I don’t think it’s true that Khachiyan’s algorithm (or LP and SDP more generally) turn the class P into a “split P soup”—that is, split between problems that are in P for “normal” reasons, and those that are in P “but only for a crazy reason”—as I argued would have to happen if P=NP. There’s nothing that crazy about Khachiyan’s algorithm at all. On the contrary, as this post itself points out, LP was known “empirically” to be in P long before Khachiyan came along: it’s a convex optimization problem, so many local optimization methods (e.g., the simplex method) can be expected to work well in practice on it. (And certainly the duality theory of LP told us that LP couldn’t be NP-complete unless NP=coNP.) Even more important, many of the practical things that we can do in polynomial time using LP, we can also do in polynomial time WITHOUT using LP (often purely combinatorially)! Max-flow and maximum matching are two classic examples. Likewise, many things we can do using SDP, we can also do (e.g.) just by computing a principal eigenvalue. In summary, LP and SDP are NOT like islands totally separated from the rest of P—they’re more like mountain peaks with plenty of other stuff leading up to them.

    (2) I think the “parody” of my position—imagining that I wouldn’t have believed SDP could be in P, because it would lead to such “weird” consequences as MAX-CUT being poly-time approximable only to within 0.878, etc.—gets things ironically backwards. It’s not just that SDP’s were never particularly hard in practice, or that they’re NOT disconnected from the rest of P (see point (1)). Rather, it’s that the striking “electric fence” pattern—i.e., problems being solvable in polynomial time right up to the limit of NP-hardness and not beyond that (even when the limit involves some weird numerical parameter)—is exactly what I was offering as an argument for P!=NP! Commenter sniffnoy, above, gets it exactly right when he says that, just blindly applying my “electric fence” argument, such a pattern would have led to the presumably-correct prediction that SDP!=NP (even if one didn’t yet know that SDP is in P), rather than to the incorrect prediction that SDP is not in P.

    • John Sidles permalink
      March 17, 2014 6:42 pm

      Scott, please appreciate that Condition (3) of your MathOverflow question — “the conjecture was either proved or disproved” — may reasonably be regarded as unduly restrictive. Robert Heinlein’s juvenile SF novel Tunnel in the Sky (1955) conveys this point charmingly:

      “Intelligence can find solutions where there are none. Psychologists once locked an ape in a room, for which they had arranged only four ways of escaping. Then they spied on him to see which of the four he would find. The ape escaped a fifth way.”

      If psychologists can conceive four ways of escaping, and apes five, then surely complexity theorists can conceive more than two! 🙂

    • March 17, 2014 8:19 pm

      Scott, thanks. Incidentally, I’m swapping out the Valiant example (not LP-relevant, as I misrecalled) for Lovasz-theta, but that won’t even be as close as the Valiant one was trying to be. It was as “meaty” as we could manage, i.e. the “facts” may be on your side… 🙂

  15. March 17, 2014 7:31 pm

    Regarding your lament that there’s a paucity of algorithmic techniques: there’s a very curious one which I thought of over a decade ago, which hasn’t really been tried, but may yield fast algorithms for some problems which are currently intractable.

    Consider the linear relaxation of a SAT problem, meaning that the variables, instead of 0 or 1, can be any real value in between. Then a clause of the form a or b or c becomes a + b + c >= 1. A rather quirky technique can now be used. Pick a set of variables, or their negations, and try to find the minimum value for their sum. If the minimum comes up a non-integer value, then because all the variables must be 0 or 1 you can round up, and reintroduce the new constraint. For example, if you find that a + b + (1-c) >= 1.3, you can then ascertain that a + b + (1-c) >= 2, introduce that as a new constraint, and repeat for a different set of variables and assignments. Since you’re doing this using floats on a real computer, making it fully formal might be a bit tricky, but in practice requiring the value to exceed the integer below it by more than, say, a millionth should be fine.

    Doing this straightforwardly runs into some problems. The first is that the minimum assignment values have a tendency to be 0, because all the funny values are pushed into the non-included variables. That can probably be fixed by including all variables in the assignment to be minimized. The second is that for, say, 3-SAT, it’s possible to set every variable to between 1/3 and 2/3 and satisfy every clause, so the round-ups will hardly remove any of the possible solution space at all. The solution is to use specially designed problems where the minima will be at least 1/2 the number of variables. For example, you could use a problem where all the clauses were of the form “at least 2/3 of the variables in this particular assignment to everything are correct.” I don’t believe problems of this type have ever been studied, although they appear at first blush to be much harder than random 3CNF problems for regular stochastic solving techniques.

    So the overall algorithm is: Pick a random assignment to all variables. Find, using linear programming, the minimum number of variables in that assignment which must be true. Round up and reintroduce. Repeat until either one of the minimum assignments, when rounded off to 0 and 1, produces a satisfying assignment to the original problem, or you’ve shown that there is no possible solution. I suspect/hope that if the number and restrictiveness of clauses is set right, then this approach will handily deal with problems which walksat-based approaches can’t touch, and that the linear programming instances encountered will be ones on which simplex goes exponential.

    It should be easy enough to experiment with this, but I’ve never had handy access to the appropriate tools.
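    A minimal sketch of one pass of the loop described above, in Python with scipy.optimize.linprog (the clause encoding, the random choice of literals to minimize, and the stopping rule are illustrative guesses at the intent, not a tested solver):

    import math
    import random
    from scipy.optimize import linprog

    def literal_row(lits, n):
        # Row and constant offset for a sum of literals given as signed 1-based indices,
        # e.g. (1, -3) stands for x1 + (1 - x3); "clause satisfied" means row.x + offset >= 1.
        row, offset = [0.0] * n, 0.0
        for lit in lits:
            v = abs(lit) - 1
            if lit > 0:
                row[v] += 1.0
            else:
                row[v] -= 1.0
                offset += 1.0
        return row, offset

    def cutting_plane_pass(clauses, n, rounds=20, eps=1e-6, seed=0):
        rng = random.Random(seed)
        # linprog wants A_ub @ x <= b_ub, so "row.x + offset >= bound" becomes "-row.x <= offset - bound".
        A_ub, b_ub = [], []
        for lits in clauses:
            row, off = literal_row(lits, n)
            A_ub.append([-c for c in row])
            b_ub.append(off - 1.0)

        for _ in range(rounds):
            # Minimize the sum of one literal per variable, signs chosen at random.
            target = [rng.choice((v, -v)) for v in range(1, n + 1)]
            row, off = literal_row(target, n)
            res = linprog(c=row, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
            if not res.success:
                return "relaxation infeasible => unsatisfiable"
            # If rounding the LP optimum to 0/1 already satisfies every clause, we are done.
            assignment = [xi > 0.5 for xi in res.x]
            if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in lits) for lits in clauses):
                return assignment
            value = res.fun + off
            rounded = math.ceil(value - eps)
            if rounded > value + eps:
                # Over 0/1 points the minimized sum is an integer, so it is really >= rounded: add that cut.
                A_ub.append([-c for c in row])
                b_ub.append(off - rounded)
        return "no contradiction found in this pass"

    Clauses here are tuples of signed 1-based literals, so cutting_plane_pass([(1, 2), (-1, 2), (1, -2), (-1, -2)], n=2) runs the loop on a tiny unsatisfiable formula; the interesting experiments would be on the “at least 2/3 correct” instances proposed above.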

  16. March 17, 2014 11:47 pm

    One aspect that I regard as interesting from a wider perspective in Scott’s argument is his making the “close calls” the central issue, and regarding this view as a teaching of “Popperian science.” I am not at all sure about this, and, as I said (in the Shtetl-Optimized discussion), the “close call” aspect is weaker for P!=PSPACE, and in certain ways stronger for hardness of unique games (which people are not at all convinced about). So I think this specific aspect is quite interesting and deserves to be discussed.

    Incidentally, in another discussion over Scott’s blog I had another interesting disagreement related to “Popperian science” with Daniel Lidar. Daniel regards the gradual process of finding a model that fits the DWAVE machine (where proponents and opponents of the “quantumness” hypothesis take turns proposing models to fit the machine’s behavior) as manifesting “Popperian science,” while I regard it as a violation of basic teachings of statistical hypothesis testing. Again I think that this disagreement is interesting from a wider point of view.

    • John Sidles permalink
      March 19, 2014 12:59 pm

      It’s fun to see that pretty much all the first-rank math-blogosphere pundits  — Gil Kalai, Scott Aaronson, David Deutsch, Michael Nielsen (etc) — agree that Popperian falsifiability is a wrong-headed view of science (or if not wrong, misleadingly incomplete).

      The reasons differ pretty remarkably from person-to-person, though!

      The strongest mathematical argument (known to me) is John P. A. Ioannidis’ much-cited “Why Most Published Research Findings Are False” (2005). In essence, Ioannidis argues — convincingly, as it seems to me — that data-mining science generates a larger number of plausible scientific hypotheses than data-mining science can ever falsify.

      This perhaps is a concrete numerical reason why so many researchers nowadays argue that explanation has become an essential element of 21st century science *and* mathematics … such that experiments alone are not enough for science, and (perhaps more controversially) proofs alone are not enough for mathematics.

      Two particularly worth-reading arguments for the view that math-is-more-than-proof are Bill Thurston’s “On Proof and Progress in Mathematics” (1994) and Alexander Grothendieck’s “Letter to Ronnie Brown” (1983); the latter includes a marvelous testimony:

      “Throughout my whole life as a mathematician, the possibility of making explicit, elegant computations has always come out by itself, as a byproduct of a thorough conceptual understanding of what was going on. Thus I never bothered about whether what would come out would be suitable for this or that, but just tried to understand — and it always turned out that understanding was all that mattered.”

      This is why the potential undecidability of PvsNP (or any other problem) need not arouse in us any feelings of nervousness or rancor or dismay, but rather can inspire us to seek an understanding of *why* certain mathematical problems (seemingly) are stubbornly intractable and/or undecidable.

      Google Search finds all of the above references, which are all well worth reading and (it seems to me) all convey essentially the same message.

  17. March 18, 2014 12:15 pm

    I’m of the opinion that, whenever the time-behavior of an algorithm for an NP-complete problem gets closer to polynomiality, then either:

    – its correctness is provable and its behavior is undecidable (think of the already-known algorithm which solves SAT in polynomial time iff P=NP)

    – or its behavior is provable and its correctness is undecidable (think of the heuristic algorithms for the Travelling Salesman Problem).

    Of course, this phenomenon is – in my view at least – nothing but a rough algorithmic analogue of the Heisenberg Uncertainty Principle: “the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.”

    The two classes of NP problems that we try to separate – the feasible problems and the infeasible ones – are somewhat reminiscent – to my perverted mind at least – of the Pauli Exclusion Principle: “no two identical fermions may occupy the same quantum state simultaneously.” Don’t ask me what a fermion is, I’ve cut-and-pasted that from Wikipedia… all I know is that the Pauli Principle gives rise to the hierarchy of electron layers in an atom, in very much the same way that P!=NP gives rise to the Polynomial Hierarchy in complexity theory. But maybe it’s just me…

    I had already expressed in a previous comment on this blog that I was amused by the similarity between nuclear fusion and prime multiplication, one operation requiring much more energy – or many more computing steps – than its reverse. Indeed, it can be said that prime multiplication – not provably a one-way function – mimics nuclear fusion in an algorithmic way.

    After all, those similarities shouldn’t be too surprising, since the very possibility of classical computing ultimately rests on quantum grounds. Thus, we may imagine that a few more phenomena have also been transferred from the quantum world into the world of classical computing…

    • March 18, 2014 1:53 pm

      … such as the wave-particle duality – discovered by Louis de Broglie – which has been transferred into our computing world in the form of the proof-program duality – the so-called Curry-Howard isomorphism. The wave viewpoint is known to prevail at lower energies, while the particle viewpoint becomes more relevant at higher energies.

      Programming with proofs is called logic programming: the higher-level the language you’re using, the closer you are to the proof viewpoint. The other side is called algorithmic programming. The lower-level the language you’re using, the more efficient you are, but it requires more energy on your part because you’re closer to the hardware… or if you prefer: closer to the “quantum ground” mentioned above!

      With so many analogies between algorithms and physics, it’s no wonder anymore that mathematics is so well-suited for describing our physical world!

      • March 18, 2014 4:17 pm

        Sorry, *imperative* programming (not algorithmic programming). This is the actual opposite of logic programming. I tend to believe that most undecidable statements in computer science – if not *all* of them – are the virtual image of some quantum phenomenon.

    • March 19, 2014 8:01 pm

      Do you have a reference for this SAT solver that is polynomial time if P=NP? It gets drowned out by noise when I try to GOOGLE it.

  18. March 18, 2014 3:43 pm

    And it is time for the crackpot to say a word in defense of the P=NP position.

    1. There was a big surprise with Hilbert’s program of axiomatization of mathematics.
    2. Many NP-complete problems can be encoded as polynomial problems, especially the decision versions. It is sufficient to look at quartic polynomials for the problem definition in most cases, and sextic for 3SAT.
    3. Sums of squares of polynomials (SOS) can be reduced to SDP. Finding the minimum of a quartic polynomial in 2 variables is a non-convex problem, but it can be solved (for the purpose of showing P=NP) by SOS for a non-trivial reason (see any text on Hilbert’s 17th problem).
    4. Testing positivity of a polynomial is hard because there is no good method for that => the Lasserre hierarchy, which in a sense is similar to searching for Nullstellensatz certificates, except it should be more powerful due to the nonlinear transformations in the SOS/SDP part.
    5. Partition problem (NP-complete): given a vector of integers a \in \mathbb{N}^n, tell whether \exists x \in \{\pm 1\}^n such that a^T x=0. One needs to decide whether the quartic polynomial \sum_i (x_i^2-1)^2 + (a^T x)^2 is strictly positive or not. This is a quartic polynomial, and the size of the monomial basis is polynomial in the size of the problem; it is about n^4. (A sketch follows after this list.)
    5.epsilon. MAX-CUT in its decision version is encodable in quartic polynomials.
    6. Positive polynomials form a cone, and if one could find the basis for this cone, the problem of positive polynomial programming (PPP) would be done and P would equal NP. The only work I’m aware of going in this direction is by Bruce Reznick (http://www.math.uiuc.edu/~reznick/paper53.pdf), and it is based on old ideas of Hilbert, i.e., from more than a hundred years ago.
    7. But there is an alternative way to create positive polynomials coming from NP-complete problems, say partition problems. For example, the polynomial \sum_{k=1}^5 (x_k^2-1)^2 + \sum_{k=1}^5 (x_k)^2 is strictly positive. If one subtracts a small positive number, say 0.1, the polynomial is still positive, but not a sum of squares. In the same way it is possible to create many different examples of positive polynomials: take different values of a, for example take a_k=T for k=1..n-4, where T is a large number, say 100, and take the rest of a to be unsatisfiable; then, changing which places of a equal T, one gets a number of positive polynomials roughly equal to the number of monomials. I did not see this approach in the literature, maybe because it is written in unreadable language, in contrast to Reznick’s works.
    8. (7) is actually saying: instead of solving a particular problem, let’s classify all NP-complete problems encodable by quartic or even sextic polynomials into subspaces of quartic/sextic polynomials. Therefore, it is now a problem of engineering to design enough interesting examples of unsatisfiable problems that cover the space of monomials.
    9. Some people claim there are many more positive polynomials than SOS polynomials as n goes to infinity; therefore the above construction most probably will give non-SOS positive polynomials after subtracting a small positive number. Moreover, since those polynomials are constructed from well-defined examples, it is easy to find their global minima, e.g. the extreme points of the cone of positive polynomials.
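    Item 5 is concrete enough to check directly. Below is a minimal brute-force sketch in Python/numpy (the function names and toy instances are illustrative, not the commenter’s own code): the quartic p_a(x) = \sum_i (x_i^2-1)^2 + (a^T x)^2 is a sum of squares, so it is nonnegative everywhere and vanishes exactly at x \in \{\pm 1\}^n with a^T x = 0; hence asking whether p_a is strictly positive is the same as asking whether the PARTITION instance a has no solution.

    import itertools
    import numpy as np

    def p(a, x):
        # The quartic encoding of a PARTITION instance a, evaluated at the point x.
        a, x = np.asarray(a, dtype=float), np.asarray(x, dtype=float)
        return np.sum((x**2 - 1.0)**2) + np.dot(a, x)**2

    def partition_solvable(a):
        # Brute force over sign patterns: the instance is solvable iff p attains 0.
        n = len(a)
        return any(p(a, s) == 0.0 for s in itertools.product((-1.0, 1.0), repeat=n))

    if __name__ == "__main__":
        print(partition_solvable([3, 1, 1, 2, 3]))  # True: +3 +1 +1 -2 -3 = 0
        print(partition_solvable([1, 1, 1]))        # False: odd total, so p > 0 everywhere

    The brute force is of course exponential; the whole question in items 6–9 is whether SOS/SDP certificates can replace it.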

    I’m very interested in literature going in this direction, e.g. toward solving PPP via subspace classification, similar to the classification of quadratic forms into universal and definite forms. But here we have 3 classes: universal, SOS, and positive quartic/sextic polynomials.

    Sorry for taking your time.

    • March 18, 2014 3:50 pm

      Sorry, in 7 the second sum is squared, not the individual terms.

  19. Anonymous permalink
    March 25, 2014 2:24 am

    I am in the agnostic camp. I think the evidence for P not being equal to NP is pretty thin. Despite the several long posts that Scott has written, his argument essentially boils down to the following:

    “no one has been able to find a polytime algorithm for any NP-hard problem so far”

    which is not very convincing for me.

    Inductive arguments about the history of mathematics rely too much on a homogeneity of the mathematical world which I don’t understand.

    • Javaid Aslam permalink
      March 27, 2014 1:10 am

      Glad to see this dissenting and unpopular response. The problem is that most of these “non-believers,” like Scott and Tim Gowers, have achieved a recognition which tends to back up whatever they believe or claim.

      Yes, there is no mathematical foundation for the NP!=P belief. It is all based on a “what if pigs could fly” conjecture. Some mathematicians/TCS people call them oracles.

      • March 27, 2014 5:50 am

        To me, it all boils down to the wrong equation “mathematical existence” = “physical existence”. The reasoning seems to be: “if that algorithm existed, it would eventually get found, proved correct and efficient, run on a computer and then our lives would be changed forever.” But there are so many things in math that are without any physical existence – real numbers, to begin with…

    • Marcel Kincaid permalink
      March 11, 2018 6:21 am

      ” I think the evidence for P not being equal to NP is pretty thin. ”

      But it’s not, at all.

      ” Despite the several long posts that Scott has written, his argument essentially boils down to the following:

      “no one has been able to find a polytime algorithm for any NP-hard problem so far” ”

      This is utterly false and deeply intellectually dishonest.

  20. April 3, 2014 2:47 pm

    If you want an example of a big surprise: Matiyasevich’s nailing of Hilbert’s Tenth Problem, following work by Davis, Putnam, and Robinson. Georg Kreisel, when reviewing one of the earlier papers in that series, claimed that the result proved had only a superficial relationship with Hilbert’s Tenth Problem…

    Many mathematicians did not believe that every recursively enumerable set, including for instance the prime numbers, could be described as the positive range of a multivariate polynomial over the integers…

    • April 4, 2014 11:04 pm

      Thanks very much for that example. When I first heard the result around 1980, I thought it was a logical outgrowth of the previous work.

Trackbacks

  1. Yale Flint Group and Leonid Khachiyan « Pink Iguana
  2. Shtetl-Optimized » Blog Archive » The Scientific Case for P≠NP
  3. Web Roundup: More Links for March - My blog
  4. On the Edge of Eclipses and P=NP | Gödel's Lost Letter and P=NP
  5. Is Semi-Definite Programming (SDP) Polynomial-Time Solvable? – The PolyTCS Project
  6. A new PolyTCS blog! | Combinatorics and more
