Could We Have Felt Evidence For SDP ≠ P?
Evaluating mathematical beliefs
Rutgers in memoriam source. 
Leonid Khachiyan in 1979 caused arguably the most sudden surprise to the West’s popular scientific understanding since the successful launch of Sputnik in 1957. His ellipsoid method gave the first algorithm for linear programming whose polynomial running time was verified. Thanks largely to Narendra Karmarkar it has been superseded by faster interior-point methods, while older algorithms have since been noted to run in polynomial time, but the breakthrough and inspiration came from Khachiyan. Could something like it happen again?
Today Ken and I want to ask whether the recent argument over beliefs about P = NP can be evaluated in light of this shock.
Khachiyan’s “rocket” had actually left the hangar ten months before, in a January 1979 Doklady Akademii Nauk paper whose title translates into English as, “A polynomial algorithm for linear programming.” As recounted by Berkeley’s Eugene Lawler, it was sighted at a May 1979 meeting Lawler attended in Oberwolfach, and after Peter Gács and László Lovász supplied proofs missing in the paper, it was pronounced correct at a conference in Montreal. The discovery was picked up in October by Science News, and then by the magazine Science. An allusion by the latter to the NP-complete Traveling Salesman problem was moved to the headline of a Nov. 4 story in England’s Guardian newspaper, and reflected three days later in the New York Times’s front-page screamer, “A Soviet Discovery Rocks World of Mathematics.”
Our point is not to say that linear programming being in P was a surprise. To those who knew the reality behind the headlines, it wasn’t. As Lawler relates, the great George Dantzig had tried to set Times reporter Malcolm Browne straight on this and points related to what LP’s can and (ostensibly) cannot solve. The simplex algorithm already solved the vast majority of LP cases in expected polynomial time, so there was no feeling of practical intractability. Rather our point draws on something perhaps less widely known and appreciated: that Khachiyan’s ideas extend to solve a much wider class than linear programs, including so-called semidefinite programs or SDP’s, exactly or with high approximation. Thus it can be said to show that a complexity class defined by “approximation-robust” reductions to these programs equals P.
We raise this with regard to the main technical argument in a recent post by Scott Aaronson titled “The Scientific Case for P ≠ NP.” We wonder whether a similar argument might have seemed on the side of SDP ≠ P in the years before Khachiyan. Even more speculatively, we wonder whether a kind of “Reverse Oracle Result” can be formulated to make any of this more concrete. But first let’s review Scott’s comments in the wider context of belief about P vs. NP and about open problems that were resolved in surprising ways.
Scott’s Comments and Our Positions
Essentially Scott gave a quite reasonable argument for P ≠ NP, in his usual elegant and convincing style. Bill Gasarch expanded it. But. But mathematics is not something we argue about like: who was the best hockey player of all time, or what is the right philosophy? The simple fact is that no one has proved that P ≠ NP.
Our purpose with our recent post on a 13GB certificate of unsatisfiability was not to start a discussion about P = NP, but rather to witness that hardness is not so great a practical obstacle as we may think. The Gröbner basis algorithm is run all the time, despite the problem it solves being complete for exponential space. Notably it runs in singly-exponential time on a generic set of cases. If we can shift down a level, this is like having “smoothed polynomial-time behavior” of an algorithm for a PSPACE-complete problem. Solving nontrivial cases of hard problems is addictive.
Almost the entire business model of this company is to solve hard optimization problems, using non-quantum computers. As is evident from examples on their website, they are not just grinding out easy approximations for run-of-the-mill instances. To quote one of their blog entries (their emphasis):
According to academic research on NP-hard problems, it’s impossible to guarantee that optimal solutions to difficult problems will be found in a reasonable time frame. However, with almost all real-life planning puzzles, you can get excellent results very quickly.
Hence our initial reaction was, who cares about discussions on P ≠ NP being true or not, aside from progress on this great question? A full solution would be wonderful, but just having small steps would be great, even a possible program for a solution would be welcome. So that was what we thought we should just say, nothing more, except noting our split answers to Bill G’s reprised poll three years ago.
But. But Dick couldn’t resist adding some more sections, while Ken made some effort to counter Scott’s facts, counterfactually.
Dick Speaks
I feel compelled to explain why I am open-minded on this question perhaps more than anyone else. I have several reasons that I feel are important to remind all of us:
 In mathematics people have guessed wrong before on famous open questions.
 The actual theorems about why P is weaker than larger classes are extremely hard and very weak.
 Even if P ≠ NP, it could still be that SAT is computable in time n^{O(log n)}—while the evidence cited by Scott is properly for an exponential lower bound.
We’ll address the full P versus NP question, and not situations where say the algorithm generating instances is restricted to logarithmically many random bits—a case in which we’ve noted that in the limit one can solve them all in something like quasipolynomial time.
Bad Guesses
I have discussed guesses in mathematics many times before on this blog. One of the biggest issues in guessing wrong is that people do not take seriously the other possibility. Researchers tend not to work on showing P = NP anymore. Research support does not go there, since we all “know” that it would be a waste of time, and there are other consequences to the field.
Here are some famous guesses that were essentially off by exponentials. For each I will list the time gap between the initial problem being raised and being solved.
 The Borsuk Conjecture [60 years]. It was claimed that d+1 was the answer to this geometric conjecture. Many top people worked on it for years and partial results were proved, showing d+1 is correct in many important cases. Then Gil Kalai and Jeff Kahn proved that d+1 was more than slightly off; the correct answer is exponential—at least c^{√d} for some c > 1, and possibly with d in the exponent.
 Barrington’s Theorem [26 years]. I worked on lower bounds to show that bounded-width computation could not compute the majority function. This was joint work with Ashok Chandra and Merrick Furst, which introduced multi-party communication. Then David Barrington came along and proved that bounded width could do all of NC^1. Hmmmm.

 Law of Small Numbers [118 years]. It was noticed by Gauss that π(x) < Li(x) for all reasonable sizes x. Here Li(x) is the logarithmic integral Li(x) = ∫ dt/ln t, which is an asymptotic approximation to π(x), the number of primes less than x. It was conjectured that this would always hold and was widely believed for over a century. Then John Littlewood proved that the lead changes between π(x) and Li(x) infinitely often, although the first switch is upper bounded by an immense number. Richard Guy wrote a wonderful article on what he called the “The Strong Law of Small Numbers”: cases when evident phenomena held for small numbers but eventually would fail. Here is a table with other examples:
By the way the “common clock” on P vs. NP is 43 years.
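Gauss’s observation is easy to check numerically. Here is a minimal sketch (my own illustration, plain Python with only the standard library) comparing π(x) with the offset logarithmic integral Li(x) = ∫ from 2 to x of dt/ln t at a few small values; Li indeed stays ahead in this range, even though Littlewood proved the lead changes infinitely often.

```python
import math

def prime_count(limit):
    """pi(limit): count primes up to limit with a sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

def li_offset(x, steps=100000):
    """Offset logarithmic integral: integral from 2 to x of dt/ln(t), trapezoid rule."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    for i in range(1, steps):
        total += 1 / math.log(2 + i * h)
    return total * h

for x in (1000, 10000, 100000):
    print(x, prime_count(x), round(li_offset(x), 1))
```

The trapezoid rule is crude but far more than accurate enough to see the gap at this scale.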
Weak Theorems and Galactic Algorithms
We do have the theorem that DTIME(n) is not equal to NTIME(n), which we have discussed before and which is particular to the multitape Turing machine model—make the tapes planes or trees and it goes away. We cannot even deduce from it that DTIME(n^2) ≠ NTIME(n^2). That’s pretty weak. Remember that P ≠ NP means that P does not contain SAT. And more, of course.
The P versus NP statement still allows cross-cutting the generally-understood significance. That is:
 P = NP allows SAT to have no less than an n^{100}-time algorithm.
 P ≠ NP still allows SAT to have an n^{O(log n)}-time algorithm.
To be sure, some evidence cited by Scott is really for an exponential lower bound on SAT; we have discussed this before too. But what we are saying still cuts against the usual argument that “many people have worked on looking for algorithms for NP-complete problems.” Yes many have looked for algorithms, but most were interested in “real” practical algorithms. For this kind of quest there is not much difference between n^{100} and n^{O(log n)}.
Aram Harrow communicates to us a more-concrete version of this point, which also serves as a bridge to Ken’s musings.
Aram’s Bridging Thoughts
Quoting Aram, with light editing: “One of the stronger reasons for P ≠ NP is the Bayesian one—it is easier to find algorithms than lower bounds, so our failure to find a subexponential-time algorithm for SAT speaks louder than our failure to find a superlinear lower bound for SAT. A related way of expressing this is that before NP-completeness was understood, thousands of researchers in disparate fields were unwittingly all trying to put NP into P.
But a counterargument is that all of those seemingly independent researchers would always come up with algorithms that relied on a few kinds of structure—a lot is covered by just two kinds:
 Convexity/absence of local minima—cases where greedy algorithms work.
 Tree structure—cases where dynamic programming, divideandconquer, and related ideas work.
This paucity of algorithmic variety can be viewed in (at least) two ways:
 Algorithms, like oil deposits, are a finite resource. We’ve found all the important ones, and SAT is always going to require exponential time.
 The paucity is a sociological phenomenon, due to the fact that all researchers have learned essentially similar math backgrounds.
On the latter, Terry Tao’s recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how much more quickly progress can be made by cross-applying ideas rather than coming up with radically new ones. Going from Erwin Schrödinger’s equation to Peter Shor’s quantum factoring algorithm is a 60-minute lecture, but it took over 60 years (and a change in perspective coming from the computer revolution) to discover. Our lack of algorithms reveals only our lack of creativity, and it is arrogant to posit fundamental limits to mathematics just because we can’t solve a problem. Either way, the central question isn’t so much about P versus NP but rather a “throwback” of a question now being asked about quantum computers:
Where does the power of deterministic algorithms come from?
A related question is the power of the Lasserre hierarchy. It has been shown to be effective for a large number of problems, but with a surprisingly small number of truly different techniques. I would love to know whether further work will increase or decrease the number of ways in which we know how to use it; that is, either by discovering new methods or by unifying apparently different methods.”
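Aram’s two kinds of structure can be illustrated in a few lines of Python (the examples and names here are mine, not Aram’s): a greedy algorithm that is exact because no local choice can hurt (interval scheduling, a convexity-flavored case), and a dynamic program over a tree (maximum-weight independent set, NP-hard on general graphs but easy on trees).

```python
import sys

def max_nonoverlapping(intervals):
    """Greedy by earliest finish time: locally best choices are globally safe here."""
    count, current_end = 0, float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= current_end:
            count += 1
            current_end = end
    return count

def tree_mwis(adj, weight, root=0):
    """Max-weight independent set on a tree: DP returning (take, skip) per node."""
    sys.setrecursionlimit(10000)
    def solve(v, parent):
        take, skip = weight[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            t, s = solve(u, v)
            take += s            # taking v forces skipping each child
            skip += max(t, s)    # skipping v leaves each child free
        return take, skip
    return max(solve(root, -1))

print(max_nonoverlapping([(0, 3), (2, 5), (4, 7), (6, 9)]))             # -> 2
print(tree_mwis({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, [3, 5, 4, 6]))  # -> 11
```

Both routines run in near-linear time; the same independent-set problem without the tree structure has no known polynomial algorithm.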
Ken’s Sixth World
The Lasserre hierarchy builds upon LP’s and SDP’s, and this brings us back to the intro. I (still Dick) remember many people in the 1970’s trying to prove that certain linear/convex programming problems were hard, despite all our confidence in the simplex algorithm for daily use. This makes Ken wonder:
What if SDP’s really were hard?
Russell Impagliazzo famously categorized five worlds that are consistent with current knowledge of complexity. Is there room to analyze any more, ones that are inconsistent now, but might have been meaningfully consistent had our field taken a different path?
All of Impagliazzo’s worlds—including ones with P = NP and with P ≠ NP—have been instantiated via oracle results. All oracle results involve pretending that some ostensibly hard problem is easy. For instance, the world with P = NP involves pretending NP-complete problems are easy, while known ones for P ≠ NP involve granting free access to a set coded so that the free access itself activates a diagonalization. What I (Ken) wonder is whether there is a sensible formal way to do “Reverse Oracle Results,” which pretend that some easy problem is hard.
One known way to get this effect is to narrow the definition of “easy” so that a problem A still has easy reductions to it from other problems B. For example, linear programming problems are P-complete under logspace (and even easier) reductions, as are problems of approximation by SDP’s. But here I mean something more structural—a sense in which A is the only route to solving a whole class of problems B. Then we can segregate this entire class and pretend it all is hard. It might suffice to give “easy” reductions from A to all these B. In particular, a lot of B’s are solved via the interior-point paradigm for (LP’s and) SDP’s. It could also employ ideas in reverse mathematics.
Countering Factuals with Counterfactuals
Scott replied to my comment about a possible algorithm for SAT in his post by referring to his earlier comment that:
Since it would be inelegant and unnatural for the class NP to be “severed into two” in this way, I’d say the much likelier possibility is simply that P ≠ NP.
Our point is, perhaps P remains already manifestly “severed into two” along Khachiyan’s and the interior-point faultlines. In particular, we wonder how the following consequences would have looked as conditional results had they been proved in the late 1970’s:
Theorem 1 If P = SDP, then it is possible to compute in polynomial time close approximations to a function ϑ(G) of undirected graphs that is sandwiched between the clique number ω(G) and chromatic number χ(G), which all coincide whenever neither G nor its complement has an induced odd cycle of length five or more.
Theorem 2 If P = SDP, then MAX-CUT is polynomial-time approximable within the factor 0.87856, even though it is NP-hard to approximate within 0.941.
In the first theorem we have also inverted time by embracing the Strong Perfect Graph Theorem, which was proved in 2002 (final in 2006), but this conjecture was strongly felt and makes it transparent that many important families of graphs are perfect. Hence the sweeping tractability of major NP-complete problems on these cases could be a surprise. On the second theorem, why should the difference between 0.87856 and 0.941 matter to such a simple problem as MAX-CUT?
Of course in the light of knowledge we understand how these two famous theorems work. On the latter the Unique Games Conjecture already helps explain how MAX-CUT may be special. But the present exercise is about how we reason when we don’t (yet) have the light of knowledge.
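Both constants in Theorem 2 can be recovered from first principles: 0.87856… is the Goemans-Williamson ratio, the minimum over 0 < θ ≤ π of (2/π)·θ/(1−cos θ), which is the guarantee of random-hyperplane rounding applied to the SDP solution, while 0.941 is approximately 16/17, the hardness threshold. A quick numerical sketch (assuming only that formula for the ratio):

```python
import math

def gw_ratio(samples=200001):
    """Approximate min over 0 < theta <= pi of (2/pi)*theta/(1 - cos(theta)) by dense sampling."""
    best = float("inf")
    for i in range(1, samples + 1):
        theta = math.pi * i / samples
        best = min(best, (2 / math.pi) * theta / (1 - math.cos(theta)))
    return best

print(round(gw_ratio(), 5))   # close to 0.87857, the 0.878... approximation constant
print(round(16 / 17, 4))      # 0.9412, the hardness threshold quoted as 0.941
```

The minimum sits at a flat interior point, so even this naive grid search pins the constant to many digits.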
Open Problems
Can we make some formal sense of a world where Khachiyan’s breakthrough never happens?
Update 3/18/14: The comments section has some excellent replies, including ones on the argument of the last two sections by Scott Aaronson and by Timothy Gowers.
Update 3/17/14: The Lovász theta-function example replaced the post’s original quoting of Leslie Valiant’s first “Accidental Algorithm” under a mismemory that LP’s were involved (and SDP’s in later developments). Snipping the irrelevant qualifier, it reads:
Theorem 3 Counting assignments to a certain class of planar formulas is deterministic polynomial-time computable modulo 7, or modulo any Mersenne prime, even though it is NP-hard modulo 3 or modulo 5.
Aside from the fact that this computation belongs to a class commonly believed to be a proper subclass of P, the intent was to argue: “What do Mersenne primes have to do with matchings and convex programming really? Surely counting must be equally hard modulo any odd prime—after all there’s no exception for Mersenne primes in modular circuit lower bounds—so P ≠ NP must be an ‘invisible fence’ around this kind of ridiculousness.” The intro and Borsuk statements were also amended.
I wonder how much of this post applies to P not equal to PSPACE?
That stayed on the cutting-room floor, leaving just the Gröbner-basis algorithm (technically, the “F5” algorithm) example to raise that mention. It is true IMHO that any P=NP claim should clarify whether it applies to P=PSPACE.
Professor Regan: Could you clarify why P=NP should clarify P=PSPACE?
If P=NP, then the entire polynomial hierarchy collapses to P. This means that for all fixed k, the algorithm giving P=NP extends to answer questions of the form
(\exists Y)(\forall Z)(\exists W)….(Q_k A) R(x,Y,Z,W,…,A)
with k many alternating quantifiers, and with R(x,Y,…,A) computable in time poly(|x|). Now the question is, how does it extend when k too is variable? If the time scales nicely enough with k, then P=PSPACE follows.
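To make this concrete, here is a toy brute-force evaluator (my own illustration, not from the original comment) for a formula with k alternating quantifier blocks. Under P = NP each ∃/∀ level could be answered by a polynomial-time subroutine instead of the enumeration below, giving a polynomial algorithm for each fixed k; the open issue above is whether the bookkeeping scales well enough when k varies to yield P = PSPACE.

```python
from itertools import product

def eval_qbf(quantifiers, predicate, n, assignment=()):
    """Evaluate (Q1 X1)(Q2 X2)...(Qk Xk) predicate(X1, ..., Xk), where each Xi
    is an n-bit block and quantifiers is a string over 'E' (exists) / 'A' (forall)."""
    if not quantifiers:
        return predicate(*assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, predicate, n, assignment + (bits,))
                for bits in product((0, 1), repeat=n))
    return any(branches) if q == "E" else all(branches)

# Quantifier order matters:
print(eval_qbf("EA", lambda Y, Z: Y == Z, 2))  # False: no single Y equals every Z
print(eval_qbf("AE", lambda Y, Z: Y == Z, 2))  # True: each Y is matched by Z = Y
```

The running time here is (2^n)^k, exponential in both parameters; the hypothetical P = NP subroutine would replace each level's enumeration, taming n but leaving the dependence on k to be analyzed.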
Professor Regan: I still don’t get it. I did some background research. It seems that P=NP just means P=PH and it can be that P != PSPACE. Why should P=NP explain P=PSPACE?
Interesting take that P already remains severed. I wonder what Scott would say.
Wasn’t convex programming (modulo the epsilon additive error which is necessary) known to be in NP intersect coNP just as LP was? Why is it surprising that it was in P?
We say in the intro that LP etc. in P was not a surprise. Factoring is an example of a problem in NP intersect coNP whose membership in P would cause surprise.
Factoring is different in that it is a search problem rather than a decision problem. The same issue holds for problems in classes such as PPAD. I still don’t get why you say that SDP being in P was the big deal about Khachiyan’s result rather than LP.
Not until this paper:
Ramana, M. V. (1997). An exact duality theory for semidefinite programming and its complexity implications. Mathematical Programming, 77(1), 129–162.
Your “electric fence” argument is not analogous to Scott’s. If you made it more analogous to Scott’s, it would not be an argument for P != SDP, but rather for SDP != NP.
Notice how you’ve stated the first part of each of your two theorems as “If P = SDP, then X is in P”, rather than the simpler “X is in SDP”. This is awkward and has no analogue in Scott’s argument and should suggest to you that you are doing something wrong.
Restating, then, theorem 1 becomes “Counting matchings in a certain sort of graph mod 7 or mod any Mersenne prime is in SDP, but mod 3 or mod 5 is NP-hard”. And Theorem 2 becomes “Approximating MAX-CUT within 0.941 is NP-hard, but approximating it within .87856 is in SDP”.
Neither of these are really good “electric fence” arguments, but they’re now closer to what Scott was saying. Theorem 2 is the closer one — we’ve got a real-valued parameter (call it r), and if r is too low, one argument shows the problem’s in SDP, but if r is too high, an entirely different argument shows the problem’s NP-hard! It’s not quite the electric fence, though, because it’s not so near a miss; there’s a nonzero gap between the two regimes. Still, it is reminiscent of the electric fence argument, and if we go along with the analogy, it suggests SDP != NP.
Theorem 1 is a little further away, because there’s not even the notion of ordering and distance there with the parameter. Perhaps instead of a real electric fence, there’s some sort of 2-adic electric fence? 🙂 (Note: I don’t think that actually works.)
A different weakness in my argument is that proving those as conditional theorems would still have exposed the trick of how they work: where mod 7 comes in, and for MAX-CUT, why 0.878… vs. 16/17—though not yet the Unique-Games reason for the neighborhood of .878… to be tight. Anyway this was the best I could do for a “counterfactual fence” by which to try to test Scott’s argument. The most portentous point IMHO is actually the one about the polynomial ideal membership problem being EXPSPACE-complete even though an algorithm solving it (on the natural domain extending the complete cases) is generically-EXPTIME, though this doesn’t fall along the lines of testing Scott’s argument.
I believe Theorem 1 is misquoting the literature. Counting matchings in planar graphs is doable in polynomial time (and thus counting modulo any number is as well) – this is the FKT algorithm powering Valiant’s holographic algorithms.
The counting modulo 5 hardness result versus counting modulo 7 easiness result is for certain versions of planar SAT (as seen in the Wikipedia entry linked to in the blog post).
Fixed, thanks. I considered mentioning the FKT algorithm. I may however be wrong to think that SDP’s are involved in what’s been built atop Valiant’s algorithm since.
You know, the class of problems solved in polynomial time by the ellipsoid method is known as the class of “convex optimization problems.” The way this post is worded, in particular insinuating that SDP is the be-all and end-all of convex optimization problems (which are much more general), seems to ignore that fact. Care to elaborate?
I picked “SDP” as the lowest thing that captured the “surprise” examples and is solved by Khachiyan. The main editorial issue as the intro went through revisions was that ellipsoid is beaten by interior-point on SDP’s (for most purposes) but there seems to be no common “umbrella” name for the two. Meanwhile for the more-general convex problems I couldn’t find greater simplicity or surprise than for “SDP”, which is > LP and popular in theory right now. My other impression is that the Lasserre hierarchy is in some sense universal for approximations of convex optimization—?
Added: I changed the intro’s “…a much wider class than linear programs, called semidefinite programs or SDP’s” to “…a much wider class than linear programs, including so-called semidefinite programs or SDP’s.”
re P vs NP recently the PCP thm popped up on tcs.se … find it amazing nobody cites PCP theorem as possible hint that P=NP. overall this theorem seems to show that NP testing can be “compressed” to an amazing degree. have you covered this anywhere in your blog? havent seen it. ❓
since you cite him…. to me scott aaronson blogs a bit too much on P vs NP opinions vs real facts. 🙄 did anyone notice how few refs he used in his latest blog? re yellow & green frogs or whatever its a colorful and appealing analogy for the layman, but as a serious scientist, wish he would ground it with more rigorous citations in TCS. eg turn the overall essay into a arxiv preprint. alas it is written by someone who maybe spends a bit too much time arguing over which sports team is gonna win. he says he doesnt care about sports in his blog but maybe he doth protest too much. actually maybe if he did adopt a team maybe he could do less “inside baseball” on TCS instead…. 😛
would write this on his blog but he’s blocked me for a 1yr sentence, yeeks! & kinda doubt he will ever let up on that. 😥 find it hard to take him totally seriously as a professional scientist at times because of all his hardcore overthetop pontificating on various subjects eg Dwave in particular…. oh yeah and his resistance to open science on his blog. just announced blocking lubos motl for 3years! guess should feel honored. 😎
see also disagreeing with scott aaronson re QM physics
anyway, favorite topic, great stuff, keep up the good work as always 😀
Mathematicians seem to be striving to keep this question inside the mathematical field. Nevertheless, the yellow & green frogs, that was a biological argument… and the electric fence, isn’t it a physics argument? Though I’m convinced PvsNP has nothing to do with mathematics, I’d never expected a brilliant mathematician like Scott to illustrate this viewpoint – even subconsciously. Just kidding – well, almost.
re mathematicians vs P vs NP. shor has stated publicly he feels mathematicians didnt think it was a serious question until it remained open for a few decades. now there is serious mathematical interest after about 3 decades or so & claymath prizes (selected largely by mathematicians) helped “legitimize” it. gowers is doing a lot to bring mathematical interest. so, not exactly sure what you meant “mathematicans seem to be striving to keep this question inside the mathematical field”.
Because it’s a mathematics statement which can’t be answered for reasons that belong to physics. It so happens that math can be considered to be part of physics – proofs are algorithms and algorithms are meant to be run on hardware. So that this alleged mathematical undecidability is at the bottom line a physics phenomenon that’s been amplified by means of software. Yes, math is nothing but software… but in the present case it acts as an electronic microscope!
Prof. Lipton, I would like to point out that much care needs to be exercised when making claims about SDP’s complexity.
In particular, we do not even know if the Semidefinite Feasibility Problem (SDFP := the decision problem of whether a Linear Matrix Inequality (LMI) A(x) ⪰ 0 with rational/real coefficients has a real solution) is in NP in the Turing machine model (the difficulty here comes from the fact that LMIs with integer coefficients could have only irrational and/or strictly doubly exponential rational solutions, as shown in the paper cited below), much less whether it is in P.
The polynomial-size gap-free dual (the standard SDP dual can exhibit duality gaps) that I developed back in 1995 (citation given below; see subsection 4.3, in particular) gives us that its membership in NP and coNP would be simultaneous.
In the real number model (of Blum, Shub, Smale), the problem does belong to NP, and hence to coNP by the duality result above.
The ellipsoid method of Khachiyan, as expanded upon by Grötschel, Lovász and Schrijver in their celebrated book, gives us polynomial-time algorithms for various APPROXIMATE versions of membership and optimization problems related to convex sets equipped with separation oracles (SDP solution sets, which are called spectrahedra, being a special case of this). Those methods, however, do not work in addressing exact questions (eg, SDFP as stated above.)
Interior point methods (IPM) as advanced by Karmarkar, Nesterov, Nemirovski and many others do give polynomial arithmetic complexity, again for approximate versions such as finding a pair of primal and dual solution estimates that are within a given epsilon of being feasible to their respective problems (approximate solutions can be turned into exact solutions for LP and convex QP, but not so for SDP), and are within epsilon of each other in terms of objective values.
One crucial factor to note here is that bit-length analyses of the numbers of the computation are lacking in the case of IPMs (for anything beyond convex QP), and so anyone invoking them needs to do their own analyses in order to be able to make definitive complexity assertions. Most researchers have been ignoring this part. In the case of the GW relaxation for MAX-CUT, I and Leo Khachiyan (rest in peace, my friend) had at some point discussed and felt fairly confident that one could, if they sat down to do it, establish bit-length complexity bounds for that application by, if necessary, slightly dropping down the quality of solution being sought.
Long story short, as of now, it is entirely possible that the general SDFP is NP-hard (although, I’d compare the likelihood of that being the case to P being equal to NP.)
Although the work mentioned above is a bit dated, apparently no significant developments have taken place re. the complexity of SDFP over the last 15+ years.
Citation to my duality/complexity paper:
M. V. Ramana. An exact duality theory for semidefinite programming and its complexity implications. Math. Programming, 77 (2, Ser. B):129–162, 1997 (the paper should still be available on citeseer.)
Thanks.
MVR
ps: Thanks for your excellent blog. As a regular visitor, I enjoy your selection of topics and the lively discussion that the posts generate.
Dear Professor Ramana,
Thank you for this great elucidation. The “SDP” part is mine—looking for a shortcut way to name a class that includes LP and the MAXCUT approximation and is solved by Khachiyan’s methods. I guess it’s not so easily named as a class.
—Ken R.
Dear Dick and Ken,
Khachiyan and LP: It is great to read about Leonid Khachiyan and about linear programming, which provides a steady source of surprises. Indeed LP in P was a wonderful breakthrough. I remember that Paul Erdos, who was generally uninterested in CS (and applied math), mentioned it as a great breakthrough in a lecture shortly after it was discovered. In this blog post of mine you can find a nice picture of Dancis and Khachiyan. We can expect that LP and related things will continue to supply great surprises.
Three possible general approaches
Generally speaking we can consider three points of view:
1. Miracles happen
2. We cannot expect miracles but understanding happens
3. We can expect neither miracles nor understanding
Overall, I am a type2 person. I expect understanding but not miracles. My view regarding understanding is that
4. we should grab any piece of understanding that we can get (along with skepticism and nudging), and not be overly picky about formats and formalities.
NP ≠ P: Given my type-2 view, getting to the NP ≠ P question, I firmly think that P ≠ NP.
In the case of NP ≠ P, a proof that NP is not P will also be a miracle of a sort, so perhaps (in accordance with 4) a tentative but rather detailed understanding of the kind we have is roughly what we can expect.
I remember meditating over here (perhaps a little under the influence of narcotics) about various formal ways that the difficulty of proving NP ≠ P (or rather a non-uniform version) may itself reflect a difficulty of the NP ≠ P nature.
The evidence for NP ≠ P: In my opinion the two main (related) pieces of evidence for NP ≠ P are, first, the huge failed implicit and explicit effort to find efficient algorithms for intractable tasks, and, second, the major coherent mathematical structure based on the NP ≠ P conjecture and related conjectures.
Surprises. I don’t think we have witnessed any mathematical surprise of magnitude like P=NP or even one order of magnitude smaller. So while bringing up surprises is clearly relevant, looking at it carefully does not support NP=P.
Discussing mathematical statements: For me an exciting question is how (and if at all) mathematical statements (like the NP ≠ P conjecture) can be discussed and to what extent people can give scientific/academic evidence and discuss such evidence. (I also care a lot about applied math statements.) I tend to believe that this is possible (and certainly interesting to try) to some extent. (In fact I am trying, on my and other blogs and related platforms.) But we can be skeptical about this issue as well.
Gil, thank you for this civil, wellreasoned post!
Remarkably enough, a comparably shocking mathematical discovery DID happen again, only six years later: Dan Shechtman’s discovery of quasicrystals in 1984.
Following Scott Aaronson’s excellent advice — that concrete mathematical ideas/questions/answers are best posted to StackExchanges — I have posted a hyperlinked summary of Shechtman’s quasicrystal work to MathOverflow as “Lessons from Crystallographic Classification,” which is offered in answer to Scott’s original question “Analogues of P vs. NP in the history of mathematics”.
Shechtman’s Burning Arrow: Shechtman’s work assailed not the theorems of crystallographic classification (which were mathematically unassailable), but rather the starting postulates; thus Shechtman’s work was received by established crystallographers as a transgressive GLL-style “Burning Arrow” (as Khachiyan’s work was not).
Possibly for this reason, Shechtman’s work aroused a storm of sardonic rancor from eminent crystallographers (while Khachiyan’s was received quiescently); in retrospect this sardonic rancor reflected poorly upon the researchers who indulged in it.
Conclusion: Shechtman’s quasicrystal work (arguably) teaches broad lessons in regard to P vs NP and in particular suggests natural questions. Do complexity-theoretic “quasi-algorithms” reside within the gap between P and NP? Do the postulates that define the boundaries of the Complexity Zoo therefore require Shechtman-style revisions to accommodate not just new species of algorithms, but new phyla?
No one knows the answers to these tough questions.
Summary: Quasicrystal history teaches three plain lessons: (1) mathematical postulates sometimes require transformational revisions; (2) it is neither necessary, nor feasible, nor desirable that everyone think alike in regard to such revisions; and (3) sardonic rancor contributes little to mathematical discourse … whereas Shechtman-quality Burning Arrows contribute much!
Dear John, the example regarding quasicrystals is very interesting! (As always it is not one-to-one, but there are interesting similarities.)
Yes Gil … moreover, there is a natural overlap in regard to Subhash Khot’s Unique Games Conjecture (UGC) that I have posted as a MathOverflow UGC/quasicrystal comment.
It is very enjoyable to be alive in a century in which the same problems appear over and over, in ever-surprising guises, to mathematicians, scientists, engineers … and even medical researchers!
Can human limitations be considered evidence for a scientific theory?
Conjectures about elegance or miracles attempting to support a scientific theory are strictly man-made; they remain within the boundaries of human imagination.
Biological systems are themselves miracles, but they are taken for granted. The navigation used by whales, or the thousands-of-miles migrations of butterflies: are they not worth noting? It is only the man-made “miracles” that are rare. Most of the great “inventions” are imitations of biological systems; whether it is an airplane or a camera, they are at best an imitation, developed over a long period, long after the discovery of the founding principles. So what does not exist in a natural biological system, in an explicit form, could be much harder to “invent”.
The NP v. P issue is an artificial phenomenon, far removed from the known biological processes. That might be the key reason that the majority has resorted to the more comfortable conjecture, that is, NP != P.
Sometimes there is a faint line between discovery and invention; sometimes it could be a matter of interpretation. While an NP=P event is not quite a black swan, it just might be only a slight mutation of a black swan, an already discovered “miracle”.
Elegance versus miracles… interesting metaphor! Thus if you want an elegant theory, you claim in the first place that miracles don’t happen… but when you’re actually expecting them, you’d better leave out elegance forever! And we – poor humans – are bound to always stand in between…
On a lower level I wonder if one can have a world where Barrington’s theorem is false.
Your post reminds me of the post you did where you stated your answers to Gasarch’s poll and also gave an account of a poll on NC^1 and BWBP in 1983, where most people thought them not equal. (The poll was fictional, but I fell for it and commented “bravo”.)
Let’s hope we all live to see the fate of the scientific arrogance about NP!=P.
I firmly believe that P ≠ NP, but I would like to try to play devil’s advocate by guessing what Dick’s answer (and maybe Ken’s; I’m not sure where he stands on this question) would be to the point that there is a huge and delicate structure of complexity classes, all of which would collapse if P = NP. Surely, one might think, the fact that we have all those NP-complete problems, all of which appear to be intractable, is evidence that something objective is going on? Is all that structure an illusion?
In a P=NP world, the answer might be something like this. No, it’s not an illusion. It’s just that our notion of reduction needs to be changed from polynomialtime reducibility to reducibility using some subset of P that can perhaps be formally defined and that corresponds to the kinds of polynomialtime algorithms we are familiar with today. So it seems to me that our feeling that “something must be going on” could be justified even if P=NP.
I agree with pretty well everything Gil writes above, but let me attack that too. (So I’m attacking myself in the process.) Gil says that surprises of the magnitude of P=NP just don’t happen in mathematics. But one might argue that that is begging the question, since Dick’s claim could be taken to be that it would not be a massive surprise if P=NP. Or rather, it ought not to be a massive surprise — clearly, given what people have said, it would in practice be a huge surprise. Somehow, we have to explain why it would be a huge surprise if P=NP, so the argument that surprises of that magnitude don’t happen is circular.
On the other hand, I find some of the examples of earlier surprises unconvincing. For example, the relationship between π(x) and li(x) seems to me to be one where the evidence in favour of the conjecture was purely observational. There weren’t any theoretical reasons for one always to be bigger than the other. Contrast that with the massive surprise there would be if the twin-prime conjecture were false. That would be a surprise because it would show that a heuristic principle (that in a very precise sense the primes behave randomly), which has been confirmed in many, many instances, is in fact wrong. I think the main point of Scott Aaronson’s post was to give similar reasons for believing that P ≠ NP: that it has been subjected to tests (and not just attempts to find algorithms for NP-complete problems) that, had it been false in general, it might well have failed. So even if we start out open-minded, with a 50-50 prior distribution, say, subsequent observations should result in a posterior distribution that is more like 99.9 to 0.1.
In the end, I think what convinces me that P ≠ NP is that huge complexity can come from iterating a few very simple operations. There are many, many examples of this phenomenon. If those operations are chosen as suitable operations on n-bit sequences, then we can rapidly produce Boolean functions that appear to behave completely randomly. The assertion that they are genuinely pseudorandom is of course stronger than the statement that P ≠ NP, but I see no reason to disbelieve it. (I’m not talking about pseudorandom generators based on the hardness of factoring; the evidence for that seems to me to be much weaker, though I’d still guess that it is hard.)
There’s a parallel here with the primes. My belief in the twin-prime conjecture is strengthened by the fact that it can be massively strengthened (roughly speaking, to the statement that all configurations appear in the primes with the frequency you would expect after accounting for small divisors) and still appears to be true.
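Gowers’s point that iterating a few very simple operations rapidly produces random-looking Boolean functions is easy to see empirically. Below is a minimal sketch (the round function is an arbitrary toy mixer of my own, not one from the literature): after eight rounds, flipping any single input bit changes about half of the 16 output bits on average, the classic avalanche effect.

```python
def round_fn(x):
    """One round of a toy 16-bit mixer: two shift-XORs plus an odd multiply.
    Each operation is trivial on its own; the iteration does the work."""
    x ^= (x << 5) & 0xFFFF
    x ^= x >> 3
    return (x * 0x2D) & 0xFFFF  # 0x2D is odd, so the multiply is invertible mod 2^16

def mix(x, rounds=8):
    for _ in range(rounds):
        x = round_fn(x)
    return x

def avalanche(samples=256):
    """Average Hamming distance between mix(x) and mix(x with one bit flipped)."""
    total = count = 0
    for x in range(samples):
        y = mix(x)
        for b in range(16):
            total += bin(mix(x ^ (1 << b)) ^ y).count("1")
            count += 1
    return total / count
```

Running `avalanche()` gives a value close to 8, i.e. roughly half of the 16 output bits flip, which is what one would expect of a random function; the point is only that nothing in `round_fn` looks remotely complicated.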
Prof. Gowers, the point of view that you express is exceedingly congenial to simulationists and software engineers, for the following pragmatic reasons.
In the ordinary course of software/simulation verification and validation (V&V) it generically happens that engineers are confronted with algorithms whose claimed runtimes we are unable to verify. Naturally we wonder, and even formally postulate, that the runtime classification problem is not only empirically hard, and not only NP-hard, but even undecidably hard (in both mathematical senses of “undecidable”).
Supposing this hypothesis to be true (as is reasonable, given that it’s consistent with all of our practical experience and with all of our present theorems), what effective resolution(s) to P vs NP can engineers hope for?
The least-radical resolution is definitional adjustment(s) to complexity theory, along lines comparable to the definitional adjustment(s) that quasicrystals induced in the International Union of Crystallography (IUC):
As everybody appreciates (but nobody really understands, least of all me), the collegial discussion of such revisions is historically associated with regrettably high levels of incivility, sardonicism, and rancor, which in retrospect contribute insubstantially to progress in any research discipline.
Summary: Complexity theorists like Juris Hartmanis, Doron Zeilberger, and Charles Bennett have prepared a reasonably plain path forward (in literature surveyed on TCS StackExchange), and yet complexity theory has not yet exhibited its “Burning Arrows” (of cryptic algorithms) and/or found its Dan Shechtman (to shoot those cryptic arrows).
That is a distinguished role, to which young complexity theorists may reasonably aspire, and we can all wish them every success in it!
For the benefit of readers new to Gödel’s Lost Letter (GLL), the phrase “burning arrow” and/or “flaming arrow” is a recurring GLL trope associated with Gary Larson’s immortal Far Side cartoon (which regrettably Google Image does not find) in which a cowboy gazes in dismay at a newly-fired arrow and exclaims “Burning arrows! Are they allowed to do that?”
In GLL usage, the phrase “Burning Arrow” idiomatically refers to the natural uneasiness that we all feel whenever mathematical postulates are altered … an uneasiness that is hilariously remediated by that Far Side cartoon.
Tim, thanks for the substantial comment. My own position is P ≠ NP, indeed circuit lower bounds of exp(Omega(n/log n)) for the most “compact” NP-complete languages. This stops short of signing on to the exp(Omega(n)) circuit lower bound which derandomizes BPP and lots of etc. The log n divisor is separate from issues like the encoding size of an m-clause formula or m-edge graph being ~ m log m.
Our intent was to look for things that might have been said to have passed tests of the kind you describe, but failed. Regarding twin primes, do the expectations you mention include that there should be arbitrarily large k such that some twin-prime pair flanks a multiple of 2^k?
On that last question, if we naively assume that the probability of a number of the form 2^k ± 1 being prime is around c/k and that these probabilities are independent, then the probability of a twin-prime pair of that form is around c^2/k^2, so the probability that there exists such a pair with k >= K tends to zero as K grows. So I would not expect arbitrarily large such pairs.
However, that doesn’t take into account special properties of numbers next to powers of 2, which I know are important since they’ve been used to find very large primes. I don’t know enough about this to know how my naive heuristic estimates would need to be adjusted.
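Reading Ken’s question as being about the pair 2^k ± 1 itself, the heuristic can be checked directly with a few lines of stdlib code. In fact, for k > 2 such a pair is impossible for an algebraic reason: 2^k − 1 can be prime only when k is prime, while 2^k + 1 can be prime only when k is a power of two, and only k = 2 is both.

```python
def is_prime(n):
    """Trial division; adequate for numbers up to around 2^40."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# k such that both 2^k - 1 and 2^k + 1 are prime, i.e. a twin pair flanking 2^k.
twin_ks = [k for k in range(1, 31) if is_prime(2**k - 1) and is_prime(2**k + 1)]
print(twin_ks)  # [2]: the pair (3, 5) flanking 4
```

The scan confirms the algebra: only k = 2 survives, since for odd k the number 2^k + 1 is divisible by 3, and for even k > 2 the number 2^k − 1 is divisible by 3.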
As an aside, numbers of the form b^n ± 1 are called Cunningham numbers. These numbers, with their algebraic and Aurifeuillian factors, are historically notable in that they are the subject of the Cunningham Project, which (to my knowledge) is the longest-running (and still ongoing) computational project in the history of mathematics.
References and links regarding the remarkable properties of Cunningham number factorizations, and the practical relevance of these factorizations to problems in quantum simulation can be found in the concluding “Open Problems” section of the MathOverflow question “What is the ‘Tangle’ at the Heart of Quantum Simulation?” In brief, Cunningham number factorizations control the dimensionality of secant varieties of Segre/Veronese/Grassmannian varieties, which are the natural venues for computational simulation via Lindblad/Carmichael stochastic automorphisms (aka “quantum trajectory unravelling”).
The Soldier Healing Seminar’s (SHS) “Green Sheet Notes for 2013” survey the present-day engineering understanding of varietal dynamics, and the (in-preparation) “SHS Red Sheet Notes for 2014” will include material concretely relating to Cunningham numbers, along the lines that the MathOverflow question foreshadows.
Observation Nowadays, few mathematical topics are so arcane that engineers (and medical researchers) cannot find practical applications of them, and few mathematical formalisms are so abstract that they cannot help in understanding computational simulation. Our present capabilities in observation, simulation, and regeneration fall far short of the fundamental limits that physics and mathematics impose … and WONDERFUL opportunities for young researchers and entrepreneurs are created by this shortfall!
I agree with Tim that without a mathematical proof our arguments are tentative and somewhat circular. Here is a question: Can we think about a hypothetical world where the common belief is NP=P based just on the very same evidence that we have today?
In the 60s: People believe more or less that computers can solve efficiently every natural finite combinatorial problem, like LP, IP, assignment, scheduling, and many others, and they expect that the algorithms will gradually be discovered.
In the 70s: The Cook and Karp revolution: A major surprise! Wonderful! Now people think, “all we need is to find a single algorithm and this will enable us to solve tons of problems.” More and more such problems are found in the decades to come. People realize, though, that there are problems higher up in the hierarchy that will require even stronger algorithms.
This is compared to a situation where many different appealing conjectures in number theory are suddenly discovered to be equivalent. Certainly strong evidence that they are all true, right?
In the 80s: The common interpretation of the AC^0 results and other lower bounds: Duhh!! We cannot compute parity with bounded-depth circuits, so what? Parity is easy, and this just shows that bounded-depth circuits are unreasonable. Three decades later people think: NP=P skeptics, what further progress can you report now??
In the 90s: The interpretation of PCP: Wonderful!! Now it is enough to find an algorithm just for approximation (for many problems) and this will give us the desired goal!
Practically good SAT solvers: People gained confidence that an efficient universal SAT solver will be found.
How can we convince people in this other world that they are wrong? Do we have any common grounds with them?
LOL … Gil, can we be entirely sure that you yourself are not a ?
It is fair to say that present-day systems engineers already reside, effectively and even rationally, in your delightful fantasy-world of P=NP. As evidence of which:
(1) Simulation capability has doubled every two years in the past, ever since 1940 (or earlier). Sixty-four years and 32 doublings.
(2) No fundamental obstructions are known to prevent simulation capability from doubling every two years in the future, until 2075 (or later). Sixty-four more years and 32 more doublings (we hope).
(3) Pascal’s Wager, when transcribed from theology to mathematics, provides a cogent reason to regard P vs NP as undecidable … on the grounds that belief in P vs NP undecidability is least likely to imperil the sustainment of progress.
Questions: Can beliefs about P vs NP credibly promise us a better world? What complexity-theoretic beliefs most effectively assist us to escape (for example) the dystopian realities of Clay Shirky’s recent essay (Google finds it) “The End of Higher Education’s Golden Age” (2014) … a dystopian reality with which young researchers especially are all-too-familiar?
Gil, your essays perform a great service in inspiring us to contemplate these tough questions.
Prof. Gowers,
… “I think what convinces me that P ≠ NP is that huge complexity can come from iterating a few very simple operations.”
This is a fairly intuitive notion. How would you distinguish the growth of n^100 from 100^n?
At the least we would need some very intelligent tools to do an objective analysis.
I suspect the research has gone a lot more into the depth of the analysis, as opposed to looking at some more fundamental issues at the breadth-first level.
One may have overlooked an important property in the solution of the problem. The famous result of Larry Stockmeyer says “PLANAR 3-COLORABILITY IS POLYNOMIAL COMPLETE”.
The advice from this result: don’t try to give an efficient algorithm for this problem. On the other hand, I have a theorem: “a planar graph has chromatic number 3 iff it is crash-free”. Just have a look at Stockmeyer’s well-known gadget, which has been left as an exercise for the reader:
Sorry, just saw this post today! It’s certainly a much meatier response to my argument than Lubos’s. 🙂 I have two main disagreements with what’s written here:
(1) I don’t think it’s true that Khachiyan’s algorithm (or LP and SDP more generally) turns the class P into a “split P soup”, that is, split between problems that are in P for “normal” reasons and those that are in P “but only for a crazy reason”, as I argued would have to happen if P=NP. There’s nothing that crazy about Khachiyan’s algorithm at all. On the contrary, as this post itself points out, LP was known “empirically” to be in P long before Khachiyan came along: it’s a convex optimization problem, so many local optimization methods (e.g., the simplex method) can be expected to work well in practice on it. (And certainly the duality theory of LP told us that LP couldn’t be NP-complete unless NP=coNP.) Even more important, many of the practical things that we can do in polynomial time using LP, we can also do in polynomial time WITHOUT using LP (often purely combinatorially)! Max-flow and maximum matching are two classic examples. Likewise, many things we can do using SDP, we can also do (e.g.) just by computing a principal eigenvalue. In summary, LP and SDP are NOT like islands totally separated from the rest of P: they’re more like mountain peaks with plenty of other stuff leading up to them.
(2) I think the “parody” of my position (imagining that I wouldn’t have believed SDP could be in P, because it would lead to such “weird” consequences as MAX-CUT being poly-time approximable only to within 0.878, etc.) gets things ironically backwards. It’s not just that SDPs were never particularly hard in practice, or that they’re NOT disconnected from the rest of P (see point (1)). Rather, it’s that the striking “electric fence” pattern, i.e., problems being solvable in polynomial time right up to the limit of NP-hardness and not beyond that (even when the limit involves some weird numerical parameter), is exactly what I was offering as an argument for P!=NP! Commenter sniffnoy, above, gets it exactly right when he says that, just blindly applying my “electric fence” argument, such a pattern would have led to the presumably-correct prediction that SDP is not NP-hard (even if one didn’t yet know that SDP is in P), rather than to the incorrect prediction that SDP is not in P.
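Scott’s remark in point (1), that max-flow is solvable in polynomial time without ever invoking LP, is worth making concrete. Here is a minimal Edmonds-Karp sketch (shortest augmenting paths via BFS); the four-node network at the bottom is a made-up toy instance, not anything from the post.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting
    path in the residual graph until none exists. Purely combinatorial,
    polynomial time (O(V * E^2)); no linear programming anywhere."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS from s for a shortest path with positive residual capacity.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:      # no augmenting path left: flow is maximum
            return total
        # Find the bottleneck residual capacity along the path.
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Augment, recording negative flow on reverse edges.
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy network: node 0 = source, node 3 = sink.
caps = [[0, 3, 2, 0],
        [0, 0, 1, 2],
        [0, 0, 0, 3],
        [0, 0, 0, 0]]
print(max_flow(caps, 0, 3))  # 5 (the cut around the source has capacity 3 + 2)
```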
Scott, please appreciate that Condition (3) of your MathOverflow question — “the conjecture was either proved or disproved” — may reasonably be regarded as unduly restrictive. Robert Heinlein’s juvenile SF novel Tunnel in the Sky (1955) conveys this point charmingly:
If psychologists can conceive four ways of escaping, and apes five, then surely complexity theorists can conceive more than two! 🙂
Scott, thanks. Incidentally, I’m swapping out the Valiant example (not LP-relevant, as I misrecalled) for Lovász theta, but that won’t be even as close as the Valiant one was trying to be. It was as “meaty” as we could manage, i.e. the “facts” may be on your side… 🙂
Regarding your lament that there’s a paucity of algorithmic techniques: there’s a very curious one which I thought of over a decade ago which hasn’t really been tried, but may yield fast algorithms for some problems which are currently intractable.
Consider the linear relaxation of a SAT problem, meaning that the variables, instead of 0 or 1, can be any real value in between. Then a clause of the form a or b or c becomes a + b + c >= 1. A rather quirky technique can now be used. Pick a set of variables, or their negations, and try to find the minimum value for their sum. If the minimum comes up a non-integer value, then because all the variables must be 0 or 1 you can round up, and reintroduce the new clause. For example, if you find that a + b + (1 - c) >= 1.3, you can then ascertain that a + b + (1 - c) >= 2, and introduce that as a new clause, and repeat for a different set of variables and assignments. Since you’re doing this using floats in a real computer, making it fully formal might be a bit tricky, but in practice requiring a value over an integer of greater than, say, a millionth should be fine.
Doing this straightforwardly runs into some problems. The first is that the minimum assignment values have a tendency to be 0, because all the funny values are pushed into the non-included variables. That can probably be fixed by including all variables in the assignment to be minimized. The second is that for, say, 3SAT, it’s possible to set every variable to between 1/3 and 2/3 and satisfy any clause, so the round-ups will hardly remove any of the possible solution space at all. The solution is to use specially designed problems where the minima will be at least 1/2 the number of variables. For example you could use a problem where all the clauses were of the form ‘at least 2/3 of the variables in this particular assignment to everything are correct’. I don’t believe problems of this type have ever been studied, although they appear at first blush to be much harder than random 3CNF problems for regular stochastic solving techniques.
So the overall algorithm is: pick a random assignment to all variables. Find the minimum number of variables in that assignment which must be true, using linear programming. Round up and reintroduce. Repeat until either one of the minimum assignments, when rounded off to 0 and 1, produces a satisfying assignment to the original problem, or you’ve shown that there is no possible solution. I suspect/hope that if the number and restrictiveness of clauses is set right then this approach will handily deal with problems which WalkSAT-based approaches can’t touch, and that the linear programming instances encountered will be ones on which simplex goes exponential.
It should be easy enough to experiment with this, but I’ve never had handy access to the appropriate tools.
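For what it’s worth, here is one way the experiment might be set up; this is my own toy sketch, not Bram’s code. It solves the LP relaxation of the three clauses (x0 or x1), (x1 or x2), (x0 or x2) exactly, by brute-force vertex enumeration over rationals (a real experiment would use an off-the-shelf LP solver), finds the fractional optimum 3/2, rounds it up to the cut x0 + x1 + x2 >= 2, and re-solves.

```python
from fractions import Fraction
from itertools import combinations

def _solve_square(rows, rhs):
    """Exact Gaussian elimination over Fractions; None if singular."""
    n = len(rhs)
    M = [list(map(Fraction, row)) + [Fraction(r)] for row, r in zip(rows, rhs)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [e / p for e in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def lp_min(c, A, b):
    """Minimize c.x subject to A.x >= b and 0 <= x <= 1, by checking every
    vertex (an intersection of n constraint hyperplanes). Toy sizes only."""
    n = len(c)
    planes = [(row, rhs) for row, rhs in zip(A, b)]
    for i in range(n):
        unit = [0] * n
        unit[i] = 1
        planes.append((unit[:], 0))  # x_i = 0
        planes.append((unit[:], 1))  # x_i = 1
    best = None
    for combo in combinations(planes, n):
        x = _solve_square([p[0] for p in combo], [p[1] for p in combo])
        if x is None or not all(0 <= v <= 1 for v in x):
            continue
        if all(sum(r * v for r, v in zip(row, x)) >= rhs for row, rhs in zip(A, b)):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best is None or val < best:
                best = val
    return best

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  # (x0 or x1), (x1 or x2), (x0 or x2)
b = [1, 1, 1]
c = [1, 1, 1]
frac = lp_min(c, A, b)                       # 3/2, attained at (1/2, 1/2, 1/2)
# Any 0/1 assignment has an integer sum, so round the bound up to a new cut:
after = lp_min(c, A + [[1, 1, 1]], b + [2])  # optimum is now 2
```

The cut removes the fractional vertex (1/2, 1/2, 1/2) exactly as described, and integer points of sum 2 such as (1, 1, 0) do satisfy all three clauses.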
One aspect that I regard as interesting from a wider perspective in Scott’s argument is his making the “close calls” the central issue, and regarding this view as a teaching of “Popperian science.” I am not at all sure about this, and, as I said (in the Shtetl-Optimized discussion), the “close call” aspect is weaker for P ≠ PSPACE, and in certain ways stronger for hardness of unique games (which people are not at all convinced about). So I think this specific aspect is quite interesting and deserves to be discussed.
Incidentally, in another discussion on Scott’s blog I had another interesting disagreement related to “Popperian science,” with Daniel Lidar. Daniel regards the gradual process of finding a model that fits the D-Wave machine, where proponents and opponents of the “quantumness” hypothesis propose in turns models to fit the machine’s behavior, as manifesting “Popperian science,” while I regard it as a violation of basic teachings of statistical hypothesis testing. Again I think this disagreement is interesting from a wider point of view.
It’s fun to see that pretty much all the first-rank math-blogosphere pundits (Gil Kalai, Scott Aaronson, David Deutsch, Michael Nielsen, etc.) agree that Popperian falsifiability is a wrong-headed view of science (or if not wrong, misleadingly incomplete).
The reasons differ pretty remarkably from person to person, though!
The strongest mathematical argument (known to me) is John P. A. Ioannidis’ much-cited “Why Most Published Research Findings Are False” (2005). In essence, Ioannidis argues (convincingly, as it seems to me) that data-mining science generates a larger number of plausible scientific hypotheses than data-mining science can ever falsify.
This perhaps is a concrete numerical reason why so many researchers nowadays argue that explanation has become an essential element of 21st century science *and* mathematics … such that experiments alone are not enough for science, and (perhaps more controversially) proofs alone are not enough for mathematics.
Two particularly worth-reading arguments for the view that math-is-more-than-proof are Bill Thurston’s “On Proof and Progress in Mathematics” (1994) and Alexander Grothendieck’s “Letter to Ronnie Brown” (1983); the latter includes a marvelous testimony:
This is why the potential undecidability of P vs NP (or any other problem) need not arouse in us any feelings of nervousness or rancor or dismay, but rather can inspire us to seek an understanding of *why* certain mathematical problems (seemingly) are stubbornly intractable and/or undecidable.
Google Search finds all of the above references, which are all well worth reading and (it seems to me) all convey essentially the same message.
I’m of the opinion that, whenever the time-behavior of an algorithm for an NP-complete problem gets closer to polynomiality, then either:
– its correctness is provable and its behavior is undecidable (think of the already-known algorithm which solves SAT in polynomial time iff P=NP),
– or its behavior is provable and its correctness is undecidable (think of the heuristic algorithms for the Travelling Salesman Problem).
Of course, this phenomenon is – in my view at least – nothing but a rough algorithmic analogue of the Heisenberg Uncertainty Principle: “the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.”
The two classes of NP problems that we try to separate – the feasible problems and the infeasible ones – are somewhat reminiscent – to my perverted mind at least – of the Pauli Exclusion Principle: “no two identical fermions may occupy the same quantum state simultaneously.” Don’t ask me what a fermion is, I’ve cut-and-pasted that from Wikipedia… all I know is that the Pauli Principle gives rise to the hierarchy of electron layers in an atom… in very much the same way that P!=NP gives rise to the Polynomial Hierarchy in complexity theory. But maybe it’s just me…
I had already mentioned in a previous comment on this blog that I was amused by the similarity between nuclear fusion and prime multiplication, one operation requiring much more energy – or more computing steps – than its reverse. Indeed, it can be said that prime multiplication – unprovably a one-way function – mimics nuclear fusion in an algorithmic way.
After all, those similarities shouldn’t be too surprising, since the very possibility of classical computing ultimately rests on quantum grounds. Thus, we may imagine that a few more phenomena have also been transferred from the quantum world into the world of classical computing…
… such as the wave-particle duality – discovered by Louis de Broglie – which has been transferred into our computing world in the form of the proof-program duality – the so-called Curry-Howard isomorphism. The wave viewpoint is known to prevail at lower energies, while the particle viewpoint becomes more relevant at higher energies.
Programming with proofs is called logic programming: the higher-level the language which you’re using, the closer you are to the proof viewpoint. On the other hand, the other side is called algorithmic programming. The lower-level the language you’re using, the more efficient you are, but it requires more energy on your part because you’re closer to the hardware… or if you prefer: you’re closer to the “quantum ground” mentioned above!
With so many analogies between algorithms and physics, now it’s no wonder anymore if mathematics is so wellsuited for describing our physical world!
Sorry, *imperative* programming (not algorithmic programming). This is the actual opposite of logic programming. I tend to believe that most undecidable statements in computer science – if not *all* of them – are the virtual image of some quantum phenomenon.
Do you have a reference for this SAT solver that is polynomial-time if P=NP? It gets drowned out by noise when I try to Google it.
There it is, for the subset-sum problem:
http://en.wikipedia.org/wiki/P_versus_NP_problem#Polynomialtime_algorithms
I know it’s considered an elementary fact in complexity theory but I wasn’t aware of it. After Ken – and also Luke G – brought it to my attention in a few comments, I considered more seriously the undecidability of P=NP.
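The algorithm being referenced is Levin’s universal search: dovetail over an enumeration of all programs, giving the i-th program roughly a 2^{-i} share of the time, and check every candidate it outputs with the (easy) SAT verifier. Correctness is unconditional because answers are verified; the running time is polynomial iff some program in the enumeration solves SAT in polynomial time. A bare-bones sketch, with the “programs” simplified to a hand-picked list of candidate generators rather than a true enumeration of all machines:

```python
from itertools import product

def satisfies(clauses, assignment):
    """DIMACS-style clauses: literal v means x_v, -v means not x_v."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def brute_force(n):
    """'Program' that enumerates every assignment; guarantees completeness."""
    for bits in product([False, True], repeat=n):
        yield {i + 1: bits[i] for i in range(n)}

def all_true(n):
    """A fast but usually wrong heuristic 'program'."""
    yield {i + 1: True for i in range(n)}

def levin_search(clauses, n, programs):
    """Dovetail: in phase t, program i may emit 2^(t-i) candidates, each
    checked by the verifier. The output is always correct; the speed
    depends on whether some enumerated program is a fast SAT solver."""
    gens = {i: p(n) for i, p in enumerate(programs)}
    t = 0
    while gens:
        t += 1
        for i in list(gens):
            for _ in range(2 ** max(t - i, 0)):
                try:
                    cand = next(gens[i])
                except StopIteration:
                    del gens[i]
                    break
                if satisfies(clauses, cand):
                    return cand
    return None  # all programs exhausted; with brute_force included: unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = levin_search(clauses, 3, [all_true, brute_force])
```

In the real construction the program list is an enumeration of all Turing machines, so if any machine solves SAT in time p(n), universal search runs in time polynomial in p(n); that is the sense in which this one fixed algorithm is polynomial iff P=NP.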
And it is time for the crackpot to say a word defending the P=NP position.
1. There was a big surprise with Hilbert’s program of axiomatization of mathematics.
2. Many NP-complete problems can be encoded as polynomial problems, especially the decision versions. It is sufficient to look at quartic polynomials for the problem definition in most cases, and sextic for 3SAT.
3. Sums of squares of polynomials (SOS) can be reduced to SDP. Finding the minimum of a quartic polynomial in 2 variables is a nonconvex problem, but it can be solved (for the purpose of showing P=NP) by SOS for a nontrivial reason (see any text on Hilbert’s 17th problem).
4. Testing positivity of a polynomial is hard because there is no good method for it => the Lasserre hierarchy, which in a sense is similar to searching for Nullstellensatz certificates, except it should be more powerful due to the nonlinear transformations in the SOS/SDP part.
5. Partition problem (NP-complete): given a vector of integers a, decide whether there is x in {-1,1}^n with sum_i a_i x_i = 0. One needs to decide whether the quartic polynomial p(x) = (sum_i a_i x_i)^2 + sum_i (x_i^2 - 1)^2 is strictly positive or not; this is a quartic polynomial, and the size of the monomial basis is polynomial in the size of the problem (on the order of n^4 monomials).
5.epsilon. MAX-CUT in its decision version is encodable in quartic polynomials.
6. Positive polynomials form a cone, and if one could find the basis of this cone, the problem of positive polynomial programming (PPP) would be done, and P would equal NP. The only work I’m aware of going in this direction is by Bruce Reznick (http://www.math.uiuc.edu/~reznick/paper53.pdf), and that is done with old ideas of Hilbert, i.e. from more than a hundred years ago.
7. But there is an alternative way to create positive polynomials coming from NP-complete problems, say partition problems. For example, the polynomial built as above from an unsatisfiable partition instance is strictly positive. If one subtracts a small positive number, say 0.1, the polynomial is still positive, but not a sum of squares. In the same way it is possible to create many different examples of positive polynomials: take different values of a; for example, take one entry of a equal to a large number T, say 100, and take the rest of a to be unsatisfiable; then, changing the particular places of a equal to T, one gets a number of positive polynomials roughly equal to the number of monomials. I did not see this approach in the literature, maybe because it is written in unreadable language, in contrast to Reznick’s works.
8. (7) is actually saying: instead of solving a particular problem, let’s classify all NP-complete problems encodable by quartic or even sextic polynomials into subspaces of quartic/sextic polynomials. Therefore, it is now an engineering problem to design enough interesting examples of unsatisfiable problems to cover the space of monomials.
9. Some people claim there are many more positive polynomials than SOS polynomials as n goes to infinity; therefore the above construction will most probably give non-SOS positive polynomials after subtracting a small positive number. Moreover, since those polynomials are constructed from well-defined examples, it is easy to find their global minima, e.g. the extreme points of the cone of positive polynomials.
I’m very interested in literature going in this direction, e.g. toward solving PPP via subspace classification, similar to the classification of quadratic forms into universal and definite forms. But we have 3 classes here: universal, SOS, and positive quartic/sextic polynomials.
Sorry for taking your time.
Sorry, in 7. the second sum is squared not individual terms
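The partition encoding sketched in point 5 is easy to sanity-check by brute force. The sketch below uses the standard Murty-Kabadi form p(x) = (sum_i a_i x_i)^2 + sum_i (x_i^2 - 1)^2, which may differ in detail from the commenter’s own polynomial; both terms are nonnegative, so p attains 0 iff some +/-1 vector gives a perfect partition, and for that question it suffices to scan the hypercube.

```python
from itertools import product

def partition_poly(a, x):
    """p(x) = (sum a_i x_i)^2 + sum (x_i^2 - 1)^2. Both terms are
    nonnegative, so p(x) = 0 forces x in {-1,1}^n AND a balanced signing."""
    return sum(ai * xi for ai, xi in zip(a, x)) ** 2 + \
           sum((xi * xi - 1) ** 2 for xi in x)

def has_partition(a):
    """p attains 0 iff some +/-1 signing of a sums to zero."""
    return any(partition_poly(a, x) == 0
               for x in product([-1, 1], repeat=len(a)))

print(has_partition([3, 1, 2]))  # True:  3 - 1 - 2 = 0, so p vanishes there
print(has_partition([3, 1, 1]))  # False: p is strictly positive everywhere
```

So deciding strict positivity of this innocent-looking quartic is exactly deciding an NP-complete problem, which is the force of points 4-6 above.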
I am in the agnostic camp. I think the evidence for P not being equal to NP is pretty thin. Despite the several long posts that Scott has written, his argument essentially boils down to the following:
“no one has been able to find a poly-time algorithm for any NP-hard problem so far”
which is not very convincing to me.
Inductive arguments about the history of mathematics rely too much on a homogeneity of the mathematical world which I don’t understand.
Glad to see this deviating and non-popular response. The problem is that most of these “non-believers” like Scott and Tim Gowers have achieved a recognition which tends to back up whatever they believe or claim.
Yes, there is no mathematical foundation for the NP != P belief. It is all based on a “what if pigs could fly” conjecture. Some mathematicians/TCS people call them oracles.
To me, it all boils down to the wrong equation “mathematical existence” = “physical existence”. The reasoning seems to be: “if that algorithm existed, it would eventually get found, proved correct and efficient, run on a computer, and then our lives would be changed forever.” But there are so many things in math that lack any physical existence – real numbers, to begin with…
” I think the evidence for P not being equal to NP is pretty thin. ”
But it’s not, at all.
” Despite several the long posts that Scott has written his argument essentially boil down to the following:
“no one has been able to find a polytime algorithm for any NPhard problem so far” ”
This is utterly false and deeply intellectually dishonest.
If you want an example of a big surprise: Matiyasevich’s nailing of Hilbert’s Tenth Problem, following works by Davis, Putnam, and Robinson. Georg Kreisel, when reviewing one of the earlier papers in that series, claimed that the result proved had only a superficial relationship with Hilbert’s Tenth Problem…
Many mathematicians did not believe that any recursively enumerable set, including for instance the prime numbers, could be described as the positive range of a multivariate polynomial over the integers…
Thanks very much for that example. When I first heard the result around 1980, I thought it was a logical outgrowth of the previous work.