
Is P=NP an Ill Posed Problem?

July 3, 2009

P=NP has an infinite number of possible outcomes


Russell Impagliazzo is one of the most original thinkers in complexity theory, and is famous for many wonderful things. One of his papers is on many worlds. The worlds are: Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania. See his paper for the definitions, and see this for a recent conference on the topic.

Today I want to talk about a different take on P=NP and possible worlds that we live in. I think that the situation is much more complex than Russell’s view. As usual, I have a different view of things, which I hope you will find interesting.

Some of you may have been at Russell’s invited talk at STOC 2006. It was a great talk, as all his presentations are. But the coolest part, for me, was that he arrived at the conference with transparencies, overheads. The room for the invited talk was very wide and not very deep, so it was critical that his slides appear on two projectors; otherwise part of the audience would have been unable to see them.

Of course, if he had used PowerPoint this would have been automatic. Since Russell arrived at the conference with plastic slides at the last minute, there was a slightly panicked run to Kinko’s, where they made identical copies of his colorful slides. Later, as he spoke and flipped his slides, the slides on the other projector also flipped, magically. Of course, someone had the job of trying to keep this all in sync. It was a great talk, in any event.

Now let’s turn first to another famous open problem, the Riemann Hypothesis.

Riemann Hypothesis

The prime number theorem states that {\pi(x)}, the number of primes less than {x}, is well approximated by the logarithmic integral:

\displaystyle  \pi(x) = \text{Li}(x) + E(x)


\displaystyle  \text{Li}(x) = \int_{2}^{x} \frac{dt}{\log (t)}

and {E(x) = o(x)} is the error term.

The Riemann Hypothesis (RH) implies that the error term, {E(x)}, is of order {x^{\frac{1}{2}}}, ignoring logarithmic factors. But the RH is “binary” in the sense that even one zero off the critical line destroys this bound. More precisely, if there is a zero at {\sigma + it} with {\sigma > 1/2}, then the error term in the prime number theorem is at least {x^\sigma}, again ignoring some logarithmic factors. I thank Andy Odlyzko for pointing out this tight relationship between a non-critical zero and the error term.

This is relatively simple behavior: one bad apple and the whole distribution of the prime numbers changes. Famously, Enrico Bombieri has said that “The failure of the Riemann hypothesis would create havoc in the distribution of prime numbers.”

P=NP is Different

I claim that our problem, P=NP, is different from the RH, since there is no “one bad apple” behavior. Consider first what a proof that P=NP could look like.

{\bullet} There might be an algorithm for SAT that runs in a reasonable time bound. This would be an immense result, and it would change the world. Cryptosystems would all be destroyed, and whole parts of theory would have to change radically. This is why many people believe that P=NP is impossible, at least that is my impression; they say it would be too good to be true. I agree that it would be an amazing situation, but it is only one of a multitude of possibilities.

{\bullet} There might be an algorithm for SAT that is polynomial but with a very high exponent, for example, {O(n^{10})}. This would win the million Clay dollars, but its impact would be more philosophical. Crypto researchers would worry that eventually the algorithm could be made practical, but it would not have any immediate practical impact. Research papers would have to do a song and dance: “we study an approximation to the following NP-hard problem, which is in P of course thanks to the famous result of X, but we feel our results are still interesting because {\dots}”

{\bullet} There might be an algorithm that is polynomial in a very weak sense. What if its running time is

\displaystyle  n^{2^{2^{2^{2^{100}}}}}.

That would be great; I think you would still get the money, but crypto people could still sleep at night, without any sleep aids.

{\bullet} There might exist an algorithm that runs in {n^{c}} time for some constant {c} that is not even known. Results like this are quite possible, if the proof of the algorithm’s correctness uses some indirect argument. Again, you still get the cash, I believe.

Now consider what a proof that P is not equal to NP could look like.

{\bullet} SAT might require exponential time. The best algorithm might run in time {1.2^{n}}, for example. This type of behavior is what many believe. There is even a “formal” conjecture, now known as the Exponential Time Hypothesis, that the best algorithm for 3-SAT has this type of running time. Note that even here one needs to be careful: what if the running time is {1.0000001^{n}}? Recall what Alan Perlis said:

For every polynomial algorithm you have, I have an exponential algorithm that I would rather run.
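Perlis’s quip is easy to make concrete. Here is a small sketch (the constants are illustrative, not from the post) that finds where an {O(n^{10})} algorithm finally overtakes a {1.0000001^{n}} one; comparing logarithms of the costs avoids overflow:

```python
import math

def poly_log_cost(n):
    """Natural log of the cost n**10."""
    return 10 * math.log(n)

def exp_log_cost(n):
    """Natural log of the cost 1.0000001**n."""
    return n * math.log(1.0000001)

# Binary search for the first n where the "exponential" algorithm loses.
lo, hi = 2, 10**12
while hi - lo > 1:
    mid = (lo + hi) // 2
    if exp_log_cost(mid) > poly_log_cost(mid):
        hi = mid
    else:
        lo = mid
print(hi)  # a bit over 2 billion
```

Below roughly two billion inputs, the nominally exponential algorithm is the one you would rather run.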

{\bullet} SAT might have wild complexity. Perhaps for some lengths {n} the best running time is exponential, but infinitely often the best time is polynomial in {n}. This could happen.

{\bullet} SAT might be barely super-polynomial: what if the lower bound is

\displaystyle  n^{\log \log \log n}.

Does this really mean anything? It would technically resolve the P=NP question, would be an immense achievement, would win the Clay Prize, and probably many other prizes too. But does it really imply that SAT is “intractable”?
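To see just how weak such a bound is in practice, evaluate the exponent for realistic input sizes (logs base 2; the helper name is mine, not from the post):

```python
import math

def lll(n):
    """log log log of n, base 2 throughout."""
    return math.log2(math.log2(math.log2(n)))

print(lll(10**6))     # about 2.11: barely quadratic for n = one million
print(lll(2.0**64))   # about 2.58 for 64-bit-sized inputs
print(lll(2.0**256))  # the exponent only reaches 3 once n hits 2**256
```

So a lower bound of {n^{\log \log \log n}} stays below cubic for every input smaller than {2^{256}}.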

{\bullet} SAT might have no polynomial-time algorithm, but there could be randomized algorithms {\dots}

The Riemann Hypothesis Again

You might be thinking: what if the RH is true, could it exhibit the same complex behavior that P=NP can? In particular, could the RH be true, but with a crazy error term:

\displaystyle  2^{2^{2^{100}}}x^{\frac{1}{2}}\log(x)?

Isn’t this the same issue that I raised above? The answer is that this cannot happen for the RH. The following is a theorem:

Theorem: If RH, then for all {x \ge 2657},

\displaystyle  | \pi(x) - \text{Li}(x)| < \frac{1}{8\pi}\sqrt x \log(x).

This is due to Lowell Schoenfeld, who actually proved that RH is equivalent to his inequality.
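Schoenfeld’s inequality can be checked numerically for modest {x}. The sketch below (function names are mine) counts primes with a sieve and approximates {\text{Li}(x)} as the integral from 2 to {x} of {dt/\log t} by the trapezoidal rule:

```python
import math

def prime_count(x):
    """pi(x): number of primes <= x, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

def li(x, steps=200000):
    """Integral from 2 to x of dt/log(t), trapezoidal rule."""
    h = (x - 2) / steps
    total = (1 / math.log(2) + 1 / math.log(x)) / 2
    for k in range(1, steps):
        total += 1 / math.log(2 + k * h)
    return h * total

x = 10**5
error = abs(prime_count(x) - li(x))
bound = math.sqrt(x) * math.log(x) / (8 * math.pi)
print(error, bound)  # error is about 37, the bound about 145
```

Here {\pi(10^5) = 9592} and the error sits comfortably inside Schoenfeld’s bound; of course no finite check proves anything, which is rather the point.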

Is P=NP Ill Posed?

I love the P=NP question, but it occurs to me that in a sense the question is “ill posed.” David Hilbert once said:

“If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?”

After sleeping for a thousand years, knowing that P had been proved not equal to NP would not be nearly as meaningful as knowing that the RH had been proved.

To make my point consider three players: Paula, Ted, and George. Paula is a practical minded computer scientist, Ted is a theory minded computer scientist, and George is the best gossip in the department.

There are two scenes: in the first P=NP, and in the second P{\neq}NP.

George barges into Paula’s office without knocking. She is about to complain that she is in a meeting with Ted, when George says, “Ted, did you hear that P=NP has been proved! I just got off the phone with Lance, and the proof is correct.”

Paula sees Ted jump up at this, and he says, “Are you kidding?” George is out of breath, but smiling. It’s no joke. “Unbelievable, unbelievable,” is all that Ted can manage.

Paula has never seen George this excited. “No, it’s really true. The FOCS program committee accepted the paper, even though she did not submit it. I guess that’s a first; she just missed the deadline or something.”

Paula, who sits behind a large desk covered in books on circuit design and VLSI, says, “This is great. Finally, you theory guys have done something that I can use. I can’t wait to get my team implementing the algorithm.” She sees Ted sit down; he is turning pale and mutters to himself, “Dammit, this wipes out Fred’s thesis. I told that idiot to finish it last spring.”

Paula ignores him, “This will change VLSI design and testing and–”

George interrupts her, “Well, not quite, Paula. I hear that the running time of her algorithm is bad.”

“What? How bad is bad?”

“I think its running time is {n^{2^{2^{100}}}}, although I forget if there is a logarithmic factor too. But, P=NP!”

George barges into Paula’s office without knocking. She is about to complain that she is in a meeting with Ted, when George says, “Ted, did you hear that P{\neq}NP has been proved! I just got off the phone with Lance, and the proof is correct.”

Paula sees Ted jump up at this, and he says, “Are you kidding?” George is out of breath, but smiling. It’s no joke. “Unbelievable, unbelievable,” is all that Ted can manage.

Paula has never seen George this excited. “No, it’s really true. The STOC program committee accepted the paper, even though she did not submit it. I guess that’s a first; she just missed the deadline or something.”

Paula, who is listening from behind a large desk covered in books on circuit design and VLSI, idly checks her Seiko. It’s almost twenty to twelve; she needs to leave for her lunch meeting soon.

Ted says, “I knew it. That crazy blogger, what’s his name, always talking about how P could equal NP. Well, he was wrong. What an idiot.”

George says, “Yeah.”

“So how does the proof go?”

“It’s a circuit lower bound: she proves SAT requires circuits of size at least {n^{\log \log \log n}}. She avoids the usual roadblocks by using that new combinatorial lemma of Terry’s. Isn’t this wonderful?”

Paula, who a moment ago was hardly listening, says, “Did I hear you right? That’s not a very big bound. Let’s see, if {n} is a million, let me see, that’s barely a quadratic lower bound. Were those logs base two?”

“Of course they’re base two; logs are always base two. But it’s a super-polynomial lower bound: P{\neq}NP!”

My apologies to all fiction writers who have ever lived.

Open Problems

My point is simple: There is a huge variance in the potential answers to the question: Does P equal NP?

It is not a simple binary, equal or not equal: there are an infinite number of possibilities. This is part of what makes me think that we need to work hard on this problem.

A class of open problems is to prove something like Schoenfeld’s theorem for P=NP. Can we show that some of the above pathological behaviors cannot occur? Or can we show that one pathological behavior implies another? Or precludes another?

43 Comments
  1. Rafee Kamouna
    July 3, 2009 1:37 pm

    One possibility: no proof of either case, i.e., the question is formally independent. Another: the underlying mathematical system harbors a contradiction; then there is a proof of P=NP and another of P=!NP.

    RH is not a question about logic, while P/NP may be. The evidence for seeing RH as true is much stronger than for seeing P=!NP.

  2. anon
    July 3, 2009 1:39 pm

    Richard: has anyone looked into “P”=?”NP” where problem-size is measured not in terms of a naive encoding but rather in terms of Kolmogorov complexity (or something analogous)?

    So that when you say an algorithm is O(f(N)), the ‘N’ isn’t the length of the input (naively encoded) but instead is taken to be the length of a “compressed” input (“compressed” in the sense that it’s the length of a ‘program’ — for use on some probably sub-Turing machine — that when executed terminates after outputting the complete, “decompressed” input).

    • rjlipton
      July 3, 2009 2:23 pm

      There definitely has been some work on programs that operate on compressed objects. A simple example: suppose I give you a compressed string and you wish to see if the string contains a given pattern. This is a natural problem. The difficulty is that for weak compression methods such as “run-length encoding” it is easy to prove nice theorems: you can find a pattern in time linear in the length of the compressed string. The trouble is that for more complex compression methods things quickly get hard.

      This definitely is a very nice problem. I think there is some recent research out there, and I would expect that more could be done.
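      To make the run-length-encoding case concrete, here is a small sketch (names are mine). It never decompresses the text; this naive version takes time proportional to (text runs) × (pattern runs) rather than strictly linear, but it shows the idea:

```python
def rle(s):
    """Run-length encode a string into (char, count) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return [(c, n) for c, n in runs]

def rle_contains(text_runs, pattern):
    """Does the run-length-encoded text contain `pattern` as a substring?"""
    p = rle(pattern)
    if not p:
        return True
    if len(p) == 1:  # pattern fits inside a single run
        c, n = p[0]
        return any(tc == c and tn >= n for tc, tn in text_runs)
    k = len(p)
    (fc, fn), (lc, ln) = p[0], p[-1]
    for i in range(len(text_runs) - k + 1):
        tc, tn = text_runs[i]
        # First pattern run must be a suffix of a text run,
        # middle runs must match exactly, last must be a prefix.
        if tc == fc and tn >= fn \
           and text_runs[i + 1 : i + k - 1] == p[1:-1] \
           and text_runs[i + k - 1][0] == lc and text_runs[i + k - 1][1] >= ln:
            return True
    return False

t = rle("aaabbbccc")
print(rle_contains(t, "abbbc"))  # True
print(rle_contains(t, "abbc"))   # False
```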

  3. Suresh Purini
    July 3, 2009 2:04 pm

    Very interesting and fun post. Thank you!

  4. July 4, 2009 12:57 am

    @anon: forgive me for my ignorance, but don’t different encodings yield at most a polynomially-related runtime? That is, you can’t really change exponential-time algorithms into polynomial-time algorithms simply by changing the encoding.

    • anon
      July 4, 2009 1:33 pm

      cipher3d: I’ll explain what motivated my question.

      Saying a problem is in NP is saying that on a nondeterministic Turing machine there’s an algorithm that’ll find an answer in polynomial time. An immediate consequence is that if you have a deterministic Turing machine and a suitable oracle to consult — one that the deterministic machine can query at each point of possible indeterminacy in order to discover the ‘lucky’ branch to try first — then together the deterministic machine and the oracle could solve the problem in polynomial time, also.

      A practical consequence of this is that given a candidate ‘answer’ to a problem — for factoring, say, knowledge that “91 == 7 x 13” — it’s trivial to derive-and-implement such an oracle for that particular problem, which is what allows you to verify the correctness of the answers to NP problems in polynomial time; I think it’s a little unusual to think of answer-checking as oracle-derivation, but bear with me for a minute.

      This means that, for example, if your nondeterministic ‘factoring’ algorithm is “is X prime? YES: add X to list of factors and halt; NO: pick a prime p that divides X, add p to the list of factors, and recurse on X/p”, then “91 == 7 x 13” means your deterministic machine can be said to execute like this: “is 91 prime? NO. What # shall I pick? Oh, how about 7? Ok, is 13 prime? YES. Done: 7,13”.

      Thus we wind up with the familiar two options:

      – V. FAST: ‘answer-to-verify’ (eg: list of factors), ‘verification’ (eg: multiply and compare-with-target)

      – V. SLOW: ‘raw integer’ (to factor), ‘factoring’ (not known to be fast)
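      The fast verification half of this can be sketched in a few lines (a toy; trial-division primality testing stands in for a real polynomial-time test):

```python
def is_prime(n):
    """Trial division: fine for a toy demo, not polynomial in bit-length."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def verify_factorization(n, factors):
    """Check a claimed prime factorization of n: cheap next to factoring n."""
    prod = 1
    for p in factors:
        if not is_prime(p):
            return False
        prod *= p
    return prod == n

print(verify_factorization(91, [7, 13]))  # True: the certificate checks out
print(verify_factorization(91, [7, 14]))  # False: 14 is not prime
```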

      What I’m mainly interested in is understanding what it is that makes a hard instance of an np problem hard; it’s pretty clear that ‘size’ isn’t a good proxy for difficulty.

      For some problems there’re simple “compression” techniques that also happen to provide useful information to some hypothetical problem-solving machine; if you allow the problem-solving machine the ability not only to solve ‘decompressed’ problems but also the (partially) ‘compressed’ problem you wind up with an ability to gradually take a problem from ‘trivial’ to ‘fully difficult’. It is my hope that you could learn something nontrivial about the construction-and-also-the-distribution of hard problems by getting a deeper understanding of how each additional step of partial ‘decompression’ makes the problem instance (apparently) harder to solve.

      In the case of factoring the obvious “compression” technique for this approach is a language that lets you specify raw #s, or * and ^; eg one in which “(* (^ 2 25) (* (^ 3 72) (^ 7 13)))” means what you’d expect.

      This is a case in which the “compressed” form contains ‘extra’ information a factoring machine could use to speed up the time to solution; we’d get:

      – V. FAST: “(* (^ 2 25) (* (^ 3 72) (^ 7 13)))”, verify

      – NOT-AS-FAST: “(* (^ 2 25) (* (^ 3 72) (eval (^ 7 13))))”, verify-what-you-can, factor-the-rest

      – SLOWER-STILL: “(* (^ 2 25) (* (eval (^ 3 72)) (eval (^ 7 13))))”, verify-what-you-can, factor-the-rest

      … etc.,

      – V. SLOW: “(eval (* (^ 2 25) (* (^ 3 72) (^ 7 13))))”, factor-everything

      We know that V. FAST is polynomial and V. SLOW is not; is there some helpful way to show how running times must change as you gradually ‘erase’ the structural info (thereby gradually making the problem-specific oracle only partially derivable, and perhaps step-by-step introducing inescapable points of indeterminacy)?

      For ‘factoring’ it turns out that the representation that retains the factorization structure is also ‘compressing’ (for very large #s often very much so); this could be a coincidence, but it got me thinking.
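      The compression is easy to quantify even for the small example above (the expression is the commenter’s; the arithmetic check is mine):

```python
expr = "(* (^ 2 25) (* (^ 3 72) (^ 7 13)))"
value = 2**25 * 3**72 * 7**13  # the "decompressed" raw integer

print(len(expr))        # 34 characters in structured form
print(len(str(value)))  # 53 decimal digits in raw form
```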

      The original motivation for thinking this way was seeing that primes were in P but factoring was still hard; what intuition could explain this?

      In the framework sketched above: a prime doesn’t ‘compress’ in this representation; composite #s do compress, often remarkably. If there’s some way to characterize how the decompressor adds intrinsic difficulty upon executing each decompression step, then perhaps the correct intuition is that: the decompressor doesn’t get a chance to make things ‘difficult’ for primes, but in ‘decompressing’ a composite there’s lots of decompression operations, each of which adds possible indeterminacy.

      Thus my interest in work on computational complexity where inputs are measured not by raw length but by their Kolmogorov complexity (or something like it), or perhaps instead by the ratio of their ‘full’ form to their ‘complexity’: I was hoping there was either already work in this regard or impossibility results or whatnot (because even though I’m not necessarily interested in work on ‘compressed’ inputs per se, the representations are compressing, so any blanket statements about work on compressed inputs ought to convey some information here, too).

      For problems other than factoring you’d want to find a representation of inputs that let you construct larger problem instances out of compositions of smaller ones; for something like SAT or traveling salesman finding useful ‘composition’ operators — ones that both help represent the inputs compactly but also supply useful information to a sufficiently-smart solver — seems a much more delicate prospect.

      I’m afraid this is already too long but I hope it helps explain what motivated me to ask the question. Thanks to Richard for hosting this blog and for all the wonderful commenters. Enjoy the holiday weekend.

  5. July 4, 2009 6:46 am

    Dear Dick, some time ago I wrote a little post on Russell’s “multiverse” with a semi-popular exposition here:

  6. Drew Byrne, Esq.
    July 4, 2009 9:58 am

    Ah, to equal P or not to equal NP, is that the question?

  7. July 4, 2009 5:00 pm

    How lucky it is I stumbled on your blog post! I’m looking for a name for this “wild complexity” hardness assumption. Namely:

    ASSUMPTION: For every polynomial p(n), there are only finitely many values of n such that there exists a Boolean circuit of size p(n) that correctly solves every 3SAT instance of size n.

    I sure hope this is a standard complexity assumption!

    • none
      July 7, 2009 1:42 am

      Shaddin, unless I’m missing something, for any n, there are only finitely many 3SAT problems of size n.

  8. priyananda
    July 6, 2009 10:13 pm

    I get the feeling that the reason P ?= NP isn’t as clear-cut as RH is that we have no concrete justification right now for using polynomial time as the basis of separation. It may be that P is too broad or too narrow a class, so even P = NP wouldn’t automatically guarantee that no separation exists between sets of problems.

    • Mikola
      July 8, 2009 10:20 pm

      What about the Cook-Levin theorem? It gives some pretty interesting evidence for why certain problems in NP are much harder than others since they can simulate other problems via a polynomial reduction. The existence of these NP-hard problems is the main reason why P =? NP is such an important problem and gives a good justification for searching for a separation between polynomial time and exponential time problems.

      I think the bigger question is why there are so few lower bounds at all. We don’t have super-linear time bounds for any interesting problems in P. Even sorting is technically linear time with respect to the input complexity (since encoding n distinct elements takes at least O(n log n) bits). The reasons for this are still quite mysterious to me.
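      The parenthetical about encoding can be checked directly: distinguishing the n! orderings of n distinct items takes at least log2(n!) bits, which Stirling’s formula puts within a constant factor of n log2 n (a quick check; not part of the original comment):

```python
import math

n = 1024
bits = math.factorial(n).bit_length()  # essentially log2(n!)
print(bits)              # about 8770
print(n * math.log2(n))  # n log2 n = 10240.0 here
```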

      • Serge Ganachaud
        September 24, 2011 6:49 pm

        The only reason why there are so many “natural” NP-complete problems is the stability of polynomial functions under addition, multiplication and composition. While any proposed solution to SAT can be checked in polynomial time, it remains to find a more accurate lower bound. Polynomials were chosen for their properties of stability, not for the ease of the separation proof between P and NP. I suspect undecidability to be the price we’ll have to pay for stability.

  9. Jeff Rubard
    July 10, 2009 5:23 pm

    Anon anent. Unlike “consciousness”, P and NP is an easy, easy problem: All The Fantastic Things “We” Could Do If It Were True are the only motivating forces behind closing it, since if you know the merest propositional logic unrelativized DTM’s can’t beat O(2^n) and — as two out of three WordPress suggestions suggest, though Google has an alternate take and the anonymous commenter a sharper formulation — the fact that “oracle” computation can be wedded to a polynomial-time algorithm on a DTM to create the appearance of P=NP does not require us to undo our “p’s and q’s”.

    • Alex
      July 10, 2009 10:31 pm

      Care to clarify your paragraph-sentence?

      • rjlipton
        July 11, 2009 8:24 am

        Which sentence?

      • Jeff Rubard
        July 11, 2009 3:00 pm

        It’s true, there are two: one is descriptively identified by Alex, though, and the other is just fun for people who speak and read English. I have a little song-and-dance I do about P and NP; the fundamental bits (along with some “hard thinking” about the problem involving the length of normalized proofs in natural-deduction lambda calculi) are on my blog. However, though I manage to have a little bit of Abstract Expressionism brightening up the place sometimes, it’s not quite as sharp as this one; and furthermore, I don’t really think blogs are a particularly good medium for scientific discourse.

        This I believe for reasons deriving from a “formal sociology” theory of the Internet involving considerations from modal logic, so I recently tried to put my current version on Usenet using Google Groups: as of the last time I checked, the post “P/NP and the Continuum Hypothesis: Together Again, for the Last Time” was only available on their United Arab Emirates franchise, but it’s come back into circulation. The basic idea is this: for a propositional-logic sentence of n literals, 2^n combinations of truth-values can satisfy it, and they are all *notionally* independent of each other; you can never eliminate the possibility that the final one you check will be the satisfying case.

        This is perfectly convertible with thoughts about the pencil-and-paper nondeterministic algorithm for checking satisfiability, or the one for its converse invalidity; that involves wading through the 2^n possible truth-functions which can compose a sentence of n literals in CNF, or any other way you choose it. So, if you believe in propositional logic — and non-Aussie logicians and electrical engineers alike suggest you do — an unrelativized DTM cannot solve SAT in anything faster than worst-case EXPTIME (there may be any number of approximation algorithms, but they will never be good enough to solve the problem by anything other than brute-force exponential-time work, i.e. the same old thing).

        There is the issue of oracle computation to consider, though. I am only fuzzily-wuzzily a computer scientist, and so I get the ha-ha funny element of “oracles” more than I get the proper recursion-theoretic account of relative computing, but what I have started suggesting recently is this: oracles are essentially equivalent to first-order quantifiers. This can be seen dimly if you consider the possibility of querying oracles as to whether one of Tarski’s satisfying sequences for quantifiers are “in A”: it can be seen very clearly if you consider the category-theoretic account of first-order quantification, where the “subobject classifier” takes an n-tuple and maps it on to a “truth-value” (0 or 1, natch).

        What’s that mean? Well, first of all it means that relativized computation has the decidability problems of first-order and higher-order logic (I guess you guys call this the “arithmetical hierarchy”), since adding subobject classifiers to a cartesian closed category to create an elementary topos adds more formal structure than the class of ‘properly’ decidable problems which are limned by CCC’s need. Secondly, the oracles that “seem” to make P=NP (like Hopcroft and Ullman’s “class of all true quantified Boolean sentences”, eh?) come out really would undo our understanding of what it is to compute *a la* the purely Boolean considerations adduced earlier, which are FSMs formally equivalent to computation once recursion is added.

        I don’t know, maybe this tension is just too essential, but maybe there’s a deeper problem we can eternally spike: I kind of think it’s CH, since the cardinality of 2^n for arbitrary n is that of the reals and with our oracles (which are of course real enough, since we say so) we have reasons to say aleph-null < 2^aleph-null, reasons to say aleph-null == 2^aleph-null — the oracles for SAT I suggest, where a truly random assignment of truth-values to the literals (or QBFs pertaining to the existentially quantified statement of the sentence’s invalidity) starts a properly P-time algorithm going — and a further reason to say we can’t know (Goedel and Cohen’s composite proof of the independence of CH from ZFC).

        That’s the long and the short of it: since my cash tends rather to the latter end, and the credibility necessary to get something like this said in a properly peer-reviewed journal is not part of “one of those things that have forever been true and will last till the end of time” like I consider the truths of mathematics to be, I just say it when I feel like it (sometimes consideration of a certain set motivates me). Take it for what it is worth.

      • Jeff Rubard
        July 12, 2009 3:24 pm

        Since nobody has anything to say all of a sudden (which a Princeton mathematician once pointed out to me re: my Usenet maunderings), and since it probably wasn’t too essential that my tribute to John Kemeny got eaten by The Way WordPress Just Works, on to the “fun”, i.e. the insanity. Though Dr. Lipton can think hard about what the inscrutable Goedel had to say to von Neumann on his deathbed, the historical predecessor of Satisfiability is the legend of the “Rebus of Picardy”. Vico had this to say about it: “In northern France, there existed a hieroglyphic speech known as the rebus of Picardy, which as in Germany must have been speech by physical things; that is, by the hieroglyphics of Idanthrysus.” (Fisch edition, p.143, guess how I pulled it up.)

        I think a lot of educated people can feel Vico’s conjecture pretty deeply, although I’m not necessarily into deep feelings beyond a point, but my contention (which dates back to earlier in the decade, when I somehow assumed the P/NP problem had been solved) is that this is Satisfiability in another form: if such a practice were actual, and we can imagine this with Wittgenstein [In *Philosophical Remarks* he asked “What is the meaning of this grove of trees?” as seriously as only a gardener’s assistant in a monastery could] without really ever knowing whether it was real, we would have a “logical picture” we would have to determine a sense for by assessing its compositional articulation.

        This would essentially amount to testing for satisfiability, eh? We’d try out various linguistic interpretations until we hit on one that was The Right One, although I’d like to think it could still be wrong. In fact, of course — the formal considerations adduced above plus the old adage that “No man can know the hour of his own death” — which is true enough for the Standard American English semantic value of “No” — mean that physical reality can’t “tell us stuff” over and beyond the open normativity of language through mouth or hand. It just exists, and even the neural nets that would supposedly underwrite the legitimacy of such a practice are just big FSMs (see, although computer science is not such a big deal as mathematics, since it’s logic and applied logic at that, for “limited purposes” it’s useful enough).

        On the other hand, there is a “truth in painting” and a truth in beauty: that is to say, both aesthetic truth and the properties of things which are “so money” have that kind of ‘irreal’ character and sometimes it’s worth your while to try and figure out in what direction physical objects and commerce involving them are tending, so to speak. [However, I will say that *pace Erdos* mathematics doesn’t have to be beautiful, ’cause it often isn’t to my eye and those of many others — probably just a better idea to cultivate that impression.] However, when you choose to operate on that level you’ve vastly exceeded the resources of first-order logic, tied down by Loewenheim-Skolem and Compactness but IMO the most expressive complete logic: and though I have a personal joke about EXPTIME that’s dear to my heart, I think sometimes you just want to say “NEXPTIME, yo.”

    • Jeff Rubard
      July 13, 2009 1:44 pm

      Enough fun in the sun with philosophy which was historical and materialistic, but is intended solely for the purposes of provoking reflection upon the situation of you and the people around you. Back to the rough ground. Obviously CS is a real science, and my personal feeling is that it is *pace* Simon (and, like I had to explain to Jon once, that really means I disagree, although I eventually picked up enough Latinism to understand the third level of the expression) the only true “science of the artificial”. This is based on the CS1 tyro’s gladness at learning that if you get a problem right in CS, you get it right, but also the observation that all of our “works and days” are polynomially bounded and whatever works, in economics or any other form of human praxis, better work algorithmically — not such a believer in *phronesis*, Rubard.

      In fact, all of us IT “trainspotters” can see the tension between the power of digital computing and the practical plans of organizations, including when there are “bugs in the system” that might just be there to bug you (a thought I previously mentioned to a working programmer, of whom I know many and to which decently-remunerative position I aspired after my dreams collapsed and before my head expanded, was that the great social-theoretic *moralist* of the postwar era, Michel Foucault, probably knew quite a few early and *late* programmers). And the truth — as I was telling QC — is that the Australasian logical viewpoints I adverted to before, where you work with a logic that can work around contradictions or work with the possibility of plural logical consequence relations, makes sense from an EE standpoint: though it be impolitic for the Beavertonian to say it who really knows about the floating-point Pentium bug way back when, and it’s probably politic to say solid-state physics is not the exactest of sciences.

      So on the practical level, which is in my view fundamentally a computational level, workarounds are real and we might never really know whether there was some catastrophic/chaotic/”quantum” phenomenon that caused more SAT solutions some one way. On the other hand, though, if there is gonna be a P/NP solution rather than arsing around with woo-woo conjectures and overshooting the moon with measure theory it’s got to be on the *theoretical* level, that is the logical level: and from the perspective of recursion theory BAs are essential and for them to behave properly the question must be closed for unrelativized TMs, negatively. On the first-order level, the fuzzword “oracle” should be precisified in terms of quantifiers if possible (I guess the lesson of Church-Turing is about how even the very best and most essential “programming” considerations can never motivate such a choice, though): I think it is, and what people want to say by giving P=NP oracles is covered by first-order completeness, that is to say a recursive procedure (proofs using a Hilbert calculus and a few rules of inference, e.g.), strengthened by recourse to a deterministic querying of the power set (models of quantification), can solve SAT lickety-split. It just can’t solve its own problems, i.e. the set of first-order truths about propositions is only r.e. and not recursive.

      Beyond that, my little joke intended to evoke the enigmatic teenage saying “See you on the flip side” contains a meaningful conjecture, though: *all and no* problems are NEXPTIME-hard, since if NP=DEXPTIME=problems solvable either by rote exploration of the power-set or “ingenious” guesses therein, NEXPTIME is equivalent to the computational power necessary for second-order logic, which as we all know contains all of mathematics but also either contradiction or incompleteness. Maybe people should buy the antipodean story on that level, but I’m just a West-Coast guy and so I’m interested in these considerations: the complete second-order logic, Henkin semantics, isn’t really “second-order” at all; it’s a way of linking “real abstractions” to the concrete first-order relations that make them real. Furthermore, once you get to the first-order level, it’s my contention (though this may just be me being stupid) that all first-order truths are parts of decidable subtheories of first-order logic *a la* Presburger arithmetic, although we can’t have an algorithm for determining whether any given subtheory is decidable on pain of revoking Goedel.

      So don’t give up on concrete computational problems and the ‘logical’ solutions to them, just because Everything Could Be Illuminated — certainly not on the basis of the Wittgenstein quote I now “remember” was in *Philosophical Grammar*: I was once given the ‘exactly wrong’ interpretation of it by one of my CS superiors, but Monty Python told you something useful for once about that man.

      • Jeff Rubard permalink
        July 14, 2009 3:59 pm

        One error again in this one: I meant to say “on pain of revoking Church”, i.e. the foundational (and limitative) result of CS, his proof of the undecidability of the predicate calculus [“Noo, Zeus-brow-born Turing proved that what he was talking about was what he was talking about.”] Of course if the set of first-order truths were recursive, then second-order logic (as mentioned above, well capable of stating PA a la Frege) would have the same “problems” as first-order quantification, i.e. being complete but undecidable: in fact it is incomplete, thus undecidability is a corollary of it. It wasn’t what I meant to say, though, and now that we’ve subtracted that, tell me, the humble programmer-aspirant, what’s wrong with any of this rather than drawing paychecks you don’t deserve.

    • OberJuan permalink
      May 23, 2013 2:18 pm

      By that standard, ‘consciousness’ is solved. We are here. We know how to make more of us, and we have significant directive control over what others of us are conscious of. (In fact those two things are too easy, by many interpretations.) What more is there to the ‘solution’ of the ‘problem of consciousness’?

      The fact we intend to pursue ‘consciousness’ further means we want a more detailed and more constructive understanding of the issue. The same goes for P=NP. I think I agree with the poster that the details of any constructive proofs will be far more important than the solution of the problem, and a non-constructive proof might net us very, very little.

      In analogy, we will need to gut this problem the way philosophy and oracle theory gut consciousness. It exists. Is it easy to come by, or hard to come by? Is it truly necessary, or can it be decomposed into automated parts? If it is truly necessary, how often can we get almost all of our survival goals met without it? Etc.

  10. Uwe Brauer permalink
    July 20, 2009 7:36 am


    I am strongly reminded of the final fate of the Newtonian n-body problem.
    For n=3, Sundman (1913!) solved the problem in terms of a convergent power series (this was generalized to n>3 by Qiudong Wang around 1990).

    The problem with this solution is that the power series converges so slowly that it is useless for all practical purposes.

    Uwe Brauer
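Uwe's point about "convergent but practically useless" can be made concrete with a toy stand-in (not Sundman's series itself, just an analogy): the alternating series with terms (-1)^n / ln(n) converges by the Leibniz test, yet its error after N terms shrinks only like 1/ln(N), so a thousand-fold increase in computation barely halves the guaranteed accuracy. A quick sketch:

```python
import math

# Toy stand-in for "converges, but too slowly to be useful": the alternating
# series sum_{n>=2} (-1)**n / ln(n) converges by the Leibniz test, yet its
# error after N terms is bounded only by the next term, about 1/ln(N).
def partial_sum(n_terms):
    return sum((-1) ** n / math.log(n) for n in range(2, n_terms + 2))

# Leibniz error bounds after 10^3 and 10^6 terms:
err_after_1k = 1 / math.log(10**3)   # ~0.145
err_after_1m = 1 / math.log(10**6)   # ~0.072

# A thousand-fold increase in work only halves the guaranteed accuracy:
print(err_after_1k / err_after_1m)   # ~2.0, since ln(10^6) = 2 ln(10^3)
print(partial_sum(1000))             # partial sums creep toward the limit
```

By some estimates Sundman's series needs an astronomically large number of terms for any useful accuracy, which is why this "solution" of the n-body problem settles nothing in practice.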

  11. adam o permalink
    September 25, 2009 11:56 pm

    Maybe I’m just out of the loop here, but is there an example of a (hopefully natural) problem in P for which the best known algorithm has so high a running time as to render it impractical on reasonably sized inputs?

    • rjlipton permalink*
      September 26, 2009 8:09 am

      Not clear. The approximation of the volume of a convex body is one that has a big exponent. But that is getting better, and is not huge anyway. The work on graph minors would seem to be a possibility. Take a look at this.

  12. matt permalink
    June 27, 2012 5:17 pm

    Actually, Paula’s wrong. It is just barely a quadratic lower bound. 1024 = 2^10, so one million is less than 1024^2 = 2^20. Since 20 > 16, log 20 is a bit bigger than 4, and log 4 = 2.

  13. matt permalink
    June 27, 2012 5:18 pm

    That should be: one million is just a little bit less than 2^20. Anyway, if you do the arithmetic, it’s just above a quadratic bound. Besides, I think STOC would consider even an n log^*(log^*(n)) circuit lower bound on anything the biggest theory advance in decades.
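The back-of-the-envelope arithmetic in the two comments above (the bound under discussion is from an earlier comment not reproduced here) checks out in a few lines; logs are base 2 and n is the one million from the comment:

```python
import math

n = 10**6  # the "one million" from the comment
log_n = math.log2(n)                  # just under 20, since 2**20 = 1_048_576 > 10**6
log_log_n = math.log2(log_n)          # a bit bigger than 4, since 2**4 = 16 < 20
log_log_log_n = math.log2(log_log_n)  # just above 2
print(log_n, log_log_n, log_log_log_n)
```

So "log of log of 20" lands just above 2, consistent with the "just above a quadratic bound" reading.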

    • rjlipton permalink*
      June 28, 2012 9:06 am


      I would go further. A lower bound of the form 100n seems to be beyond reach, and for sure cn for any fixed c is hopeless today. Of course someone may have a proof and is not telling us… hmmm.

  14. October 22, 2015 10:16 am

    I have cracked P=NP

  15. July 2, 2018 3:03 pm

    “For P to equal NP, N must equal 1.”

    –My favorite P=NP joke.

  16. January 18, 2019 10:35 pm

    This is the answer to the P vs NP problem that I sent to the Clay Institute in September 2014.

    About this problem, I want to ask: do all 400 students raise incompatibilities? I (or the computer) could put together a list of the unproblematic students and discard the others. Surely, the problematic ones would not appear in the final list.

    In 2016, I devised an algorithm capable of doing this on a hypothetical machine.

    The goddess Fortuna will say…


  1. Twitted by llimllib
  2. Top Posts «
  3. Is P=NP an ill-posed problem? « SBP’s Random Walk
  4. Impagliazzo’s Multiverse « Combinatorics and more
  5. Weekend miscellany — The Endeavour
  6. Michael Nielsen » Biweekly links for 07/06/2009
  7. SAT Solvers: Is SAT Hard or Easy? « Gödel’s Lost Letter and P=NP
  8. Are Impossibility Proofs Possible? « Gödel’s Lost Letter and P=NP
  9. Theory Has Bet On P=NP « Gödel’s Lost Letter and P=NP
  10. Why Believe That P=NP Is Impossible? « Gödel’s Lost Letter and P=NP
  11. Guessing « Gödel’s Lost Letter and P=NP
  12. Professional Development 10/5 through 10/11/2015 | Code Ukemi
  13. Professional Development – 2015 – Week 42
