
Arithmetic Hierarchy and P=NP

May 27, 2009


The complexity of open problems via the arithmetic hierarchy


Stephen Kleene is a famous logician who got his PhD under Alonzo Church at Princeton University. Kleene has many important concepts named after him: the Kleene hierarchy, the Kleene star, Kleene’s recursion theorem, and the Kleene fixed-point theorem. His 1952 book, Introduction to Metamathematics, is a classic, though it has an unusual feature; more on his book in a moment.

Today, I want to talk about the Kleene Hierarchy, which is now usually called the Arithmetic Hierarchy (AH). We can use the AH to classify problems, in a sense that I will shortly make precise. In this classification, the P=NP question comes out with high marks, which is both cool–we have a hard open problem–and uncool, since we have a hard open problem.

I had the honor of meeting Kleene at his retirement conference, which was held in June 1978 in Madison, Wisconsin. That summer I was also privileged to sit in the office of Barkley Rosser, who was away on leave. Somehow I thought a lot about logic that summer.

Kleene had many terrific PhD students: two I know well are Robert Constable and Richard Vesley. Richard, who is a wonderful teacher, taught me first order logic and recursion theory, and helped me solve my first “open” problem. But, that is another story, for another day.

There is a barber shop story about Kleene, which I cannot validate with certainty; and there is a retirement story that I can validate–since I saw it first hand.

While working for Church, Stephen was trying to show that the lambda calculus could compute anything “computable”. Unfortunately, Church and Kleene could not see how to compute even the following function:

\displaystyle x \rightarrow x+1.

Yes, even the lowly successor function was not obviously computable in their calculus. The story goes that one day Stephen was getting his hair cut, when he realized how to encode the successor function into the lambda calculus. It is claimed that he jumped out of the barber’s chair, with half a haircut, and ran to tell Church. Not quite Archimedes running naked through the streets of Syracuse shouting “Eureka,” but close. I think the story, true or not, says more about the nature of the lambda calculus as a basis of computing than anything else, but that is my take.

At Kleene’s retirement conference the final talk was given by the great man himself. After a long and well deserved introduction Kleene stepped up to the stage, and placed his first overhead slide onto the projector. Before he could say a word there was a roar of laughter throughout the packed auditorium. He ignored the laughter and proceeded with his talk. The reason for the laughter was that the slide looked like a page from his famous book. The slide looked like this:

\displaystyle  1. \dots

\displaystyle  \vdots

\displaystyle  \quad 1.1 \dots

\displaystyle  \vdots

\displaystyle  \quad \quad 1.1.1 \dots

\displaystyle  \quad \vdots

\displaystyle  \quad \vdots

\displaystyle  2. \dots

His book Introduction to Metamathematics is known for its detailed numbering scheme that only a logician could love. Phrases like “by {2.4.1} we see that {2.7.4} is true” abound. We all thought that Kleene was making fun of his own style, and would soon switch into story mode. The audience stopped laughing as soon as it dawned on everyone that this was no joke. Kleene was going to read and present a very detailed argument on something. Too bad. I was hoping to hear stories about his advisor, or his results, or his students–his life. But that was not to be: we were treated to an hour of a detailed talk on some exotic topic in logic.

Arithmetic Hierarchy

The AH classifies all formulas according to their quantifier complexity. Thus, a sentence is in \Pi_{2} provided it is equivalent to,

\displaystyle  \forall x \exists y \psi(x,y)

and {\psi(x,y)} is quantifier free.
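
More generally, a \Pi_{n} sentence is one equivalent to a formula with {n} alternating blocks of unbounded quantifiers, beginning with a universal block:

\displaystyle  \Pi_{n}: \quad \forall x_{1} \, \exists x_{2} \, \forall x_{3} \cdots Q x_{n} \; \psi(x_{1},\dots,x_{n})

where {Q} is {\forall} if {n} is odd and {\exists} if {n} is even, and {\psi} is quantifier free, as above. A \Sigma_{n} sentence is the same with the two kinds of quantifiers swapped.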

For example, Fermat’s Last Theorem (FLT) is in \Pi_{1}. This follows since we can write it in the following way.

\displaystyle  \forall x,y,z,n>2 \quad (x^{n}+y^{n} = z^{n} \rightarrow xyz=0)

In a reversal of history, if you know the polynomial hierarchy, then you know AH: just remove the restrictions on the quantifiers. There was a time when many great ideas of complexity theory could be found by taking a concept from the area of recursion theory and “just” making the concept feasible: diagonalization, complete sets, various notions of reducibility, oracles, and more. That was not always how they were actually discovered, but many of the key notions were first defined in recursion theory. There may be other concepts still lurking there {\dots}

Classification of Open Problems

We saw how we could write the FLT in AH. In a natural way we can classify any open problem of arithmetic by writing it as a formal statement in AH. Here are some other problems–some open and some known–with respect to their AH complexity. This idea of viewing a problem’s complexity via the AH is an old idea, but one that I find useful. I hope you will agree.

{\bullet} The Four Color Theorem is in \Pi_{1}. This is easy to see, since it states that every planar graph has a proper {4}-coloring of its vertices (equivalently, every planar map has a {4}-coloring of its regions); the coloring is an object bounded by the size of the graph, so the existential quantifier is bounded and the sentence is \Pi_{1}.

{\bullet} The Finite Simple Group Classification is in \Pi_{1}. This is the famous “conjecture,” now a theorem, that every finite simple group is either one of the {26} so-called sporadic simple groups or belongs to one of the known infinite families of finite simple groups.

These first two raise an annoying point. If a theorem is proved, then it can be written as anything. This is another way of saying that {1=1} is equivalent to the Four Color Theorem. What I mean above is: what is the best that you can do without using the proof of the theorem? Textbook exercises have this problem all the time: “without using X, prove Y.” I have the same problem here.

{\bullet} The Jacobian Conjecture is in \Pi_{1}. This is one of my favorite non-complexity conjectures; I will discuss it another time. The fact that it is of this form is not trivial, since it concerns the behavior of polynomials with complex coefficients. It requires a lemma to restate it as a statement about integers.

{\bullet} The Riemann Hypothesis is in \Pi_{1}. The fact that this is in \Pi_{1} is plausible, since it states, of course, that all the non-trivial zeroes of the {\zeta(\sigma+it)} function have {\sigma=1/2}. However, the obvious way to encode this runs into many technical issues about the locations of the zeroes of the zeta function. Luckily, Jeff Lagarias comes to the rescue. He has proved the following wonderful result. Let {H_{n}} be the {n^{th}} harmonic number,

\displaystyle  H_n = \sum_{i=1}^{n} \frac{1}{i}.

Consider the statement (E): for all {n \ge 1},

\displaystyle  \sum_{d | n} d \le H_{n} + \exp(H_{n})\log(H_{n})

with equality only for {n = 1}. Lagarias’ result is that the statement (E) is equivalent to the Riemann Hypothesis. This shows that the Riemann Hypothesis is in \Pi_{1}.

Even with Jeff’s result there is a small technical point. We must show that it is computable to test whether or not

\displaystyle  a \le \exp(b)\log(c)

for rational numbers {a,b,c}. By clearing the denominator of {a} and using the elementary fact that { r \log(c) = \log(c^{r})}, we can show that we need only be able to test

\displaystyle  a \le \exp(b)\log(c)

for natural numbers. This is easy: compute the right-hand side until the error is below {1/3}. Then, since {a} is a whole number, there can be no mistake.
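
Here is a minimal sketch (in Python with the mpmath library, my choices for illustration, not anything from Lagarias) of checking the inequality for a single n by evaluating the right-hand side at higher and higher precision:

from fractions import Fraction
from mpmath import mp, mpf, exp, log

def sigma(n):
    # Sum of the divisors of n (naive, fine for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def harmonic(n):
    # H_n as an exact rational.
    return sum(Fraction(1, i) for i in range(1, n + 1))

def lagarias_holds(n, max_digits=200):
    # Try to decide sigma(n) <= H_n + exp(H_n)*log(H_n) by evaluating the
    # right-hand side at increasing precision. Returns True or False once
    # the comparison is decided by a clear margin, and None if it is still
    # undecided at max_digits of precision (n = 1 is the equality case).
    if n == 1:
        return True
    s = sigma(n)
    h = harmonic(n)
    digits = 30
    while digits <= max_digits:
        mp.dps = digits
        hn = mpf(h.numerator) / mpf(h.denominator)
        rhs = hn + exp(hn) * log(hn)
        margin = mpf(10) ** (-(digits - 10))   # crude allowance for rounding
        if s < rhs - margin:
            return True
        if s > rhs + margin:
            return False
        digits *= 2
    return None

if __name__ == "__main__":
    for n in range(1, 31):
        print(n, lagarias_holds(n))

Note that the sketch only answers when the two sides differ by a detectable margin; if they were ever essentially equal it would give up and return None, which is exactly the small technical point at issue here.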

{\bullet} The Twin Prime Conjecture is in \Pi_{2}. This is the famous conjecture that there are infinitely many primes {p} so that {p+2} is also prime. Such a prime is, of course, called a twin prime. The obvious way to say this is: for all {x}, there is a twin prime {p} that is greater than {x}.
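
Written out, with {\mathrm{prime}(\cdot)} a predicate that needs only bounded quantifiers, the sentence is

\displaystyle  \forall x \, \exists p \; \big( p > x \;\wedge\; \mathrm{prime}(p) \;\wedge\; \mathrm{prime}(p+2) \big)

which has the {\forall\exists} shape of a \Pi_{2} sentence.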

Where is P=NP in the AH?

The next question is: where does P=NP lie with respect to the AH? I do not know whether either of P=NP or P {\neq} NP is in \Pi_{1}. I will explain the best that I know, and I hope that someone can improve these simple bounds–perhaps they have already? Of course, as I pointed out earlier, if the P=NP question is resolved, then better bounds must follow.

First, let’s look at how to encode P {\neq} NP. We need the standard idea of adding a “clock” to a Turing machine. Let {[x,y]} be a pairing function on the natural numbers: that is, {x,y \rightarrow [x,y]} is one-to-one and easy to compute. Define,

\displaystyle  M_{[x,c]}

as the deterministic Turing machine that operates as follows on an input {y}. The machine treats {x} as a deterministic program, and simulates {x} on input {y}. At the same time the machine runs a counter that stops its execution after {|y|^{c}} steps. If the machine accepts before the counter stops, then it accepts; otherwise, it rejects.
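
To make the construction concrete, here is a toy sketch in Python; the names and the step-counting convention are mine, for illustration only. A Cantor-style pairing function plays the role of {[x,y]}, and a generator stands in for the program {x}, charged one step each time it yields.

def pair(x, y):
    # Cantor pairing: a one-to-one, easy-to-compute map (x, y) -> [x, y].
    return (x + y) * (x + y + 1) // 2 + y

def clocked_run(program, y, c):
    # Simulate `program` on input y, but stop it after |y|**c steps,
    # where |y| is the bit length of y. If the program accepts within
    # the budget, accept; otherwise reject -- mirroring M_{[x,c]}.
    # `program` is a generator function that yields once per simulated
    # step and finally returns True (accept) or False (reject).
    budget = max(1, y.bit_length()) ** c
    steps = program(y)
    try:
        for _ in range(budget):
            next(steps)
    except StopIteration as halted:
        return bool(halted.value)   # halted within the budget
    return False                    # clock ran out: reject

# Example "program": accept y iff y is even, taking y steps to decide.
def slow_parity(y):
    for _ in range(y):
        yield
    return y % 2 == 0

For example, clocked_run(slow_parity, 8, 2) accepts, while clocked_run(slow_parity, 8, 1) runs out of clock and rejects.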

Then,

Theorem: P {\neq} NP is a \Pi_{2} sentence.

Proof: Define {\psi(x,c,y)} to denote that

\displaystyle  M_{[x,c]}(y) = z

where {z} is the output of the machine, and either (i) {y} is in {\mathsf{SAT}} and {z} is “reject”, or (ii) {y} is not in {\mathsf{SAT}} and {z} is “accept.” In other words, {\psi(x,c,y)} says that {M_{[x,c]}} makes a mistake on the input {y}. I claim that P {\neq} NP is equivalent to

\displaystyle  	\forall x,c \, \exists y \, \psi(x,c,y). \ \ \ \ \ (1)

Suppose that P {\neq} NP is true. Then, I claim that (1) is true. For suppose (1) were false; then there would be {x,c} so that {M_{[x,c]}} never makes a mistake, and hence correctly decides {\mathsf{SAT}} in polynomial time. This is a contradiction.

Next suppose that P=NP. Then, I claim that (1) is false. Let {x} be a program that computes {\mathsf{SAT}} correctly and runs in polynomial time, say within {|y|^{c}} steps on inputs {y}. Then {M_{[x,c]}} never makes a mistake, so no {y} satisfies {\psi(x,c,y)}, and (1) fails. \Box

Second, let us look at how to encode P=NP. It is easy to see that it is a {\Sigma_{2}} sentence, since it is the negation of the \Pi_{2} sentence for P {\neq} NP. Thus, P=NP is:

\displaystyle  \exists x,c \, \forall y \, \neg\psi(x,c,y)

A question that I wonder about is: can we do better? Can we encode, for example, P {\neq} NP as a \Pi_{1} sentence?

A Curious Property of {\mathsf{SAT}}

We have shown that P {\neq} NP is a \Pi_{2} sentence. Every such sentence can be viewed as defining a Skolem function. Recall that if

\displaystyle  \forall x \, \exists y \, A(x,y)

is true, then there is a function {f}, called the Skolem function, so that {A(x,f(x))} holds for all {x}.

Let {f(i,c)} be the smallest natural number {y} so that {M_{[i,c]}} makes a mistake on the input {y}. Then, if P {\neq} NP is true, the function {f(i,c)} is defined for all {i,c}.
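
In symbols, with {\psi} the mistake predicate from the proof above:

\displaystyle  f(i,c) = \min \{\, y : \psi(i,c,y) \,\}.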

The curious property of this function is this: if {f(i,c)} grows “fast” on an infinite number of inputs, then even though P {\neq} NP holds, my nightmare happens. In this case, there are infinitely many {n} so that {\mathsf{SAT}} restricted to inputs of length {n} has circuits of size {n^{O(\log n)}}.

Theorem: Suppose that there are infinitely many {i} for which there exists a {c} so that

\displaystyle f(i,c) > 2^{2^{|i|+c}}.

Then, for infinitely many {n}, {\mathsf{SAT}} has circuit size {n^{O(\log n)}}.

Proof: Let {i>1} and {c} be such that

\displaystyle f(i,c) > 2^{2^{|i|+c}}.

Define {n = 2^{|i|+c-1}}. Note that {c} is at most {\log n}. Then {M_{[i,c]}} is correct on all inputs {y} of length {n}, since

\displaystyle  y \le 2^{n} = 2^{2^{|i|+c-1}} < f(i,c).

The size of the circuit that simulates this Turing machine on inputs of length {n} is polynomial in {|i|}, {n}, and the running time of the machine. The machine, by definition, runs in time

\displaystyle  |y|^{c} \le n^{c} \le n^{\log n}.

Thus, the theorem is proved. \Box

Open Problems

Before I state some open problems, I want to thank Subrahmanyam Kalyanasundaram and Ken Regan for many helpful comments on this post. Subrahmanyam has helped me many times, and is terrific at helping improve the posts.

Ken pointed out several interesting things that I will have to leave for a future post. The main issues he raised are: First, there is previous work on the “curious” property done by Shai Ben-David in an apparently unpublished 1992 technical report. Ken also points out that any NP-complete language can be used in place of {\mathsf{SAT}}.

Second, the placement of P=NP into the AH has been thought about before–I am not surprised. Ken mentions that the logician Alex Wilkie repeated a rumor, in a pub in the mid-1980s, to the effect that P=NP was in \Pi_1, but nothing more ever came of it. Then, Ken recalls, Tomoyuki Yamakami and others claimed this again around 2002-2003, but their claim was later withdrawn.

Finally, one of the reasons that a \Pi_{2} sentence {\forall x \, \exists y \, A(x,y)} can be unprovable in Peano Arithmetic is that the Skolem function for this sentence may grow too fast. This happens, for example, for the famous Paris-Harrington version of the Ramsey Theorem. So what the above theorem says is this: if P {\neq} NP is unprovable in Peano Arithmetic because the Skolem function grows too fast, then my nightmare about P=NP happens. This is the core idea of Ben-David’s paper. More on his interesting work in a future post.

So here are the open questions: What is the complexity of other problems? What is the complexity of your favorite problem? Is P=NP in \Pi_{1}?

Comments
  1. Anonymous permalink
    May 27, 2009 6:16 pm

    This is easy: compute the right-hand side until the error is below 1/3.

    Is this proof complete? There’s probably something simple I’m missing, but how do you rule out the possibility that exp(b) log(c) might be incredibly close to an integer? (In which case it might be difficult to tell whether it is greater than or equal to that integer.)

    • rjlipton permalink*
      May 28, 2009 6:32 am

      No it is not. I thought that we were in a special case of a general theorem. The best I can say right now is that based on work in logic this is computable provided certain conjectures are true. I will get back with more details.

      We only need, I believe, a special case that exp(exp(x)) is transcendental for x an integer.

      • Lew permalink
        August 21, 2009 12:17 pm

        Were you able to overcome the obstacles mentioned to show that RH is in {\Pi_1}?

        In Lagarias’ inequality let {\sigma(n)} = sum of divisors of n, and {\lambda(n)} = right side. A TM to test whether {\sigma(n)} is less than {\lambda(n)} proceeds by calculating better and better lower and upper bounds on {\lambda(n)}. If the lower bound is greater than {\sigma(n)} then halt with 1; if the upper bound is less than {\sigma(n)} then halt with 0. If {\sigma(n)} equals {\lambda(n)} for some integer, the TM never halts.

        If it is a theorem that r to the e to the s equals e to the t has no solutions for t an integer greater than 0 and r, s rationals, then RH is in {\Pi_1}.

        In any case, RH is in {\Pi_1} due to a formula containing only rationals. See http://www.cs.uleth.ca/~fodden/foddenresearch.pdf

      • rjlipton permalink*
        August 21, 2009 3:15 pm

Yes. I did not update the post. Another trick is to use a “dovetail” approach: for each x, check all the Lagarias inequalities to larger and larger precision. In this way, if there is an A > B where there should be A < B, we will eventually discover that.

  2. May 27, 2009 7:37 pm

    Re the mention by Wilkie, the kind of “pub” I meant goes with “libation”, not “-lication” :-).

  3. Robin Houston permalink
    May 28, 2009 1:58 am

    The successor function is easy to encode in lambda-calculus. I heard the barbershop story about the predecessor function, which does require a clever trick.

    • rjlipton permalink*
      May 28, 2009 6:03 am

      I may have gotten that wrong, thanks.

  4. Anonymous permalink
    May 28, 2009 4:21 am

    I’d like to expand on this: (from the article)

    If a theorem is proved, then it can be written as anything. This is another way of saying that {1=1} is equivalent to the Four Color Theorem.

    Having checked how Wikipedia defines the AH classification of first-order formulas, it seems to me that every formula without free variables (or what is called a sentence; like the one given for {\mathsf{SAT}}) is in {\Sigma_{0}}: Each sentence in first order arithmetic is either true or false in the standard model of arithmetic, so every sentence is either equivalent to {1=1} or {0=1} (a highly non-constructive proof!). And so the problem of deciding whether P {\neq} NP is in {\Sigma_{0}}.

    In Hartley Rogers’ book on recursion theory, the arithmetical hierarchy is used to classify whole languages over the natural numbers. Here, some languages can be defined by first-order formulas with one free variable, and so the classification becomes non-trivial: A statement like {\phi(n)} is no longer equivalent to formulas like {1=1} or {0=1}, since the truth of {\phi(n)} depends on the interpretation we give for the free variable. Now one can go on to prove things like {\Sigma_n \subsetneq \Sigma_{n+1}} when the sets {\Sigma_n} are concretely defined as consisting of decision problems.

    Maybe I’m splitting hairs here, since it’s quite clear what one means by asking “Is P=NP in \Pi_{1}?” — “Can you give me a single concrete example of a \Pi_{1} formula that you can prove to be equivalent to P=NP”. (constructive!)

  5. Bruce Kapron permalink
    May 29, 2009 9:07 pm

    With respect to the predecessor story:

    In Kleene’s own words (see “Reminiscences of Logicians” in Algebra and Logic, Springer LNM Vol. 450):

    “So I went to the dentist one day and he pulled two wisdom teeth, and while I was in the dentist’s office I figured out how to do the predecessor function.”

    I’ve always wondered why he didn’t bother to elaborate on the significance of thinking about this while at the dentist, given the way in which the predecessor is defined. The following way of defining the predecessor is not exactly the way that Kleene did it, but it comes down to the same thing. First of all we can define a pairing function pair and projection functions fst and snd. Define the function F by (F x) = (pair (snd x) (succ (snd x))). Remembering that Church numerals act like iterators, we then define

    (PRED n) = (fst (n F (pair 0 0))).

    So we take the predecessor of n by taking n all the way down to zero and then back up to n-1 (which in hindsight is the only thing we could do using n as an iterator). The implications of thinking about this while having a tooth extracted are too much to fathom.

    • rjlipton permalink*
      May 29, 2009 10:18 pm

      I heard a different version, but thanks for the correction.

  6. none permalink
    June 9, 2009 5:24 am

    Have you read Scott Aaronson’s article about the possibility of formal independence of P=NP? He says it is straightforwardly Pi-2, I think. He also describes a really cool result that I looked up, saying that if P=NP is independent of PA, then P!=NP but just “barely”, i.e. there is an O(n**f(n)) algorithm for SAT, where f is an unbounded but super slow growing function, e.g. slower growing than the inverse Ackermann function, and maybe slower than the inverse of any recursive function or something just about that slow.

  7. Timothy Chow permalink
    September 8, 2009 4:09 pm

    In the definition of the arithmetical hierarchy, the predicate \psi(x,y) doesn’t need to be quantifier-free; it just needs to be logically equivalent to a formula involving only bounded quantifiers.

    By the way, there is a way to get around the problem of “if a theorem is proved, then it can be written as anything.” For proved theorems, you can insist on logical equivalence. For example, you could say that a statement is \Pi_2 if it is logically equivalent to an explicitly \Pi_2 statement. By “logically equivalent” I mean that you are not allowed to use any axioms of arithmetic when proving one statement from the other; only the rules of first-order logic are allowed.

Trackbacks

  1. How can we express “$P=PSPACE$” as a first-order formula? [closed] | Question and Answer
