
Proving Peano Arithmetic Partially Consistent?

November 27, 2017


An approach to consistency that could work…

Kurt Gödel is feeling bored. Not quite in our English sense of “bored”: German has a word Weltschmerz meaning “world-weariness.” In Kurt’s case it’s Überweltschmerz. We have tried for over a month to get him to do another interview, as he has given several times before, but he keeps saying there’s nothing new to talk about.

Today we want to ask all of you—or at least those of you into logic and complexity—whether we can find something to pep Kurt up. Incidentally, we never call him Kurt.

Gödel is of course famous, among many other things, for proving that Peano Arithmetic (PA) cannot prove its own consistency—unless PA is already inconsistent. In the latter case it would be able to prove everything, which would make PA useless.

This time we want to talk about whether PA might be proved consistent in weaker senses. The senses would escape the result of Gödel, which is usually called his Second Incompleteness Theorem. A hope is that they can be connected to open questions in complexity theory.

PA

Recall that PA is the first-order theory of arithmetic with induction. Actually all we say today could be generalized to many other theories, but to help us focus let’s discuss only PA.

The meaning of consistency for PA can be encoded as follows: Let

\displaystyle  \text{Proof}(x,y)

mean that {x} encodes a proof in PA of the statement {y}. Formally, the consistency of PA is then the statement

\displaystyle  \forall x \ \neg \text{Proof}(x, \ulcorner 0=1 \urcorner).

Most mathematicians believe that PA is consistent. The best argument for consistency is probably that the axioms of PA all seem “obvious.” That is they seem to conform to our intuition about arithmetic. Even the powerful induction schema—which pumps out one axiom for each applicable formula—says something that seems clear:

Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the induction).

This is from page 3 of the book Concrete Mathematics.
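For concreteness, the schema yields one axiom for each formula {\phi(n)}, possibly with parameters:

\displaystyle  \big(\phi(0) \ \wedge \ \forall n \, (\phi(n) \rightarrow \phi(n+1))\big) \ \rightarrow \ \forall n \ \phi(n).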

Yet not everyone believes that PA is consistent, and it follows that not everyone believes that PA is robustly useful. We recently covered the late Vladimir Voevodsky’s doubts. The late Ed Nelson wrote a freely-available book titled simply Elements, posted to the arXiv in 2015 after his death, to argue that PA is inconsistent. The work is flawed, but it is interesting that a world-class mathematician was seriously interested in showing something that few working mathematicians believe. The work ends with an afterword by Sam Buss and Terry Tao, in which they say:

We of course believe that Peano arithmetic is consistent; thus we do not expect that Nelson’s project can be completed according to his plans. Nonetheless, there is much new in his papers that is of potential mathematical, philosophical and computational interest. For this reason, they are being posted to the arXiv. Two aspects of these papers seem particularly useful. The first aspect is the novel use of the “surprise examination” and Kolmogorov complexity; there is some possibility that similar techniques might lead to new separation results for fragments of arithmetic. The second aspect is Nelson’s automatic proof-checking via TeX and qea. This is highly interesting and provides a novel method of integrating human-readable proofs with computer verification of proofs.

Nelson’s criticism of PA is well summarized in this talk by Buss, while it was Tao who articulated the flaw in Nelson’s particular Kolmogorov complexity argument for inconsistency, which we also covered here.

Weak PA Consistency

Our idea is to examine consistency from a computer-science viewpoint and use it to frame a weaker notion: one that can be proved and that avoids the Gödel limit. The idea is the following:

Can we prove that PA is consistent at least for any proof that we are likely to ever see?

We can make this precise via the following simple notion, {\text{Con}(\text{PA};t)}:

\displaystyle  (\forall x : |x| \leq t) \ \neg \text{Proof}(x, \ulcorner 0=1 \urcorner).

Note that we mean the length of the proof in symbols, not steps; we could alternately treat {x} as a number and state the bounded quantifier as {(\forall x < 2^t)}.

Clearly for any {t} we can determine whether {\text{Con}(\text{PA};t)} is true. Of course as {t} grows the cost of checking all proofs of length at most {t} in binary explodes. Yet there is no Gödel limit on this checking. We want to understand the following question:

Can we check that {\text{Con}(\text{PA};t)} is true for large {t}?

Let’s take a look at this next.

Cost Of Checking Proofs

Given a Boolean string {x} of length {t} we can check that it encodes a correct proof in PA in time nearly linear in {t}. We need only check that each step is either an instance of an axiom or follows from the usual rules of inference of first-order logic. For the rest let us assume that this checking can actually be done in linear time: we are throwing out log factors, which just helps us avoid technically more complex statements. This assumption will not change anything in an important way.
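As a minimal sketch, the brute-force check of {\text{Con}(\text{PA};t)} looks like the following, where is_valid_pa_proof is a hypothetical stand-in for the linear-time proof checker:

```python
from itertools import product

def is_valid_pa_proof(bits: str, goal: str) -> bool:
    """Hypothetical linear-time checker: does the bit string encode a
    correct PA proof of the statement `goal`?  A real implementation
    would parse the encoding and verify that each step is an axiom
    instance or follows by a rule of inference."""
    raise NotImplementedError  # placeholder, not a real checker

def con_pa(t: int) -> bool:
    """Brute-force check of Con(PA; t): no string of length at most t
    encodes a PA proof of 0=1.  Even with a linear-time checker this
    loops over ~2^(t+1) strings, hence time c^t."""
    for length in range(t + 1):
        for bits in product("01", repeat=length):
            if is_valid_pa_proof("".join(bits), "0=1"):
                return False  # a short contradiction exists
    return True
```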

This assumption shows that {\text{Con}(\text{PA};t)} can be checked in time {c^{t}} for some constant {c}. This means that we cannot hope to check it for even modest-size {t}. But can we check it much faster? If we could check, for example, that {\text{Con}(\text{PA};1,\!000,\!000)} is true, then we would know that all proofs of at most 1,000,000 bits do not lead to contradictions. This covers all proofs that most of us will ever write down or even read. Note that the following is true:

Theorem 1 PA can prove {\text{Con}(\text{PA};T)} for any fixed {T}.

Define {f(t)} as the length of the shortest proof in PA of {\text{Con}(\text{PA};t)}. Clearly, {f(t)} is well defined—whether PA is consistent or not. A question is: how slowly can {f(t)} grow? Or looking at it another way, can there be short proofs that {\text{Con}(\text{PA};t)} is true? We can also ask this for {\text{Con}(\text{PA};T)} when {T = g(t)} for some function {g} such as {g(t) = 2^t}.
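In symbols, with the trivial upper bound coming from formalizing the exhaustive check above inside PA:

\displaystyle  f(t) = \min \{ |p| : p \text{ encodes a PA-proof of } \text{Con}(\text{PA};t) \}, \qquad f(t) \leq 2^{O(t)}.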

Relationship To Complexity

Note that one can prove theorems like this:

Theorem 2 If {\mathsf{P=NP}} then we can prove {\text{Con}(\text{PA};t)} in time polynomial in {t}.

This would allow us, at least in principle, to check huge proof lengths. There are other complexity open problems that would allow us to check even much larger proof lengths.
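To see the mechanism: the failure of {\text{Con}(\text{PA};t)} asks for a witness that a fast verifier can check, so it is an NP-style question with parameter {t}. Here is a minimal sketch, assuming a hypothetical polynomial-time routine np_decide of the kind {\mathsf{P=NP}} would supply, and reusing the hypothetical checker from the sketch above:

```python
def np_decide(predicate, bound: int) -> bool:
    """Hypothetical poly-time routine that P = NP would supply:
    decide whether some bit string w with len(w) <= bound satisfies
    the given polynomial-time predicate.  (An actual witness could
    then be recovered by fixing its bits one at a time.)"""
    raise NotImplementedError  # stand-in for the assumed algorithm

def check_con_pa(t: int) -> bool:
    """Decide Con(PA; t) in time polynomial in t, assuming P = NP:
    'some x with |x| <= t is a PA proof of 0=1' is an NP predicate
    because is_valid_pa_proof (above) runs in near-linear time."""
    contradiction = np_decide(lambda x: is_valid_pa_proof(x, "0=1"), t)
    return not contradiction
```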

We want, however, to be very concrete. We really want to know about {\text{Con}(\text{PA};1,\!000,\!000)} more than what happens for large {t}. Incidentally, Gödel used the same number 1,000,000 as a factor in his brief and cryptic paper (see this review) titled “On the Lengths of Proofs.” What he implicitly did there was apply the same method as in his incompleteness theorems to create formulas {\phi_t} expressing,

There does not exist a PA proof of {\phi_t} of length at most {t}.

He meant length to be the number of lines, but Wikipedia’s article on it speaks of symbol length. Interpreted either way, the truth of the formulas {\phi_t} is obvious: they don’t have proofs in PA of length at most {t}. They have proofs of length {2^{O(t)}} by exhaustive enumeration as above. If {\mathsf{P=NP}} then they have shorter proofs asymptotically—indeed this was the context of Gödel’s insight about {\mathsf{P=NP}} in his famous 1956 letter to John von Neumann. But again we want to be concrete.
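In the notation above, {\phi_t} is a self-referential fixed point in the standard diagonal-lemma form:

\displaystyle  \phi_t \ \leftrightarrow \ \neg (\exists x : |x| \leq t) \ \text{Proof}(x, \ulcorner \phi_t \urcorner).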

Note that in a “meta” sense we already proved {\phi_t}. We know how Gödel’s construction works in general from reading only a finite piece of his proof. Then as we said its truth is obvious. We used {\log t} symbols to write down {t} but we can ignore that. So this is really a finite proof, certainly well under 1,000,000 symbols.

This is exactly what Gödel meant about the length-saving effect. The rub however is that our “meta” proof is assuming the consistency of PA—that is, the shorter proof is in the stronger theory {\text{PA}' = \text{PA} + \text{Con}(\text{PA})}. Gödel asserted (without proof) that the same effect can always be had by progressing to the next higher order of logic.

But going back to our sentences {\text{Con}(\text{PA};t)}, clearly assuming {\text{Con}(\text{PA})} is silly and we want to stick with first-order logic. We can consider changing the rules of PA but not that way. So our query becomes:

Is there a formal logic {\mathcal{L}}, that avoids the criticisms of PA referenced above and does not obviously entail the consistency of PA, such that {\mathcal{L}} can economically prove {\text{Con}(\text{PA};1,\!000,\!000)}?

We can alternatively talk about programs {P} that construct proofs and shift the question to the complexity—concrete or asymptotic—of verifying that {P} is correct. This could lead to questions about the (resource-bounded) Kolmogorov complexity of proofs. Assuming {\text{Con}(\text{PA};1,\!000,\!000)} really is true, the simple enumeration argument describes a proof with Kolmogorov complexity well under 1,000,000 symbols—but more than 1,000,000 steps would be needed to expand it. The verification angle, however, may even apply in cases of “insanely long proofs.” Other formal approaches might be considered.
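As a minimal sketch of the Kolmogorov-complexity point: the tiny generator below describes the entire enumeration proof, so the proof’s description length is bounded by the length of the program, even though expanding its output takes more than {2^t} steps. The certificate lines it emits are of course hypothetical:

```python
from itertools import product

def enumeration_proof(t: int):
    """A short description of the exponentially long enumeration
    proof of Con(PA; t): the program is a few hundred symbols, yet
    it yields one verification line for each of the ~2^(t+1)
    candidate strings, so writing the proof out takes > 2^t steps."""
    for length in range(t + 1):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            # Hypothetical certificate line recording that s fails
            # to encode a PA proof of 0=1.
            yield f"the string {s} is not a PA-proof of 0=1"
    yield f"hence Con(PA; {t})"
```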

Changing the Rules on Gödel

There was a flurry of work in this area for a decade-plus after Sam Buss’s innovative work in the 1980s connecting complexity questions to proofs in bounded arithmetic—that is, with restrictions on the PA induction axioms or on the underlying logic. We talked about his work here and here. For connections to lengths of proofs, Sam himself has written two nice surveys, and here is another by Pavel Pudlák. The connections extend to the Natural Proofs barrier against circuit lower bounds.

But as with many approaches to core questions in complexity, progress seems to have slowed. Perhaps it is because we haven’t been following as closely. So we are asking for news and opinions on what is important. And this is also why we are talking here about changing the questions and the rules of the game.

We have turned up some recent work on questions like ours that changes the rules. Martin Fischer wrote a 2014 paper titled “Truth and Speed-Up” and another titled “The Expressive Power of Truth.” The object of the latter is to find “natural truth theories which satisfy both the demand of semantical conservativeness and the demand of adequately extending the expressive power of our language.” The former shows that other theories including one called “{\text{PT}_{tot}},” while syntactically but not semantically conservative, give speedups for proving {\text{Con}(\text{PA};t)}.

We are not sure how to assess these results. The name {\text{PT}_{tot}} stands for “Positive Truth with internal induction for total formulas”; the theory is studied further in this 2017 paper. The emphasis, however, seems to be philosophical, relating to the effect of allowing truth to be a predicate. We don’t know how to connect these ideas to complexity questions, but they do show scope for further action.

Open Problems

There are several open problems. Can we show that {\mathsf{P=NP}} or similar statements are equivalent to results about how far out we can check consistency? That is, if we could check that PA has no short contradictions, does this imply anything about complexity theory?

Another class of questions is: Is it useful to know that {\text{Con}(\text{PA};1,\!000,\!000)} is true? Does the fact that there is no short contradiction help make one believe in PA as a useful tool? I am not sure what to make of this. What do you think? What would Gödel think?

9 Comments
  1. Bhupinder Singh Anand
    November 27, 2017 2:32 am

    “This time we want to talk about whether PA might be proved consistent in weaker senses. The senses would escape the result of Gödel, which is usually called his Second Incompleteness Theorem. A hope is that they can be connected to open questions in complexity theory.”
    ———–

    1. As to the first, both a weak (constructive but not finitary) and a strong (finitary) proof of consistency for the first-order Peano Arithmetic PA are given in the paper:

    ‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis’.

    The paper appeared in the December 2016 issue of ‘Cognitive Systems Research’, and addressed the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for evidencing such acceptance.

    It shows that Tarski’s classic definitions permit an intelligence—whether human or mechanistic—to admit finitary definitions of the satisfaction and truth of the atomic formulas of PA over the domain N of the natural numbers in two, essentially different, ways:

    (A) in terms of weak algorithmic verifiability; and:

    (B) in terms of strong algorithmic computability.

    Moreover, the two definitions correspond to two distinctly different—not necessarily finitary—assignments of satisfaction and truth, T(M) and T(B) respectively, to the compound formulas of PA over N.

    Further, the PA axioms interpret as true over N, and the PA rules of inference preserve truth over N, under both T(M) and T(B); whence:

    (i) if we assume the satisfaction and truth of the compound formulas of PA are always non-finitarily decidable under T(M)—as, for instance, in Gentzen’s cut-elimination argument—then this assignment corresponds to the weak standard (classical) interpretation M of PA over N; from which we may constructively, but not finitarily, conclude that PA is consistent;

    (ii) the satisfaction and truth of the compound formulas of PA are always finitarily decidable under T(B), and so the assignment corresponds to a strong finitary interpretation B of PA over N; from which, however, we may finitarily conclude that PA is consistent.

    2. As to the second, the paper shows that PA is not omega-consistent, thus making Gödel’s Second Incompleteness Theorem vacuously true, and its perceived limiting consequences for first-order arithmetic illusory.

    3. Finally, the question of whether the two proofs of consistency can be connected to open questions in complexity theory would seem to depend upon whether such questions are expressed arithmetically or set-theoretically.

    In arithmetic, the paper shows the existence of number theoretic functions that are algorithmically verifiable but not algorithmically computable. It is not obvious whether this distinction can be expressed set-theoretically.

    Kind regards,

    Bhup

  2. November 27, 2017 11:14 am

    Left on the cutting-room floor: a discussion of Harvey Friedman’s long proofs and his paper “Enormous Integers In Real Life.” To lead in from Gödel’s speed-up, take a short formula f that has proofs in PA but only long ones, and make PA’ = PA + not-f. Then PA’ is inconsistent but every proof P of inconsistency is long (since a one-line edit makes a proof of f in PA). Say that PA’ is “spiked” by not-f. We can formalize this notion of “spiking” and make it a proposition in various ways by optionally conjoining it with (in)consistency. Then we get questions like: can we show that the axioms of PA are so natural that no “spiking” can be involved? If PA is inconsistent, must it be “spiked”? Again we ask whether the “spiking” angle has been addressed per se.
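    In symbols, a minimal version of the length argument: if every PA proof of f has length at least L, then any proof P of a contradiction in PA’ converts, by that one-line edit, into a PA proof of f of essentially the same length, so

    \displaystyle  |P| \geq L - O(1).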

  3. Dan Brumleve
    November 27, 2017 2:10 pm

    I considered a closely related problem earlier this year which received a thought-provoking answer from Joel Hamkins: “Is an extension of a consistent theory that assumes its own finite consistency consistent?”
    (https://math.stackexchange.com/questions/2228824/is-an-extension-of-a-consistent-theory-that-assumes-its-own-finite-consistency-c). One of my interpretations is that the question of whether PA is consistent up to a certain proof length is essentially meaningless, the reason being that every true statement has that meaning.

  4. November 29, 2017 2:02 pm

    It seems that the Pudlák paper you cite answers your questions. Theorem 6.3.2 shows that PA admits poly(n)-size proofs of Con(PA,n); see also http://users.math.cas.cz/~pudlak/fin-con.pdf. Theorem 6.3.2 does not seem to break anything in complexity theory, but a similar statement, saying essentially that ‘there is a theory T such that for every theory S, Con(S,n) has poly(n)-size proofs in T’ and conjectured false by Pudlák (http://users.math.cas.cz/~pudlak/fin-con.pdf, page 193 below Problem 1), is indeed equivalent to a central problem in proof complexity: the existence of optimal proof systems; see Theorem 10.4.1. Pudlák’s conjecture is an instance of his feasible incompleteness thesis: http://users.math.cas.cz/~pudlak/preface_toc.pdf.

    Perhaps it is also worth noting that the natural proofs barrier is a conditional proof complexity lower bound; its generalization, motivated by attempts to extend the lower bound to strong proof systems like Frege, led to the theory of proof complexity generators. See the intro to Razborov’s paper http://annals.math.princeton.edu/2015/181-2/p01 or Chapter 8 in Krajíček’s book https://www.karlin.mff.cuni.cz/~krajicek/fdraft.pdf.

    • December 7, 2017 12:20 am

      Thanks very much! It indeed answers our question with lower as well as upper bounds, while posing open problems about the case where the target length T is g(t) for other functions g besides the identity [we started out trying g(t) = 2^t]. We will mention this in a followup, but time may not allow making a one-piece self-contained version of Pudlak’s proof tailored to PA.

      • December 7, 2017 1:54 pm

        You are welcome. Right, the lower bound is interesting too since it is an n^epsilon lower bound on the size of proofs of the O(log n)-size formulas Con(T,n). If one could make it work analogously in the propositional setting, it would separate NP and coNP.

  5. December 7, 2017 2:21 pm

    Weiming is conducting research on a new arithmetic using 1 and -1 as its binary alphabet: fx.wm3dao.com. An introduction can be found here: fuxi.wm3dao.com, and here: http://www.sciencemag.org/posters/artificial-intelligence. Let me know if anyone is interested in collaborative work.

  6. December 8, 2017 1:43 pm

    Weltschmerz. Can really relate to that one. Maybe a feeling that comes from many years of blogging, eh? The Germans have other exotic emotions too, like “Schadenfreude”. And hopefully not requiring medication like in the US…
    Yedidia/Aaronson have some interesting new ideas reducing ZFC consistency to proving a TM halts. Have long advocated research in this direction. It’s admittedly extremely challenging. All the really important stuff always is.
    https://www.scottaaronson.com/busybeaver.pdf
    https://vzn1.wordpress.com/2016/01/22/undecidability-the-ultimate-challenge/

