
Infinite Objects And Deep Proofs

February 3, 2011


Are infinite objects needed to resolve important mathematical questions?

Angus Macintyre is a top logician, especially in the area of model theory, and especially the model theory of fields. He is a Fellow of the Royal Society, and was awarded the Pólya Prize.

Today I want to talk about a relationship among Andrew Wiles’ proof of Fermat’s Last Theorem, Peano Arithmetic, and some of our open problems.
See an update at the end of this post.

I have known Angus for many years, starting with my days in the Computer Science Department at Yale during the 1970s. Angus, of course, was on the math faculty, and we became friends. He was very kind to me, a young theorist who knew very little logic. I will always be grateful for his help and the time he took to explain some of the then-new developments. In particular there was a time when I really understood the proof of the famous Paris-Harrington Theorem, thanks to Angus. I have discussed this before here.

An Inconsistent Number?

The Association for Symbolic Logic (ASL) now has three main publications: the Journal, the Bulletin, and the more philosophy-minded Review, all followed by “of Symbolic Logic” in their titles. Back then, however, there was only the Journal, and in a sense it had “everything.” I still recall looking at the Journal and reading an abstract with the title: “An Inconsistent Primitive Recursive Function.” The author claimed to have discovered a primitive recursive function {f(n)} that had two values at the same {n}. This surprised me, no, it shocked me, since this clearly was “impossible.”

I asked Angus about this, and he explained how things worked at the ASL. He said that any member of the ASL could give a talk at one of their meetings, without any control or refereeing. A member also could publish a short abstract of the talk in the Journal. This led, as you might imagine, to some unusual publications. Later, at an ASL meeting, I attended a talk by the same researcher, who by then had found what he called an inconsistent number.

Clearly there are some issues with allowing any member to publish anything, but it does have some merits. Perhaps today, with the cost of adding papers to conferences being so low, it would be interesting to open up the entire publication system. Of course we will then get people who are earnest and believe they have something important to say, but whose ideas are not correct. I wonder what you think about this.

Paris-Harrington

If you do not know the Paris-Harrington theorem, it is based on a “slight” variation on the famous Ramsey theorem. The variation is easily seen to be true, but cannot be proved in Peano Arithmetic (PA). More on PA in a moment, but the theorem is quite beautiful.

The standard Ramsey Theorem is:

Theorem: For any positive integers {n, k, m}, there is an {N} with the following property: no matter how we color each of the {n}-element subsets of {S = \{1, 2, 3,\dots, N \}} with one of {k} colors, there exists a subset {Y} of {S} with at least {m} elements, such that all {n}-element subsets of {Y} have the same color.

You may recognize this special case: color the edges of the complete graph {K_6} red or green. Then there is always a triangle all of whose edges are the same color. This is the case where {n=2} since we color edges (i.e., {2}-element subsets), {k=2} since we use two colors, {m=3} since we seek a triangle, and {N=6}.
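The {K_6} case is small enough to check exhaustively by machine. Here is a minimal Python sketch (all names are mine) that brute-forces the {2^{15}} two-colorings of the edges of {K_6}, and also confirms that {K_5} is not enough:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """Does this edge 2-coloring of K_n contain a monochromatic triangle?"""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True iff every red/green coloring of K_n's edges has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(every_coloring_forced(5))  # False: K_5 admits a coloring with no monochromatic triangle
print(every_coloring_forced(6))  # True: N = 6 works, i.e., R(3,3) = 6
```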

Now consider a variant of the Ramsey theorem; let’s call it the strong Ramsey theorem. It is almost the same, but it insists that the set {Y} be large. Say a finite set {A} of natural numbers is large provided the cardinality of {A} is larger than the smallest element of {A}. Thus {A=\{1,2,3\}} is large, while {A=\{10,11,12\}} is not large. The strong Ramsey theorem is:

Theorem: For any positive integers {n, k, m}, there is an {N} with the following property: no matter how we color each of the {n}-element subsets of {S = \{1, 2, 3,\dots, N \}} with one of {k} colors, there exists a subset {Y} of {S} with at least {m} elements, such that all {n}-element subsets of {Y} have the same color. Furthermore, the set {Y} is large.
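To make the largeness condition and the new theorem concrete, here is a hedged Python sketch (all names are mine): a one-line largeness test matching the examples above, and an exhaustive check of whether a given coloring admits a large monochromatic set:

```python
from itertools import combinations

def is_large(A):
    """A finite collection of positive integers is 'large' if |A| > min(A)."""
    return len(A) > min(A)

def has_large_homogeneous_set(N, n, m, coloring):
    """coloring maps each n-subset of {1,...,N} (a sorted tuple) to a color.
    Search for Y with |Y| >= m, all n-subsets of Y the same color, and Y large."""
    for size in range(m, N + 1):
        for Y in combinations(range(1, N + 1), size):
            if not is_large(Y):  # largeness: |Y| must exceed min(Y)
                continue
            if len({coloring[s] for s in combinations(Y, n)}) == 1:
                return True
    return False

print(is_large({1, 2, 3}), is_large({10, 11, 12}))  # True False

# Tiny usage example: color the 2-subsets of {1,...,6} by the parity of their sum.
coloring = {s: sum(s) % 2 for s in combinations(range(1, 7), 2)}
print(has_large_homogeneous_set(6, 2, 3, coloring))  # True, e.g. Y = {1, 3, 5}
```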

Just the insistence that the monochromatic set must be large makes a huge difference. The strong Ramsey Theorem can easily be proved from the infinite version of the standard theorem, by much the same reasoning as the original, but it cannot be proved in PA. That was the brilliant insight by Jeff Paris and Leo Harrington. The Paris-Harrington theorem is the following:

Theorem: The strong Ramsey theorem cannot be proved in Peano Arithmetic.

Several proofs of this result are now known. The original proof was based on model theory, while Bob Solovay shortly thereafter found an alternative proof based on showing that the least such {N}, viewed as a function {N(n,k,m)}, grows too fast to be provably total in PA. See this for a complete proof and technical discussion of the various approaches to their result.

Wiles’ Proof

The famous solution by Andrew Wiles to Fermat’s Last Theorem is a long proof. It is broken into two papers, one joint with Richard Taylor. The main part due solely to Wiles is over one hundred pages, and it builds on papers that total easily hundreds more pages. Length is not the only measure of the complexity of a piece of mathematics, perhaps not even the right measure for most proofs. Wiles’ proof is more than long; it is deep. This should be no surprise: after all, the proof solved a problem that had been open for several hundred years. But it is deep in a precise technical sense: it is as yet unknown whether the proof can be expressed, even in principle, in PA.

This is the main question that I wish to discuss today: can Wiles’ proof be expressed in arithmetic? There is a very readable article by Colin McLarty in the Bulletin on this very question. The title of his paper is: “What does it take to prove Fermat’s Last Theorem?”

Angus’ Work

McLarty discusses whether there is, even in principle, a proof of FLT that can be expressed in Peano Arithmetic. The folklore belief is that any natural theorem of ordinary mathematics should be provable in PA. As with many maxims this is false, since there are “natural” combinatorial statements that are beyond PA. We have just presented one: the strong Ramsey Theorem of Paris-Harrington.

Being unprovable in PA seems less likely for a natural Diophantine statement like Fermat’s, but it remains to be seen. Recall Fermat’s Last Theorem is:

\displaystyle  \forall p \geq 3 \ \forall x \ \forall y \ \forall z \left( x^p + y^p = z^p \rightarrow xyz=0 \right).

This is a statement of arithmetic, since each variable ranges over the natural numbers.
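Each instance of this universal statement can be checked mechanically, which is part of what makes its logical status so interesting. Here is a small Python sketch (the search bounds are arbitrary) that hunts for counterexamples in a finite box:

```python
def fermat_counterexamples(max_xy, max_p):
    """Search exhaustively for x^p + y^p = z^p with 1 <= x <= y <= max_xy and 3 <= p <= max_p."""
    for p in range(3, max_p + 1):
        # z^p = x^p + y^p forces z < x + y, so z <= 2 * max_xy covers every candidate.
        powers = {v ** p: v for v in range(1, 2 * max_xy + 1)}
        for x in range(1, max_xy + 1):
            for y in range(x, max_xy + 1):
                z = powers.get(x ** p + y ** p)
                if z is not None:
                    yield (x, y, z, p)

print(list(fermat_counterexamples(100, 7)))  # [] -- no counterexamples in this box, as Wiles guarantees
```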

McLarty points out that even though this is a simple universal sentence, it is unclear whether FLT can be proved in a theory as “weak” as PA. The issue is whether Wiles really needs certain infinite objects to prove this finite-sounding statement.

My understanding is that Angus is working on showing that Wiles’ proof can be done in PA. I believe this project is still ongoing, but that results should be available soon. The obvious question you might ask is: why bother? The current proof is understood by the experts, and even if there were a proof in PA, would it add any further insight?

I believe the answer is a two-fold yes. The main reason is the principle that understanding exactly what a proof requires gives us potentially new insight into what is proved. Thus Angus’ new approach will require nontrivial new insights. I quote McLarty:

Macintyre presents evidence but also shows how his claim remains to be verified by a great deal of further work in arithmetic, {\dots} It will require serious new arithmetic.

The other reason is that the study of Wiles’ proof from this logical viewpoint could have substantial impact on number theory. New ideas or methods could be discovered in this way.

Go Big, Go Deep

I think there is a potentially interesting lesson here. Most—all?—of the results that we prove in complexity theory can easily be proved in Peano Arithmetic. There is nothing wrong with this, but it is an interesting indication that perhaps our field has dwelled too long in a fixed milieu. Perhaps the key to unlocking some of the great mysteries of questions like {\mathsf{P} \neq \mathsf{NP}} is to “go big, go deep.”

I wonder if there are proof techniques that involve ideas far beyond PA that we have not used, and which would help break open some of our important problems. Should we be teaching our graduate students methods from areas of mathematics that lie beyond PA?

Here is an example: a famous conjecture of Emil Artin, solved by James Ax and Simon Kochen using infinite structures, ultraproducts in particular. The problem seems to be just about finding solutions to equations:

Does every homogeneous polynomial of degree {d} over the {p}-adic numbers in at least {d^2+1} variables have a nontrivial zero?

They proved that the conjecture is “almost” correct: for each degree {d} the conjecture holds for all primes {p} large enough.
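One can at least compute the finite shadows of such statements. Below is a toy Python sketch (my own example, not the Ax-Kochen method): it brute-forces a nontrivial zero of a quadratic form in {d^2+1 = 5} variables modulo small primes. A genuine {p}-adic zero would further require a Hensel-style lifting argument, so this is only the first step:

```python
from itertools import product

def nontrivial_zero_mod_p(poly, nvars, p):
    """Brute-force a nontrivial zero of poly modulo p, or return None."""
    for x in product(range(p), repeat=nvars):
        if any(x) and poly(x) % p == 0:
            return x
    return None

def form(x):
    # A diagonal quadratic form (d = 2) in d^2 + 1 = 5 variables.
    return x[0]**2 + 2*x[1]**2 + 3*x[2]**2 + 5*x[3]**2 + 7*x[4]**2

for p in (3, 5, 7, 11):
    # A zero mod p always exists here by Chevalley-Warning, since 5 variables > degree 2.
    print(p, nontrivial_zero_mod_p(form, 5, p))
```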

The reason I mention this example is that the use of non-principal ultraproducts forces the proof to go beyond set theory. The standard set theory, ZF, is too weak to prove the existence of such infinite objects. Adding the classic Axiom of Choice is enough; actually it is more than is needed. There is a more subtle issue: often a proof of a statement X can be given that uses powerful infinite structures such as ultraproducts, and later these objects can be removed from the proof. This still does not say that using powerful infinite objects was not useful in the discovery of the initial proof.

Open Problems

Do you think FLT can be proved in PA? What about our open problem? Do you think that more “infinitary” methods are needed to resolve our problems? If so, what will they look like?



Erratum/Addendum (2/6/11, by KWR)

Harvey Friedman has pointed out that we were wrong to impute that the Ax-Kochen theorem is unprovable in ZF set theory. It is true only that the infinite objects used in the particular proof require a fairly minimal form of the Axiom of Choice for their construction. Harvey drew our attention to the fact, noted in Wikipedia’s Axiom of Choice article, that any theorem formalizable in the language of PA (and in some more general systems) and provable in ZFC is provable already in ZF. In fact, the Ax-Kochen theorem is provable in Peano Arithmetic itself, and Harvey is inquiring whether it is provable in Elementary Function Arithmetic (EFA) (a relevant paper by J. Avigad is linked there).

Harvey also contends that the following theorem of his is a more natural “strong Ramsey-type theorem.” Say a function {f: [n]^k \longrightarrow [n]^k} is “non-increasing” if for all {x}, {\max(f(x)) \leq \max(x)}.

THEOREM: For all {k \geq 1} and sufficiently large {n}, for every non-increasing function {f} on {[n]^k}, there exist distinct {x_1,\dots,x_{k+1}} such that {f(x_1,\dots,x_k) \leq f(x_2,\dots,x_{k+1})} coordinate-wise.

As asserted here this is equivalent to the 1-consistency of PA; for some details see this draft paper. Finally, he notes that McLarty’s paper is available generally on the Web, for instance here. His conjecture regarding our “Open Problem” is that Fermat’s Last Theorem is provable not only in PA but even in EFA. We thank Harvey for the attention.
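To help parse Harvey’s statement, here is a hedged Python sketch (the helper names and the exhaustive {k=1} test are ours): it checks the conclusion for a given non-increasing {f}, and verifies the easy {k=1} case for small {n}. Already for {k=2} the space of non-increasing functions explodes, in keeping with the enormous growth hidden in “sufficiently large {n}”:

```python
from itertools import combinations, permutations, product

def conclusion_holds(f, n, k):
    """Are there distinct x_1,...,x_{k+1} in {1,...,n} with
    f(x_1,...,x_k) <= f(x_2,...,x_{k+1}) coordinate-wise?"""
    for subset in combinations(range(1, n + 1), k + 1):
        for xs in permutations(subset):
            if all(u <= v for u, v in zip(f[xs[:k]], f[xs[1:]])):
                return True
    return False

# Exhaustive check of the (easy) k = 1 case: non-increasing means f(x) <= x for every x.
for n in range(2, 7):
    fs = product(*[range(1, x + 1) for x in range(1, n + 1)])
    ok = all(
        conclusion_holds({(x,): (v,) for x, v in enumerate(values, start=1)}, n, 1)
        for values in fs
    )
    print(n, ok)  # True for every n >= 2 when k = 1
```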

30 Comments
  1. Luke
    February 4, 2011 12:33 am

    This is a very fascinating topic! There was a debate on MO on this very question:

    http://mathoverflow.net/questions/35746/inaccessible-cardinals-and-andrew-wiless-proof

    I think a fair summary is that Wiles’s proof, as stated, goes well beyond PA, even beyond ZFC. However, a number of mathematicians believe it should be translatable to PA using known techniques.

    Personally (though I saw little Grothendieck in my math education and so am not really qualified to have an opinion!), I’d be surprised if FLT isn’t provable in PA, and even in weaker systems, as it seems to lack the combinatorial or Ramsey-like attributes of theorems outside PA.

    Related to the topic, there was a neat blog post on going beyond PA:

    http://xorshammer.com/2009/03/23/what-happens-when-you-iterate-godels-theorem/

    Intrigued, I picked up a copy of “Inexhaustibility” by Torkel Franzén. It gives a good sense of the enormous gap in strength going from PA to second order arithmetic. Though the book ends there, one can see second-order arithmetic is far away from the power of set theory, which is then far away from the power of large cardinals. It’s amazing how deep mathematics can go.

    Harvey Friedman has done a lot of work with theorems of arithmetic that require very strong systems. You can find his manuscripts online; one decent overview is the presentation “Unprovable Theorems”. The theorems tend to have a Ramsey-like flavor to them: they create an object that has enough combinatorial properties to prove weaker systems consistent. Classical examples of theorems requiring powerful axioms are Kruskal’s tree theorem and Robertson–Seymour’s graph minor theorem (which are both second-order statements, but IIRC there are derived first-order statements that lie outside PA).

  2. caRh
    February 4, 2011 1:47 am

    (The Association FOR Symbolic Logic.)

  3. February 4, 2011 3:00 am

    I find that the entire system of journals and conferences is broken, hence my satirical YouTube series, Damn Cool Science, which gives its own take on open peer review.

    Of course, I’m still human, so I hypocritically continue to submit papers to journals for their sometimes completely arbitrary seal of approval. It’s nice to have an organization recognize my work, but it doesn’t really mean much if they also frequently vouch for junk and reject major science, as almost all journals and conferences tend to do through no direct fault of their own.

  4. February 4, 2011 4:20 am

    There are some results in TCS whose proofs need stronger systems than Peano Arithmetic, I think. These are mostly results that use extensions of Kruskal’s Tree Theorem in their proofs (this is probably already reminiscent of the Paris-Harrington principle, however). For more discussion on this topic see CSTheory SE: http://cstheory.stackexchange.com/questions/1923/which-interesting-theorems-in-tcs-rely-on-the-axiom-of-choice-or-alternatively/1946#1946

  5. none
    February 4, 2011 5:46 am

    If my unwashed opinion is of any interest: 1) P != NP is provable in PA, for reasons discussed in Scott Aaronson’s paper on whether P vs NP might be independent. 2) The first actual proof will use stuff from outside of PA, but still from well inside of second-order arithmetic. For example, it might use induction on formulas under some ordering of higher type than epsilon-0, and therefore outside of PA. These orderings seem like a natural hunting ground for proofs, since the proofs will inherently live in powerful theories (i.e. stronger than PA), which may make them easier to find. The Prime Number Theorem is a famous example of a theorem whose original proof used powerful methods (complex analysis). It was conjectured that complex analysis or something like it was required to prove the theorem, but Erdős and Selberg independently found proofs formalizable in PA. Later proofs were found in much weaker theories.[1]

    Studying the strength of theory needed to prove a given theorem is called Reverse Mathematics, a fascinating field if you’re into such things. The Wikipedia article is pretty good:

    http://en.wikipedia.org/wiki/Reverse_mathematics

  6. none
    February 4, 2011 6:14 am

    Whoops, I had P vs NP on the brain and misread FLT as P vs NP. Yes, it also sounds like FLT can be proved in PA, for the sorts of reasons Angus Macintyre discusses, even though I don’t understand that stuff at all. I’ve seen some arguments that Wiles’ proof can easily be reformulated in second-order arithmetic, which is stronger than PA but doesn’t need Grothendieck universes (stronger than ZFC) or anything like that.

    If I understand correctly, Harvey Friedman seems to think FLT can probably be proved in Elementary Function Arithmetic, which is much weaker than PA. Maybe he is here so I shouldn’t be saying such things since he can say them himself.

  7. February 4, 2011 6:29 am

    The Peano Axioms are too weak to prove Fermat’s Last Theorem. What is necessary is a stronger number system, based on a stronger algebraic term. A proof of Fermat’s Last Theorem in such a number system would be similar to this proof of Beal’s Conjecture:

    http://donblazys.com/

    Don.

    • Mark
      August 7, 2011 7:19 pm

      Your proof is seriously flawed. Many competent mathematicians have tried to point this out to you, but you fail to listen. To save everyone the time, Don’s proof assumes that 1^(0/0) is both defined and determinate, which is not true.

      • August 8, 2011 10:46 pm

        Sorry “Mark”,

        My proof is quite correct and it is your reasoning that is “seriously flawed”. The expression 0/0 is indeterminate. It is not disallowed as are expressions such as 2/0. Thus, 1^(0/0) = 1 just as 1^n = 1.

        Don.

  8. February 4, 2011 9:26 am

    I would like to commend Colin McLarty’s writings in general. In addition to McLarty’s “What does it take to prove Fermat’s Last Theorem?” (2010) that Dick referenced, readers of Gödel’s Lost Letter and P=NP might also be interested in McLarty’s recent articles “The last mathematician from Hilbert’s Göttingen: Saunders Mac Lane as philosopher of mathematics” (2007) and “The rising sea: Grothendieck on simplicity and generality” (2007).

    Students of mathematics—and students of engineering and medicine too!—will appreciate McLarty’s gift for choosing quotations and aphorisms … his articles provide an accessible first exposure to famous passages like the following, from Grothendieck’s Récoltes et Semailles:

    I can illustrate the second approach [to doing mathematics] with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado! A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. . . the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it. . . yet it finally surrounds the resistant substance.

    Why might students of math, engineering, and medicine care about such passages? One answer is that the 21st century is providing us with a superabundance of problems, both practical and abstract. To solve these problems is not only a wonderful mathematical challenge, it is also an urgent practical and humanitarian necessity, that foreseeably will require us all (and students especially) to embrace every permutation of Grothendieck’s three proof methods: “hammers and chisels,” “softening fluids,” and “rising tides.”

    In particular, McLarty’s writings wonderfully summarize an immense body of work relating to the 20th century’s two great mathematical abstractions: universality and naturality. Unknown as abstractions before 1940, a generation of mathematicians like Mac Lane, Eilenberg, Grothendieck, Serre, Kolmogorov, Arnold, Gromov (and many more) have brought these two abstractions into the light of our conscious appreciation.

    Nowadays, the immense practical utility of abstraction in terms of universality and naturality is slowly dawning upon researchers across the entire STEM enterprise. These abstractions make it crystal-clear that mathematics is only partly about proving new theorems … it is equally about broadening, generalizing, and abstracting our understanding of older theorems. For researchers in engineering and medicine in particular, it is the structure associated to this broadening, generalizing, and abstracting that makes modern mathematics both feasible to grasp and powerful in applications.

    Thus, the broader the foundations upon which we all build our understanding—mathematicians, scientists, and engineers alike—the farther our collective problem-solving capabilities are likely to extend … and we all urgently need these capabilities to extend very far and very fast … farther and faster than ever before, in all of human history.

    Therefore, please let me thank you for yet another wonderfully thought-provoking post here on Gödel’s Lost letter and P=NP. :)

    • February 4, 2011 5:11 pm

      By the way (and somewhat to argue against my own post), for students especially it is a good idea to balance “high-brow” authors like Macintyre, Grothendieck, and McLarty against “low-brow” authors such as Richard Hamming, whose vinegary article “Mathematics on a distant planet” (1998) is a classic.

  9. Timothy Chow
    February 4, 2011 1:23 pm

    I think it’s important to distinguish between psychological difficulty and logical strength; the two concepts are (nearly) orthogonal. Certainly to solve difficult problems, we should be open to any idea from any source. However, a logical system such as PA should not be thought of as a “set of ideas” and using a system with greater logical strength should not be thought of as “using new ideas.” The originality and power of an idea has little or nothing to do with what axioms are needed to formalize it. The fact that a particular proof can, after the fact, be seen to be formalizable in PA does not mean that the person who came up with the proof restricted his or her thought processes to a limited circle of arithmetical ideas.

  10. February 4, 2011 6:10 pm

    There is a wider issue raised here about the relevance of computational issues (as well as related matters of mathematical logic) to mathematics. If we have a proof which goes beyond PA, is it “mathematically deeper”? Is Goedel’s theorem relevant to the practice of mathematics? Is the NP \ne P problem relevant to the practice of mathematics?

    (Goedel’s theorem is a great mathematical theorem and the NP \ne P problem is a great mathematical conjecture, but the question is how relevant they are to the bulk of mathematical activities. There are very remarkable applications of mathematical logic through model theory to number theory, but again this is different from the issues raised here.)

  11. Cristopher Moore
    February 4, 2011 10:19 pm

    Another lovely example of a finitary statement about the integers which is not provable in PA, and which apparently requires induction over the ordinal epsilon_0, comes from Goodstein sequences. Wikipedia has a nice discussion at http://en.wikipedia.org/wiki/Goodstein's_theorem
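    A minimal Python sketch of these dynamics (write m in hereditary base-b notation, replace every b by b+1, then subtract one; names are illustrative):

    ```python
    def bump(m, b):
        """Write m in hereditary base-b notation, then replace every b by b + 1."""
        if m == 0:
            return 0
        result, power = 0, 0
        while m:
            m, digit = divmod(m, b)
            result += digit * (b + 1) ** bump(power, b)
            power += 1
        return result

    def goodstein(m, steps):
        """The first terms of the Goodstein sequence starting at m (base starts at 2)."""
        seq, b = [m], 2
        for _ in range(steps):
            if m == 0:
                break
            m = bump(m, b) - 1
            b += 1
            seq.append(m)
        return seq

    print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0] -- dies quickly
    print(goodstein(4, 5))   # [4, 26, 41, 60, 83, 109] -- reaches 0 only after astronomically many steps
    ```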

    Cris

  12. none
    February 5, 2011 3:37 am

    I certainly wouldn’t say more logical strength = deeper or more difficult proof. Rather the opposite, perhaps. The “elementary” proofs of PNT were harder to find than the earlier analytic proof. There are also some elementary theorems that go beyond PA by their very nature, e.g. Cantor’s diagonal proof that the reals are uncountable.

  13. February 5, 2011 5:32 am

    Decidability of FLT in PA per se may not be very illuminating!

    Since the arithmetical expression for FLT can be formally expressed in PA by some formula [FLT], then [FLT] can be undecidable in PA only (as I argue below) if we assume that PA is omega-consistent.

    Note that if PA is consistent, then it is omega-consistent if, and only if, we interpret the existential quantifier in a PA formula such as [(Ex)R(x)] as: ‘There is some natural number n such that R*(n) holds’ over the structure N of the natural numbers under the standard interpretation S of PA.

    The latter is the intuitionistically objectionable assumption that Aristotle’s particularisation holds over N.

    However, this is not the only interpretation under which the PA axioms and rules of inference interpret as satisfied/true over N.

    Since the atomic formulas of PA (i.e., those without the logical constants that correspond to ‘negation’, ‘conjunction’, ‘implication’ and ‘quantification’) interpret as satisfied over N by the evidence provided by a Turing machine, Tarski has indicated how the satisfaction and truth of the compound formulas of PA involving these logical constants can also be inductively defined appropriately by the evidence provided by a Turing machine.

    Under such an ‘algorithmic’ interpretation T, the ‘algorithmically’ true formulas of PA form a proper subset of the PA formulas that are true under S.

    Moreover, it can be argued that the axioms of PA and the rules of inference interpret as ‘algorithmically’ true over N under T, and a PA formula is ‘algorithmically’ true over N under T if, and only if, it is PA-provable.

    If so, the PA formula [FLT] is decidable in PA (as also is Goedel’s ‘undecidable’ PA formula, since we can no longer assume that PA is omega-consistent or, equivalently, that Aristotle’s particularisation holds over N, without inviting contradiction).

    Moreover, Goedel’s primitive recursive relation xBy (where y is the Goedel number of the PA formula whose PA-proof is Goedel-numbered by x) can then be used to decide which of [FLT] and [~FLT] is provable.

    However, unless [FLT] is provable, this would not resolve FLT in the same sense as Wiles’ proof.

    Reason: If [~FLT] is PA-provable as above, we could only conclude that there is no Turing machine that can give evidence for FLT. We could not conclude that FLT is false, since the formal expression of each instantiation of FLT may still be provable in PA.

    In other words, FLT may be algorithmically verifiable as true (in the sense that any given instantiation of FLT may be verifiable as true by the evidence provided by some Turing machine).

    However (shades of PvNP), FLT may not be algorithmically computable (in the sense that there is no Turing machine that, for any given instantiation of FLT, will provide evidence to verify that the instantiation is true).

    If so, the proof can only be a meta-mathematical one (such as that which validates Goedel’s construction of a formally unprovable, but interpretively true, proposition in the first part of Theorem VI of his seminal 1931 paper).

    So, perhaps the extent to which one can – or should – tolerate non-finitary methods in meta-mathematical arguments may be best decided temporally on a case to case basis.

  14. Anonymous
    February 5, 2011 12:20 pm

    We all make assumptions. Sometimes they can accelerate our scientific progress, and sometimes they can stand in the way. They usually take the form “those sorts of methods will not solve this problem” or “any sort of approach to this problem will likely involve blah” or “this is a deep problem, and therefore we will need deep tools”. And so, we plow along, generalizing the tools we know, learning as much as we can, in the hopes that one day we will learn the right tool to which we can apply our powers of generalization and solve the problem.

    But… every once in a blue moon, somebody comes along with a shockingly simple solution to an old problem that doesn’t quite fit into the frameworks we are familiar with, and even seems too good to be true, yet is. And then we try to explain how it is that we didn’t think of it, or how it really is just what we already know, though in a different form (but really isn’t). Some even pooh-pooh it, saying “But it doesn’t fit into our frameworks. It’s bad form to even look for such things. It’s not the right solution!”

    I suspect that very many of the problems out there that we consider “hard” probably have one of these “book proof” (to quote the late P. Erdos) solutions. Imagine: a simple 5 page proof of the ABC conjecture; or a 5 page construction (using no deep tools) of elliptic curves over Q having arbitrarily large Mordell-Weil rank; or a very short, simple algorithm to factor large integers in sub-exponential time; or a 5 page proof of the Riemann hypothesis.

    “Heresy, I tell ya! Discourage it! Discourage it!”, goes the refrain.

    Indeed, it isn’t a good idea to ONLY look for these solutions. At the same time, if we don’t indulge every now and then we likely won’t find them; and I personally would rather live in a world where we know they exist than in one where we don’t.

  15. February 5, 2011 9:01 pm

    In a related vein, the history of mathematics shows us that many—even most?—of the most startling mathematical advances of the past two centuries were associated to problems that, prior to the startling advance, were not widely recognized as open … which of course, is precisely the reason that the advances were startling.

    In his celebrated essay You and your research (1986) Richard Hamming vividly describes an instance of this phenomenon:

    “When Shannon first published his Information Theory (1947), most people recognized that the theorems were true but the proofs were inadequate. Professor Doob at Illinois took the strict view of proofs, and openly doubted, in his Math Review, Shannon’s mathematical integrity!”

    The details can be found in J. L. Doob’s Math Review [MR0026286 (10,133e) 60.0X] of Shannon’s A mathematical theory of communication.

    Here the point is that Shannon perceived mathematical beauty and opportunity associated to a practical problem (communication over noisy channels) in which many of his colleagues perceived mainly “schmutz.”

    A recurrent theme of recent Gödel’s Lost Letter and P=NP columns (as I read them) is that perhaps similar issues and opportunities are associated to the complexity classes P and NP. When reading standard texts on complexity theory, we are struck by the highly specialized yet extraordinarily broad definitions that complexity theory associates to commonplace words like exists, verifiable, and certificate … definitions that are far broader than the sense in which scientists and engineers use these words.

    In consequence, the classes P and NP, as these classes are conceived by complexity theorists, are both of them extraordinarily larger and richer than our everyday intuitions can easily encompass, and no doubt this richness is part of the reason that these classes are proving to be so very hard to separate (as Ketan Mulmuley’s lectures emphasize).

    In summary, the need for complexity theory to embrace proof systems beyond ZFC, or even simply PA, is not obvious one way or the other … and perhaps we ought to be alarmed that this possibility is being seriously considered? Because as Hamming says in another wonderful essay titled Mathematics on a distant planet (1998):

    “If whether an airplane would fly or not depended on whether some function that arose in its design was Lebesgue but not Riemann integrable, then I would not fly in it. Would you? Does Nature recognize the difference? I doubt it”

    Adapting Hamming’s reasoning to complexity theory, we are led to reflect that if it should happen that a similarly practical theorem, like the separation of P from NP, is eventually proved to require similar sophistication of proof technology, then we might reasonably conclude “Wow … so perhaps P and NP are not natural complexity classes after all … maybe we need to embrace ‘schmutzier’ definitions of complexity classes … so that we can prove rigorous separations with more natural proof methods.”

    For sure, it will be a lot of fun to watch the interplay of these Gödel-style and Hamming-style considerations in complexity theory!

  16. February 6, 2011 3:43 am

    I agree with those who have commented that depth and logical strength are different things. I would define depth (imprecisely) in something like the way that a computer scientist talks about depth of a circuit: a result is deep if it depends on a breakthrough idea that depends on a breakthrough idea that depends on a breakthrough idea that depends on a breakthrough idea that depends on a breakthrough idea that …

    • February 6, 2011 3:46 am

      One could of course argue against that above definition: some results in combinatorics, for instance, depend on just one idea, but that idea then goes on to have extraordinary ramifications (Erdos’s lower bound for R(k,k) being a case in point). I think I would say that the notion of depth I suggested above is a useful one, but I have no objection to people using the word in other senses too. But I myself would describe Erdos’s proof not as deep but as very important and influential.

    • February 6, 2011 7:13 am

      It is indeed surprising how “imprecise” (in Gowers’ phrase) notions like circuit depth can be. For example, there exists for every Turing machine M in P a circuit representation, accepting n-bit inputs, and having depth bounded by n^k for some (n-independent) k(M).

      But is it feasible in practice to concretely construct a circuit representation for given M and n? More generally, given a concrete M, are upper bounds to k(M) even concretely computable?

      Engineers and scientists expect the answers to be “yes,” yet complexity theory gives the answers “no” … at least this is the import (as I imperfectly grasp it) of a recent answer by Luca Trevisan given on MathOverflow (students might want to warm up by consulting example 6.4 of Sanjeev Arora and Boaz Barak’s on-line text Computational Complexity).

      Here the point is simply that commonly in math, science, and engineering, we have a satisfactory working understanding of notions like “depth”, “exists”, “runtime”, and “certification.” But this should not lead us to mistakenly believe that we have a satisfactory general understanding of these notions … or that these notions translate perfectly between disciplines … or that no adjustment of these notions will be associated to future advances.

  17. Richard Zach
    February 7, 2011 5:15 pm

    I don’t know if this is in fact true, but the reason I once heard given for the ASL practice of not refereeing talks (and abstracts of talks) at ASL meetings is this: In the “old days”, Soviet and other Eastern bloc mathematicians attending conferences in the West had a communist party “shadow” who pretended to also attend the conference. Rigorous refereeing of abstracts would have resulted in the shadows getting their contributions rejected, and hence the bona fide mathematicians not being allowed to attend.

    • rjlipton
      February 7, 2011 11:00 pm

      Richard,

      Very interesting story….

      • none
        February 12, 2011 7:16 am

        IIRC, Solomon and Anita Feferman’s biography of Alfred Tarski mentions that Soviet bloc mathematicians at those conferences tried to get their pictures taken with western mathematicians as often as they could, for somewhat related reasons.

  18. February 8, 2011 1:24 am

    Thanks for this very interesting post that touches on a number of important topics in current f.o.m. (foundations of mathematics).

    Readers may wish to take a look at http://www.cs.nyu.edu/pipermail/fom/2011-February/015281.html This is a recent posting made on the FOM email list, whose Archives are at http://www.cs.nyu.edu/pipermail/fom/ More information is available at http://www.cs.nyu.edu/mailman/listinfo/fom

    An account of the history of Concrete Mathematical Incompleteness is given in the Introduction to the book Boolean Relation Theory and Incompleteness, where a draft can be found at http://www.math.osu.edu/~friedman/manuscripts.html section 3. A final draft of the book will be placed there this month.

    Readers may also want to look at http://www.cs.nyu.edu/pipermail/fom/2010-October/015081.html http://www.cs.nyu.edu/pipermail/fom/2010-October/015082.html Note that McLarty speaks of a new proof of FLT by Kisin. I don’t know if this new proof will help in finding a proof of FLT in PA or even FLT in EFA (exponential function arithmetic).

    • rjlipton
      February 8, 2011 9:06 am

      Harvey,

      Thanks for all the information.

  19. Lorenzo Carlucci
    February 8, 2011 3:11 am

    “This still does not say that using powerful infinite objects was not useful in the discovery of the initial proof.”

    The theory of Artin’s Braid Group gives another example of the use of infinitary methods to discover solutions to concrete problems. I would like to give a pointer to a very interesting paper on this topic by Patrick Dehornoy (published by ASL!): Another use of set theory, Bulletin of Symbolic Logic 2-4 (1996) 379-391 (available here)

