A Quest For Simple Hard Statements

July 28, 2018


Made-to-order statements that are not so simple

Harvey Friedman is a long-standing friend who is a world expert on proofs and the power of various logics. This part of mathematics is tightly connected to complexity theory. The first mention of his work on this blog concerned a problem about the rationals that says nothing of “logic” or “proofs.”

Today Ken and I would like to introduce some of Harvey’s work in progress. Update 8/16/18: Harvey has posted a full draft paper, “Tangible Mathematical Incompleteness of ZFC.”

Harvey’s work is a kind of quest. Recall that a quest is a long or arduous search for something. Ever since his 1967 PhD thesis, Harvey has been trying to find simple statements that are immensely hard to prove. This might seem to be a strange quest: Why look for hard statements; why not look for easy statements? Indeed. Since Kurt Gödel’s seminal work, logicians have been interested in finding natural statements that hide great complexity underneath. The measure of complexity is that the statements are true but cannot be proved in certain prominent logical theories.

Wikipedia has this to say about the famous theorem of Jeff Paris and Leo Harrington (with input from Laurence Kirby) that Peano Arithmetic (PA) cannot prove a certain strengthening of the finite Ramsey theorem:

This was the first “natural” example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; it was already known that such statements existed by Gödel’s first incompleteness theorem.

See also this for history and related examples including earlier work by Reuben Goodstein that was developed further by Kirby and Paris.

What if we can find statements that (i) use arguably simpler notions than arithmetic, yet (ii) are unprovable in systems such as ZFC set theory that are vastly stronger than PA? They might require systems beyond ZFC, such as adding plausible “large cardinal” axioms, to prove. The foundational goal of Harvey’s quest is to demonstrate that realms of higher set theory long regarded as wildly infinitistic and intangible connect algorithmically to concrete and intimately familiar combinatorial ideas. He also mentions motives in complexity, such as in this recent popular talk, and we will develop this aspect here.

Unprovable Yet Simple

Introductory theory courses show how to define sets that are computably enumerable but not decidable. The complement is then definable by a formula {\phi(n)} whose only unbounded quantifiers over numbers are universal. Such formulas and the resulting sentences {\phi(n_0)} for any fixed value {n_0} are said to be {\prod^0_1}. For instance, the set {E} of numbers {n} coding Turing machines that accept no inputs is defined by saying that for all {x} and {c}, {c} is not a valid accepting computation of the machine on input {x}. For every effective theory {T}, be it PA or ZFC or stronger, the set {D} of cases {\phi(n)} that {T} can prove is computably enumerable. Thus {D} cannot equal {E}, since {E} is not computably enumerable. Presuming {T} does not prove any false cases, this makes {D} a proper subset of {E}. Every {n_0 \in E \setminus D} (and there are infinitely many) yields a {\prod^0_1} sentence {\phi(n_0)} that is true but not provable in {T}.
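
To make the enumerability of {D} concrete, here is a minimal Python sketch (ours, for illustration) of the dovetailing search behind it. Both callbacks are placeholders we are assuming: `proofs_up_to(b)` lists the finitely many candidate proof strings of size at most `b`, and `is_valid_proof_of(p, n)` is the decidable proof-checking relation of {T}.

    from itertools import count

    def enumerate_provable_cases(is_valid_proof_of, proofs_up_to):
        # Sketch: D = { n : T proves phi(n) } is computably enumerable.
        # Dovetail over all (proof, case) pairs, yielding each n as soon
        # as some valid T-proof of phi(n) turns up.
        seen = set()
        for bound in count(1):
            for p in proofs_up_to(bound):
                for n in range(bound):
                    if n not in seen and is_valid_proof_of(p, n):
                        seen.add(n)
                        yield n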

Indeed, one can construct particular values {n_0} via Gödel’s diagonalization mechanism. The resulting sentence {\phi(n_0)} is simple in terms of quantifiers, but the “{n_0}” is definitely not simple. It bakes all the complexity of diagonalizing over proofs into the statement. Gödel further showed one can use a {\prod^0_1} sentence expressing the consistency of {T}, but this too references {T} and proofs explicitly.

The Paris-Harrington statements are natural, but they are {\prod^0_2}, not {\prod^0_1}—that is, they have the form “(for all {x})(there exists {y}){[\cdots]}” where {[\cdots]} is first-order with bounded quantifiers only. They embody the idea that functions {f} giving {f(x) = y} for these quantifiers must grow, infinitely often, faster than any function PA can prove to be computable. The statement {\mathsf{P \neq NP}} is likewise {\prod^0_2}. Some time back we wrote a post on independence and complexity that also covered work by Harvey with Florian Pelupessy related to Paris-Harrington. In 1991, Shai Ben-David and Shai Halevi proved a two-way connection to circuit complexity that applies when an effective first-order theory {T} is augmented to a theory {T_1} by adding all true {\prod^0_1} sentences over its language as axioms. They cited as folklore a theorem that {T} and {T_1} generally have the same sets of provably computable functions and said in consequence:

[I]f {\phi} is a natural mathematical statement for which “{\phi} is independent of PA” is provable in any of the known methods for establishing such results, then {\phi} is independent of PA{\;_1} as well. [Likewise for ZFC and any similar {T} and its corresponding {T_1}.]

Well, this is not in Harvey’s quest either. Here are several objections: Statements of Paris-Harrington type cannot be brought much below {\prod^0_2}. The theory {T_1} is not effective. Adding true numerical {\prod^0_1} sentences may be benign for provable growth rates but not for {\mathsf{P \neq NP}} and other statements we want to prove. Although it is often kosher to assume particular {\prod^0_1} statements like (forms of) the Riemann Hypothesis, allowing arbitrary ones begs the idea of proof. The situation is that either {T} is stronger than PA and the functions grow so insanely as to strain naturalness, or {T} is weaker but then the independence is weak too.

What we all want is to show independence from strong theories for simple and natural statements that, while not literally in {\prod^0_1} form, demonstrably have no more power. Then we may hope to apply Harvey’s results to understand the hardness of statements like {\mathsf{P \neq NP}} at the ground level. According to Ben-David and Halevi, this will require ideas and constructions beyond “the known methods.” Harvey’s ideas begin by blending questions about the integers into questions about the rationals in ways that leverage the latter’s dense order properties. Let’s turn to look at them now.

Theories on the Rationals

Consider the rationals {\mathbb{Q}}. How hard are statements about the rationals? It depends.

If we allow first order sentences that involve addition and multiplication over the rationals, then Julia Robinson proved, long ago, that the integers can be defined in this first order theory. This immediately implies the general unprovability results discussed above. We have covered this topic before here.

What is surprising, perhaps, is that there are complex statements that use only the order properties of the rationals. However, if we restrict ourselves to the first order theory of the rationals under the usual ordering {x < y}, then it has long been known that:

Theorem 1 The first order theory {(\mathbb{Q},<)} is decidable.
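
A classical route to Theorem 1 is effective quantifier elimination for dense linear orders without endpoints. As a one-step illustration (our example, not from the sources above), the density of {\mathbb{Q}} gives

\displaystyle  (\exists x)(a < x \wedge x < b) \iff a < b,

and iterating such eliminations reduces any sentence to one that can be evaluated mechanically.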

Harvey’s ideas use mildly second-order statements about the rationals—with little or no arithmetic besides the {<} relation—to get his hard statements. This may be surprising or may not—it depends on your viewpoint.

Michael Rabin further showed that the second-order monadic theory of {(\mathbb{Q},<)} is decidable (see also this). So we need more than quantifiers over sets of rationals. If one allows an existential quantifier over sets of triples, then it does not take much else to write down the rules for relations {A(x,y,z)} and {M(x,y,z)} to behave like addition and multiplication and use the quantifier to assert their existence, whereupon undecidability follows by Robinson’s result. It seems to us that this paper gives a ready way to show the undecidability also for sets of pairs.

Harvey uses predicates defining sets of tuples of rationals and a single quantifier over them. He eliminates that quantifier by expressing the formula body using satisfiability in first-order predicate calculus with equality, obtaining the schematic form

\displaystyle  \phi = (\forall k)[\psi_k \text{ is satisfiable}],

where {\psi_k} uses only first-order quantifiers to talk about finite sets of {k}-tuples and is computable from {k}. By Gödel’s completeness theorem, {\psi_k} is satisfiable if and only if no contradiction is derivable from it, which is a {\prod^0_1} condition. Hence {\phi} is semantically equivalent to a {\prod^0_1} sentence {\sigma}. The point is that whereas {\sigma} would encode proofs or computations, {\phi} is simple and free of navel-gazing.

Some Key Concepts

Harvey’s work is based on the standard notion of order equivalence:

Definition 2 Say {x=x_{1},\dots,x_{k}} is order equivalent to {y=y_{1},\dots,y_{k}} if

\displaystyle  x_{i} < x_{j} \iff y_{i} < y_{j}

for all indices {i} and {j}. We use {x \equiv y} to denote this.
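
In code, Definition 2 is just a pairwise comparison of order patterns. Here is a minimal Python sketch (ours), using exact `Fraction` arithmetic so that ties are genuine rather than floating-point accidents:

    from fractions import Fraction

    def order_equivalent(x, y):
        # Definition 2: x_i < x_j holds exactly when y_i < y_j, for all i, j.
        # Checking all ordered pairs also captures the equality pattern.
        if len(x) != len(y):
            return False
        k = len(x)
        return all((x[i] < x[j]) == (y[i] < y[j])
                   for i in range(k) for j in range(k))

    assert order_equivalent((Fraction(1, 2), 3, 3), (0, Fraction(7, 4), Fraction(7, 4)))
    assert not order_equivalent((1, 2), (2, 1))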

Theorem 3 Order equivalence on {k}-tuples is an equivalence relation, and the number of equivalence classes is bounded by a polynomial in {k!}.

Of course, if the tuples are restricted to distinct rationals then there are exactly {k!} equivalence classes. The exact numbers of classes for each {k} when members can be equal are addressed in this paper.
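
When entries may repeat, the classes correspond to preferential arrangements—total orderings with ties—as Harvey notes in the comments below, and their counts satisfy a standard recurrence. A sketch (ours):

    from math import comb

    def num_order_classes(k):
        # Number of order-equivalence classes of k-tuples with ties allowed:
        # a(n) = sum_{j=1..n} C(n, j) * a(n - j), with a(0) = 1, obtained by
        # grouping the j coordinates tied for the least value.
        a = [1] * (k + 1)
        for n in range(1, k + 1):
            a[n] = sum(comb(n, j) * a[n - j] for j in range(1, n + 1))
        return a[k]

    print([num_order_classes(k) for k in range(1, 5)])   # [1, 3, 13, 75]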

Now we can define the key binary relation on sets {A,B} of {k}-tuples of rationals. It uses order equivalence on {2k}-tuples obtained by flattening elements of {A^2} and {B^2}:

Definition 4 Let {k} be fixed. Then {A \preceq B} provided

\displaystyle  (\forall x \in A^2)(\exists y \in B^2) \; x \equiv y.

Harvey calls this emulation and the equivalence {A \preceq B \wedge B \preceq A} he calls duplication. The notions naturally extend to {A^r} and {B^r} for fixed {r > 2}, when they are called {r}-emulation and {r}-duplication. In applications, {B} will be finite while {A} can be huge, but in writing {A \preceq B} we still regard {A} as being constrained by {B}.
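
Since the condition in Definition 4 is a finite search when {A} and {B} are finite, it too is easy to sketch in Python (ours, reusing `order_equivalent` from the sketch above):

    from itertools import product

    def emulates(A, B):
        # Definition 4, A ⪯ B: every pair from A^2, flattened to a
        # 2k-tuple, is order equivalent to some flattened pair from B^2.
        B2 = [b1 + b2 for b1, b2 in product(B, repeat=2)]
        return all(any(order_equivalent(a1 + a2, b) for b in B2)
                   for a1, a2 in product(A, repeat=2))

    print(emulates([(3, 5)], [(0, 1)]))           # True
    print(emulates([(3, 5), (5, 3)], [(0, 1)]))   # False: (3,5,5,3) has no match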

The third key concept is a kind of invariance that is customizable. Its calibrations may determine the level of hardness of the statements obtained. They require identifying some elements of a {k}-tuple as being integers. Here is an example:

For all sequences {p_1,\dots,p_i} of nonnegative rationals, each less than the integer {i} where {1 \leq i < k},

\displaystyle  \begin{array}{rcl}  && (p_1,p_2,\dots,p_i,i,i+1,\dots,k-1) \in A\\ &\iff& (p_1,p_2,\dots,p_i,i+1,i+2,\dots,k) \in A. \end{array}

The case {i = 1} says that for all {p < 1},

\displaystyle (p,1,2,\dots,k-1) \in A \iff (p,2,3,\dots,k) \in A .

A stronger constraint is

\displaystyle  (p,n_1,n_2,\dots,n_{k-1}) \in A \iff (p,1+n_1,1+n_2,\dots,1+n_{k-1}) \in A

whenever {p < n_1,\dots,n_{k-1} < k}. Here {n_1,\dots,n_{k-1}} stand for integers that are not necessarily distinct and, like {p_1,\dots,p_i}, are not required to be in nondescending order. Harvey calls these various notions of {A} being stable.
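
For finite stand-ins for {A}, the {i = 1} condition is mechanically checkable; the real statements concern infinite {A}, but a sketch (ours) shows how little is involved:

    from fractions import Fraction

    def stable_i1(A, k):
        # i = 1 stability: for 0 <= p < 1, (p,1,2,...,k-1) lies in A
        # exactly when (p,2,3,...,k) does. For a finite A, only the p's
        # occurring in tuples of those two shapes need checking.
        A = set(A)
        lo = tuple(range(1, k))       # (1, 2, ..., k-1)
        hi = tuple(range(2, k + 1))   # (2, 3, ..., k)
        ps = {t[0] for t in A if t[1:] in (lo, hi)}
        return all(((p,) + lo in A) == ((p,) + hi in A)
                   for p in ps if 0 <= p < 1)

    half = Fraction(1, 2)
    print(stable_i1({(half, 1, 2), (half, 2, 3)}, 3))   # True: both partners present
    print(stable_i1({(half, 1, 2)}, 3))                 # False: (1/2, 2, 3) is missing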

The final core ingredient is the usual notion of {A} being maximal with respect to some property: {A} has the property but no proper superset of {A} does. It will suffice to consider sets obtained by adding one element to {A}. Now we can exemplify his statements {\phi} in simple terms:

For every finite set {B} of {k}-tuples of rationals there is a set {A} that is maximal with regard to {A \preceq B} and is stable.

There are similar statements for duplication and other variants of emulation. Their definitive treatment will come in papers after #106 and #107 on Harvey’s manuscripts page.
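
Joe Shipman notes in the comments below that without the stability requirement, a greedy construction already yields maximal emulators. Here is a toy sketch (ours), reusing the sketches above and restricted to a finite grid of rationals so that it terminates. Because emulation is a universal condition, a tuple rejected against a partial {A} stays rejected against any superset, so the result is maximal within the grid:

    def greedy_maximal_emulator(B, k, grid):
        # Scan candidate k-tuples over the grid, keeping each tuple
        # that preserves A ⪯ B (Joe Shipman's point (1) below).
        A = []
        for x in product(grid, repeat=k):
            if emulates(A + [x], B):
                A.append(x)
        return A

    grid = [Fraction(n, 2) for n in range(5)]   # 0, 1/2, 1, 3/2, 2
    print(greedy_maximal_emulator([(0, 1)], 2, grid))
    # -> [(Fraction(0, 1), Fraction(1, 2))]: for this B, a maximal
    #    emulator within the grid is a single increasing pair.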

The Goal

The above concepts are all basic. They figure into many areas of combinatorial mathematics, for instance regarding preferences and rankings. None talks about Gödel numbers or Turing machines or proofs. The goal is to establish and classify instances of this phenomenon:

There are simple true statements {\phi} involving little more than {\equiv} and {\preceq} that require immensely strong set theories to prove.

Here “true” needs some explanation. If ordinary set theory cannot prove a statement {\phi}, with what confidence can we say {\phi} is true? The answer is that the aforementioned large cardinal axioms {\Lambda} which Harvey uses to prove {\phi} are widely accepted. We note this 2014 post by Bill Gasarch on how Fermat’s Last Theorem was originally proved using theory derived from the axiom {\Lambda} asserting the existence of a strongly inaccessible cardinal. This {\Lambda} is equivalent to having a Grothendieck universe, by which Alexander Grothendieck proved statements used by Andrew Wiles in his proof. Colin McLarty later showed how to remove this dependency and prove Fermat in a fragment of Ernst Zermelo’s set theory—but the relevance of {\Lambda} to “real mathematics” remains.

So the upshot will be that ZFC plus some {\Lambda} will prove {\phi} but ZFC alone cannot (unless ZFC is inconsistent). It is possible for ZFC to be consistent whereas ZFC {\cup \;\{\Lambda\}} is inconsistent. All the known large cardinal axioms {\Lambda} have these relationships to ZFC:

  1. ZFC can prove that {\Lambda} implies the consistency of ZFC.

  2. ZFC can prove that the consistency of ZFC implies the consistency of ZFC {\cup \;\{\neg\Lambda\}}.

The final—and long—step needed in Harvey’s proofs is to get point 1 with {\phi} in place of {\Lambda}. This exemplifies a reversal step in his program of Reverse Mathematics. From this, the above desired conclusion about ZFC not proving {\phi} follows. By point 2, ZFC cannot prove {\neg\phi} either. Nevertheless, we can accept {\phi} as true. This is intended to rest not only on heuristic arguments in favor of the particular {\Lambda} employed, as exemplified for Fermat, but on evidence that can be programmed.

Harvey’s ultimate goal will be realized when the effect of {\Lambda} is shown to be algorithmic. Here is a facile analogy. Suppose we build an infinite set {A} by a simple greedy algorithm. We get a “simple path” of finite sets {A_0,A_1,A_2,\dots} that grow successively larger: each {A_i} has cardinality {i}. We can view the process as structured by the ordinal {\omega} being the limit of {0,1,2,\dots,i} as {i} grows. The idea is that more-ramified search structures can be “governed” by higher ordinals and patterns of descents from them. The end product will nevertheless always be a countable set {A} of tuples and the evidence of progress will always be a longer finite path. Greedy {\omega} may give only maximality, whereas the large cardinal axiom implies the existence of an infinite {A} with all stipulated properties. Though it may provide no guidance on building longer and longer finite paths, it implies their existence so that computers can always find a longer path by a finite search. The concrete import of this and further motivations are described in this Nautilus article from last year.

Open Problems

Open problems are most engaging when easy to state: the simple attracts and the complex repels. Simplicity also allows them to be stated in the direction that most of us believe: we may think they are true but not know how to prove them. The whole point of Harvey’s quest is to create more such milestones, statements that are simple but hard to prove. How much will it help to have such statements?

Update 8/16/18: As noted also above, Harvey has posted a 68-page paper with full details. We will be giving it further coverage with his continued expository assistance.

Comments
  1. Andy Arana
    July 31, 2018 4:08 am

    I wonder if you see ways in which complexity theory might intercede here? For instance, in determining what statements are “simple” or “hard”, one naturally thinks of complexity theory, but it is not straightforward for me to see how those work in here. Maybe something about provably recursive functions? But another idea is to consider the gaps between the simplicity of the statements and the “largeness” of the needed set-theoretic axioms, as a kind of structure, and to ask about the properties of this structure. It would be fascinating if there were some kind of “projection” of this structure into the complexity hierarchy. That’s to say, where large cardinal axioms correspond in some way with NP problems, and simple statements with polynomial algorithms; or something like that. Then we could study the complexity hierarchy by studying Harvey’s statements, and vice-versa. Well, it’s a dream.

    • July 31, 2018 10:22 am

      I’m not sure exactly what Andy is asking for (and maybe neither is he), but one step in the algorithmic direction that I am definitely working on is what I call Algorithmic Emulation Theory, where we strive for a purely algorithmic form of the second to last parchment display in the blog. (Emulation Theory is the name I give to all this stuff). The kind of story I am looking for could go like this. There is a wide natural family of natural greedy algorithms which do not result in much symmetry (stability). If you want to get some more symmetry (stability) you need to stop being greedy, and use more than ZFC. At first, such results will be closely tied to Emulation Theory, and the particular nondeterministic algorithms associated with it. But then maybe the leap can be made to rather general algorithms that are relevant to people who never heard of Emulation Theory or large cardinals.

  2. Sam Sanders
    July 31, 2018 4:17 pm

    The goal that is mentioned is:

    “There are simple true statements {\phi} involving little more than {\equiv} and {\preceq} that require immensely strong set theories to prove.”

    Is there a (plan for a) kind of template (like in Boolean relation theory) for this?

    Also, it would be nice to mention references for {\equiv} and {\preceq}, given that they are part of mathematics.

    • August 1, 2018 9:39 am

      Yes, in #108 which I am finishing up now, there is a complete classification of all “coordinate free order invariant/Z” equivalence relations that can be used in this basic Emulation Theory, and the classification includes the special cases mentioned in the blog post. The classification being correct is provable from large cardinals but not in ZFC. Just as in BRT = Boolean Relation Theory, there is a long string of more ambitious classifications for the future, but here this initial classification is in an intuitive sense more comprehensive than the 6561 BRT classification, and also much simpler to present.

    • August 1, 2018 11:42 am

      Some thoughts about the concepts. The order equivalence relation is used extensively in logic (order types) but also in combinatorics under the name “preferential arrangements”. See the link in the blog post. A preferential arrangement of a set is a total ordering of that set with ties allowed. The link to order equivalence is this: A k-tuple of rationals corresponds to a total ordering with ties on the index set {1,…,k}. E.g., i is considered smaller than j if and only if the i-th coordinate of the tuple is less than the jth coordinate of the tuple. Two k-tuples of rationals are order equivalent if and only if their corresponding total orderings are identical.
      The notion of an emulator corresponds to a very watered-down special case of universal classes in universal algebra and universal logic. I.e., being an emulator is a universal condition of a very basic kind. It is also a very watered-down special case of “type inclusion” in the sense of elementary model theory. And duplicator corresponds to a very watered-down special case of “type equality”, again in the sense of elementary model theory. I find the intuitive idea in everyday life compelling. S is an emulator of B if and only if every “pattern” you see in S is already “present” in B. And of course S is a duplicator of B if and only if the same “patterns” “occur” in S and B. Look to Emulation Theory exploiting such connections.

  3. August 1, 2018 9:05 am

    Posting a comment e-mailed by Joe Shipman (“polymath”):

    Three points:
    (1) If you take out the “stability” condition, it is quite easy to prove the existence of maximal emulators by a greedy construction. The difficulty lurks in the combinatorial interactions between maximality and various notions of stability—trying to control both at the same time is impossible for a recursive construction that builds up, but a nonrecursive solution is obtained by cutting down from sets that large cardinal combinatorics provide.

    (2) The themes used in this bare-bones setting of order-invariant relations on rational tuples seem likely to be transferable to previously explored mathematical settings. Friedman expects this will eventually lead to natural independent statements being obtained in many areas of traditional mathematics.

    (3) Finding existing open problems that are independent of ZFC may follow, but there is a certain bias against this, because previous attempts in those avenues of investigation which essentially require the logical strength of large cardinals will have been largely fruitless and there is a strong professional tropism towards areas in which proofs can be accomplished.

    • August 1, 2018 9:42 am

      I endorse this thoughtful posting by Joe – we have been in close contact about Emulation Theory, something I very much appreciate.

    • August 1, 2018 2:24 pm

      Thanks, I don’t know why WordPress was failing to process my original comment.

  4. August 2, 2018 4:32 pm

    It is nice to see well-written accessible explanations, a glimpse at modern metamathematics, especially Harvey Friedman’s recent work. It is often so difficult to find the right balance between professional language and hollow baby-talk. This text clearly has a computer science audience in mind, but that is fine. It is indeed better to explain things to each audience separately. Wish there were more such articles!

    About mathematical problems that turned out to possess some unprovability — mind you, this already happened before with Kruskal’s Theorem, the Open Ramsey Theorem, the Graph Minor Theorem, Hilbert’s First and Second problems, and you can extract instances of unprovability from the Tenth problem. Plus a whole world of second-order strong statements in Reverse Mathematics. It is just that nowadays, with Harvey Friedman’s work available, the standards for interesting unprovable statements have become very high.

    On the issue of the “truth” of extremely unprovable statements. This is a highly violent ideological battleground. Personally, I think that the “realist” ideology (“true but unprovable”…) should not be considered or spread around as a default position. Kinda kills half of the magic.

    About Emulation Theory. This is a very exciting development. A maximality principle that secretly controls the gear-mechanisms of an indiscernibility proof is a fundamental novelty that Harvey Friedman has been trying in the last few incarnations in this thread of proofs. I am very much looking forward to the complete treatise on Emulations (as well as the new Inductive Inclusion Theory with Shifts). Open problems on emulations should be tried out by high school kids.

    I don’t see (or am ignorant as to) how modern methods of metamathematics can be fused with computational complexity theory immediately. However what seems inevitable is the use of high-power number theory and randomness as parts of [next generations of] combinatorial engines of unprovability proofs.

    • August 5, 2018 10:57 am

      I agree with Andrey that Lipton/Regan did a great job here, and that this is difficult to do for audiences of non experts (and actually even for experts, where expectations are higher). There are really several very distinct audiences for this kind of foundations of mathematics. These include theoretical computer scientists, applied computer scientists, mathematical philosophers, scientific philosophers, general philosophers, philosophical logicians, combinatorists, algebraists, analysts, number theorists, algebraic geometers, topologists, applied mathematicians (of various kinds), mathematical scientists, math literate scientists, mathematical social scientists, general scientists, scientific journalists, general journalists, high school math teachers, gifted high school students, general public, and of course set theorists, recursion theorists, proof theorists, model theorists – a good but surely incomplete relevant list. What works well for one of these audiences will not usually work nearly as well for another. In all of these areas, there are differing strengths and weaknesses, differing CWs and prejudices, and differing fundamental attitudes towards intellectual (and general) life, etcetera. Hopefully important enough ideas, if they surface, will eventually be effectively explained to each of such audiences. How effectively has this been done with, e.g., Goedel’s Incompleteness Theorems, Turing, Darwin, Einstein, Crick/Watson? Could this or can this or has this been done systematically?

  5. August 2, 2018 8:45 pm

    I wonder how general Harvey’s scheme for $\phi(A,B)$ (the notation indicates that the sets A,B are parameters) is. Is there a sense in which it is *maximally* general?

    In particular, is it possible to make precise in an interesting way and prove a result of this form:

    (*) If $\psi$ is a “true” (in the sense of the post, i.e. is a consequence of a “widely accepted” large cardinal axiom) statement that is independent of ZFC, then there exists a pair of sets (A,B) such that $\psi$ is “equivalent” to $\phi(A,B)$?

    For too strong a notion of “equivalent”, there are obvious counterexamples, e.g. those notions of equivalence that distinguish between the logical complexity of statements.

    If Harvey can prove a result like (*) for a wide enough notion of equivalence, then it would also provide an argument that all incompleteness phenomena are phenomena related to the existence of maximal stable sets (in addition to the argument that these incompleteness phenomena are “intuitive” to the practicing mathematician).

    But perhaps most ways of making (*) precise will be trivially true. I don’t know.

    • August 5, 2018 11:11 am

      hmflogic = Harvey Friedman
      Here are some thoughts that come to mind with this work in progress. I know that for these statements, asserting that ALL finite sets of rational vectors have a certain property, there are specific finite sets of rational vectors where the statement is independent of ZFC. I conjecture that the range of statements that you get when fixing the various finite sets of rational vectors represents the entire ensemble of logical strengths ranging from EFA (at the lowest) all the way through SRP (the large cardinal system in question). So you get a combinatorial indexing of such logical strengths. Of course, we know that for any complete r.e. set we get all logical strengths whatsoever from instances of “being outside the r.e. set”. In a sense, it would be more interesting if my conjecture was false. Then we would have a mechanism for picking out distinguished logical strengths. But for picking out distinguished logical strengths, there are other mechanisms available that I have already looked at but this would be work further down the road.

  6. August 3, 2018 5:46 am

    A great posting which explains the quest of Harvey Friedman to the layman in a beautiful way. Harvey’s results are something like a mission impossible and I do not know anyone else who could come up with such strong results. Whether these have implications for computational complexity is not yet clear, but in the past there have been situations where the study of the finite has been simplified by studying the transfinite.

    • August 5, 2018 11:51 am

      hmflogic = Harvey Friedman
      Whether or not such combinatorics will help with notorious lower bounds is of course very unclear as of yet. But something else is easier to at least anticipate: that there is a generally good way of asking for obvious greedy algorithms to produce outputs that have certain “symmetries”. Perhaps to prove a reasonably general result that any obviously greedy algorithm in certain environments can be replaced by a necessarily non or less greedy algorithm whose output has the desired symmetries. NOTE: One of the environments that I already am playing in, in addition to the high school level emulation environment, is the environment of MAXIMAL CLIQUES (and even very strongly maximal cliques).

      The idea is that such a general result can only be proved using large cardinals, and in particular is independent of ZFC. Of course, there is the further question of what uses can be made of “symmetric outputs”, other than their intrinsic interest. Maybe symmetric outputs would prove to be good because they are more easily processed for some purpose or another. Ah! Here is a possible scenario. This stuff is useful for proving that certain objects exist that are of LOW COMPLEXITY (relative to certain input data, in some relevant senses), where formerly the only examples of such objects were obtained by specific greedy algorithms which generate objects only of (seemingly) HIGH COMPLEXITY. (Maybe the hard-fought symmetries are good enough to lower the complexity, say the circuit complexity.)

