# A Quest For Simple Hard Statements

*Made-to-order statements that are not so simple*

Harvey Friedman is a long-standing friend who is a world expert on proofs and the power of various logics. This part of mathematics is tightly connected to complexity theory. The first mention of his work on this blog was a problem about the rationals that said nothing of “logic” or “proofs.”

Today Ken and I would like to introduce some of Harvey’s work in progress. **Update 8/16/18:** Harvey has posted a full draft paper, “Tangible Mathematical Incompleteness of ZFC.”

Harvey’s work is a kind of quest. Recall a quest is *a long or arduous search for something*. Ever since his 1967 PhD thesis, Harvey has been trying to find simple statements that are immensely hard to prove. This might seem to be a strange quest: why look for hard statements; why not look for easy statements? Indeed. Since Kurt Gödel’s seminal work, logicians have been interested in finding natural statements that hide great complexity underneath. The measure of complexity is that the statements are true but *cannot* be proved in certain prominent logical theories.

Wikipedia has this to say about the famous theorem of Jeff Paris and Leo Harrington (with input from Laurence Kirby) that Peano Arithmetic (PA) cannot prove a certain strengthening of the finite Ramsey theorem:

This was the first “natural” example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; it was already known that such statements existed by Gödel’s first incompleteness theorem.

See also this for history and related examples including earlier work by Reuben Goodstein that was developed further by Kirby and Paris.

What if we can find statements that (i) use arguably simpler notions than arithmetic, yet (ii) are unprovable in systems such as ZFC set theory that are vastly stronger than PA? They might require systems beyond ZFC, such as adding plausible “large cardinal” axioms, to prove. The foundational goal of Harvey’s quest is to demonstrate that realms of higher set theory long regarded as wildly infinitistic and intangible connect algorithmically to concrete and intimately familiar combinatorial ideas. He does also mention motives in complexity, such as in this recent popular talk, and we will develop this aspect here.

## Unprovable Yet Simple

Introductory theory courses show how to define sets that are computably enumerable but not decidable. The complement $D$ of such a set is then definable by a formula whose only unbounded quantifiers over numbers are universal. Such formulas, and the resulting sentences $\phi_n$ for any fixed value $n$, are said to be $\Pi^0_1$. For instance, the set of numbers $n$ coding Turing machines $M_n$ that accept no inputs is defined by saying that for all $x$ and $c$, $c$ is not a valid accepting computation of the machine $M_n$ on input $x$. For every effective theory $T$, be it PA or ZFC or stronger, the set $P_T$ of cases $n$ that $T$ can prove is computably enumerable. Thus $P_T$ cannot equal $D$. Presuming $T$ does not prove any false cases, this makes $P_T$ a proper subset of $D$. Every $n \in D \setminus P_T$ (and there are infinitely many) yields a sentence $\phi_n$ that is true but not provable in $T$.

Indeed, one can construct particular values $n$ via Gödel’s diagonalization mechanism. The resulting sentence $\phi_n$ is simple in terms of quantifiers, but the “$n$” is definitely *not* simple. It bakes all the complexity of diagonalizing over proofs into the statement. Gödel further showed one can use a sentence $\mathrm{Con}(T)$ expressing the consistency of $T$, but this too references $T$ and proofs explicitly.

The Paris–Harrington statements are natural, but they are not $\Pi^0_1$; rather they are $\Pi^0_2$, that is, of the form “(for all $x$)(there exists $y$)$\,\psi(x,y)$” where $\psi$ is first-order with bounded quantifiers only. They embody the idea that functions giving $y$ for these quantifiers must grow infinitely often faster than any function PA can prove to be computable. Goodstein’s statement is likewise $\Pi^0_2$. Some time back we wrote a post on independence and complexity that also covered work by Harvey with Florian Pelupessy related to Paris–Harrington. In 1991, Shai Ben-David and Shai Halevi proved a two-way connection to circuit complexity that applies when an effective first-order theory $T$ is augmented to a theory $T_1$ by adding all true $\Pi^0_1$ sentences over its language as axioms. They cited as folklore a theorem that $T$ and $T_1$ generally have the same sets of provably computable functions and said in consequence:

[I]f $\psi$ is a natural mathematical statement for which “$\psi$ is independent of PA” is provable by any of the known methods for establishing such results, then $\psi$ is independent of $\mathrm{PA}_1$ as well. [Likewise for ZFC and any similar $T$ and its corresponding $T_1$.]

Well, this is not in Harvey’s quest either. Here are several objections:

- Statements of Paris–Harrington type cannot be brought much below $\Pi^0_2$.
- The theory $T_1$ is not effective.
- Adding true $\Pi^0_1$ numerical sentences may be benign for provable growth rates but not for the $\Pi^0_2$ and other statements we want to prove. Although it is often kosher to assume particular statements like (forms of) the Riemann Hypothesis, allowing arbitrary ones begs the idea of proof.
- The situation is that either $T$ is stronger than PA and the functions grow so insanely as to strain naturalness, or $T$ is weaker but then the independence is weak too.

What we all want is to show independence from *strong* theories for *simple* and *natural* statements that, while not literally in $\Pi^0_1$ form, demonstrably have no more power. Then we may hope to apply his results to understand the hardness of statements like $\mathsf{P} \neq \mathsf{NP}$ at the ground level. According to Ben-David and Halevi, this will require ideas and constructions beyond “the known methods.” Harvey’s ideas begin by blending questions about the integers into questions about the rationals, which leverage the latter’s dense order properties. Let’s turn to look at them now.

## Theories on the Rationals

Consider the rationals $\mathbb{Q}$. How hard are statements about the rationals? It depends.

If we allow first order sentences that involve addition and multiplication over the rationals, then Julia Robinson proved, long ago, that the integers can be defined in this first order theory. This immediately implies the general unprovability results discussed above. We have covered this topic before here.

What is perhaps surprising is that there are complex statements that use only the order properties of the rationals. However, if we restrict ourselves to the first-order theory of the rationals under the usual ordering $<$, then it has long been known that:

**Theorem 1** *The first-order theory of $(\mathbb{Q}, <)$ is decidable.*

Harvey’s ideas use mildly second-order statements about the rationals—with little or no arithmetic besides the order relation $<$—to get his hard statements. This may be surprising or may not—it depends on your viewpoint.

Michael Rabin further showed that the second-order monadic theory of $(\mathbb{Q}, <)$ is decidable (see also this). So we need more than quantifiers over sets of rationals. If one allows an existential quantifier over sets of triples, then it does not take much else to write down the rules for ternary relations $A$ and $M$ to behave like addition and multiplication and use the quantifier to assert their existence, whereupon undecidability follows by Robinson’s result. It seems to us that this paper gives a ready way to show the undecidability also for sets of pairs.

Harvey uses predicates defining sets of tuples of rationals and a single quantifier over them. He eliminates that quantifier by expressing the formula body using satisfiability in first-order predicate calculus with equality, obtaining the schematic form

$$\phi \;\equiv\; \mathrm{Sat}(\sigma_\phi),$$

where $\sigma_\phi$ uses only first-order quantifiers to talk about finite sets of $k$-tuples and is computable from $\phi$. By Gödel’s completeness theorem, the satisfaction relation is equivalent to the negation of a derivable contradiction, which is a $\Pi^0_1$ condition. Hence $\phi$ is semantically equivalent to a $\Pi^0_1$ sentence $\phi'$. The point is that whereas a diagonal sentence $\phi_n$ would encode proofs or computations, $\phi'$ is simple and free of navel-gazing.

## Some Key Concepts

Harvey’s work is based on the standard notion of order equivalence:

**Definition 2** Say $x = (x_1, \dots, x_k)$ is *order equivalent* to $y = (y_1, \dots, y_k)$ if

$$x_i < x_j \iff y_i < y_j$$

for all indices $i$ and $j$. We use $x \equiv y$ to denote this.
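
For concreteness, here is a minimal Python sketch of Definition 2 (the function name is ours, not from Harvey’s papers); it compares every pair of index positions in both directions, so ties are forced to match too:

```python
from itertools import combinations
from fractions import Fraction

def order_equivalent(x, y):
    # Definition 2: x_i < x_j iff y_i < y_j for all index pairs i, j.
    # Checking both < directions also forces equalities to match.
    return len(x) == len(y) and all(
        (x[i] < x[j]) == (y[i] < y[j]) and (x[j] < x[i]) == (y[j] < y[i])
        for i, j in combinations(range(len(x)), 2))

# Same order pattern despite different values:
print(order_equivalent((Fraction(1, 2), 3, 3), (0, Fraction(7, 2), Fraction(7, 2))))  # True
# Ascending vs. descending:
print(order_equivalent((1, 2), (2, 1)))  # False
```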

**Theorem 3** *Order equivalence on $k$-tuples is an equivalence relation and there are only a polynomial in $k!$ many equivalence classes.*

Of course, if the tuples are restricted to distinct rationals then there are exactly $k!$ equivalence classes. The exact numbers of classes for each $k$ when members can be equal (the ordered Bell numbers) are addressed in this paper.
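
The counts in Theorem 3 can be checked by brute force for small $k$. A Python sketch (helper names ours): values from $\{0, \dots, k-1\}$ already realize every order type of a $k$-tuple, so enumeration suffices:

```python
from itertools import product

def order_type(x):
    # canonical signature: replace each entry by its rank among the distinct values
    ranks = {v: r for r, v in enumerate(sorted(set(x)))}
    return tuple(ranks[v] for v in x)

def num_classes(k):
    # values drawn from {0, ..., k-1} realize every order type of a k-tuple
    return len({order_type(t) for t in product(range(k), repeat=k)})

def num_distinct_classes(k):
    # same count restricted to tuples of distinct entries
    return len({order_type(t) for t in product(range(k), repeat=k) if len(set(t)) == k})

print([num_classes(k) for k in (1, 2, 3, 4)])           # [1, 3, 13, 75]
print([num_distinct_classes(k) for k in (1, 2, 3, 4)])  # [1, 2, 6, 24]
```

The first list gives the ordered Bell numbers 1, 3, 13, 75, …; restricting to distinct entries gives exactly $k!$.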

Now we can define the key binary relation $S \preceq B$ on sets of $k$-tuples of rationals. It uses order equivalence on $2k$-tuples obtained by flattening pairs of elements of $S$ and $B$:

**Definition 4** Let $B$ be fixed. Then $S \preceq B$ provided: for all $x, y \in S$ there are $z, w \in B$ such that the $2k$-tuple $(x, y)$ is order equivalent to $(z, w)$.

Harvey calls this *emulation*—$S$ is an *emulator* of $B$—and the two-way equivalence $S \preceq B$ and $B \preceq S$ he calls *duplication*. The notions naturally extend to flattening $r$-tuples of elements for fixed $r$, when they are called $r$-*emulation* and $r$-*duplication*. In applications, $B$ will be finite while $S$ can be huge, but in writing $S \preceq B$ we still regard $S$ as being constrained by $B$.
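
Definition 4 can likewise be checked directly for small finite sets. A hedged Python sketch (function names ours; this is plain emulation via pairs, i.e., $2$-emulation):

```python
from itertools import product

def order_type(t):
    # canonical signature: replace each entry by its rank among the distinct values
    ranks = {v: r for r, v in enumerate(sorted(set(t)))}
    return tuple(ranks[v] for v in t)

def is_emulator(S, B):
    # Definition 4: every concatenated pair from S is order equivalent
    # to some concatenated pair from B (a pair may repeat an element).
    patterns_B = {order_type(z + w) for z, w in product(B, repeat=2)}
    return all(order_type(x + y) in patterns_B for x, y in product(S, repeat=2))

def is_duplicator(S, B):
    # duplication is emulation in both directions
    return is_emulator(S, B) and is_emulator(B, S)

B = {(0, 1), (1, 2)}
print(is_emulator({(5, 6), (6, 7)}, B))  # True: same overlap pattern as B
print(is_emulator({(1, 0)}, B))          # False: B realizes no descending pair
```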

The third key concept is a kind of invariance that is customizable. Its calibrations may determine the level of hardness of the statements obtained. They require identifying some elements of a -tuple as being integers. Here is an example:

For all sequences $p_1, \dots, p_{k-1}$ of nonnegative rationals, each less than the integer $n$ where $n \geq 1$,

$$(p_1, \dots, p_{k-1}, n) \in S \iff (p_1, \dots, p_{k-1}, n+1) \in S.$$

The case $k = 2$ says that for all $0 \leq p < n$,

$$(p, n) \in S \iff (p, n+1) \in S.$$

A stronger constraint is

$$(p_1, \dots, p_j, n_1, \dots, n_{k-j}) \in S \iff (p_1, \dots, p_j, n_1 + 1, \dots, n_{k-j} + 1) \in S$$

whenever each $p_i$ is less than each $n_\ell$. Here $n_1, \dots, n_{k-j}$ stand for integers that are not necessarily distinct and, like the $p_i$, are not required to be in nondescending order. Harvey calls these various notions of $S$ being *stable*.

The final core ingredient is the usual notion of $S$ being *maximal* with respect to some property: $S$ has the property but no proper superset of $S$ does. It will suffice to consider sets obtained by adding one element to $S$. Now we can exemplify his statements in simple terms:

For every finite set $B$ of $k$-tuples of rationals there is a set $S$ that is maximal with regard to $S \preceq B$ and is stable.
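
Without the stability requirement, a maximal emulator can be produced greedily, as discussed later in the post and comments. Here is a toy Python sketch of that greedy construction (all names ours), confined to a finite grid of rationals so that it terminates; “maximal” here means only that nothing in the grid can be added:

```python
from fractions import Fraction
from itertools import product

def order_type(t):
    ranks = {v: r for r, v in enumerate(sorted(set(t)))}
    return tuple(ranks[v] for v in t)

def emulates(S, B):
    # S ⪯ B: every concatenated pair from S matches a pattern from B
    pats = {order_type(z + w) for z, w in product(B, repeat=2)}
    return all(order_type(x + y) in pats for x, y in product(S, repeat=2))

def greedy_maximal_emulator(B, universe):
    # scan candidates in a fixed order; keep each one that preserves S ⪯ B
    S = []
    for cand in universe:
        if cand not in S and emulates(S + [cand], B):
            S.append(cand)
    return set(S)

# k = 2 over the grid {0, 1/2, 1, 3/2, 2}
grid = [Fraction(n, 2) for n in range(5)]
universe = list(product(grid, repeat=2))
B = {(0, 1)}  # a single strictly increasing pair
S = greedy_maximal_emulator(B, universe)

# S emulates B, and no further grid point can be added: maximal within the grid
assert emulates(S, B)
assert all(c in S or not emulates(list(S) + [c], B) for c in universe)
print(sorted(S))  # [(Fraction(0, 1), Fraction(1, 2))]
```

The greedy scan is deterministic given the candidate order, which is exactly the “simple path” picture below; what it does not deliver is any stability-like symmetry of the output.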

There are similar statements for duplication and other variants of emulation. Their definitive treatment will come in papers after #106 and #107 on Harvey’s manuscripts page.

## The Goal

The above concepts are all basic. They figure into many areas of combinatorial mathematics, for instance regarding preferences and rankings. None talks about Gödel numbers or Turing machines or proofs. The goal is to establish and classify instances of this phenomenon:

There are simple true statements $\phi$ involving little more than $\equiv$ and $\preceq$ that require immensely strong set theories to prove.

Here “true” needs some explanation. If ordinary set theory cannot prove a statement $\phi$, with what confidence can we say $\phi$ is *true*? The answer is that the aforementioned large cardinal axioms $\Lambda$ which Harvey uses to prove $\phi$ are widely accepted. We note this 2014 post by Bill Gasarch on how Fermat’s Last Theorem was originally proved using theory derived from the axiom asserting the existence of a strongly inaccessible cardinal. This is equivalent to having a Grothendieck universe, by which Alexander Grothendieck proved statements used by Andrew Wiles in his proof. Colin McLarty later showed how to remove this dependency and prove Fermat in a fragment of Ernst Zermelo’s set theory—but the relevance of such axioms to “real mathematics” remains.

So the upshot will be that ZFC plus some $\Lambda$ will prove $\phi$ but ZFC alone cannot (unless ZFC is inconsistent). It is possible for ZFC to be consistent whereas ZFC + $\Lambda$ is inconsistent. All the known large cardinal axioms $\Lambda$ have these relationships to ZFC:

- ZFC can prove that $\Lambda$ implies the consistency of ZFC.
- ZFC can prove that the consistency of ZFC + $\Lambda$ implies the consistency of ZFC.

The final—and long—step needed in Harvey’s proofs is to get point 1 with $\phi$ in place of $\Lambda$. This exemplifies a *reversal* step in his program of Reverse Mathematics. From this, the above desired conclusion about ZFC not proving $\phi$ follows by Gödel’s second incompleteness theorem. By point 2, ZFC cannot prove the consistency of ZFC + $\Lambda$ either. Nevertheless, we can accept $\phi$ as true. This is intended to rest not only on heuristic arguments in favor of the particular $\Lambda$ employed, as exemplified for Fermat, but on evidence that can be programmed.

Harvey’s ultimate goal will be realized when the effect of $\Lambda$ is shown to be *algorithmic*. Here is a facile analogy. Suppose we build an infinite set $S$ by a simple greedy algorithm. We get a “simple path” of finite sets $S_n$ that grow successively larger: each $S_n$ has cardinality $n$. We can view the process as structured by the ordinal $\omega$, being the limit of $n$ as $n$ grows. The idea is that more-ramified search structures can be “governed” by higher ordinals and patterns of descents from them. The end product will nevertheless always be a countable set of tuples, and the evidence of progress will always be a longer finite path. Greedy may give only maximality, whereas the large cardinal axiom $\Lambda$ implies the existence of an infinite $S$ with all stipulated properties. Though it may provide no guidance on building longer and longer finite paths, it implies their existence, so that computers can always find a longer path by a finite search. The concrete import of this and further motivations are described in this *Nautilus* article from last year.

## Open Problems

Open problems are most engaging when easy to state: the simple attracts and the complex repels. Simplicity allows them to be stated also in the direction that most of us believe: we may think they are true but don’t know how to prove them. The whole point of Harvey’s quest is to create more milestone statements that are simple but hard to prove. How much will it help to have such statements?

**Update 8/16/18:** As noted also above, Harvey has posted a 68-page paper with full details. We will be giving it further coverage with his continued expository assistance.

I wonder if you see ways in which complexity theory might intercede here? For instance, in determining what statements are “simple” or “hard”, one naturally thinks of complexity theory, but it is not straightforward for me to see how those notions work here. Maybe something about provably recursive functions? But another idea is to consider the gaps between the simplicity of the statements and the “largeness” of the needed set-theoretic axioms as a kind of structure, and to ask about the properties of this structure. It would be fascinating if there were some kind of “projection” of this structure into the complexity hierarchy. That is to say, where large cardinal axioms correspond in some way with NP problems, and simple statements with polynomial algorithms; or something like that. Then we could study the complexity hierarchy by studying Harvey’s statements, and vice-versa. Well, it’s a dream.

I’m not sure exactly what Andy is asking for (and maybe neither is he), but one step in the algorithmic direction that I am definitely working on is what I call Algorithmic Emulation Theory, where we strive for a purely algorithmic form of the second to last parchment display in the blog. (Emulation Theory is the name I give to all this stuff). The kind of story I am looking for could go like this. There is a wide natural family of natural greedy algorithms which do not result in much symmetry (stability). If you want to get some more symmetry (stability) you need to stop being greedy, and use more than ZFC. At first, such results will be closely tied to Emulation Theory, and the particular nondeterministic algorithms associated with it. But then maybe the leap can be made to rather general algorithms that are relevant to people who never heard of Emulation Theory or large cardinals.

The goal that is mentioned is:

“There are simple true statements $\phi$ involving little more than $\equiv$ and $\preceq$ that require immensely strong set theories to prove.”

Is there a (plan for) kind of template (like in Boolean relation theory) for this?

Also, it would be nice to mention references for $\equiv$ and $\preceq$, given that they are part of mathematics.

Yes, in #108 which I am finishing up now, there is a complete classification of all “coordinate free order invariant/Z” equivalence relations that can be used in this basic Emulation Theory, and the classification includes the special cases mentioned in the blog post. The classification being correct is provable from large cardinals but not in ZFC. Just as in BRT = Boolean Relation Theory, there is a long string of more ambitious classifications for the future, but here this initial classification is in an intuitive sense more comprehensive than the 6561 BRT classification, and also much simpler to present.

Some thoughts about the concepts. The order equivalence relation is used extensively in logic (order types) but also in combinatorics under the name “preferential arrangements”. See the link in the blog post. A preferential arrangement of a set is a total ordering of that set with ties allowed. The link to order equivalence is this: A k-tuple of rationals corresponds to a total ordering with ties on the index set {1,…,k}. E.g., i is considered smaller than j if and only if the i-th coordinate of the tuple is less than the j-th coordinate of the tuple. Two k-tuples of rationals are order equivalent if and only if their corresponding total orderings are identical.

The notion of an emulator corresponds to a very watered-down special case of universal classes in universal algebra and universal logic. I.e., being an emulator is a universal condition of a very basic kind. It is also a very watered-down special case of “type inclusion” in the sense of elementary model theory. And duplicator corresponds to a very watered-down special case of “type equality”, again in the sense of elementary model theory. I find the intuitive idea in everyday life compelling: S is an emulator of B if and only if every “pattern” you see in S is already “present” in B. And of course S is a duplicator of B if and only if the same “patterns” “occur” in S and B. Look to Emulation Theory exploiting such connections.

Posting a comment e-mailed by Joe Shipman (“polymath“):

Three points:

(1) If you take out the “stability” condition, it is quite easy to prove the existence of maximal emulators by a greedy construction. The difficulty lurks in the combinatorial interactions between maximality and various notions of stability: trying to control both at the same time is impossible by a recursive construction that builds up, but a nonrecursive solution is obtained by cutting down from sets that large cardinal combinatorics provide.

(2) The themes used in this bare-bones setting of order-invariant relations on rational tuples seem likely to be transferable to previously explored mathematical settings. Friedman expects this will eventually lead to natural independent statements being obtained in many areas of traditional mathematics.

(3) Finding existing open problems that are independent of ZFC may follow, but there is a certain bias against this, because previous attempts in those avenues of investigation which essentially require the logical strength of large cardinals will have been largely fruitless and there is a strong professional tropism towards areas in which proofs can be accomplished.

I endorse this thoughtful posting by Joe – we have been in close contact about Emulation Theory, something I very much appreciate.

Thanks, I don’t know why WordPress was failing to process my original comment.

It is nice to see well-written accessible explanations, a glimpse at modern metamathematics, especially Harvey Friedman’s recent work. It is often so difficult to find the right balance between professional language and hollow baby-talk. This text clearly has a computer science audience in mind, but that is fine. It is indeed better to explain things to each audience separately. Wish there were more such articles!

About mathematical problems that turned out to possess some unprovability — mind you this already happened before with Kruskal’s Theorem, Open Ramsey Theorem, Graph Minor Theorem, Hilbert’s First and Second problems, and you can extract instances of unprovability from the Tenth problem. Plus a whole world of second-order strong statements in Reverse Mathematics. It is just that nowadays, with Harvey Friedman’s work available — the standards of interesting unprovable statements became very high.

On the issue of the “truth” of extremely unprovable statements. This is a highly violent ideological battleground. Personally, I think that the “realist” ideology (“true but unprovable”…) should not be considered or spread around as a default position. Kinda kills half of the magic.

About Emulation Theory. This is a very exciting development. A maximality principle that secretly controls the gear-mechanisms of an indiscernibility proof is a fundamental novelty that Harvey Friedman has been trying in the last few incarnations of this thread of proofs. I am very much looking forward to the complete treatise on Emulations (as well as the new Inductive Inclusion Theory with Shifts). Open problems on emulations should be tried out by high school kids.

I don’t see (or am ignorant as to) how modern methods of metamathematics can be fused with computational complexity theory immediately. However what seems inevitable is the use of high-power number theory and randomness as parts of [next generations of] combinatorial engines of unprovability proofs.

I agree with Andrey that Lipton/Regan did a great job here, and that this is difficult to do for audiences of non experts (and actually even for experts, where expectations are higher). There are really several very distinct audiences for this kind of foundations of mathematics. These include theoretical computer scientists, applied computer scientists, mathematical philosophers, scientific philosophers, general philosophers, philosophical logicians, combinatorists, algebraists, analysts, number theorists, algebraic geometers, topologists, applied mathematicians (of various kinds), mathematical scientists, math literate scientists, mathematical social scientists, general scientists, scientific journalists, general journalists, high school math teachers, gifted high school students, general public, and of course set theorists, recursion theorists, proof theorists, model theorists – a good but surely incomplete relevant list. What works well for one of these audiences will not usually work nearly as well for another. In all of these areas, there are differing strengths and weaknesses, differing CWs and prejudices, and differing fundamental attitudes towards intellectual (and general) life, etcetera. Hopefully important enough ideas, if they surface, will eventually be effectively explained to each of such audiences. How effectively has this been done with, e.g., Goedel’s Incompleteness Theorems, Turing, Darwin, Einstein, Crick/Watson? Could this or can this or has this been done systematically?

I wonder how general Harvey’s scheme for $\phi(A,B)$ (the notation indicates that the sets A, B are parameters) is. Is there a sense in which it is *maximally* general?

In particular, is it possible to make precise in an interesting way and prove a result of this form:

(*) If $\psi$ is a “true” (in the sense of the post, i.e. is a consequence of a “widely accepted” large cardinal axiom) statement that is independent of ZFC, then there exist a pair of sets (A,B) such that $\psi$ is “equivalent” to $\phi(A,B)$?

For too strong a notion of “equivalent”, there are obvious counterexamples, e.g. those notions of equivalence that distinguish between the logical complexity of statements.

If Harvey can prove a result like (*) for a wide enough notion of equivalence, then it would also provide an argument that all incompleteness phenomena are phenomena related to the existence of maximal stable sets (in addition to the argument that these incompleteness phenomena are “intuitive” to the practicing mathematician).

But perhaps most ways of making (*) precise will be trivially true. I don’t know.

hmflogic = Harvey Friedman

Here are some thoughts that come to mind with this work in progress. I know that for these statements, asserting that ALL finite sets of rational vectors have a certain property, there are specific finite sets of rational vectors where the statement is independent of ZFC. I conjecture that the range of statements that you get when fixing the various finite sets of rational vectors represents the entire ensemble of logical strengths ranging from EFA (the lowest) all the way through SRP (the large cardinal system in question). So you get a combinatorial indexing of such logical strengths. Of course, we know that for any complete r.e. set we get all logical strengths whatsoever from instances of “being outside the r.e. set”. In a sense, it would be more interesting if my conjecture was false. Then we would have a mechanism for picking out distinguished logical strengths. But for picking out distinguished logical strengths, there are other mechanisms available that I have already looked at, though this would be work further down the road.

A great posting which explains the quest of Harvey Friedman to the layman in a beautiful way. Harvey’s results are something like a mission impossible and I do not know anyone else who could come up with such strong results. Whether these have implications to computational complexity is not yet clear but in the past there have been situations where the study of the finite has been simplified by studying the transfinite.

hmflogic = Harvey Friedman

Whether or not such combinatorics will help with notorious lower bounds is of course very unclear as of yet. But something else is easier to at least anticipate. That there is a general good way of asking for obvious greedy algorithms to produce outputs that have certain “symmetries”. Perhaps to prove a reasonably general result that any obviously greedy algorithm in certain environments can be replaced by a necessarily non or less greedy algorithm whose output has the desired symmetries. NOTE: One of the environments that I already am playing in, in addition to the high school level emulation environment, is the environment of MAXIMAL CLIQUES (and even very strongly maximal cliques).

The idea is that such a general result can only be proved using large cardinals, and in particular is independent of ZFC. Of course, there is the further question of what uses can be made of “symmetric outputs”, other than their intrinsic interest. Maybe symmetric outputs would prove to be good because they are more easily processed for some purpose or another. Ah! Here is a possible scenario. This stuff is useful for proving that certain objects exist that are of LOW COMPLEXITY (relative to certain input data, in some relevant senses), where formerly the only examples of such objects were obtained by specific greedy algorithms which generate objects only of (seemingly) HIGH COMPLEXITY. (Maybe the hard-fought symmetries are good enough to lower the complexity, say the circuit complexity.)