Can We Prove Better Independence Theorems?
An approach to independence with more complexity dependence
Florian Pelupessy recently defended his PhD thesis at Ghent University in Belgium. In joint work with Harvey Friedman, he found new short proofs for two independence results from Peano Arithmetic. One result is the famous “natural” Ramsey-theoretic independence result proved by Jeff Paris and Leo Harrington in 1977, while the other is a different Ramsey-type result obtained in 2010 by Friedman. Pelupessy also maintains a page with links on “phase transitions” in proof theory—meaning cases where a slight change in the values of parameters makes a statement go from being provable to independent.
Today I want to talk about whether we can prove that some of our open problems are independent of Peano Arithmetic or other theories.
When we cannot prove something, several ideas come to mind. Perhaps it is false, and it is time to try to prove the opposite statement. Or perhaps it is time to move on and work on some other problem. Or perhaps the reason we have failed to solve the problem is that it is actually unprovable. Kurt Gödel’s famous Incompleteness Theorem shows, roughly, that any powerful enough theory must either be inconsistent or incomplete.
That is, the theory must either prove everything—it is inconsistent—or it must miss proving some true statements. One idea that keeps coming up, one question that we hear often, is: could our favorite open problems, like whether $\mathsf{P} = \mathsf{NP}$, be unprovable, so that we fail simply because there are no proofs? It’s hard to find a needle that is not in the haystack.
We have talked about independence before—here—but I wanted to add some new thoughts on this issue.
An Independence “General Nonsense”
Ken and I were both part of a group of people interested, three decades ago, in whether a general kind of independence result had larger significance. The general idea is that there are computable functions that outgrow every function that Peano Arithmetic can prove to be total. Moreover, one can majorize them, in a sense, by predicates that are very easy to compute:
Theorem: For every computable $f$ we can compute a function $g$ such that:
- $g$ is computable in linear time and log space;
- for every value $c$ there are infinitely many $x$ such that $g(x) = c$;
- the function mapping $c$ to the least $x$ such that $g(x) = c$ exceeds $f$ infinitely often; and
- for all $x$, $g(x) \le |x|$.
Indeed Ken’s 1996 paper with Heribert Vollmer showed how to compute $g$ in various notions of logarithmic time. Think of the values of $g$ as being “colors”: red, green, blue… Then $g$ colors the strings with infinitely many of each color, but in such a way that each one of the following false statements is consistent with Peano:
- The range of $g$ is almost all red.
- The range of $g$ is almost all green.
- The range of $g$ is almost all blue.
To apply this, make a language $A$ that consists of the red strings in $\mathsf{SAT}$, the green prime numbers, and the blue strings that belong to the Graph Isomorphism language. Then since all these languages belong to $\mathsf{NP}$ and $g$ is easy to compute, $A$ belongs to $\mathsf{NP}$. However, it is consistent with Peano to believe that $A$ is a finite difference of any one of these languages. If in fact $\mathsf{P} \neq \mathsf{NP}$, then it tumbles out that $A$ is neither in $\mathsf{P}$ nor $\mathsf{NP}$-complete, and not polynomial-time equivalent to Graph Isomorphism either. Hence all of these facts are independent. This applies regardless of what machine code is used to specify $A$ formally. One can churn out myriad similar instances.
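To get a feel for the block structure behind such a coloring, here is a toy sketch in Python. This is not the Regan–Vollmer construction: the Ackermann function (which PA *can* prove total) merely stands in for a function whose totality PA cannot prove, and the three colors simply cycle over explosively growing blocks.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """Two-argument Ackermann function: grows faster than any primitive
    recursive function. A stand-in for the truly fast-growing function
    in the real construction."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

COLORS = ["red", "green", "blue"]

def g(n):
    """Color n by the index k of the block [ack(k, k), ack(k+1, k+1))
    containing it, cycling through the three colors.
    Only practical for n < ack(3, 3) = 61; beyond that the next block
    boundary is astronomically large (which is the whole point)."""
    k = 0
    while ack(k + 1, k + 1) <= n:
        k += 1
    return COLORS[k % 3]
```

Within any one block the coloring looks constant, and a theory that cannot prove the next block boundary exists cannot refute “almost everything has this color.”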
This kind of theorem once seemed exciting, but has proved unsatisfying, because it doesn’t really say much about complexity. So here we want to think of ideas that are more concrete about computations.
The BPP Hierarchy Problem
Call a nondeterministic Turing machine (NTM) decisive provided that on each input either at least $2/3$ of the computation paths accept or at least $2/3$ of the paths reject. We assume in these machines that all paths have the same length, with binary branching. That is, all paths have the same probability if the NTM is viewed as a randomized Turing machine.
The complexity class $\mathsf{BPTIME}[t(n)]$ contains those languages accepted by a decisive NTM that runs in time $O(t(n))$. The classic open problem is: does more time help for these complexity classes? More precisely, is

$\mathsf{BPTIME}[n] \subsetneq \mathsf{BPTIME}[n^c]$

for any $c > 1$? This is widely believed to be true, but is open even in the oracle world. See the last section of this post for a summary on this.
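As a concrete illustration of decisiveness (not part of the argument below), one can brute-force all branch strings of a short randomized machine and apply the $2/3$ rule. The predicate `paths_accept` is a hypothetical stand-in for one input's computation tree:

```python
from itertools import product

def classify(paths_accept, m):
    """Given a predicate paths_accept(r) on binary branch strings r of
    length m, report whether the machine is decisive on this input:
    at least 2/3 of the 2^m paths accept, or at least 2/3 reject."""
    total = 2 ** m
    accepting = sum(1 for bits in product("01", repeat=m)
                    if paths_accept("".join(bits)))
    if 3 * accepting >= 2 * total:
        return "accepts decisively"
    if 3 * (total - accepting) >= 2 * total:
        return "rejects decisively"
    return "indecisive"

# A machine that accepts when its branch string has a strict majority of 1s:
# with m = 4 only 5 of 16 paths accept, so it rejects decisively.
print(classify(lambda r: r.count("1") > len(r) // 2, 4))
```

A machine whose answer tracks a single coin flip, such as `lambda r: r[0] == "1"`, splits its paths evenly and is classified as indecisive.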
I have often thought about trying to prove something like the following: there is a language $L$ so that $L$ is not in $\mathsf{BPTIME}[n]$, and yet it is consistent with Peano Arithmetic (PA) to suppose that $L$ is in $\mathsf{BPTIME}[n]$. This is plausible, and stays “safely” short of proving what we really want. Still, even this seems difficult.
I would like to share an idea about how one might attack this problem. To make the explanation “fun,” let’s call an NTM program $M_e$ a Knight if it is decisive and runs in at most linear time; let’s call $M_e$ a Knave if it also runs in linear time but is only stipulated to be decisive on inputs of length $F(e)$, where $e$ is the code of the program. It can be decisive elsewhere, but must be decisive for these length-$F(e)$ inputs. Here $F$ is a computable function of the kind mentioned above, one that Peano Arithmetic cannot prove to be recursive, or even bounded by a recursive function. So a Knave can look like a Knight for many inputs.
Let $N$ be an NTM that always runs in time $n^2$. It operates as follows. On input $x$ it checks that $x$ has the form $1^e 0^m$ with $|x| = F(e)$. If not, then it rejects. If that is true, then it checks that $M_e$ is decisive on all inputs of length at most $|x|$. If that fails, then again it rejects. But if it holds, then it simulates $M_e$ on the input $x$ for $|x|^2$ steps and flips the answer.
Let $L$ be the language accepted by $N$. We claim that $L$ is not in $\mathsf{BPTIME}[n]$. This follows since we diagonalize over both Knights and Knaves: any linear-time decisive machine $M_e$ has its answer flipped on its designated input of length $F(e)$.
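The flip at the heart of this claim can be shown in a drastically simplified, deterministic sketch. Everything here is a hypothetical stand-in: plain Python predicates replace the NTMs $M_e$, the input format $1^e 0$ replaces the real encoding, and decisiveness and the function $F$ are omitted entirely.

```python
# Hypothetical enumeration of three "programs" (deterministic stand-ins
# for the machines M_e); the real argument enumerates Knights and Knaves.
programs = [
    lambda x: True,                 # M_0 accepts everything
    lambda x: False,                # M_1 rejects everything
    lambda x: len(x) % 2 == 0,      # M_2 accepts even-length inputs
]

def in_L(x):
    """Diagonal language: on the designated input 1^e 0, flip M_e's
    answer; reject all other inputs."""
    if not x.endswith("0") or "0" in x[:-1]:
        return False                # not of the form 1^e 0
    e = len(x) - 1                  # recover e from x = 1^e 0
    if e >= len(programs):
        return False
    return not programs[e](x)       # the flip that defeats M_e

# L differs from each M_e on its designated input:
for e in range(len(programs)):
    x = "1" * e + "0"
    assert in_L(x) != programs[e](x)
```

The real construction must do this flip while itself remaining decisive, which is exactly where the difficulty discussed next arises.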
What can we say about $L$ being in $\mathsf{BPTIME}[n^2]$? The time bound is not a problem. The problem is whether there is a decisive machine that accepts $L$. In order to study this, consider the formal sentence:

$(\forall m)(\exists x)[\,|x| > m \text{ and } N \text{ is not decisive on } x\,].$

What this says is that there are infinitely many inputs that make $N$ indecisive. Note that if this is false, then $N$ is decisive on all but a finite set of inputs. Since $\mathsf{BPTIME}[n^2]$, like most complexity classes, is closed under finite differences of its languages, falseness implies that $L$ is in $\mathsf{BPTIME}[n^2]$.
The critical idea is that I believe, but cannot prove, that this sentence is consistent with PA. The intuition is that, provided $F$ grows very fast, if PA could find an infinite number of bad inputs, then this would contradict the growth rate of $F$.
I must admit this seems close, but it does not yet work. Perhaps someone can fix the above argument.
Can the above argument be made to work, and yield more interesting independent statements?
Ken suggests a problem that may be a stepping-stone. Define $\mathsf{BPP}_{\mathsf{PA}}$ to be the class of languages accepted by polynomial-time decisive NTMs $M$ such that for some polynomial-time decisive NTM $M'$, Peano can prove that $M$ and $M'$ accept complementary languages. Then $\mathsf{BPP}_{\mathsf{PA}} \subseteq \mathsf{BPP}$, and by enumerating proofs, we can construct a language in $\mathsf{BPP}$ to which every language in $\mathsf{BPP}_{\mathsf{PA}}$ reduces.
This is cool: if every language in $\mathsf{BPP}$ is provably so, then $\mathsf{BPP}$ has a complete set. The question is, does $\mathsf{BPP}$ itself have a complete set, regardless? The hitch with this approach is that proving a nontrivial $M'$ accepts the complement of $M$ seems to involve proving that Peano is consistent, which Peano itself cannot do. But perhaps a coding trick can construct such an $M'$, and importantly a formal sentence representing it, which can be proved while avoiding a fatal self-reference.
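The enumeration idea can be sketched in miniature. Everything here is a hypothetical stand-in: a boolean flag replaces “a PA-proof of complementarity was found for this machine,” plain predicates replace the NTMs, and the pairing $(i, x)$ plays the role of the reduction to the universal language.

```python
# Each entry pairs an acceptor with a flag saying whether its
# complementarity certificate was found by the proof enumeration.
machines = [
    (lambda x: x.startswith("a"), True),   # certified: proof was found
    (lambda x: len(x) > 2,        False),  # no proof found: excluded
]

def complete_lang(i, x):
    """Universal language: pairs (i, x) where machine i is certified
    and accepts x. Uncertified machines contribute nothing, which is
    what keeps the universal simulation decisive in the real setting."""
    acceptor, certified = machines[i]
    return certified and acceptor(x)

def reduce_to_complete(i):
    """The trivial reduction for the i-th certified language:
    map x to the pair (i, x)."""
    return lambda x: (i, x)

# Language 0 reduces to the universal language:
r = reduce_to_complete(0)
assert complete_lang(*r("abc")) == machines[0][0]("abc")
```

The open question is whether one can get a complete set without the certification requirement, i.e., for all of $\mathsf{BPP}$ rather than just its provable part.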