
Proving That Proving Is Hard

February 23, 2010


On the computational complexity of the theory of the real numbers

Alfred Tarski is one of the great logicians of the twentieth century. He is famous for the formal definition of truth, the decidability of many important theories, and many other results—one of the strangest is his joint theorem with Stefan Banach. This theorem proves that any solid ball of one pound of gold can be cut into a finite number of pieces and reassembled into two solid balls each of one pound of gold. Pretty neat trick.

Today I want to talk about the complexity of theories. There are some classic results that I think should be more widely known, and their proofs use techniques that may have application elsewhere.

In the early days of theory there was a great deal of interest in the complexity of decision problems. I think these are natural problems: often they can be answered with strong lower bounds, and further some of them are close to practical questions. For example, the theory of the reals occurs in many actual problems—especially special cases of the theory. The theory restricted to existential sentences is quite natural and arises in many applications.

Upper Bounds on the Theory of the Reals

Tarski worked on various theories with the main goal of proving their decidability. Since Kurt Gödel’s famous result on the incompleteness of Peano Arithmetic, a goal has been to find weaker theories that are complete. Such a theory has two neat properties:

  1. the theory can either prove or disprove every statement in its language;
  2. the theory is usually decidable.

The latter needs only that the theory have a reasonable, effectively given axiom system; then it will be decidable. Given a sentence {\phi} in such a theory the decision procedure is simple:

Start enumerating proofs from the theory. If a proof of {\phi} appears, then say the sentence is valid; if a proof of {\neg \phi} appears, then say the sentence is invalid.

Note, one of {\phi} or {\neg \phi} must appear, since the theory is complete; and both cannot appear, since the theory is consistent. A complete theory is defined to always be consistent.
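
Here is a minimal sketch of this procedure in Python, assuming hypothetical helpers: enumerate_proofs() yields every proof of the theory in order of increasing length, conclusion(p) returns the sentence that proof p establishes, and negation(phi) forms {\neg \phi}.

    def decide(phi, enumerate_proofs, conclusion, negation):
        # Decide phi in a complete, consistent theory by enumerating proofs.
        for proof in enumerate_proofs():
            if conclusion(proof) == phi:
                return True    # a proof of phi appeared: phi is valid
            if conclusion(proof) == negation(phi):
                return False   # a proof of not-phi appeared: phi is invalid
        # never reached: completeness guarantees one of the two cases occurs

Completeness guarantees the loop terminates, but it gives no bound at all on how long it runs; that gap is exactly where the complexity questions below come in.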

Tarski turned away from natural numbers to real numbers. The theory he studied is the theory of real numbers. It has a language that allows any first order formula made up from {\{+,-,\times,=\}}, and its axioms are those for a field, with additional axioms for a real closed field. The last are axioms stating that every element or its negative is a square, and that every odd degree polynomial must have a root.

One of the main results of Tarski is:

Theorem: The first order theory of the real numbers is complete.

He used a method that is called quantifier elimination. A theory has quantifier elimination if every formula is equivalent to an open formula: a formula with no quantifiers. Tarski's theorem immediately proves that the theory is decidable, but it left open the actual computational complexity of its decision problem. Theorists to the rescue.
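
A standard example shows the flavor of quantifier elimination: over the reals, the statement that a quadratic has a root is equivalent to an open condition on its coefficients (I use the order relation here, which is definable from the field operations):

\displaystyle  \exists x \, (ax^{2}+bx+c=0) \;\leftrightarrow\; \big(a \neq 0 \wedge b^{2}-4ac \geq 0\big) \vee \big(a = 0 \wedge (b \neq 0 \vee c = 0)\big).

Applying such equivalences repeatedly, innermost quantifier first, reduces any sentence to one with no quantifiers at all, which can then simply be evaluated.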

Lower Bounds on the Theory of the Reals

Here are some of the basic results concerning the decidability of various theories. The problem is: given a sentence {\phi} in the theory, is the sentence provable or not?

Clearly for propositional logic the problem is NP-complete, and so its exact computational complexity is unknown: whether it can be solved in polynomial time is the P=NP question.

Predicate calculus concerns whether or not an arbitrary formula is logically valid; as long as there is at least one predicate letter {A(x,y,\dots)} of arity at least two, the theory is undecidable. The trick is that it is possible to encode the behavior of a Turing machine into a sentence using {A}, and show that the sentence is valid if and only if the Turing machine halts.

Michael Fischer and Michael Rabin in 1974 proved several beautiful exponential lower bounds on the complexity of any decision procedure for various theories. Today I will discuss their work on the theory of real numbers.

Of course no theory that includes propositional formulas can have an easy decision procedure, unless P=NP. My opinions aside, it is reasonable to expect the theory of reals to be much more powerful than NP. This expectation turns out to be true: the interplay between the ability to add and multiply real numbers and the power of the predicate calculus allows encoding tricks that cannot be done in pure propositional logic. Together these two features, operations on reals and predicate calculus, allow strong lower bounds to be proved.

Fischer and Rabin proved:

Theorem: In the theory of reals, there are constants {c>0} and {n_{0}} such that for every {n > n_{0}} there are true sentences of length {n} whose shortest proofs in the theory have length at least {2^{cn}}.

This is a very strong theorem. It is unconditional, and shows that the proofs themselves are huge. This is stronger than just proving they are hard to find, since if the proofs are large there can be no method that always proves valid theorems quickly—the proofs are just too big.

Lower Bounds on the Complexity of Theories

I thought that it might be useful to recall how such proofs are obtained. They are quite clever, in my opinion, and rely on several neat tricks. Partly I hope that these tricks could be used elsewhere in complexity theory.

How do you prove lower bounds on a theory? If the theory of the reals could define the set of integers, then the theory would be undecidable. Since the theory is decidable by the famous result of Tarski, it must be the case that the integers are undefinable. What Mike and Michael show is how to define a very large finite set of integers with a relatively small formula. This allows them to use the same techniques that proved the predicate calculus undecidable to prove their own result.

For example, imagine we could define {x \in \mathbb{N} \wedge x < 2^{2^{n}}} by {B_{n}(x)} where {B_{n}(x)} is a “small” formula. Then, the plan is to use this to simulate the halting problem for Turing machines that do not use too much tape. An easy way to see that this will work is to look at a simpler application. Suppose that {P(x)} stands for

\displaystyle  B_{n}(x) \wedge x \neq 1 \wedge \forall r \, \forall s \, \big[ B_{n}(r) \wedge B_{n}(s) \wedge rs = x \rightarrow r=1 \vee s=1 \big].

Then, clearly {P(x)} defines the primes below {2^{2^{n}}}. The sentence,

\displaystyle  \forall x \, \exists y \, \big[ B_{n}(x) \rightarrow B_{n+1}(y) \wedge y>x \wedge P(y) \wedge P(y+2) \big]

says that for any {x<2^{2^{n}}} there is a twin prime above it. This would be very exciting if it had a short proof—actually even weaker statements would be exciting. What Mike and Michael prove is that in general such sentences are hard to prove—they do not, unfortunately, show this particular sentence is hard. Oh well.
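
To see the definition in action, here is a toy semantic check in Python (my own illustration, not from their paper) that evaluates the formula for {P(x)} directly over the naturals for a tiny value of n:

    # B(n, x) plays the role of B_n(x): x is a natural number below 2^(2^n).
    def B(n, x):
        return isinstance(x, int) and 0 <= x < 2 ** (2 ** n)

    # P(n, x) transcribes the formula: B_n(x) holds, x is not 1, and every
    # factorization r*s = x with r, s below the bound forces r = 1 or s = 1.
    def P(n, x):
        bound = 2 ** (2 ** n)
        return B(n, x) and x != 1 and all(
            r == 1 or s == 1
            for r in range(bound) for s in range(bound)
            if r * s == x
        )

    print([x for x in range(16) if P(2, x)])   # prints [2, 3, 5, 7, 11, 13]

Of course the whole point of the construction is that the formula stays small while the bound {2^{2^{n}}} is astronomically large; this brute-force check only works for toy values of n.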

A Recursion Trick

Mike and Michael do construct a formula {B_{n}(x)} that defines

\displaystyle x \in \mathbb{N} \wedge x < 2^{2^{n}}.

Their construction uses recursion: {B_{n}} is defined in terms of {B_{m}} for some {m<n}. This should come as no shock, since recursion is so useful in all of computer science. However, in order to make their theorem work, they need to avoid making too many recursive calls. They use a “trick” that was discovered by Mike Fischer with Albert Meyer and independently by Volker Strassen.

Let {F(x,y)} be any formula and consider the conjunction:

\displaystyle  G = F(x_{1},y_{1}) \wedge F(x_{2},y_{2}).

It follows that {G \leftrightarrow H} where

\displaystyle  H = \forall x \forall y \big[ \left((x=x_{1} \wedge y=y_{1}) \vee (x=x_{2} \wedge y=y_{2}) \right) \rightarrow F(x,y) \big].

The reason this construction is useful is simple: it controls the size of the resulting formulas. The size of {G} is roughly twice that of {F}, but the size of {H} is only the size of {F} plus a constant. This is one of the key tricks used in their proof: it keeps the size of the formula {B_{n}} small.
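
Here is a small sketch in Python of why the trick matters; it tracks only formula sizes, with made-up constants standing in for the size of F and the constant overhead of the wrapper.

    F_SIZE = 50     # assumed size of the base formula F (hypothetical)
    OVERHEAD = 20   # assumed constant cost of the wrapper quantifiers

    def naive_size(n):
        # Each level writes out two full copies of the previous level:
        # G = F(x1, y1) AND F(x2, y2)
        size = F_SIZE
        for _ in range(n):
            size = 2 * size + OVERHEAD
        return size

    def trick_size(n):
        # Each level contains only one copy of the previous level:
        # H = forall x, y [ selector -> F(x, y) ]
        size = F_SIZE
        for _ in range(n):
            size = size + OVERHEAD
        return size

    for n in (5, 10, 20):
        print(n, naive_size(n), trick_size(n))

The naive recursion blows up like 2^n while the trick stays linear in n; this is what lets {B_{n}} have size polynomial in n even though it defines integers up to {2^{2^{n}}}.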

A Name Trick

In their construction of {B_{n}} they also use many variables. The naive method would use an exponential number of variables, and this would cause their construction to explode in size. They cleverly reuse names, and in this way avoid having to use very many distinct, and hence very long, names. It is not too far from a trick that was used, many years later, by Adi Shamir in his famous proof that {\mathsf{IP}=\mathsf{PSPACE}}.

Mike and Michael use this trick. Consider this formula,

\displaystyle  	\forall x \dots \forall x A(x) \dots \ \ \ \ \ (1)

There is a name collision between the first {x} and the second {x}: we would usually write this with different variables as

\displaystyle  \forall x \dots \forall x' A(x') \dots

to make it clearer. However, the name collision is allowed in first order logic, since the meaning of formula (1) is unambiguous: each occurrence of {x} refers to its nearest enclosing quantifier. The importance of this trick is that it allows the re-use of variable names; this re-use reduces the number of variables needed, which in turn reduces the size of the formulas {B_{n}}.
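
A toy illustration of the saving (mine, not from the paper): nest quantifiers a thousand deep, reusing one short name, versus inventing a fresh and ever-longer name at each level.

    def nested_reused(depth):
        # Nest quantifiers to the given depth, reusing the single name x.
        body = "A(x)"
        for _ in range(depth):
            body = "(forall x. " + body + ")"
        return body

    def nested_fresh(depth):
        # The same nesting, but with a fresh name x0, x1, ... at each level.
        body = "A(x0)"
        for i in range(depth):
            body = "(forall x" + str(i) + ". " + body + ")"
        return body

    print(len(nested_reused(1000)))   # every variable name is one character
    print(len(nested_fresh(1000)))    # names grow to four characters and more

With only a constant number of names in play, each name has constant length, and this helps keep the formulas {B_{n}} small.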

Lower Bounds on the Theory of the Complex Numbers?

I believe their results hold for this case too, but they do not seem to explicitly state this. Perhaps this is a well known result, but I cannot track down the reference.

Open Problems

Can we use any of the tricks here in other parts of complexity theory? The recursion trick seems to me to be quite powerful. Perhaps we can use this trick in another part of theory.

A Postscript

Congratulations to Michael Rabin for being awarded the Dan David Prize for 2010. Wonderful.

Comments
  1. anon
    February 23, 2010 2:36 pm

    How about Proving that, Proving that Proving is Hard, is Hard?

  2. anon
    February 23, 2010 3:22 pm

    The recursion trick is useful in showing that TQBF (True Quantified Boolean Formula) is PSPACE-complete.

  3. none
    February 24, 2010 4:45 am

    Just because proofs are huge doesn’t necessarily make them hard to find. Richard Borcherds pointed out (I have the impression this is something well known by logicians, but he’s the one I heard it from) that you can write a Gödel-like sentence that is true, but instead of saying “I cannot be proved”, says “I cannot be proved in less than 10**1000 symbols”. The sentence is provable since you can enumerate proofs that are big enough, and since it’s true, there’s no shorter proof. However, the big proof has low complexity–it’s not hard to find. And in fact by adding a new axiom (specifically the consistency of the system) the proof becomes trivial (since the sentence is true).

    One of these days I want to read Boolos and Jeffrey’s book “Computability and Logic” which I’ve heard has a bunch of stuff like this in it.

    • DTML
      February 24, 2010 12:39 pm

      I'm sorry, but your comment does not make sense. "10**1000 symbols" is still just a constant number of symbols. The theorem by Fischer and Rabin mentioned here provides an exponential lower bound on the size of proofs with respect to the size n of the input formula. So even when you already know the proof, simply writing down the proof will take exponential time.

      • none
        February 25, 2010 3:18 am

        Well fine, but again, “difficult to find” and “lengthy to write down” are not the same thing. Like the decimal expansion of (10**n)-1 is exponentially long in |n| but easy to compute. Also what is the proof length in the Fischer-Rabin theorem if you introduce reflection principles (iterated consistency statements) to the axioms, etc.?

    • DTML
      February 25, 2010 10:16 am

      This whole business of proof complexity is always about searching for polynomial size proofs. If there’s no polynomial size proof, then it’s the end of the story.

      But you have a point, because we might say that the proof is given implicitly, which allows us to encode long proofs. Each implicit proof is a polytime Turing Machine M(i) (i in binary), where M(i) returns the proof line at position i. For implicit proofs, you might want to have a look at the Master's thesis by Pavel Sanda, "Implicit propositional proofs". I'm not aware of any strengthened version of the Fischer-Rabin theorem that says even implicit proofs are hard to find. But maybe someone here can give us more pointers.

      • none
        February 25, 2010 7:56 pm

        I’ll try to find that thesis about implicit proofs–I’d never heard that term before. My main question is whether just because there’s no short proof in a particular formal system, doesn’t mean there isn’t one in a stronger system that’s still considered to have reasonable axioms. PA has no proof of CON(PA), but if you don’t believe PA is consistent, why do you care what you can prove in it? If you believe PA then you should be willing to use PA+CON(PA)+CON(PA+CON(PA))+…. in your proofs (“iterated reflection”). I think that means all the Pi-0-1 statements in the signature of PA become provable, but I’m not sure of that. I also seem to remember that all arithmetic statements are decidable if you iterate the consistency statement through the transfinite countable ordinals, but then you no longer have an effective theory.

  4. Peter Juhl
    February 25, 2010 4:55 am

    > This theorem proves that any solid ball of one pound of gold can be cut into a finite number of pieces and reassembled into two solid balls each of one pound of gold. Pretty neat trick.

    It actually says the form and shape are identical to the original ball; the mass would be different.

    http://en.wikipedia.org/wiki/Banach-Tarski_paradox

    • caRh
      February 28, 2010 1:00 am

      Also, since each one of the finitely many pieces is allowed to be infinite, the trick is not that neat after all.

  5. February 28, 2010 12:33 pm

    The main result described in this post is really fascinating and the techniques for shortening formulas seem cool. Quantifier elimination apparently also has applications to optimization—actual hard-core problems that one could program and solve, which I’d like to explore more sometime.

    So I feel a bit bad offering mildly critical remarks on a great post. But… wouldn't the statement of the Fischer-Rabin result be a bit clearer if the inequality between n and n_0 were written in the reverse order, so that it would be clear that n_0 is the length of the formula? And… it might seem harmless, but the use of "ball of gold" in stating the Banach-Tarski theorem bothers me. The theorem is not about balls of gold; it's a piece of mathematics. Properties of balls of gold are the province of physics, chemistry, and everyday life. The theorem probably has little relevance to the properties of balls of gold. It's possible some naive blogosphere-navigators may come across this statement and be misled. Of course, if they are so easily misled they're likely to encounter bigger problems than this one… still…

    I think questions of how mathematics relates to physics and reality are fascinating and somewhat important… I guess the use of this example, without maybe some cute parable introduced by "if gold really behaved like … which of course it doesn't…" just seems to make a little light of this relationship. Not that all mathematics should have a close relationship to reality… but I like to see the nature of this relationship, or lack of it, treated seriously enough to avoid casually turning theorems into statements that, taken literally, probably misrepresent both the physics and the mathematics.

    Sorry if that comes off sounding like a rant. I don’t mean it to. Learning about the Fischer-Rabin result from such a clearly written post was a real pleasure and I mean to look into it, and the implications for optimization, further.

  6. August 18, 2010 7:31 pm

    "Clearly for propositional logic the problem is NP-complete, and so its exact computational complexity is unknown." Don't you mean NP-hard? Because if it were complete, the complexity would be known.

