
A Limit of First Order Logic

January 17, 2010


A limit of first order logic concerning function definitions

Sam Buss is a logician who works in proof theory, and is one of the world’s experts on bounded arithmetic. He has proved some quite neat theorems—I will discuss his work in more detail in the future, since proof theory is closely related to the P=NP question.

Today I want to talk about a much simpler problem from logic. I hit upon it while trying to prove a theorem. I failed in proving my theorem—so far at least—but I discovered an interesting limitation of first order logic. This limit is not new, but I do not think it is as widely known as it should be.

I have always loved mathematical logic. I took undergraduate courses with two famous logicians, and was lucky to work with two other famous ones. Richard Vesley taught my first logic course; I took an advanced course, also as an undergraduate, from Helena Rasiowa. Later, as a graduate student I worked with Don Loveland. Finally, as a young assistant professor at Yale, I talked to Angus Macintyre a great deal about logic. He was kind enough to explain how the famous Paris-Harrington Theorem worked—among many other things.

Those I took courses from—Vesley and Rasiowa—each had their own unique style of teaching. Since my last discussion was about my current class on complexity theory, I thought I might say something about their different styles.

• Vesley read from our textbook, which was Elliott Mendelson’s famous book on logic. He would read a bit, and then we would discuss what he had just read. He often seemed to be confused, and we had to help him out. I think this was a great way to make us listen very carefully to his every word. I do not remember what I thought then, but my guess now is that he knew the material very well. My guess is that his “confused” method was just his way of teaching the material. A great course.

• Rasiowa lectured without any notes, without a textbook, and without any motivation. Yet it was one of the best courses I have ever had. She was a visiting professor at Case, where I got my undergraduate degree, and taught a course on advanced topics in logic. She had a way of presenting the material that made it hard to ask questions: there was almost no motivation, just definitions, theorems, and proofs. One time we started a new topic on proof tableaux. We had no idea why we were starting to discuss this topic. None. One brave soul raised his hand, she called on him, and he asked, “Why are we studying this?” She answered in a stern voice, “You will understand.” That ended the discussion of motivation, and we moved back to definitions, theorems, and proofs. We did eventually understand, but not for many weeks.

Let’s turn now to a problem in first order logic.

A Problem With First Order Logic

First order logic is well known to be extremely powerful, yet also to have strong limitations. For example, first order logic cannot in general define the transitive closure of a relation.

I know about these limits, but recently ran into another limitation that I was unaware of. The limit has to do with defining functions implicitly with first order statements.

The problem I ran into was simple: Consider a statement of the form,

\displaystyle  \forall x \forall y \exists a \exists b \ S(x,y,a,b).

What this says is: For each {x} and {y}, there are {a} and {b}, so that {S(x,y,a,b)} is true. For example, {S} could be the formula:

\displaystyle  (a^{2} = x) \wedge (b^{2} = y).
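
To make the quantifier pattern concrete, here is a quick brute-force check of this statement over a small finite model; the particular domain and ranges below are chosen only for illustration.

X = Y = [0, 1, 4, 9]   # x and y range over perfect squares
A = B = [0, 1, 2, 3]   # a and b range over candidate square roots

def S(x, y, a, b):
    return a * a == x and b * b == y

# forall x forall y exists a exists b: S(x, y, a, b)
holds = all(any(S(x, y, a, b) for a in A for b in B)
            for x in X for y in Y)
print(holds)  # True: every x and y in this domain has a root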

The situation I ran into was that I wanted to express the following:

  1. That {a} depends only on {x}.
  2. That {b} depends only on {y}.

Note,

\displaystyle  \forall x \exists a \forall y \exists b \ S(x,y,a,b)

makes {a} only a function of {x}, but {b} is still a function of both {x} and {y}. Of course re-ordering variables to form

\displaystyle  \forall y \exists b \forall x \exists a \ S(x,y,a,b)

makes {b} only depend on {y}, but again {a} depends on both.
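
One standard way to see these dependence patterns is to Skolemize: each existentially quantified variable becomes a function of exactly the universal variables to its left. The two orderings correspond to

\displaystyle  \forall x \exists a \forall y \exists b \ S(x,y,a,b) \iff \exists f \exists g\, \forall x \forall y \ S(x,y,f(x),g(x,y))

and

\displaystyle  \forall y \exists b \forall x \exists a \ S(x,y,a,b) \iff \exists f \exists g\, \forall x \forall y \ S(x,y,f(x,y),g(y)),

which makes the unwanted extra argument explicit.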

I tried for a while to express what I wanted in various ways by some more complex tricks. None worked. No matter what I did one of {a} and {b} wound up depending on both {x} and {y}.

I finally got to the point where I could actually prove that there is no way to do this. By then, I guessed that this was known. Perhaps well known, but not well known to me.

Sam Buss to the rescue. I asked him about the issue, and he answered immediately:

This is a well-known construction, known as a Henkin quantifier or branching quantifier (after Leon Henkin). It is usually denoted as a rectangular block:

\displaystyle  \begin{matrix} \forall x \exists a \\ \forall y \exists b \\ \end{matrix} S(x,y,a,b) \ \ \ \ \ (1)

The meaning of (1) is just what I needed: the notation means that for each {x} and {y} there are {a} and {b} so that {S(x,y,a,b)} is true. Further, the value of {a} only depends on {x}, and the value of {b} only depends on {y}.
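
In Skolem-function form, (1) asserts precisely

\displaystyle  \exists f \exists g\, \forall x \forall y \ S(x,y,f(x),g(y)),

an existential second-order statement in which {f} takes only {x} and {g} takes only {y}.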

Sam added: “Statement (1) is known not to be expressible in first-order logic, so you are correct that it cannot be encoded in standard first-order logic.”

If there had been a way to encode this I would have had a neat result. Oh well.

Open Problems

I still am interested in situations where you can get around this problem. I believe that for certain statements {S} it may be possible to define what I wanted in first order logic. I conjecture that there should be some property that will allow the branching quantifier to be defined for such statements.

21 Comments
  1. Dave
    January 17, 2010 11:35 am

    If a depends only on x, and b depends only on y, then doesn’t this mean that you can write S(x,y,a,b) as the conjunction of two predicates S_1(x,a) and S_2(y,b)? Then you could write the formula as ((forall x)(exists a) S_1(x,a)) and ((forall y)(exists b) S_2(y,b))

    What is an example of a predicate S in which all four variables would need to be examined to determine its truth value, yet the predicate could not be broken up into S_1 and S_2 in this way, even though the variables depend on each other in this limited way?

    • rjlipton
      January 17, 2010 11:39 am

      I had a statement of the form S(x,y,f(x),g(y)) that involved all four in a complex way. I knew that f and g existed, but I could not use them in the sentence. Thus, I needed the first order sentence to encode, in a sense, that f and g existed.

    • Robert
      January 18, 2010 5:49 pm

      What about ((forall x)(forall y)(exists a)(exists b) S(x,y,a,b)) and ((forall y)(forall x)(exists b)(exists a) S(x,y,a,b))? I understand that it has been stated that it cannot work. Seems I have something to learn.

      • rjlipton
        January 18, 2010 6:19 pm

        In both cases x,y come before a,b. Thus, a,b each can depend on x,y.

  2. January 17, 2010 3:03 pm

    Could you write the properties that f and g satisfy using a first-order formula?

    Let us assume that this is the case, using a first-order formula phi(f,g) [f and g are function symbols]. It may be the case that there are several f,g satisfying phi(f,g), but perhaps you only need to worry about one particular f,g. In that case, it may be enough to consider the first-order formula
    (forall x,y) S(x,y,f(x),g(y)) and phi(f,g)

  3. Lance
    January 17, 2010 3:18 pm

    I believe you can express 3-CNF SAT in this model.
    Take S(x,y,f(x),g(y)) where f(x) describes the truth values of the variables of the xth clause, g(y) the truth value of the yth variable, and S checks that f(x) makes the xth clause true and is consistent with g(y). (A brute-force sketch follows below.)

    This is the basic reason that MIP is stronger than IP.
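
    A minimal brute-force sketch of this encoding in Python, on a made-up three-clause instance (the clause data and names here are for illustration only):

    from itertools import product

    # Made-up 3-CNF instance: each clause lists three literals (variable, sign).
    clauses = [
        [(0, True), (1, True), (2, False)],
        [(0, False), (2, True), (3, True)],
        [(1, False), (2, False), (3, False)],
    ]
    num_vars = 4

    def S(x, y, u, v):
        # u plays the role of f(x): truth values for the variables of clause x.
        # v plays the role of g(y): the truth value of variable y.
        clause = clauses[x]
        satisfied = any(u[i] == want for i, (_, want) in enumerate(clause))
        consistent = all(u[i] == v for i, (var, _) in enumerate(clause) if var == y)
        return satisfied and consistent

    # Henkin semantics: exists f (one triple per clause) and exists g (one bit
    # per variable) such that forall x forall y S(x, y, f(x), g(y)).
    henkin = any(
        all(S(x, y, f[x], g[y])
            for x in range(len(clauses)) for y in range(num_vars))
        for f in product(product([False, True], repeat=3), repeat=len(clauses))
        for g in product([False, True], repeat=num_vars)
    )
    print(henkin)  # True exactly when the instance is satisfiable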

  4. Noam Zeilberger
    January 17, 2010 6:40 pm

    You might be interested in Samson Abramsky’s paper, Socially Responsive, Environmentally Friendly Logic, where he talks about branching quantifiers from the point of view of game semantics.

  5. January 17, 2010 9:23 pm

    Not quite related to your problem, but here is another interesting account of what a logician (and a Chemistry Professor at MIT) happened to witness in their teaching careers.

    The logician is the well-known Hartley Rogers, and the incident that happened is here.

    The results of the same prank with the Chemistry Professor, August Witt, with an unexpected twist, are also interesting and are here.

  6. none
    January 18, 2010 6:29 am

    The classic example of a non-firstorderizable statement is “X is a finite set”.

    Terry Tao has a blog post on sentences needing branching quantifiers:
    http://terrytao.wordpress.com/2007/08/27/printer-friendly-css-and-nonfirstorderizability/

    See the comment of 28 August, 2007 at 9:26 am by Andy D about when statements with branching quantifiers can be expressed in FOL.

    If I understand it right, there is a theorem of Väänänen saying that branching quantifiers give the full power of second-order logic. See Feferman (2006), cited in the Wikipedia article “independence-friendly logic”, for details.

  7. Kristoffer
    January 18, 2010 1:33 pm

    I really only know enough of first order logic to realize that this probably is an amazingly stupid question, but I’m really curious as to why this can’t be solved by introducing a pair of parentheses? I figure that it may not be allowed, or that it perhaps might not alter the meaning of the statement…

    \displaystyle \forall x \exists a( \forall y \exists b \ S(x,y,a,b) )

    • Kristoffer
      January 18, 2010 1:36 pm

      Edit:

      I tried copy and paste of the formula, but that obviously didn’t work well…

      I guess the best I can do is forall x, exists a( forall y, exists b( S( x, y, a, b ) ) )

  8. rjlipton
    January 18, 2010 1:52 pm

    The () do not limit scope; they just help make things readable. The rule is this: an exists y may use all the variables quantified before it to its left. This linear ordering is the reason that first order cannot do the branching quantifier.

    Your question is a quite nice one—not amazingly … It would be nice if one could control the scope. The problem is like the toothpaste tube: if you allow a non-linear ordering, then you would lose some of the nice properties that first order theories have. Squeeze here and something happens elsewhere.

  9. Jesse Alama
    January 18, 2010 7:19 pm

    Yours is a good example motivating not just Henkin branching quantifiers, but also independence-friendly logic. The basic notation in IF logic, an extension of first-order logic, is the slash notation: x/t is intended to mean that x is independent of t. Your example, in IF notation, would be: ∀x ∃a ∀y (∃b/x) S(x,y,a,b). There is some debate within the logic community about just how innocent an extension of FOL IF logic really is.

    • rjlipton
      January 18, 2010 11:19 pm

      Very interesting. Is it known just how powerful the extension is? I have heard a bit about it.

      • Jesse Alama
        January 18, 2010 11:35 pm

        I’m not sure there’s a quick answer. IF logic is clearly more expressive than FO (and pays a price for that by giving up, e.g., recursive axiomatizability), but it seems intuitively clear that it is not as expressive as full SOL. The inventor of IF logic, Jaakko Hintikka, argues that, to some extent, IF logic is a kind of first-order logic, whereas Sol Feferman argues that it is more akin to full SOL, and indeed there are mathematical results that support this conclusion (specifically, comparing the complexity of the validity problem for IF logic to that of SOL). The Stanford Encyclopedia of Philosophy entry on IF logic summarizes these issues (which are both mathematical and philosophical).

      • none
        January 21, 2010 6:29 pm

        I haven’t read the article carefully enough to digest it, but according to Solomon Feferman, if I understand properly, IF-logic is equivalent to second-order logic. The article is here: http://math.stanford.edu/~feferman/papers/hintikka_iia.pdf

      • Pietro Galliani
        February 18, 2013 9:10 am

        Late reply (I just reached this blog from a Wikipedia link), but since this has not been answered yet… as proved in Enderton 1970 and, independently, in Walkoe 1970, Branching Quantifier Logic (and, therefore, IF Logic) is as expressive as *existential* second-order logic – that is, as the fragment of SOL in which second-order variables are quantified only existentially.

  10. Anonymous
    January 23, 2010 12:37 pm

    A very surprising limit of First Order Logic is that it allows one to prove unbelievable assertions, like the following one:

    All men have 7 legs.

    Proof: Every man has a number of legs equal to 2 + the number of legs of nobody. Nobody has 5 legs. Therefore every man has 2 + 5 = 7 legs. :-)

  11. February 4, 2011 12:22 pm

    Does the formula “forall x exists a forall y exists b S(x,y,a,b)” assert a couple of functional dependencies a = f(x,y), b = g(x,y) in the predicate S(x,y,a,b)? The standard way to express such a dependency is “forall x_1 x_2 y_1 y_2 a b: S(x_1,y_1,a,b) & S(x_2,y_2,a,b) -> x_1=x_2 & y_1=y_2”.

Trackbacks

  1. Infinite Objects And Deep Proofs « Gödel’s Lost Letter and P=NP
  2. First order expression for functional dependency | Q&A System
