
Logic In Action

July 28, 2011


Exponential-time algorithms running on large data

Moshe Vardi is perhaps the world’s foremost expert on the application of logic to practical computations. His paper “Reasoning about Infinite Computations” with Pierre Wolper won the 2000 Gödel Prize. Infinite computations may not seem practical, but the magic is in the finite machines that perform them, and how requirements expressed in various logics translate into machine operations.

Today we wish to describe this logic-automata correspondence, and the larger issue of what distinguishes a “practical” algorithm.

Vardi gave three lectures on July 11–13 as this year's H.S.M. Coxeter Lectures at the Fields Institute of the University of Toronto. They were also part of the Institute's Summer Thematic Program on the Mathematics of Constraint Satisfaction.

I (Ken) had the pleasure of attending them while playing in the 2011 Canadian Open Chess Championship July 9–17. I wrote a tournament report for the prominent Susan Polgar chess blog. Besides reuniting with some old chess friends as described there, I spent other time at the university meeting visitors whom I hadn't seen in some years.

Audio and slides of the lectures are available here (scroll down). A running theme was: once we know a task is "elementary," can we get it down to single-exponential time, so that we can actually run it on large data? That may seem quixotic given our understanding of the class {\mathsf{P}} as standing for "Practical," but life goes on, and so do processor cycles.

The Lectures

The titles of his three lectures hint at the humorous but earnest spirit in which they were delivered:

  1. And Logic Begat Computer Science: When Giants Roamed the Earth
  2. From Philosophical to Industrial Logics
  3. Logic, Automata, Games, and Algorithms.

The first lecture was also called, “From Aristotle to Pentium.” Although much of this story is familiar, Vardi supplied historical details deeper than the beaten track, and unexpected angles of approach.

For instance, we all know that computers are based on Boolean logic, 0 and 1, and that circuits use logic gates. Thus it may seem strange to hear that in 1901, mathematical logic as a field seemed to have the least of any branch of mathematics to offer humanity. As Cosma Shalizi begins this note quoted by Vardi, the highlight of its previous 30-or-so years had been only the development of infinite cardinal numbers. Moreover the next 30 years were to seem a total car wreck:

  • Bertrand Russell finds a gaping hole in Gottlob Frege’s formal set theory.
  • Russell’s own programme to formalize mathematics in a strong logic requires 379 dense pages before it can derive {1 + 1 = 2}, and only with the assistance of Alfred North Whitehead, who later denounces it as what this Wikipedia page terms “an overarching ontological mistake.”
  • Though David Hilbert passionately championed the same programme, it is due to be blind-sided by Kurt Gödel’s incompleteness theorems, Hilbert himself recognizing this as defeat. [Update 4/1/13: not sure about this last sentence]

Indeed, as he noted, the lives of all the luminaries up through Gödel, Alan Turing, and John von Neumann would be punctuated by tragedy. Yet within 50 more years, the “defeated” programme was up and running industrially-vital applications all over the world. Our take on the “Classification Project” at the beginning of Vardi’s second lecture is that the key to success for logic was seeing weakness rather than strength as a virtue.

Is it True? Can it be True?

Every system of logic provides a formal language in which to express certain assertions and properties. The language can be tailored to the kind of structure one wishes to analyze and the particular relations one needs to formulate. Here are three kinds of structures:

{\bullet} Strings: Let {X_c(i)} denote that the character in position {i} is {c}. Then

\displaystyle  \phi = (\forall i,k): (X_1(i) \wedge X_1(k) \wedge k > i) \longrightarrow (\exists j)(i < j \wedge j < k \wedge X_0(j))

is a logical assertion about (particular) strings. It says that between every two {1}‘s there is at least one {0}.

{\bullet} Graphs: Let {E(u,v)} stand for the edge relation. Then

\displaystyle  \psi = (\forall u,w): E(u,w) \longrightarrow (\exists v)(E(u,v) \wedge E(v,w))

is an assertion about graphs. It says that every edge is involved in a triangle.

{\bullet} Time Series: Now let {X(c)} denote that {c} happens at a given time; we are suppressing the index {i} of the time. Then

\displaystyle  \eta = \mathsf{G}(X(1) \longrightarrow \mathsf{N} X(0) \wedge \mathsf{F} X(1))

is an assertion of temporal logic—in fact, LTL for linear temporal logic. It says that whenever a {1} happens, a {0} happens next and a {1} happens sometime in the future. Here the logic suppresses the index {i}.
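
Both of the first two assertions can be checked on a concrete finite structure just by reading the quantifiers off literally. Here is a minimal brute-force sketch in Python (illustrative code of our own, not anything from the lectures), anticipating the "model checking" question defined next:

    def check_phi(s):
        """phi over strings: between every two 1's there is at least one 0."""
        n = len(s)
        for i in range(n):
            for k in range(n):
                if s[i] == '1' and s[k] == '1' and k > i:
                    if not any(s[j] == '0' for j in range(i + 1, k)):
                        return False
        return True

    def check_psi(edges):
        """psi over graphs: every edge (u, w) has some v with (u, v) and (v, w) also edges.
        Read literally, E is a directed edge relation given as a set of pairs."""
        vertices = {x for e in edges for x in e}
        for (u, w) in edges:
            if not any((u, v) in edges and (v, w) in edges for v in vertices):
                return False
        return True

    print(check_phi("101001"))   # True  -- every pair of 1's has a 0 strictly between them
    print(check_phi("0110"))     # False -- the two 1's are adjacent
    triangle = {(a, b) for a in "xyz" for b in "xyz" if a != b}
    print(check_psi(triangle))   # True  -- the complete (symmetric) triangle on x, y, z

The nested loops mirror the quantifier structure exactly; on a finite structure, that is all the "Model Checking" question below asks.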

Given an assertion {\phi} and a structure {\mathsf{S}}, we can ask the following questions:

  • Model Checking: Is {\phi} true for the structure {\mathsf{S}}?
  • Satisfiability: Can there be a structure {\mathsf{S'}} in which {\phi} is true? (Maybe where {\mathsf{S'}} is a small change from {\mathsf{S}}?)

We can translate an ordinary Boolean formula {\phi_0} into a formula {\phi} over binary strings so that {\phi} is satisfiable if and only if {\phi_0} belongs to {\mathsf{SAT}}. Thus logical satisfiability is “usually” {\mathsf{NP}}-hard. Model-checking in the Boolean case just means seeing whether a given truth assignment evaluates to true, but in the presence of quantifiers it can become harder.

When {\phi} encodes conditions for a skyscraper or nuclear reactor to be managing its environment, or for a processor's gate array to give correct results, these are more than 64-million-dollar questions. If you weaken the logics to just what you need to express these conditions, you have a better chance of answering the questions.

Mechanizing Formulas

For every formula {\phi}, we can define the language {L_{\phi}} of structures in which {\phi} is true. In the case of Strings this is an ordinary formal language. For other structures we can identify the language with some means of encoding the structures. Thus our questions become the familiar ones of whether a given string belongs to the language, or whether the language is (non-)empty.

If {\phi} has free variables, we can incorporate them into the structures. For example, consider strings {x} of some length {n} over some alphabet {\Sigma}, and an index variable {i} that ranges over places {0\dots n-1}. We can make “double-decker” strings {x'} over alphabet {\Sigma \times \{0,1\}} in which the {1} components indicate value(s) of {i}.

For a free first-order variable {i}, we can impose the restriction that exactly one double-character in {x'} has {1} on the second deck. For representing a set {I} of indices, we need no such restriction. This is intuitively why second-order logics that are monadic, meaning they allow quantifying over relations—but only unary relations which denote sets—play a distinguished role.
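
As a concrete illustration of the "double-decker" encoding, here is a small sketch (the function names are our own):

    def deck_encode(x, marked):
        """Pair each character of x with a 0/1 'second deck' bit marking positions in `marked`."""
        return [(c, 1 if pos in marked else 0) for pos, c in enumerate(x)]

    def is_first_order(xp):
        """A free first-order variable i requires exactly one 1 on the second deck;
        a monadic second-order (set) variable I needs no such restriction."""
        return sum(b for _, b in xp) == 1

    xp = deck_encode("abba", {2})          # the variable i points at position 2
    print(xp)                              # [('a', 0), ('b', 0), ('b', 1), ('a', 0)]
    print(is_first_order(xp))              # True
    print(deck_encode("abba", {0, 3}))     # a set I = {0, 3}: [('a', 1), ('b', 0), ('b', 0), ('a', 1)]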

What began to be realized in the late 1950’s is that by restricting the logics, the languages {L_\phi} have forms that we already know how to handle via machines. Over strings, monadic second-order logic gives exactly the regular languages. First-order logics give important subclasses of the regular languages. Thus finite automata, NFA’s as well as DFA’s, become the machines to consider.

Over other structures, you have to consider other kinds of machines. And when you're considering open-ended time intervals or sequences of events, it is actually an impediment to have a symbol "{n}" standing for a finite length of string or size of structure. For mathematical reasons as well, it became surprisingly natural and necessary to consider machines with infinite words as input, and an acceptance criterion that involves entering a "good" state infinitely often.
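
Deciding nonemptiness for such machines (Büchi automata) comes down to a plain graph search: find an accepting state that is reachable from the start and lies on a cycle, so it can be visited infinitely often. A minimal sketch, with a dictionary representation of our own choosing:

    def buchi_nonempty(delta, initial, accepting):
        """A Buchi automaton accepts some infinite word iff an accepting state is
        reachable from the initial state and lies on a cycle (a 'lasso' run)."""
        def reachable(src):
            seen, stack = set(), [src]
            while stack:
                q = stack.pop()
                for _, q2 in delta.get(q, []):
                    if q2 not in seen:
                        seen.add(q2)
                        stack.append(q2)
            return seen
        from_init = reachable(initial) | {initial}
        return any(f in from_init and f in reachable(f) for f in accepting)

    # Toy automaton: read a 1, then loop forever on 0's; q1 is the accepting state.
    delta = {"q0": [("1", "q1")], "q1": [("0", "q1")]}
    print(buchi_nonempty(delta, "q0", {"q1"}))   # True -- it accepts 1 0 0 0 ...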

The problem finally became, how to build the right machine {M} from the formula {\phi}?

It’s Elementary…Or Not

The useful fact about formulas is that they are built up inductively from smaller formulas. For instance, we may have {\phi = (\exists k)\psi}. Suppose we have a deterministic machine {D} for {L_{\psi}}.

Now {D} deals with strings that have a “deck” for values of {k}. {L_{\phi}}, however, involves strings without this deck. Instead, a machine {M} for {L_{\phi}} has to be built to “guess” possible values on this deck, before employing {D}. Thus we are led to {M} being nondeterministic. In the case of finite automata on finite strings we can use the sets-of-states construction to convert {M} to a DFA {D'}. However:

  • For automata on infinite words, the sets-of-states construction doesn’t work, and something more involved is needed;
  • For a universal quantifier you need complementation or something else, which is even more involved; and
  • Even so, the step can blow up the state space by an exponential.
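
For finite automata on finite strings, the sets-of-states construction just mentioned is the textbook subset construction. A quick sketch (the dictionary encoding is our own), which already shows where an exponential can come from:

    from itertools import chain

    def subset_construction(nfa_delta, nfa_init, nfa_accept):
        """Determinize an NFA: each DFA state is a set of NFA states.
        Only reachable sets are built, but in the worst case there are 2^n of them."""
        start = frozenset(nfa_init)
        dfa_delta, dfa_accept, todo, seen = {}, set(), [start], {start}
        while todo:
            S = todo.pop()
            if S & nfa_accept:
                dfa_accept.add(S)
            for a in {sym for q in S for sym in nfa_delta.get(q, {})}:
                T = frozenset(chain.from_iterable(nfa_delta.get(q, {}).get(a, ()) for q in S))
                dfa_delta[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        return start, dfa_delta, dfa_accept

    # NFA guessing that the second-to-last letter is a 1 (3 states).
    nfa = {0: {"0": {0}, "1": {0, 1}}, 1: {"0": {2}, "1": {2}}}
    start, dd, acc = subset_construction(nfa, {0}, {2})
    print(len({start} | set(dd.values())))   # 4 reachable DFA states

Guessing that the {1} is {k} letters from the end forces all {2^k} subsets to appear, which is the blowup in the last bullet above.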

If we have {m} quantifiers, we may iterate {n\rightarrow 2^n} for {m}-many times. If {m} depends on the size of the formula, then the blowup may be an unbounded stack of exponentiated {2}‘s.

This means a function that fails to be elementary. There is a formal notion of elementary in logic. A neat theorem shows that the languages definable in elementary logic are exactly those decided in time {2}-to-the-{2}-to-the-{2}…-to-the-{2^n} power, for some finite stack of {2}‘s. That is, a language is elementary if and only if it is decidable in elementary time.
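
For a sense of scale, here is the tower as a tiny Python sketch ("height" counts the {2}'s in the stack):

    def tower(height, n):
        """2-to-the-2-...-to-the-(2^n), with `height` many 2's in the stack."""
        value = 2 ** n
        for _ in range(height - 1):
            value = 2 ** value
        return value

    print(tower(1, 4))              # 2^4       = 16
    print(tower(2, 4))              # 2^(2^4)   = 65536
    print(len(str(tower(3, 4))))    # 2^(2^(2^4)) has 19729 decimal digits

Elementary means some fixed height works for every input; the worry above is that the height itself may grow with the formula.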

Non-elementary time is "hopeless"—and for some logic problems, non-elementary lower bounds show that we cannot do better. But for others there is a smarter way to cut down the state space when managing quantifiers in a formula, yielding an elementary-time algorithm. Still, there is generally some exponential left in the running time, so is this still hopeless?

Running…

Again we are telling the story in shortcutted form, and hence cutting off some of Vardi’s carefully-told points. We invite you to follow his slides, and the audio if you have time—for many deft bits of humor in the lectures drew appreciation from the packed room at the Fields Institute. But let us make an analogy:

  • In polynomial complexity, the real import of the Cobham-Edmonds thesis is that once some polynomial-time algorithm is found, the gangway is opened for combinatorial mavens to set about reducing the exponent, and (hopefully) the other constants on the running time. Once you know the exponent is finite, you can grab it and work on it.
  • In computational logic, the same role is played by the stack of exponentials. Once you know it is finite, you can work on reducing it.

In one example, a double-exponential time construction was reduced in a series of eight papers to {(0.76n)^n}, where the "{0.76}" is mysteriously tight in the worst case.

Another example showed how choice of logic can be a game-changer. Linear temporal logic (LTL) has the same expressive power as first-order logic with natural-number variables. The satisfiability problem for first-order logic extended to include computation traces—in some way analogous to our “decks” example above—has a non-elementary lower bound. The corresponding satisfiability problem for LTL, however, belongs to single-exponential time, and is in fact {\mathsf{PSPACE}}-complete.
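
To keep the semantics of {\eta} concrete: infinite behaviors are usually presented as ultimately periodic "lasso" words {uv^{\omega}}, and for a single fixed formula the check can be done by hand. A toy sketch of just that one formula, in which we read "sometime in the future" strictly, i.e. at a strictly later position (this is nothing like how industrial model checkers work):

    def char_at(u, v, i):
        """Character at position i of the infinite word u v v v ... (v nonempty)."""
        return u[i] if i < len(u) else v[(i - len(u)) % len(v)]

    def eta_holds(u, v):
        """eta = G( X(1) -> ( N X(0) and F X(1) ) ) on the lasso word u v^omega.
        By periodicity it suffices to check the G at positions 0 .. |u|+|v|-1."""
        assert v, "the loop part must be nonempty"
        for i in range(len(u) + len(v)):
            if char_at(u, v, i) == "1":
                if char_at(u, v, i + 1) != "0":               # N X(0) fails
                    return False
                if "1" not in v and "1" not in u[i + 1:]:     # no later 1: F X(1) fails
                    return False
        return True

    print(eta_holds("", "10"))    # True  -- the word 101010... satisfies eta
    print(eta_holds("", "110"))   # False -- a 1 is followed by a 1, not a 0
    print(eta_holds("1", "0"))    # False -- 1000...: no 1 ever happens again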

Something about LTL enables building branching tableaus that help control the blowup in eventual machines. The third lecture detailed how the translation from LTL to machines on infinite words involves another game-changing element, namely games themselves, to create alternating machines that wind up being more succinct than what you’d get by insisting on nondeterministic, let alone deterministic, machines.

Isn’t {\mathsf{PSPACE}}-complete supposed to be bad news? Apparently not. The second lecture ended with a “Reduction to Practice” for LTL, with three implementations from 1983 to 1995. The third lecture ended by reiterating the importance of

Games/alternation as a high-level algorithmic construct,

with the

Bottom line: Alternation is a key algorithmic construct in automated reasoning — used in industrial tools.

Open Problems

Why do these exponential-time algorithms run tolerably well in practice? How do you know when they aren’t doing the job well or quickly enough?

I wound up asking just one very particular question of this kind, about shortcuts in hash-table representations of big machine state spaces that sometimes compromise the logic itself so that some erroneous conditions (such as bugs in gate arrays) go undetected, before having to dash downtown for my chess game.

If computational logic seems to have retreated again as a field, what new boon may it have to offer? This was Vardi’s first general question.

[Update 4/1/13: I am unable to trace a source I recall as saying Hilbert graciously congratulated Gödel—more the opposite appears to be the case, with Hilbert decrying “excessive finitism” in Gödel’s setup.]

11 Comments
  1. July 28, 2011 12:41 pm

    Although I say “we” in some places per my recollection of some of Dick’s views, actually he has been taking the kind of no-Net retreat some other fellow bloggers are known to enjoy, so all the opinions—and any errors—are mine.

  2. July 28, 2011 3:32 pm

    i wonder what the median complexity of these kinds of problems is.

    even better, i wonder what the “fattest” chunk of the problem space is that can be handled in polynomial time, for some of these industrial problems. it should be sparse, but if it’s riddled with typical instances, that’s interesting.

    s.

  3. Javaid Aslam permalink
    July 29, 2011 8:11 pm

    Just as a reminder of how limited the promise of polynomial-time algorithms is:
    A time complexity of O(n^15) amounts to about 10^13 years, more than a trillion years, for n = 100.

    This estimate is based on the fastest CPU (uni-processor) available today: the IBM z196 at 5.6 GHz, with 92 instructions/sec, where 1 year = O(10^7) seconds.
    The smallest step of the sequential algorithm is assumed to take one clock cycle.

  4. anonymous permalink
    July 31, 2011 8:25 pm

    What’s the best reference for learning about basic logic (preferably from a computer science perspective)? This material was never covered in any of my undergraduate or graduate courses.

    • July 31, 2011 11:04 pm

      I like Mathematical Logic by Ebbinghaus, Flum, and Thomas—can't believe it's 27 years old already; compared to Boolos & Jeffrey it felt modern. It gets into Gödel's Theorems right away in Chapter 1, and emphasizes formal structures more than logical syntax.

  5. August 4, 2011 5:35 pm

    I heard the first talk by Moshe, "And Logic Begat Computer Science: When Giants Roamed the Earth," twice (and maybe more), at Yale and at the Hebrew University. Great talk. I think Moshe mentioned to me that it was videotaped at UCLA and can be found somewhere, but I did not find it. I also heard Moshe's second talk, "From Philosophical to Industrial Logics," at HU, and it was very interesting and added to my curiosity about the relations between philosophical logic, mathematical logic and computer science. (See this MO question http://mathoverflow.net/questions/62401/logic-in-mathematics-and-philosophy .) I think we also both share the opinion that controversies and skepticism (in our scientific fields, and science in general) are welcome, but I am also a bit skeptical about this opinion as well (and perhaps Moshe too).

  6. Neel Krishnaswami permalink
    August 11, 2011 10:46 am

    For instance, we all know that computers are based on Boolean logic, 0 and 1, and that circuits use logic gates. Thus it may seem strange to hear that in 1901, mathematical logic as a field seemed to have the least of any branch of mathematics to offer humanity.

    This claim seems rather implausible. To pick just a handful of examples, in the thirty years prior to 1901, Frege had invented predicate logic and quantifiers, Cantor had invented set theory, Dedekind had constructed the real numbers, and Hilbert had invented the axiomatic method and axiomatized geometry. 19th century mathematics reads strangely to us because these guys utterly remade the language of mathematics in their image.

