
Finite State Automata, Binary Decision Diagrams and Presburger Arithmetic

March 7, 2009


On the power of finite state automata


Mike Sipser is a complexity theorist who has done important research into questions surrounding the P=NP question. He is on my short list of people who might one day make a breakthrough on P=NP, partly because of the “rumor” that he does “think” about this famous problem, and partly because he is a very strong problem solver. (An observation: if you do not think about a problem ever, it is really hard to solve it.) At conferences people talk about P=NP, and for years the answer to the question “but who is really thinking about P=NP?” was Mike. I hope that is still so.

Mike is also the author of one of the best books on classic complexity theory: well written, with nice coverage and some terrific extras that make for a great textbook. Actually, one of his exercises is today’s subject, and I feel that I had something to do with that exercise. I have had the pleasure of knowing Mike since he was a graduate student at Berkeley. I believe I can say that I have taught some of what Mike knows–he took my complexity class at Berkeley. I certainly did not teach him all he knows, but I will take credit for teaching him something.

When I say finite automata you probably do not think “powerful”. Everything about them is easy. Their languages are closed under just about any reasonable operation. Most questions about them are decidable, and they cannot recognize even as simple a language as {\{a^{n}b^{n} \mid n \ge 0\}}. But they are very powerful when used properly. In future posts I will expand on this theme and show how understanding them better could be the key to solving open questions in theory.

Today I want to show that we can use finite state automata to prove that Presburger arithmetic is decidable. Presburger arithmetic is the first-order theory of the natural numbers with addition: it consists of all the true sentences about the natural numbers that involve only addition. Mojzesz Presburger, a student of the famous logician Alfred Tarski, showed in his master’s thesis of 1928 that there was a decision procedure for his theory. While Presburger’s proof was not that difficult (after all, it used a method pioneered by his advisor), the result is of great significance today. It is believed by many that he should have gotten a Ph.D. for it, but Tarski apparently did not think it was enough.

Presburger Arithmetic

Presburger arithmetic is, as I said already, the first-order theory of addition. Sentences in this theory can be placed into the following form by standard methods of logic:

\displaystyle  \forall x \exists y \forall z \dots A(x,y,z,\dots)

where the quantifiers range over the natural numbers. The formula {A(x,y,z,\dots)} can be assumed to consist of Boolean combinations of equations; note that the formula has no quantifiers. Each equation can involve variables and only the operation of addition. Thus,

\displaystyle  x + y = z

is a legal equation. For example, the theory can define the order relation {x<y} by

\displaystyle  \exists z (x + z + 1 = y).

The following true sentence says that there are an infinite number of natural numbers:

\displaystyle  \forall x \exists y \exists z (x + z + 1 = y).

It really says that for any {x} there is a {y} that is strictly larger than {x}. The problem that Presburger solved is: given a sentence from his theory, determine whether or not it is true. He did this by using what is called quantifier elimination; we will present a completely different proof that uses finite state automata.

The Decision Procedure

The proof depends on the following critical insight: finite state automata can do addition. What exactly do I mean by this? Let’s try to write down a language that represents “addition”: a natural approach is the following,

\displaystyle  L = \{ x\#y\#z \mid x + y = z \}

where {x,y,z} are {0,1} strings that encode numbers in the usual way. Thus, {x_{0}x_{1}\dots x_{n}} stands for the number

\displaystyle  x_{0} + 2x_{1} + \dots + 2^{n}x_{n}.

The trouble with this language is that no finite state automaton can recognize it. The issue, of course, is that the automaton cannot “remember” the value of {x} exactly after it reads the symbol {\#}.

The key is to select a clever representation so that a finite automaton can recognize the language. To do this let’s consider the following language:

\displaystyle  LL = \{ \textsf {merge}(x,y,z) \mid x + y = z \}

where {\textsf {merge}(x,y,z)} is defined to be the string

\displaystyle x_{0}y_{0}z_{0}*x_{1}y_{1}z_{1}* \dots * x_{n}y_{n}z_{n}.

Note, if one string is longer than another then just use {0}'s to pad them out to the same length. Then,

Lemma 1 The language {LL} is accepted by a finite state automaton.

The reason this can be done is that the automaton now only has to remember the value of the “carry” from the last triple of bits, which is bounded independent of the length of the numbers. Using clever representations is exactly what Binary Decision Diagrams (BDDs), invented by Randy Bryant, are all about. Bryant’s work has revolutionized the world of digital logic, and I will definitely devote a post to this in the future.
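To make the carry idea concrete, here is a minimal sketch in Python (my own illustration, not part of the original argument). It treats {\textsf{merge}(x,y,z)} as a sequence of bit triples {(x_{i}, y_{i}, z_{i})}, least significant bit first, and the only thing it remembers while scanning is the carry, so it is in effect a small finite state automaton.

```python
def accepts_addition(triples):
    """Return True iff x + y = z, where the i-th triple is (x_i, y_i, z_i),
    bits listed least significant first."""
    carry = 0  # the only information the automaton must remember
    for a, b, c in triples:
        s = a + b + carry
        if s % 2 != c:        # claimed sum bit does not match: dead state
            return False
        carry = s // 2        # carry into the next bit position
    return carry == 0         # accept only if nothing is left to carry

# Example: 5 + 3 = 8, written LSB-first and padded to a common length.
x = [1, 0, 1, 0]   # 5
y = [1, 1, 0, 0]   # 3
z = [0, 0, 0, 1]   # 8
print(accepts_addition(zip(x, y, z)))  # True
```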

However, let’s get back to the proof that Presburger arithmetic is decidable. As usual, call a set recognized by some finite state automaton regular. Unless stated otherwise, all of our finite state automata will be deterministic. Consider, again, the example sentence {\Pi}:

\displaystyle  \forall x \exists y \exists z (x + z + 1 = y).

I will show you how to check this sentence. The general case will follow in the same manner.

The plan is this. Let {A} be the following language:

\displaystyle  A = \{ \textsf {merge}(x,y) \mid \exists z: x + z + 1 = y \}.

We will show that it is regular. Then, we will consider the language

\displaystyle  B = \{ \textsf {merge}(x) \mid \exists y \exists z: x + z + 1 = y \}.

We will again show that this language is regular. Finally, consider the language

\displaystyle  C = \{ \textsf {merge}(x) \mid \forall y \forall z: x + z + 1 \neq y \}

this will also be shown to be regular. Then we will check whether this set is empty. The key point is that the sentence {\Pi} is true exactly when {C} is empty. Note, the last step is easy, since deciding whether a finite state automaton accepts any string at all can be done in polynomial time (in the number of states).
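As a small aside, the emptiness test is just graph reachability. Here is a sketch (mine, with illustrative names delta, start, accepting, alphabet for the automaton’s data): the language is nonempty exactly when some accepting state is reachable from the start state.

```python
from collections import deque

def is_empty(delta, start, accepting, alphabet):
    """Emptiness test for an automaton given as a transition map
    delta[(state, symbol)] = state.  Breadth-first search of the state
    graph; the language is empty iff no accepting state is reachable
    from the start state.  Linear in the number of transitions, hence
    polynomial in the number of states."""
    seen, queue = {start}, deque([start])
    while queue:
        q = queue.popleft()
        if q in accepting:
            return False          # reachable accepting state: nonempty
        for sym in alphabet:
            nxt = delta.get((q, sym))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```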

It remains only to show why {A,B,C} are all regular. Why is {A} regular? The answer is that we make critical use of the power of nondeterminism. We know that the following language is regular:

\displaystyle  AA = \{ \textsf {merge}(x,y,z) \mid x + z + 1 = y \}.

Let the deterministic automaton that accepts this language be {MM}. We will construct a new nondeterministic finite state automaton {M} for {A} by simulating {MM}. Note, {M} gets as input {\textsf{merge}(x,y)}, but {MM} gets as input {\textsf{merge}(x,y,z)}. So how does {M} simulate {MM}, since it does not have {z}? The answer is that {M} just guesses values for the missing bits of {z}. It accepts provided the guesses are correct; otherwise, it rejects. The machine {M} shows that {A} is regular. Of course that means that the language {A} is also accepted by some deterministic finite state automaton.
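In automaton terms, {M} is obtained from {MM} by projecting away the {z} track. A rough sketch of that step, again with illustrative names rather than anything from the original: for each state and each input pair {(a,b)}, the new machine may move to any state that {MM} could reach on some triple {(a,b,c)}.

```python
def project_last_track(delta, states):
    """Given a DFA transition map delta[(state, (a, b, c))] = state over bit
    triples, build an NFA over bit pairs that guesses the missing third bit:
    nfa[(state, (a, b))] is the set of states reachable for some choice of c."""
    nfa = {}
    for q in states:
        for a in (0, 1):
            for b in (0, 1):
                nfa[(q, (a, b))] = {delta[(q, (a, b, c))] for c in (0, 1)}
    return nfa
```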

The same trick handles the other set {B}. The set {C} is just the complement of {B} and is even easier: recall that regular sets are closed under complement. Note, complementation requires that the automaton we complement is deterministic. This is a critical point: the machine {M} that we get for the set {A} is nondeterministic, so we must convert it to a deterministic finite state automaton to keep our “induction” going. This follows by the famous power set construction. However, the price that we pay is that for each quantifier we can increase the number of states exponentially.
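For completeness, here is a bare-bones version of that conversion, again only a sketch under the same illustrative encoding. The states of the deterministic machine are sets of states of the nondeterministic one, which is exactly where the exponential blow-up per quantifier comes from; once the machine is deterministic, complementing it just flips which states are accepting.

```python
def determinize(nfa, start, accepting, alphabet):
    """Power set construction: the states of the resulting DFA are
    frozensets of NFA states; there may be exponentially many of them."""
    init = frozenset([start])
    dfa, seen, worklist = {}, {init}, [init]
    while worklist:
        S = worklist.pop()
        for sym in alphabet:
            T = frozenset(r for q in S for r in nfa.get((q, sym), ()))
            dfa[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa, init, dfa_accepting, seen

def complement_accepting(all_dfa_states, dfa_accepting):
    """Complementing a deterministic automaton: swap accept and reject."""
    return all_dfa_states - dfa_accepting
```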

Open Questions

The decision procedure we outlined here has both good and bad features. On the positive side, I think it is simple and shows the power of finite automata in a way that you may not have known. Also, it can prove more. For example, suppose we add to the theory of addition the predicate “{x} is a power of {2}”. Then the new theory is still decidable. The point is that checking whether or not a binary number is a power of two can easily be done by a finite automaton. Indeed, we can add any predicate that can be checked by a finite automaton in our representation. A more complex example is “disjointness”: two numbers {x} and {y} are disjoint if for each {i}, either {x_{i}} or {y_{i}} is {0}.
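For instance, here is a sketch (mine, not from the post) of the “power of {2}” check just mentioned: reading the bits of {x}, an automaton only needs three states to verify that exactly one bit equals {1}.

```python
def is_power_of_two(bits):
    """Three states suffice: 'no 1 seen yet', 'exactly one 1 seen', 'dead'."""
    state = "none"
    for b in bits:
        if b == 1:
            state = "one" if state == "none" else "dead"
    return state == "one"

print(is_power_of_two([0, 0, 0, 1]))  # 8 in LSB-first binary -> True
print(is_power_of_two([1, 0, 1, 0]))  # 5 -> False
```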

On the negative side, the procedure we have outlined is not very efficient. Not even close. It uses the power set construction over and over. Thus, the running time is a stack of two’s: {2} to the {2} to {\dots}, a height that depends on the number of quantifiers. This is a much worse running time than the best known upper bound. I mention in passing that Mike Fischer and Michael Rabin (1974) proved that {2^{2^{\Omega(n)}}} is a lower bound on the running time. I will get back to this beautiful and important result in the future.

There are two types of interesting open questions. The first concerns extensions to Presburger arithmetic. What happens if we add to Presburger a predicate like “{x} is a prime”? Does this still have a decidable theory? The conventional wisdom is no, since the theory could then easily encode the twin-prime conjecture: are there an infinite number of {n} so that {n} and {n+2} are both prime? I believe that it remains open whether adding this predicate makes Presburger undecidable. See this for a fuller discussion. There are other extensions that would also be interesting. It is known that if one adds too powerful a new predicate, then Presburger becomes undecidable.

The second class of questions concerns sub-theories of Presburger arithmetic. In modern compiler theory there are often questions about linear expressions that one needs to check. Often they can be expressed in Presburger arithmetic, but they use only a small part of the theory. Thus, an open question is to find fragments of Presburger arithmetic that have efficient decision procedures. This is an active area, but I will not say any more about it today.

Comments
  1. Jeremy H
    March 7, 2009 11:31 am

    I think language C is wrong as currently written. The existential quantifiers should be universal quantifiers.

    • rjlipton
      March 8, 2009 8:53 am

      Will check right away…rjlipton

    • rjlipton
      March 8, 2009 9:12 am

      Jeremy,

      Thanks for finding the “typo”. I really appreciate your seeing the problem. Again thanks.

  2. March 7, 2009 1:05 pm

    Dear Dick, there is a problem related to Presburger arithmetic which is worth mentioning. Consider first integer programming. This is a notorious NP-complete problem. But in bounded dimension it was proved to be polynomial by Lenstra. Add one quantifier. Now the problem includes the well-known Sylvester coins problem. Again in bounded dimension the problem is in P; this was proved by Ravi Kannan. Now add another quantifier. Is the problem still in P in bounded dimension with two quantifiers? For any bounded number of quantifiers? This is not known.

    • rjlipton
      March 8, 2009 8:53 am

      Very neat addition. So two is already unknown…rjlipton

    • August 31, 2017 11:04 am

      Well, a new paper “Short Presburger arithmetic is hard” by Danny Nguyen and Igor Pak settles this problem, in the less expected direction. https://arxiv.org/abs/1708.08179

  3. DC++
    March 10, 2009 12:48 am

    Nice…..

    • rjlipton
      March 10, 2009 6:56 am

      You win as the shortest comment yet. Once at a POPL conference a talk went on for 30 minutes. My question was one word: “pointers”. Their answer was long, since their methods could not handle pointers…

      Thanks for kind comment.

      rjlipton

  4. March 21, 2009 1:01 pm

    LOL

  5. March 25, 2009 1:30 pm

    Another interesting open question is the complexity of the existential fragment of Presburger arithmetic extended with the binary “divides” predicate. Lipshitz showed decidability in 1976. (Moreover the tiniest extensions are all undecidable.)

    This theory is also related to automata (of sorts), as follows: Consider a one-counter machine in which, in addition to the usual increment, decrement, and zero-test operations, one has transitions of the form “+u”, “+v”, “-w”, for various integer-valued parameters u, v, w. The “parametric reachability problem” for such machines is whether there exist values for the parameters so that, in the resulting concrete one-counter machine, a target final control state is reachable from the initial control state (with initial counter value 0), assuming the counter must always remain non-negative. This problem is in fact equivalent to the satisfiability problem for existential Presburger with divisibility and hence decidable
    (see http://web.comlab.ox.ac.uk/people/Joel.Ouaknine/publications/onecounter09.pdf ),
    but of unknown complexity.

  6. Stavros
    July 10, 2009 12:49 pm

    The singular of “automata” is “automaton”.

  8. May 23, 2010 6:56 am

    One can get a better bound for the automata-theoretic approach by bounding the size of the automata in the construction. See “Bounds on the automata size for Presburger arithmetic”,
    http://portal.acm.org/citation.cfm?doid=1342991.1342995

  9. June 26, 2012 7:04 pm

    Regarding Moshe’s comment: recent work shows that one can do more than bound the size of the automata. An automaton for a Presburger formula can be constructed in 3EXPTIME, see http://drops.dagstuhl.de/opus/volltexte/2012/3419/pdf/35.pdf

    Dick, instead of adding “x is a power of 2”, add the binary predicate div_2(x,y) := “x is a power of 2 and x divides y”. There is an automaton for this predicate. The interesting fact is that (modulo coding) every regular relation is first-order definable in (N,+,div_2). This is the Büchi-Elgot-Trakhtenbrot theorem in a FO setting instead of in the standard WMSO setting. In this sense FO(N,+,div_2) is universal for the regular relations.

    There is a systematic study of structures (such as (\N,+)) for which the domain and atomic relations are regular (modulo coding). For an overview see http://www.math.ucla.edu/~asl/bsl/1402/1402-001.ps

  10. Martin Michael Musatov
    January 23, 2015 8:38 pm

    Could you just take y + 1 – z regarding MM?

Trackbacks

  1. On The Intersection of Finite Automata « Gödel’s Lost Letter and P=NP
  2. An excellent symposium about the theory of computation | Formal Languages and Automata Theory Course
  3. Counting Is Sometimes Easy | Gödel's Lost Letter and P=NP
  4. Danny Nguyen and Igor Pak: Presburger Arithmetic Problem Solved! | Combinatorics and more
