# On The Intersection of Finite Automata

*Can we beat the classic emptiness test for the intersection of finite automata?*

George Karakostas and Anastasios Viglas are two excellent theorists, who both received their PhDs from the Princeton computer science department. I was the PhD advisor of Viglas, and later had the pleasure of working with both of them on several projects.

Today I want to talk about finite state automata, and an open question concerning them. The question is very annoying, since we have no idea how to improve a classic algorithm. This algorithm is decades old, but it is still the best known. There seems to be no hope of showing that the algorithm is optimal, yet there also seems to be no idea of how to beat it. Oh well, we have seen this before.

It was a great pleasure to work with George and Anastasios. I enjoyed our work together very much, and they have continued working together over the years—on a wide variety of topics.

I do not really have any great stories about them, or even one of them. One story that does come to mind is about steaks. George likes his steak cooked well done, as do I. I know that is probably not the proper way to order a steak, but that is the way I like them. But, when George says “well-done” he means “well-done.” I always loved the way George would explain this to our server: They would ask George, “how would you like your steak cooked?” And George would say, “well done—actually burn it.” This usually caused some discussion with the server, and George had to be insistent that he really wanted it “burnt.” I like someone who is definite.

Let’s turn from steaks to finite automata, and to a question about them that is wide open.

**Intersection of Finite State Automata**

The power of deterministic finite state automata (FSA) is one of the great paradoxes, in my opinion, of computer science. FSA are just about the simplest possible computational device that you can imagine. They have finite storage, they read their input once, and they are deterministic—what could be simpler? FSA are closed under just about any operation; almost any question concerning their behavior is easy to decide. They seem “trivial.”

On the other hand, FSA are more powerful than you might guess—this is the paradox. While FSA cannot even count, they can do magic. I have already pointed out that FSA play a major role in the modern analysis of hardware—BDD’s, they can be used to prove the decidability of complex logical theories—Presburger’s Arithmetic, and they are used every day by millions to do searches. Every time you search for a pattern in a text string, especially when you use “don’t cares”, the search is done via FSA technology. Simple, yet powerful.
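As a toy illustration of the last point (my own sketch, not the code of any actual search tool), a pattern with don’t-care symbols can be matched by simulating the state set of the obvious nondeterministic automaton; the function name and the `?` wildcard convention are my choices:

```python
def find_with_dont_cares(pattern, text):
    """Return the start index of the first match of pattern in text,
    where '?' in the pattern matches any single character; -1 if none.

    The set `states` holds the lengths of pattern prefixes matched so
    far -- exactly the active states of the matching NFA."""
    m = len(pattern)
    states = {0}
    for i, c in enumerate(text):
        # advance every active state whose next pattern symbol matches c
        states = {s + 1 for s in states if s < m and pattern[s] in ('?', c)}
        states.add(0)  # a match can also start at the next position
        if m in states:
            return i - m + 1  # full pattern matched, ending at i
    return -1
```

For example, `find_with_dont_cares("a?c", "xabcy")` returns `1`, since `abc` matches `a?c` starting at index 1.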

I want to raise a question about FSA that seems to be interesting, yet is unyielding to our attempts to understand it. In order to understand the question we first need some simple definitions. If $M$ is a FSA, let $L(M)$ be the language that it accepts, and let $|M|$ be the number of states of $M$.

One of the classic results in the theory of FSA is this:

**Theorem:** If $A$ and $B$ are FSA’s, then there is an FSA with at most $|A| \cdot |B|$ states that accepts $L(A) \cap L(B)$.

The proof of this uses the classic cartesian product method: one creates a machine that has pairs of states, and simulates both $A$ and $B$ at the same time. This is why the simulation uses $|A| \cdot |B|$ states. A corollary of this result is the following:

**Corollary:** If $A$ and $B$ are FSA’s, then there is an algorithm that runs in time $O(|A| \cdot |B|)$ and decides whether or not $L(A) \cap L(B)$ is empty.
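A minimal sketch of the construction behind the theorem follows; the DFA encoding as a tuple (states, transition dict, start, accepting set) is my own convention, not the paper’s notation:

```python
from itertools import product

def product_dfa(A, B, alphabet):
    """Build the product DFA, with |A| * |B| states, accepting L(A) ∩ L(B)."""
    Qa, da, qa0, Fa = A
    Qb, db, qb0, Fb = B
    Q = set(product(Qa, Qb))                       # all pairs of states
    delta = {((qa, qb), c): (da[(qa, c)], db[(qb, c)])
             for (qa, qb) in Q for c in alphabet}  # run A and B in lockstep
    F = {q for q in Q if q[0] in Fa and q[1] in Fb}
    return Q, delta, (qa0, qb0), F

def accepts(dfa, word):
    Q, delta, q, F = dfa
    for c in word:
        q = delta[(q, c)]
    return q in F

# toy instance: length ≡ 0 (mod 2) and length ≡ 0 (mod 3) over one letter
A = ({0, 1}, {(i, 'a'): (i + 1) % 2 for i in (0, 1)}, 0, {0})
B = ({0, 1, 2}, {(i, 'a'): (i + 1) % 3 for i in (0, 1, 2)}, 0, {0})
C = product_dfa(A, B, {'a'})
```

Here `C` has $2 \cdot 3 = 6$ states and accepts exactly the words whose length is divisible by 6.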

George, Anastasios, and I noticed that there is no reason to believe that this cannot be improved. I guess that is a double negative; what I mean is: let’s find an algorithm that solves the emptiness problem for $L(A) \cap L(B)$ in time linear in the description of the two FSA’s. Note, there are FSA’s $A$ and $B$ so that the smallest number of states of a machine $C$ with

$$L(C) = L(A) \cap L(B)$$

is approximately the product of the number of states of $A$ and $B$. Thus, a faster algorithm must avoid the step of building the product automaton. But, this is precisely the issue I have raised before: the algorithm for determining emptiness is free to do anything it wants.
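The standard example is a pair of coprime length counters; here is a toy check (my own, with $p = 2$ and $q = 3$) that the intersection of “length $\equiv 0 \bmod p$” and “length $\equiv 0 \bmod q$” has no DFA smaller than $pq$ states, via the Myhill–Nerode argument that all $pq$ residues are pairwise distinguishable:

```python
p, q = 2, 3
P = p * q
in_L = lambda n: n % P == 0  # membership of a^n in the intersection

# prefixes a^i and a^j with i != j (mod pq) are distinguished by the
# extension a^(pq - i), so any DFA needs at least pq states
for i in range(P):
    for j in range(i + 1, P):
        ext = (P - i) % P
        assert in_L(i + ext) != in_L(j + ext), (i, j)
```

Taking $p$ and $q$ to be large coprime numbers gives the quadratic blow-up mentioned above.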

**Our Results**

We worked hard on trying to beat the standard approach and could not improve on it. So eventually we wrote a paper that, in the true spirit of theory, assumed that there was a faster-than-product algorithm for the intersection of FSA. Then, we looked at what the consequences of such an algorithm would be. Our main discovery was that if there existed faster methods for the intersection of automata, then cool things would happen to many long-standing open problems.

You could of course take this two ways: you could use these results to say it is unlikely that any algorithm can beat the product algorithm, or you could use these results to say how wonderful a better algorithm would be. It’s your choice. I like to be positive, so I tend to hope that there is a faster algorithm.

Suppose that $M_1, \dots, M_k$ are FSA, each with at most $n$ states. We asked what the consequences would be if there were an algorithm that ran in time much less than $n^k$ and determined whether or not there was an input they all accepted. That is, it determined whether or not the following language is empty or not:

$$L(M_1) \cap L(M_2) \cap \cdots \cap L(M_k).$$

Call this the *product emptiness problem* (PEP). Note, if $k$ is allowed to be arbitrary, then the answer is known. Dexter Kozen proved long ago that in this case the problem is $\mathsf{PSPACE}$-complete. Thus, we have to restrict ourselves to either fixed $k$, or allow algorithms that run in time

$$n^{o(k)},$$

for example.
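For reference, the baseline product algorithm for PEP is just breadth-first search over $k$-tuples of states, touching at most $n^k$ tuples; the encoding below (each DFA as a tuple of start state, transition dict, accepting set) is my own sketch:

```python
from collections import deque

def pep_nonempty(dfas, alphabet):
    """Decide whether the k given DFAs accept a common word, by BFS
    over k-tuples of states (at most n^k tuples are ever visited)."""
    start = tuple(q0 for q0, _, _ in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        qs = queue.popleft()
        if all(q in F for q, (_, _, F) in zip(qs, dfas)):
            return True  # some word is accepted by all k automata
        for c in alphabet:
            nxt = tuple(d.get((q, c)) for q, (_, d, _) in zip(qs, dfas))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def mod_counter(m, r):
    """DFA over {'a'} accepting the words whose length ≡ r (mod m)."""
    return (0, {(i, 'a'): (i + 1) % m for i in range(m)}, {r})
```

For instance, `pep_nonempty([mod_counter(2, 1), mod_counter(3, 2)], {'a'})` is true (the word $a^5$ works), while two counters demanding contradictory parities give an empty intersection. The open question is whether the $n^k$ behavior of this search can be beaten.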

Here are a couple of results from our paper that should help give you a flavor of the power of assuming that PEP could be solved more efficiently than the product algorithm. As usual please check out the paper for the details. All the results assume that PEP is easy, which means that for any fixed $k$ there is an algorithm for solving PEP with $k$ FSA’s, where each has at most $n$ states, in time $n^{\epsilon k}$, for any $\epsilon > 0$.

**Theorem:** If PEP is easy, then there is an algorithm for the knapsack problem that runs in time $2^{\epsilon n}$, for any $\epsilon > 0$.

**Theorem:** If PEP is easy, then there is an algorithm for the factoring problem that runs in time $2^{\epsilon n}$, for any $\epsilon > 0$.

I have discussed the following earlier.

**Theorem:** If PEP is easy, then

$$\mathsf{NTIME}(n) \subseteq \mathsf{DSPACE}(n^{\epsilon}),$$

for any $\epsilon > 0$.

**A Sample Proof**

None of the proofs of these theorems are too difficult, but the most complex is the last result. This result uses the block-respecting method. Here is a sample proof that should demonstrate how the assumption is used. I will outline the proof of the factoring theorem: actually proving a $2^{\epsilon n}$ type bound.

Let $M_i$ be the FSA that given the input $(x, y)$ checks whether or not

$$x \cdot y \equiv N \bmod p_i$$

and neither $x$ nor $y$ is $1$. We assume that $x$ and $y$ are $n$-bit numbers and $N$ is a fixed $n$-bit number that we wish to factor. Then, select three distinct primes $p_1, p_2, p_3$ each about size $2^{2n/3}$. Look at the PEP,

$$L(M_1) \cap L(M_2) \cap L(M_3).$$

There are two cases. If there are no solutions, then $N$ must be a prime. If there are solutions, then suppose that $(x, y)$ is one. Then,

$$x \cdot y \equiv N \bmod p_1 p_2 p_3.$$

But, $x \cdot y < p_1 p_2 p_3$, so that this is an equality, and

$$x \cdot y = N.$$

The above is a primality test. We make it into a factoring algorithm by adding a simple test: we insist that the first automaton also test that $x$ lies in some range. This requires only a polynomial factor increase in the number of states of $M_1$. But, we can then do a binary search to find a factor of $N$.
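The arithmetic behind the two cases can be checked on a toy instance; the numbers below are mine, chosen only so that $p_1 p_2 p_3$ exceeds any possible $x \cdot y$:

```python
# Toy check of the CRT step: if x*y ≡ N mod each p_i and p1*p2*p3 > x*y,
# then x*y = N exactly, so any solution is a genuine factorization of N.
N, n = 77, 7                  # N is an n-bit number to "factor"
primes = [29, 31, 37]         # three primes, each about 2^(2n/3)
assert 29 * 31 * 37 > (2**n - 1) ** 2   # product exceeds any x*y

hits = [(x, y) for x in range(2, 2**n) for y in range(2, 2**n)
        if all(x * y % p == N % p for p in primes)]

# every solution of the three congruences really is a factorization
assert hits and all(x * y == N for x, y in hits)
print(sorted(hits))  # -> [(7, 11), (11, 7)]
```

The brute-force search here is just to verify the claim; in the proof the search is replaced by the assumed fast PEP algorithm, and the automata have roughly $p_i$-many states, which is where the $2^{\epsilon n}$ bound comes from.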

**Open Problems**

Can one solve the PEP in time faster than the product algorithm? Even for two automata this seems like a neat problem. I find it hard to believe that the best one can do is to build the product machine, and then solve the emptiness problem.

I do have one idea on a possible method to improve the PEP in the case of two automata. Suppose that $A$ and $B$ are two FSA. Then, construct the non-deterministic pushdown automaton that operates as follows on the input $x$: The machine reads the input and pushes the input onto the pushdown store; at the same time the machine checks whether or not the input is accepted by $A$. Then, the machine pops the input off the pushdown and checks whether or not the value coming off the pushdown (the reversal of $x$) is accepted by an FSA for the reversal of $L(B)$.

This can be done by a non-deterministic pushdown automaton that uses at most $O(|A| + |B|)$ states. The emptiness problem for pushdown machines is in polynomial time. If it could be done in less than quadratic time, then at least for two automata we would be able to beat the product algorithm. An open problem is: can this be made to work?

Isn’t the problem of non-emptiness for the intersection of a fixed number of automata in NLOGSPACE? One guesses a word in the intersection and checks on the fly that it is accepted by all the automata, which only requires remembering one state for each automaton at any point.

On the emptiness of a pushdown automaton, the algorithm that comes to my mind is to construct an equivalent grammar of at worst cubic size and test it for emptiness in linear time.

An equivalent construction for the emptiness of $L(A)\cap L(B)$ would be to intersect $L(A)\cdot L(B)^R$ with the set of even palindromes on the same alphabet $\Sigma$, which can be described by a context-free grammar of size linear in $|\Sigma|$. But again the usual algorithm for the emptiness of the intersection of the language of a context-free grammar and a regular language is at worst cubic in the size of the automaton for the regular language.

Some more insights might be found in the inter-reductions between context-free parsing and boolean matrix multiplication (Valiant, JCSS 10, 1975, and Lee, JACM 49(1), 2002).

Another point could be that the pda is very special. It does only one turn. Perhaps that can help?

The point is to do it faster than n^k. You are right, that is easy. But how easy?

I liked this post! I had no idea that people still did work with FSA’s anymore!

A question on the primality test. Is there a reason you chose three primes of length 2n/3, as opposed to some other number of primes whose product is large enough?

Sure. Great point. Just wanted to make the point. By selecting more and smaller primes you would get even faster time bounds.

It’s nice to see people revisiting questions in automata theory. Automata theory used to be one of the core areas of theoretical computer science. In fact, the “A” in “SIGACT” used to abbreviate “Automata”. Over the last 15 years, however, the STOC/FOCS community lost interest in automata, and now the “A” in “SIGACT” abbreviates “Algorithms”. During this period, however, automata theory has become highly relevant to the theory and practice of computer-aided verification, as well as database theory. Some questions in automata theory, open since 1960, have been settled only this year!

A possible way to reach the expected solution is to let these two languages be mirrored languages, that is, L(A) = L(B), and LEARN the relations between them as memberships, such as, LITERAL(b) is the member of WORD(a), WORD(b) is the member of CLAUSE(a), and so on. Then it permits checking relations in kn time. The concept was introduced by Wang, Boolos, Parsons and others as iterative set theory.

This problem can be very nicely treated in the framework of parameterized complexity. First, it is not very difficult to give a parameterized reduction from k-Clique to k-PEP: one way to do this is to let the alphabet be the set of vertices, each FSA accepts words of length 2 times (k choose 2) describing the endpoints of the (k choose 2) edges, every FSA checks if the endpoints of each edge are really adjacent, and the i-th FSA checks if the k-1 edges incident to the i-th vertex of the clique all use the same vertex for the i-th vertex. This reduction shows that k-PEP is W[1]-hard and hence not fixed-parameter tractable (that is, no f(k)n^O(1) algorithm) unless FPT=W[1].

Second, we observe that this reduction transforms the parameter linearly (or more precisely, does not change it at all, since the number of FSA’s is exactly the size of the clique we are looking for). Thus a f(k)n^o(k) algorithm for k-PEP implies that there exists a f(k)n^o(k) algorithm for k-Clique. Now we can use the results of Chen et al. “Linear FPT reductions and computational lower bounds” (STOC 2004) saying that if there is a f(k)n^o(k) algorithm for k-Clique, then the Exponential Time Hypothesis fails: n-variable 3SAT can be solved in time 2^o(n).

Note that this is actually a stronger statement than what appears in your paper: not only ruling out the possibility of algorithms with running time n^o(k), but also algorithms with running time, say, 2^2^2^2^k n^o(k).

I find it quite interesting that your paper essentially invented the same line of reasoning as the Chen et al. paper to obtain tight lower bounds in the exponent.

Thanks. Two issues: first, the algorithms can be random; how does that work with the fixed viewpoint? Second, it is still possible that such algorithms exist…no?

Sorry, I was sloppy: by “ruling out the possibility” I meant “does not exist unless something unlikely happens”. So if algorithms with the given running time exist, then the Exponential Time Hypothesis fails. The same argument shows also that if there is a randomized algorithm with the given running time, then the randomized version of ETH fails: there is a randomized 2^{o(n)} time algorithm for n-variable 3SAT.

This is a really awesome comment! I guess that I somehow missed it back eight years ago. 🙂

Where can I find Dexter Kozen’s proof that PEP is PSPACE-complete?

Got it. D. Kozen. Lower bounds for natural proof systems. In Proc. 18th Symp. on the Foundations of Computer Science, pages 254–266, 1977.

What is known about how the product construction compares to the complement of the union of complements construction?