# An EXPSPACE Lower Bound

*Vector addition reachability, counters, and nondeterministic tricks*

Albert Meyer, in the first decade of his career, was one of the top theorists in the world, yet you may not know his work in detail. The cause, most likely, is that years ago he closed down his research program in complexity theory, moving into the area of the logic and semantics of programming languages. While he was a theorist, he helped invent the polynomial time hierarchy, proved lower bounds on real-time Turing machines, showed huge lower bounds on various logical theories, proved the sharpest hierarchy theorem known, and more. Much of this was joint work with some of the best of the best: Mike Fischer, Mike Paterson, Charlie Rackoff, Larry Stockmeyer, and others. I plan future posts on several of their results.

I met Albert when he spent a year at Carnegie Mellon: he taught me basic recursion theory. I remember in class one day he mentioned the infamous Collatz problem on the mapping that sends $n$ to $3n+1$ when $n$ is odd and to $n/2$ when $n$ is even. I spent several weeks trying to prove something about this problem with no success at all. Albert listened to my attempts, but I got nowhere. Albert later moved to MIT, where he has been ever since.

Today’s post is about some joint work that we did on a decision problem called the *Vector Addition Reachability Problem*. This problem has an interesting solution, an interesting history, and moreover uses *techniques similar to those that were used years later to solve the LBA problem*. That’s right. I only realized this as I began to write up this post. It is remarkable that “tricks” from very different areas can be used to solve completely unrelated problems. Too bad we did not see the connection years ago. Oh well.

I have to tell you a bit about Albert in order to explain the history of this work. Just a bit. Albert was always intense, he was quick, he was full of ideas, and he was sometimes brisk. Yet, I loved talking with him, listening to his talks, and working with him. The theory community, and I personally, were diminished when he left the field.

Once at a conference I started to tell Albert about my latest result. Right after I had stated the theorem and started to explain the proof, he interrupted me and said “be quiet”. Clearly, he was trying to see if he could prove my result in real-time. Now the result was nice, not earth shattering, but I was pretty miffed that he would try to do this. So I complained to Albert that I should get credit for thinking of the problem, not just for the solution. He thought about that, and agreed. Along with his quick mind, he had a number of sayings that were very helpful. My favorite, which I still use, is his rule of 2:

Prove the theorem for 2 and then let 2 go to infinity.

The point, of course, is that in mathematics 2 is often special. But if you can prove the result for the case of 2, then you are often golden. Like any rule there are exceptions, but it is a good one to know.

**Vector Addition Systems**

For a vector $x$ from $\mathbb{Z}^n$, we will say that $x \ge 0$, or that $x$ is *non-negative*, provided $x_i \ge 0$ for all $i$.

An $n$-dimensional *Vector Addition System* (VAS) consists of a non-negative start vector $s$ and a finite set $A = \{v_1, \dots, v_m\}$ of arbitrary vectors, where $s$ and the $v_i$ are all in $\mathbb{Z}^n$. We denote it by $\langle s, A \rangle$.

The key notion is that of *reachable vectors*: the set of reachable vectors of a VAS is the set of the following vectors. The start vector $s$ is reachable. If a vector $x$ is reachable, then so is $x + v$ for any $v$ in $A$, provided $x + v$ is a non-negative vector. Clearly, only non-negative vectors can be reachable.
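As a concrete illustration (my own toy code, not part of the original development), here is a minimal Python sketch of this definition. The full reachable set may be infinite, so the sketch caps the number of rounds of additions:

```python
def reachable(start, moves, max_steps):
    """Enumerate the reachable vectors of the VAS <start, moves>,
    exploring at most max_steps rounds of additions (the full
    reachable set may be infinite, so we must cap the search)."""
    frontier = {start}
    seen = {start}
    for _ in range(max_steps):
        nxt = set()
        for x in frontier:
            for v in moves:
                y = tuple(a + b for a, b in zip(x, v))
                # a move is legal only if the result stays non-negative
                if all(c >= 0 for c in y) and y not in seen:
                    nxt.add(y)
        seen |= nxt
        frontier = nxt
    return seen

# a 2-dimensional example: start (1, 0), moves (-1, 2) and (1, -1)
print(sorted(reachable((1, 0), [(-1, 2), (1, -1)], 4)))
```

Note that the sketch only shows the definition in action; it says nothing about deciding reachability, which is the whole difficulty.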

A VAS can be viewed as a strange kind of counter machine. It has no finite state control, or if you like its finite state control has one state. The state of the machine is always a non-negative vector; initially the state is the special vector $s$. At any time the machine is in some state described by a non-negative vector. Suppose it is in the state $x$. Then, the machine makes a nondeterministic move, that is the machine nondeterministically selects some vector $v$ from $A$ and tries to add it to $x$. If the vector $x + v$ is non-negative, then the machine moves to that state; otherwise, the step is illegal.

Without the non-negative constraint the set of vectors reachable is easy to describe. Suppose, as an example, that $A = \{v_1, v_2\}$. Then, the set of reachable vectors is:

$$\{\, s + a v_1 + b v_2 \,\},$$

where $a$ and $b$ are non-negative natural numbers. This is a very simple geometric set, and it is easy to determine which vectors are in this set. At worst it can be written as an integer programming problem; thus, it is clearly decidable. The key to the VAS notion of reachable is that the *order* in which vectors from $A$ are added to the current state is critical. This is what makes the notion of reachable difficult to understand.
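To make the order-dependence concrete, here is a small hypothetical Python check (names and example are mine): the same two moves applied to the same start vector succeed in one order and die in the other, even though the final sum is identical:

```python
def legal_path(start, order):
    """Apply the moves in the given order; return the final vector,
    or None if some intermediate state goes negative."""
    x = list(start)
    for v in order:
        x = [a + b for a, b in zip(x, v)]
        if any(c < 0 for c in x):
            return None  # went negative: this ordering is illegal
    return tuple(x)

s = (0, 1)
v1, v2 = (1, -1), (-1, 1)
print(legal_path(s, [v1, v2]))  # (0, 1): this order is legal
print(legal_path(s, [v2, v1]))  # None: the first step hits (-1, 2)
```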

Reachable captures a computational process, not just a geometric notion. This is why the behavior of VAS’s is complex, and partly why the study of them is exciting. There are several additional reasons why we studied them. First, they were essentially equivalent to a popular model called Petri nets. Thus, questions about Petri nets could be converted into questions about VAS’s.

Second, they were close to several other decision problems. Third, there is a close connection between VAS’s and abelian semigroups–more on this later. Finally, the reason they were really interesting to me is that very strong theorists–Meyer, Paterson, Rabin–had worked on various aspects of their structure. For me, a junior faculty member, solving a problem related to their work seemed like a good idea.

**The Result**

The obvious questions about VAS’s are: is the set of reachable vectors finite? And can the system reach a particular vector? The first question is relatively easy to solve, although the best known running time is Ackermann-like–the upper bound is astronomical. The second problem is called the Vector Addition Reachability Problem: given a VAS $\langle s, A \rangle$ and a vector $x$, is $x$ in the set of reachable vectors? I was eventually able to prove:

Theorem 1: The reachability problem for Vector Addition Systems is exponential-space hard.

A curiosity is that I proved this before the following was proved:

Theorem 2: The reachability problem for Vector Addition Systems is decidable.

Clearly, lower bounds are a bit more interesting if the problem is decidable. My theorem would have been wiped out, if someone had been able to prove that the reachability problem was undecidable. I had thought hard about the upper bound, but failed completely, so I switched to proving a lower bound. You do what you can.

**Getting The Result**

The proof of the lower bound result for VAS reachability started with a conversation with Meyer at a STOC conference. He told me that he had just proved, with Paterson, an exponential lower bound for the $n$-dimensional VAS reachability problem. I had been thinking about VAS’s and realized instantly that I could prove the same result with fewer dimensions. I knew a couple of tricks that Albert did not, so I played it cool and said nothing to Albert. But once back at home–Yale at the time–I worked hard until I had a proof of the lower bound.

I then called Albert to tell him the news. It was a strange phone call. He did not believe that I could prove such a theorem. The conversation consisted of Albert listing a variety of mistakes that I must have made. I kept saying no, I did not do that, or no, I realized that. Finally, the call ended without me saying anything to him about the proof.

As soon as I hung up, I realized there was a “bug” in my proof. Not anything that Albert had raised, but a bug nevertheless. I still remember taking a deep breath and saying to myself that I can fix this–just stay calm. In less than an hour the bug was gone, and I had the lower bound. Months later I got to explain the details to Albert.

**Main Ideas of The Proof**

I will now sketch the key ideas of the lower bound proof.

A VAS is not a very friendly type of machine to program. Recall its state is just a non-negative vector; there is not even a finite state control. We next define a friendlier type of machine, which we call an *$n$-dimensional V machine*. This is a nondeterministic machine that has a finite state control and $n$ V counters. A *V counter* is a device that contains a natural number and supports the following operations:

- Increment the counter by $1$;
- Decrement the counter by $1$; however, if the counter is $0$, then the nondeterministic computation dies.
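In Python, the counter’s contract might be sketched like this (a toy model of mine, which signals a dead branch with an exception; the real machine simply has no continuation for that branch):

```python
class Dead(Exception):
    """Raised when a nondeterministic branch becomes illegal."""

class VCounter:
    """A V counter: holds a natural number; decrementing 0 kills
    the branch. Crucially, no zero test is offered."""
    def __init__(self, value=0):
        self.value = value

    def inc(self):
        self.value += 1

    def dec(self):
        if self.value == 0:
            raise Dead  # the computation dies on this branch
        self.value -= 1

c = VCounter()
c.inc(); c.inc(); c.dec()
print(c.value)  # 1
```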

It is easy to prove that an $n$-dimensional V machine can be simulated by a VAS of slightly larger dimension. The insight is that the finite state control is encoded into the first few coordinates of the state of the system.
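One way this encoding can go (a sketch under my own naming, not the exact construction from the paper): represent the control as a one-hot block of extra coordinates, and turn each transition into a single VAS vector that leaves one state and enters another. The move can only fire when the source-state coordinate holds a $1$, since otherwise it would go negative:

```python
def vas_move(num_states, src, dst, counter_delta):
    """Encode a V-machine transition (state src -> dst, counter change
    counter_delta) as one VAS vector: the first num_states coordinates
    hold the finite control as a one-hot block, the rest are counters."""
    control = [0] * num_states
    control[src] -= 1   # leave state src (illegal unless we are in src)
    control[dst] += 1   # enter state dst
    return tuple(control) + tuple(counter_delta)

# a 3-state machine: go from state 0 to state 2, while incrementing
# counter 1 and decrementing counter 2
print(vas_move(3, 0, 2, (1, -1)))  # (-1, 0, 1, 1, -1)
```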

There is good and bad news. An $n$-dimensional V machine has a real finite state control, is nondeterministic, and has $n$ counters. These counters can hold arbitrarily large numbers. The bad news is that V machines have no way of testing a counter for $0$.

Here are the highlights of how we add zero testing to counters.

We use an inductive construction to build a series of counters that allow zero testing, and so that each successive one can count to a larger number than the previous one. More precisely, if a counter can count to $N$, then the next one can count to $N^2$. Thus, at each step the sizes of the counters square; in $n$ steps, one gets to a double-exponential size counter, i.e. one that counts to $2^{2^n}$.

We use the fact that a fixed number of counters that can hold numbers up to $2^m$ can simulate a Turing tape of length $m$. This is really a simple idea: use the counters to simulate two pushdown tapes. Since two pushdowns clearly equal one Turing tape, this suffices. Counters can simulate a pushdown, since when the pushdown’s contents are viewed as a binary number the only operations needed are simple ones.
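The stack-as-a-number idea can be sketched as follows (my illustration; in the actual machine, the doubling and halving are themselves loops of increments and decrements into an auxiliary counter, which Python’s arithmetic elides):

```python
def push(counter, bit):
    """Push a bit onto the stack viewed as a binary number: the old
    contents shift left and the new bit becomes the low-order digit."""
    return 2 * counter + bit

def pop(counter):
    """Pop the top bit: halve the counter and report the remainder."""
    return counter // 2, counter % 2

c = 0
for b in [1, 0, 1]:
    c = push(c, b)
print(c)  # binary 101, i.e. 5
c, top = pop(c)
print(top)  # 1, the last bit pushed
```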

During this inductive construction we must figure out how to test larger and larger counters for zero. We do this by having two counters that are always paired, in the sense that the sum of their values is fixed. Thus, we can replace a test for zero by a test that the other counter is large.

This last is the key trick so let me say a bit more. Imagine that $x + y = N^2$ always. If we want to add $1$ to the “counter” $x$, we add $1$ to $x$ and subtract $1$ from $y$. We reverse this if we want to subtract. Finally, to test if $x = 0$, we convert that into testing whether or not $y = N^2$. We can do this by using two $N$-sized counters and a double loop: the nested loops make $N \cdot N = N^2$ decrements to $y$. This is successful only if $x$ was zero.

**Abelian Semigroups**

As soon as Albert understood my proof he asked if the V machines I used in the lower bound proof were reversible. I thought for a while and we both worked out that they were. This meant that we could prove the following theorem:

Theorem 3: The word problem for abelian semigroups is exponential-space hard.

The idea is this: suppose that the abelian semigroup has three generators $a$, $b$, and $c$. Then, an element of the semigroup is described by

$$a^i b^j c^k,$$

where $i, j, k$ are natural numbers. A semigroup is defined by equations on the generators such as, for example,

$$a^2 b = c.$$

These equations are reversible in the sense that, for an equation $u = w$, both the transformations $u \to w$ and $w \to u$ are allowed.

Since the V machines were reversible, the word problem for abelian semigroups could be shown to have the same lower bound.
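A commutative word is just an exponent vector over the generators, so the reversible rewriting can be sketched like this (illustrative names and relation of mine, not taken from the paper):

```python
def apply_rule(word, lhs, rhs):
    """Apply the relation lhs = rhs to a commutative word, all given
    as exponent vectors over the generators, e.g. a^3 b is (3, 1, 0).
    Returns the rewritten word, or None if lhs does not occur in word."""
    if any(w < l for w, l in zip(word, lhs)):
        return None  # the left side does not occur: rule inapplicable
    return tuple(w - l + r for w, l, r in zip(word, lhs, rhs))

lhs, rhs = (2, 1, 0), (0, 0, 1)      # a hypothetical relation a^2 b = c
w = (3, 1, 0)                        # the word a^3 b
forward = apply_rule(w, lhs, rhs)    # rewrites a^3 b to a c
back = apply_rule(forward, rhs, lhs) # the reverse direction restores w
print(forward, back)
```

The point is that each rule can be applied in both directions, which is exactly the reversibility the reduction needs.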

The details of this were worked out by Meyer and his graduate student E. Cardoza. They published the final version of our paper without me; we planned a joint paper, but I had an illness in my family and could not help with the paper.

**Upper Bounds**

Recall that the lower bound was proved first; only later was the problem shown to be decidable.

Theorem 4: The reachability problem for Vector Addition Systems is decidable.

This theorem has an interesting story of its own; its history falls into three acts. In Act I, like many hard problems, the theorem was claimed a number of times: each of these claims was found to be incorrect. For example, George Sacerdote and Richard Tenney–two theorists–claimed a proof that eventually was shown to have lemmas that failed even in low dimensions. The reachability problem is a tricky problem, and intuition about it was often faulty. I am terrible at geometric reasoning myself, and I can understand how people could easily be mistaken.

In Act II, two proofs were found: the first by Ernst Mayr, and shortly thereafter a proof by Rao Kosaraju. Many did not believe Mayr’s proof, since it was unclear in many places. Understanding these long, complex arguments, with many interlocking definitions and lemmas, is difficult. Kosaraju created his own proof because he felt Mayr’s proof was wrong. You can imagine there was an immediate controversy. Which proof was right? Both? Neither? There were claims that Kosaraju’s proof was just Mayr’s proof, but written in a clearer way. The controversy got messy very quickly.

Finally, in Act III, there is now a new proof by Jérôme Leroux that seems much cleaner and more likely to be correct. More on this in the future.

**Open Problems**

The main open problem is to improve the lower bound on the reachability problem for VAS’s. There is currently a huge gap between the lower bound of exponential space and the upper bound, which is not even primitive recursive. The current construction for the lower bound is tight, and I see no way to improve it. But, perhaps you will see something that we all have missed.

Finally, a lesson here is that theorems from very disparate areas of theory can actually use the same proof methods. I am surprised that the guess-and-count ideas that I used years ago are close to the methods needed to solve the LBA problem. In both the VAS and the LBA proofs we need to use counters, and we also need to use nondeterminism. There is no direct link between the proofs, but I think there are some similar ideas being used. Certainly the idea, used in the VAS proof, that to prove something is small we prove something else is large, is related to the use of counters in the LBA proof.

Of course I wish I had seen the connection years ago, but one wonders: if more people had understood the VAS lower bound, would the LBA problem have been solved years earlier? What other undiscovered connections are lurking out there?


Kudos for another lovely post! Apart from being scientifically very interesting, it brought back memories of a discussion I had with Albert as a second-year PhD student during the conference dinner for LICS 1989 in Asilomar. It was an experience that I have fondly remembered ever since.

However, I wonder why you say “While he was a theorist, he helped invent the polynomial time hierarchy, proved lower bounds on real-time Turing machines, showed huge lower bounds on various logical theories, proved the sharpest hierarchy theorem known, and more.” Wasn’t he also a theorist when he was working on logic and the semantics of programming languages? In my book he definitely was, as he was also a logician when he proved that the weak monadic second order theory of successor is not elementary recursive. I feel that the equation

theorist = complexity theorist

gives far too narrow a view of the broad field of TCS.

Thanks a lot for your very readable and inspiring posts.

I agree. Once a theorist, always one.

I do take a wide view of theory. I was thinking of missing him at STOC/FOCS etc.

In fact, as far as I recall, Albert was the prime mover behind the establishment of the LICS conference series. Perhaps this was his reaction to the lack of coverage that FOCS/STOC was giving to his new fields of research interest. I was a beginning PhD student then, so I do not know the details. Others might want to elaborate on the birth of LICS, which has since then become a very strong (some say the strongest) conference in TCS volume B.

I would like to echo Luca both in thanking you for your truly excellent posts, and agreeing with him that the definition of “theory” is often taken to be rather narrow by the STOC/FOCS community… :) In fact, I hear (from second-hand sources, so this could well be wrong!) that Jerome Leroux’s breakthrough paper was rejected from FOCS for being “out of scope”. It will now appear at this summer’s LICS.

I did not know that. I cannot imagine that his paper would not be an obvious “accept”. But that is just my opinion.

Thanks for the nice post.

Joel is right. One of the three FOCS reviews was “I think that the author should find a conference that is better suited for this paper”. :-)

Jeff Lagarias has a big collection of Collatz-related results. I think that the problem has been proved undecidable if you replace 3 with certain other parameters. You can’t let 3 go to infinity for that problem.