Workshop on Infinite State Systems at the Bellairs Institute on Barbados


Joel Ouaknine is a Professor of Computer Science at Oxford University and a Fellow of St. John’s College there. He was previously a doctoral student at Oxford, where in 1998 he made a critical contribution of a kind I enjoyed as a student in the 1980s: a win in the annual Oxford–Cambridge Varsity Chess Match, which Oxford took that year, 5–3.

Today I’d like to report on some of the wonderful things that happened at a workshop on “Infinite-State Systems” hosted by Joel at the Bellairs Institute of McGill University last March 13–20 in Barbados, before we finally opened a chess set and played two games on the last evening.

The workshop was one of two happening concurrently at Bellairs, which it has been my pleasure to visit twice before, in 1995 and 2009. The other was on “Co-Algebras in Quantum Physics” and was co-organized by Prakash Panangaden, whom I used to know on Cornell’s faculty when I was a postdoc there. I often wished I could be in a quantum superposition between the workshops. My own talk for Joel’s workshop was on analyzing quantum circuits, and that anchored stimulating discussions I had with both workshops’ members during meals and excursions and other free time.

The other participants in ours were Dmitry Chistikov, Thomas Colcombet, Amelie Gheerbrant, Stefan Göller, Martin Grohe, Radu Iosif, Marcin Jurdzinski, Stefan Kiefer, Stephan Kreutzer, Ranko Lazic, Jerome Leroux, Richard Mayr, Peter Bro Miltersen, Nicole Schweikardt, and James Worrell. Göller and Grohe joined Joel and me for ${2 \times 2}$ soccer in a nearby park on two lunch breaks—we didn’t just play chess.

## Reachability

The basic reachability problem is: Given a graph ${G}$, a starting node ${s}$, and a set ${T}$ of target nodes, is there a path from ${s}$ to some node in ${T}$? When ${G}$ is undirected the problem is solvable in logarithmic space. This is a deep result of Omer Reingold—for a long time only a randomized logspace algorithm was known—but a simple statement. When ${G}$ is directed the problem is complete for nondeterministic log space (${\mathsf{NL}}$). But ${\mathsf{NL}}$ is in ${\mathsf{P}}$, so it is still solvable in polynomial time.
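When the graph is given explicitly, the positive side of this is easy to make concrete. Here is a minimal breadth-first search for directed reachability—my own illustration, not anything from the workshop:

```python
from collections import deque

def reachable(adj, s, targets):
    """Directed reachability by breadth-first search.
    adj maps each node to a list of out-neighbors; runs in O(V + E)."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u in targets:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

adj = {1: [2], 2: [3], 3: [1], 4: [2]}
print(reachable(adj, 1, {3}))   # True: 1 -> 2 -> 3
print(reachable(adj, 1, {4}))   # False: no edge enters 4
```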

Enough said about reachability? The meta-problem—really the problem—comes from other ways to present ${G}$ besides listing finitely many nodes and edges. If ${G}$ is presented by a circuit ${C}$ with ${2n}$ inputs that recognizes its edge relation on ${V = \{0,1\}^n}$, then reachability for ${G}$ jumps up to being ${\mathsf{PSPACE}}$-complete. Not all graphs ${G}$ have ${C}$ of poly size, but those that do include the transition graphs for polynomial space-bounded Turing machines with start state ${s}$ and accepting configurations ${T}$.
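To see where the ${\mathsf{PSPACE}}$ upper bound comes from, here is a sketch of Savitch’s midpoint recursion run directly on an implicitly presented graph. The edge predicate stands in for the circuit ${C}$; the recursion depth is only ${n}$, so the space used is polynomial, but the time can be exponential. The toy successor relation is my own example:

```python
from itertools import product

def savitch_reachable(n, edge, s, t):
    """Is there a path from s to t in the graph on {0,1}^n whose edge
    relation is given by the predicate edge(u, v)?  The midpoint
    recursion needs only polynomial space, though possibly exponential time."""
    def path(u, v, k):
        # Is there a path of length at most 2^k from u to v?
        if k == 0:
            return u == v or edge(u, v)
        return any(path(u, w, k - 1) and path(w, v, k - 1)
                   for w in product((0, 1), repeat=n))
    return path(s, t, n)   # any simple path has length < 2^n

# Toy edge relation: u -> v iff v is the binary successor of u.
def succ_edge(u, v):
    val = lambda x: sum(b << i for i, b in enumerate(reversed(x)))
    return val(v) == val(u) + 1

print(savitch_reachable(2, succ_edge, (0, 0), (1, 1)))   # True
print(savitch_reachable(2, succ_edge, (1, 1), (0, 0)))   # False
```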

As the workshop’s name implies, we can consider “machines” that give rise to infinitely many states, and then things really get interesting. We just blogged about such a machine and its halting problem. Described in another way, the “machine” is a matrix

$\displaystyle M = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & \ldots & 0\\ 0 & 0 & 1 & 0 & 0 & \ldots & 0\\ 0 & 0 & 0 & 1 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & & \ddots & & \vdots\\ 0 & 0 & 0 & 0 & 0 & \ldots & 1\\ a_1 & a_2 & a_3 & a_4 & a_5 & \ldots & a_m \end{bmatrix},$

which takes a vector

$\displaystyle s = (x_1,x_2,\dots,x_{m-1},x_m)$

to

$\displaystyle s' = Ms = (x_2,x_3,\dots,x_m,x_{m+1}),$

where

$\displaystyle x_{m+1} = a_1 x_1 + \cdots + a_m x_m.$

The target states in ${T}$ are those with last component ${0}$. Are any of them reached in the sequence ${s, Ms, M^2 s, M^3 s, \dots}$? This is the Skolem Problem in disguise: a deterministic, discrete, and simple linear system, yet nobody knows how to decide it. Allowing more general integer matrices ${M}$ does not change the nature of the problem, and equivalent forms include whether the lower-left (or upper-right) entry of ${M^n}$ ever becomes zero.
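One direction is easy to automate: run the recurrence in exact integer arithmetic and look for a zero. Here is a throwaway sketch of mine; finding a zero certifies reachability, but never finding one proves nothing—which is exactly why the problem is hard:

```python
def first_zero(coeffs, init, max_terms=10_000):
    """Search for a zero term of the linear recurrence
    x_{n+m} = a_1 x_n + a_2 x_{n+1} + ... + a_m x_{n+m-1}.
    Returns the 0-based index of the first zero among the first
    max_terms terms, or None if none showed up."""
    m = len(coeffs)
    window = list(init)
    for i, x in enumerate(window):
        if x == 0:
            return i
    for n in range(m, max_terms):
        nxt = sum(a * x for a, x in zip(coeffs, window))
        if nxt == 0:
            return n
        window = window[1:] + [nxt]
    return None

# x_{n+2} = -x_n + x_{n+1} from 1, 1 gives 1, 1, 0, ...  -> zero at index 2
print(first_zero([-1, 1], [1, 1]))   # 2
# from 1, 2 it cycles 1, 2, 1, -1, -2, -1 forever, never hitting zero
print(first_zero([-1, 1], [1, 2]))   # None
```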

So the complexity range of reachability runs from easy to (maybe) undecidable. There are further questions one can ask once we consider computations that are actually infinite: Are some states in ${T}$ reached infinitely often? Does the system stay within ${T}$ forever once it enters? When there is branching one can put probabilities on the transitions and ask whether the probability of reaching ${T}$ is nonzero or ${1}$ or ${1/2 + \epsilon}$ and so on. In a sense my own talk was about those questions when the transitions have amplitudes instead of probabilities. One can also associate infinitely many states to a finite graph by adding one or more “budget counters” and labeling each edge with how much it credits or debits a given counter when traversed. Finally one can partition the nodes among adversarial players who vie to reach certain nodes and/or bankrupt some counters.

Overall the workshop impressed me with the wide range of interesting problems in this thread and the variety of their application areas. I’ll mention a few of the 16 talks now and intend to cover more later. Here is a photo of our participants:

## Some Talks

${\bullet}$ Nicole Schweikardt led off with the words,

“As a warmup to infinite-state systems, everything in my talk will be finite.”

She actually led off with a great introductory problem from her 2014 paper with Kreutzer: Define the ${T}$-number of a graph ${G}$ to be the number of ways of choosing three edges with no edges between them. Equivalently: their edge-neighborhoods are disjoint; the subgraph induced by their six vertices is exactly the graph of three disjoint edges; the three edges form an induced matching. Now for any ${n}$ consider:

1. ${G_n =}$ the cycle on ${2n}$ nodes;

2. ${H_n =}$ the graph of two disjoint ${n}$-cycles.

Do they have the same ${T}$-number? I felt knowledgeable about these graphs, since Dick and I were interested long ago in the problem of distinguishing them via constant-depth circuits with various fancy gates. So I put some confidence in my reaction, “of course not.” But the answer is yes when ${n \geq 10}$.
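The claim is easy to check by brute force for moderate ${n}$. Here is a throwaway counter of mine—not from the talk—that treats the ${T}$-number as the number of induced 3-matchings:

```python
from itertools import combinations

def cycle_edges(n, offset=0):
    """Edges of an n-cycle on vertices offset, ..., offset + n - 1."""
    return [(offset + i, offset + (i + 1) % n) for i in range(n)]

def t_number(edges):
    """Count triples of edges whose six endpoints are distinct and induce
    exactly those three edges (i.e., induced matchings of size 3)."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for triple in combinations(edges, 3):
        verts = [v for e in triple for v in e]
        if len(set(verts)) != 6:
            continue                      # two of the edges share a vertex
        chosen = {frozenset(e) for e in triple}
        # no edge of G may join two of the six vertices except the chosen three
        if all(frozenset(p) in chosen or frozenset(p) not in edge_set
               for p in combinations(verts, 2)):
            count += 1
    return count

n = 10
G = cycle_edges(2 * n)                    # the cycle on 2n nodes
H = cycle_edges(n) + cycle_edges(n, n)    # two disjoint n-cycles
print(t_number(G), t_number(H))           # 520 520 -- equal, as claimed
```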

The reason “why ${10}$?” can be explained in a hand-waving manner by saying that if you number the edges in one half of ${H_n}$ as ${0,1,\dots,9}$ and choose edges ${3}$ and ${6}$, you can equally well choose ${0}$ or ${9}$ for a third edge in that component. One choice “attaches to” the end with ${3}$, the other to ${6}$, so the choices have the same degrees of freedom as in the bigger cycle ${G_n}$. Well, that is far from a proof, and what was beautiful in the rest of the talk and the paper is the connection with Hanf equivalence and logic that makes this rigorous. This leads to an open problem about extending the equivalence to order-invariant logics, which relates to what we covered two posts ago.

${\bullet}$ Richard Mayr led off the third talk by saying its contents were

“…partly known, partly nice observations, and partly half-baked.”

We raised a cheer recently for observations. Mayr began with a setup like Christos Papadimitriou’s games against Nature: a finite graph with one budget counter, in which some node choices are controlled by the Player and the others are random. His first example was the following graph:

The player begins at node ${P}$ with a balance of ${0}$ but can raise it as much as desired by choosing the edge with ${+1}$. To reach the goal ${T}$, however, the player must eventually go to the random node ${Q}$. This carries some risk of the ${-1}$ loop being taken more times than the ${+1}$ arc was chosen at ${P}$, thus bankrupting the player. But the risk can be made arbitrarily small. The key distinction is between the properties:

• There exists a strategy for ${P}$ that gives success probability ${1}$.

• The supremum over all strategies of their probability of success is ${1}$.

When the total state space is finite these conditions are equivalent, but here, where the counter creates an infinite state space, only the second holds.
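The distinction is easy to see in a toy model. The actual graph is not reproduced here, so assume the simplest numbers: at ${Q}$ each step takes the ${-1}$ loop or exits to ${T}$ with probability ${1/2}$ each. Pumping the counter to ${k}$ at ${P}$ then gives success probability ${1 - 2^{-(k+1)}}$, whose supremum over ${k}$ is ${1}$ even though no single ${k}$ achieves it. A quick simulation under these assumptions:

```python
import random

def play(budget, rng):
    """One play of the assumed fair-coin model: the counter was pumped to
    `budget` at P; at Q each step either fires the -1 loop or exits to T."""
    while True:
        if rng.random() < 0.5:
            budget -= 1          # the -1 loop fires
            if budget < 0:
                return False     # bankrupt before reaching T
        else:
            return True          # reached T

rng = random.Random(2015)
trials = 20_000
for k in (0, 2, 8):
    est = sum(play(k, rng) for _ in range(trials)) / trials
    exact = 1 - 0.5 ** (k + 1)
    print(k, round(est, 3), exact)
```

The Monte Carlo estimates track the exact values ${1/2, 7/8, 511/512}$, climbing toward but never reaching ${1}$.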

The next issue is whether players are allowed to remember the game history so as to know the values of their counters, or at least to test them for zero. The latter power suffices to emulate Marvin Minsky’s multiple-counter machines, so many problems on arbitrary graphs become undecidable. In restricted cases this leads to further interesting distinctions and questions. Let us add to the above graph an arrow from ${T}$ back to ${P}$. Can the player reach ${T}$ infinitely often? There is no single strategy that assures this, but if bad luck at node ${Q}$ depleted the counter on the last go-round, the player can use memory to replenish it.

Mayr then went into energy games where nature is replaced by a second player who tries to bankrupt a counter. In a recent paper he and others carved out a decidable case of the problem of who wins. This is when there is just one energy counter and the game-graph itself is induced by a one-counter automaton.

${\bullet}$ Stefan Göller talked about ${d}$-dimensional vector addition systems with states (VASS). The standard definition without states is that you have a non-negative start vector ${s}$ and a set ${A}$ of vectors with positive and negative integer entries. The transitions allow adding a vector ${v \in A}$ to your current vector ${u}$, provided ${u + v}$ is non-negative. Thus it is a solitaire version of the energy games we just mentioned, with ${d}$ counters, where ${d}$ is the dimension of the vectors. In Göller’s case you also change state—and the state may affect the subset of ${A}$ available for the next move.

Göller led off by quoting Dick’s ${\mathsf{EXPSPACE}}$ lower bound for the simple case, which Dick covered near the start of this blog. In his talk he covered his recent joint paper showing that for ${d = 2}$ the problem is ${\mathsf{PSPACE}}$-complete. The real surprise here is the ${\mathsf{PSPACE}}$ upper bound, since only a double-exponential time upper bound was known dating from almost 30 years ago. The proof is a deep classification and analysis of three basic types of computational flow patterns. I don’t have time to reproduce the pretty pictures and the sketch of his proof from my notes, but the pictures appear in the paper.
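To fix the model, here is a naive explorer of mine for VASS reachability. It is only a bounded, semi-decision flavor of the problem—nothing like the clever analysis behind the ${\mathsf{PSPACE}}$ algorithm—since the configuration space is infinite:

```python
from collections import deque

def vass_reachable(start, transitions, target, max_pops=100_000):
    """Breadth-first exploration of a VASS.  A configuration is a pair
    (state, counter-vector); transitions maps a state to a list of
    (delta, next_state) pairs, and a move is legal only if all counters
    stay non-negative.  Can only confirm reachability within the bound."""
    seen = {start}
    queue = deque([start])
    pops = 0
    while queue and pops < max_pops:
        config = queue.popleft()
        pops += 1
        if config == target:
            return True
        state, vec = config
        for delta, nxt in transitions.get(state, []):
            new_vec = tuple(a + b for a, b in zip(vec, delta))
            if min(new_vec) >= 0 and (nxt, new_vec) not in seen:
                seen.add((nxt, new_vec))
                queue.append((nxt, new_vec))
    return False

# A 2-d example: pump the first counter at p, then trade it for the second.
trans = {'p': [((1, 0), 'p'), ((-1, 1), 'q')],
         'q': [((0, -1), 'q')]}
print(vass_reachable(('p', (0, 0)), trans, ('q', (0, 0))))   # True
```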

Indeed, I have only gone through three of the first four talks on the first day—there was much great material in the rest and I will have to pick it up another time.

## Open Problems

As our last post hinted with its discussion of possible connections between the Skolem Problem and Fermat’s Last Theorem, we suspect that number-theory issues are governing the complexity levels. Can this be brought out in a bird’s-eye view of the various reachability problems?

A reader commented on July 13, 2015:

Gregory Chaitin has a proof of the infinity of the primes which uses Kolmogorov complexity. The idea is that a set of bounded entropy can’t generate a set of unbounded entropy. I wish it were possible to reason similarly to prove that a single polynomial SAT solver could not generate, modulo the polynomial reductions, every possible polynomial Turing machine. But numbers are simpler objects than programs… or at least, they interfere less with our own mental processes!