# Three Theorems About Growth

*Is a key to polynomial versus exponential complexity lurking here?*

Cropped from Abel Prize source

Mikhail Gromov is a French-Russian mathematician who has made and continues to make fundamental contributions to many areas of mathematics. In 2009 he was awarded the Abel Prize for his seminal work on geometry. Here “geometry” is not high-school geometry of points and lines, but is all about things like: geometric group theory, hyperbolic groups, symplectic topology, and Riemannian geometry.

Today I (Dick) wish to talk about a great theorem of Gromov and its relationship to a theorem of our own Ken Regan, with a theorem by Rostislav Grigorchuk setting some goals.

Suppose we have a group with two generators $a$ and $b$. For $d \ge 1$, every *word* of length at most $d$ in $a$ and $b$ gives us a group element. How many of those words are distinct? In a *free group* they all are, giving exponential growth function $\sim 2^d$. In an *abelian group*, however, all the words are equivalent to ones of the form

$$a^r b^s,$$

where $r, s \ge 0$ and $r + s \le d$. Thus the growth of any such group is bounded by $O(d^2)$. If there are $k$ generators we get $O(d^k)$, but this is still a polynomial bound since $k$ is fixed. Groups with finitely many generators are called **finitely generated**.
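As a sanity check on these counts, here is a minimal Python sketch (our illustration, not from the post) comparing the two growth functions over positive words in two generators:

```python
def abelian_count(d):
    """Distinct elements of the free *abelian* group on a, b given by
    words of length <= d: each word collapses to a^r b^s with
    r, s >= 0 and r + s <= d, so the count is (d+1)(d+2)/2."""
    return len({(r, s) for r in range(d + 1) for s in range(d + 1)
                if r + s <= d})

def free_count(d):
    """Distinct elements of the free group on a, b given by words of
    length <= d: all 2^0 + 2^1 + ... + 2^d positive words are distinct."""
    return sum(2 ** k for k in range(d + 1))

for d in (2, 5, 10, 20):
    print(d, abelian_count(d), free_count(d))
```

The abelian counts grow quadratically while the free-group counts double with each extra letter, which is the polynomial-versus-exponential gap the post is about.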

In a finite group, the growth function of course hits a ceiling at the size of the group, but for infinite groups it is unbounded. Now as computer theorists we like finite groups more than infinite, but if we have to put up with infinite groups, finitely-generated groups are the ones we can handle. Two questions about these groups that were posed by John Milnor in the 1960’s, and answered in whole or part by Gromov and Grigorchuk among others, are:

> Is there an easy characterization of groups of polynomial growth? Are there groups of *intermediate* growth, between polynomial and exponential?

## Gromov’s Theorem

Here is an approximate statement of Gromov’s celebrated theorem:

**Theorem 1** Every finitely generated group of polynomial growth is “abelian.”

We have shown that polynomial growth is true for abelian groups, but the question is, could it be true for a larger class of groups? The answer is yes. Perhaps if a group satisfied a more complex identity than being abelian, that could be used to argue that the growth is again polynomial.

Indeed this does happen for a group that is **nilpotent**, meaning that its chain of commutator subgroups terminates in the trivial identity subgroup after finitely many steps. To quote our friends at Wikipedia: In group theory, a nilpotent group is a group that is “almost abelian.” This idea is motivated by the fact that nilpotent groups are solvable, and for finite nilpotent groups, two elements having relatively prime orders must commute.
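For a concrete feel (our illustration, not from the post), the discrete Heisenberg group is non-abelian but nilpotent, and its growth is polynomial of degree 4. A short Python sketch counts ball sizes by breadth-first search over words in the generators:

```python
def heisenberg_ball_sizes(max_d):
    """Ball sizes |B(d)| in the discrete Heisenberg group, a non-abelian
    nilpotent group.  Elements are triples (a, b, c) with product
    (a,b,c)*(a',b',c') = (a+a', b+b', c+c'+a*b')."""
    def mul(g, h):
        return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0] * h[1])

    # Generators x, y and their inverses.
    gens = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
    seen = {(0, 0, 0)}
    frontier = [(0, 0, 0)]
    sizes = [1]
    for _ in range(max_d):
        nxt = []
        for g in frontier:
            for s in gens:
                h = mul(g, s)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
        sizes.append(len(seen))
    return sizes

print(heisenberg_ball_sizes(6))  # sizes[2] is 17, versus 13 in Z^2
```

The extra elements beyond the abelian count come from the commutator: $xy$ and $yx$ are different group elements here, yet the totals still grow only polynomially.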

Thus an obvious next approximation to the theorem is:

**Theorem 2** Every finitely generated group of polynomial growth is “nilpotent.”

We have replaced “abelian” by “nilpotent.” The theorem now is almost correct, but not quite. There is a small but annoying issue that must be removed. Suppose that $G$ is any group of polynomial growth. Then so is $G \times H$ where $H$ is any finite group. This is easy to show: the growth all comes from the $G$ part of the direct product since $H$ is finite. But if $H$ is “weird” then so is $G \times H$. In particular one can easily make $G \times H$ fail to be nilpotent.

Group theorists have faced this issue many times before and have a work-around. They say a group $G$ is **virtually nilpotent** provided it has a subgroup $H$ that is nilpotent and of finite index in $G$. This means roughly that “most” of $G$ is nilpotent. Thus Gromov’s real theorem is:

**Theorem 3** Every finitely generated group of polynomial growth is virtually nilpotent, and vice-versa.

## Grigorchuk’s Theorem

Grigorchuk answered the second question by constructing finitely-generated groups of growth asymptotic to $e^{n^{\alpha}}$, where $\alpha$ is a constant that is known to lie strictly between $\frac{1}{2}$ and $1$. Now $e^{n^{\alpha}}$ satisfies the definition of *sub-exponential* used by group theorists, as in this recent survey by Cong Han Lim.

Sometimes in complexity theory we use this definition. For instance, the exponential time hypothesis postulates $2^{\Omega(n)}$ lower bounds on the time to solve $3\mathsf{SAT}$, and the same lower bound on circuit size for a language in $\mathsf{E}$ is what is known to imply $\mathsf{BPP} = \mathsf{P}$. But other times, such as with the factoring problem and “natural proofs,” what we mean by sub-exponential is: asymptotically less than $2^{n^{\epsilon}}$ for any $\epsilon > 0$. Perhaps call that **sub**-exponential? Accordingly we can re-pose the question:

> Do there exist finitely-generated groups of growth that is super-polynomial but **sub**-exponential?

Maybe there is a simple answer we are missing among various surveys, but Wikipedia notes that many simpler-seeming problems just about the particular groups found by Grigorchuk are open.

## Ken’s Theorem

Ken worked on a problem about growth in graphs. If $G$ is a graph, a natural growth notion is to pick any vertex $v$ and see how many other vertices it can reach in $d$ steps. If this is bounded by a fixed-degree polynomial in $d$, then say that the graph has polynomial growth. For finite graphs we really mean that $G$ is part of an infinite family of graphs that collectively have growth bounded by the same fixed polynomial, but often the constants are concrete enough to talk about the growth of a single graph. Note that for finitely-generated groups, this is exactly the notion you get if you replace Gromov’s group-words by the underlying Cayley graph of the group.
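To make the graph notion concrete, here is a small Python sketch (our illustration, not from Ken’s paper) that computes ball sizes by breadth-first search in two families: a grid, whose growth is quadratic, and a complete binary tree, whose growth from the root is exponential:

```python
from collections import deque

def ball_sizes(adj, v, max_d):
    """Sizes of the balls B(v, d) for d = 0..max_d in a graph
    given as an adjacency dict, via breadth-first search."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if dist[u] == max_d:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return [sum(1 for x in dist.values() if x <= d) for d in range(max_d + 1)]

# A 2D grid graph: polynomial (quadratic) growth.
N = 50
grid = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < N and 0 <= j + dj < N]
        for i in range(N) for j in range(N)}

# A complete binary tree: exponential growth away from the leaves.
depth = 10
tree = {}
for node in range(1, 2 ** depth):
    tree.setdefault(node, [])
    for child in (2 * node, 2 * node + 1):
        if child < 2 ** depth:
            tree[node].append(child)
            tree.setdefault(child, []).append(node)

print(ball_sizes(grid, (25, 25), 5))  # [1, 5, 13, 25, 41, 61] ~ 2d^2
print(ball_sizes(tree, 1, 5))         # [1, 3, 7, 15, 31, 63] ~ 2^(d+1)
```

The grid is exactly the Cayley-graph picture of the abelian case, while the tree behaves like the free group.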

Ken’s theorem is:

**Theorem 4** Every size-$n$ graph of fixed polynomial growth has a separator of size $o(n)$, indeed size $O(n^{1-\epsilon})$ where $\epsilon > 0$ depends on the degree of the polynomial, that breaks the graph into similarly-sized pieces.

Moreover one is “almost” able to separate any two large pieces that one chooses. This is important for applications such as where $A$ and $B$ are the inputs and outputs of a Boolean circuit, or disjoint halves of the inputs. For further details see his original paper, from the 1997 Computational Complexity Conference.

In graph theory there is no obvious notion of nilpotency, but the idea that the graph can be divided into two roughly equal pieces by removing only $o(n)$ vertices is our version of “nilpotent.” This is a quite pretty theorem in my biased opinion, and one that should be better known. It may not be as deep or celebrated as Gromov’s, but it is still a very neat result.

Unlike the group theory case, it is easy to imagine graphs whose growth is super-polynomial but **sub**-exponential, because there is less structure than with the Cayley graph of a group. Such graphs are still the opposite of expander graphs, in which the growth is (truly) exponential. However, there is still a technical matter that tends to throw one into a strict polynomial-or-exponential situation, and I wonder if understanding it better can be the key to improvements and solving problems like the Unique Games Conjecture.

## The Common Obstacle

All three theorems—more precisely, the proofs of these theorems—require one to overcome a technical hurdle. This is indeed the reason I decided to talk about these results. I was reading a chapter of Terence Tao’s beautiful book titled “Compactness and Contradiction” where he gives an outline of the proof of Gromov’s theorem. Tao is brilliant at explaining things, and you can see this also on-line here.

What struck me about Tao’s proof outline is that it raises an issue one must face in proving Gromov’s result. It struck me that exactly the same issue is what makes Ken’s proof hard. I had worked previously on trying to prove Ken’s theorem and was quite happy when he proved it.

Define $B(r)$ to be the **ball** of radius $r$ around some point. In Gromov’s theorem and in Ken’s case this is a natural notion. The polynomial growth in both is the statement that $|B(r)|$ is at most polynomial in $r$. A *stronger* notion is that of **bounded doubling**:

$$|B(2r)| \le C \cdot |B(r)|,$$

for some fixed constant $C$ and all $r$. As Tao says:

> In general, polynomial growth does not obviously imply bounded doubling at all scales…

He adds that a “simple” pigeonhole argument shows that it gives bounded doubling at *most* scales. He further says that this is only a “minor” technical difficulty. But he does assume bounded doubling in his sketch of the proof of Gromov.

Ken has the same difficulty. Bounded doubling would make the proof that there is a good separator much easier, and much of Ken’s proof is, in my opinion, getting around this issue. The issue may be helped by the following figure:

The function pictured grows in a polynomial, indeed linear, fashion. But it violates bounded doubling: note that the value at $2r$ should be at most a constant times the value at $r$, but is not. This up-and-down behavior is allowed by simple polynomial growth. Overcoming it is, in my opinion, the crux of the above proofs.
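The figure itself is not reproduced here, but the phenomenon is easy to recreate. Here is a minimal Python sketch (our construction, with a hypothetical function name) of a monotone function that stays under the line $f(r) = r$ yet has unbounded doubling ratios:

```python
def slow_then_jump(r):
    """A monotone function with f(r) <= r (so: linear growth) whose
    doubling ratio f(2r)/f(r) is unbounded.  f stays flat at the value
    2^(2^k) until r reaches the next such scale, then jumps to the line."""
    if r < 2:
        return 1
    k = 0
    while 2 ** (2 ** (k + 1)) <= r:
        k += 1
    return 2 ** (2 ** k)

for k in (1, 2, 3):
    r = 2 ** (2 ** k)  # at these scales f(r) = r, but f(r//2) ~ sqrt(r)
    print(r, slow_then_jump(r) // slow_then_jump(r // 2))  # ratios 2, 4, 16
```

At the scale $r = 2^{2^k}$ the ratio $f(r)/f(r/2)$ is about $\sqrt{r}$, so no fixed constant $C$ works, exactly the long-flat-then-jump behavior that polynomial growth alone permits.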

## Open Problems

Did you know Ken’s theorem? It is a natural separator theorem that probably could be used in many places in theory. Having a graph that has polynomial growth is not an unusual situation. Of course for many particular classes of polynomial-growth graphs, such as grid and lattice graphs, small separators were already known and applied.

Ken contributes two open problems showing his originally intended directions for this work. A Turing machine running in time $t$ can be simulated by circuits of size $O(t^2)$ laid out in a square grid, which gives an infinite family of finite graphs that we can collectively say have growth $O(d^2)$. The $O(t \log t)$-size circuits found by Nick Pippenger and Michael Fischer, however, have exponential growth. Is there a smooth size-growth tradeoff here, for instance can we get circuits of size $o(t^2)$ with polynomial growth?

The second problem involves iterating a finite transducer on its own output. Under what conditions does the function given by $t$ iterations stay within $\mathsf{NC}^1$? Can it be hard for deterministic or even nondeterministic logspace? Details and some motivation are in a new paper with his student Robert Surówka. Ken thought that the Gromov-Grigorchuk theory applied to algebraic decomposition of the (iterated) transducers might be relevant.

*[Update: fixed balls-versus-spheres inconsistency in defining growth, gave more details on graph-separator theorem; fixed finitely-presented → finitely generated.]*

This was originally all Dick’s post, but he had to be “offline” most of today, so I posted it as “Pip”. I contributed the Grigorchuk section and the two problems at the end.

## Comments

Nice post, and result.

Nitpick: For Abelian groups the growth is Theta(d), right? There are only d+1 solutions to the equation r+s=d when r,s are nonnegative integers.

Thanks. Actually quadratic was intended in this case; we were being inconsistent with “balls” versus “spheres”. The definition of growth seems standardly to use the former, though it generally doesn’t matter above polynomial growth.

Just a note: I think the version of Ken’s theorem stated here is relatively easy to prove. Suppose no ball of radius R in the graph contains more than n/4 nodes. Then the graph has a separator of size cn log n where c = O(1/R). In particular, if the growth rate of balls is ~ r^k then one can take R ~ n^{1/k} and get sublinear separators.

To see this, just fix a vertex v. Now the assumption implies there is a radius r < R such that |S(v, r+1)| ≤ |B(v, r)|/a for a = Ω(R/log n). (Otherwise the balls would grow geometrically in r, and B(v, R) would contain more than n/4 nodes.) Here S(v,.) and B(v,.) denote the sphere and ball about v, respectively. Add S(v,r+1) to the separator and recurse on the complement of B(v,r+1). By induction, one finds a balanced separator of size O(n/a).

Under polynomial growth assumptions, one can prove stronger things. There was a great question by Levin along these lines; see http://homes.cs.washington.edu/~jrl/papers/dim.pdf.

That’s the basic idea of the proof, but I felt there were technical obstacles, maybe in how “n” changes as one recurses. Will think about it…

As long as one has removed at most n/8 vertices (say) from the graph, the constraint that “no ball of radius R contains more than n/4 nodes” remains meaningful (and can be used in the same way). The point is that one only applies the recursion to some n’ with n’ > 7n/8. Once we have collected at least n/8 vertices on the “small side” of the cut, we have found a balanced separator and we can stop.

Ah—I forgot that my theorem has the stronger feature of giving a kind of universal quantification over large pieces A,B of the graph that one wishes to separate. The upshot is you separate a large subset A’ of A from a large subset B’ of B. I needed this for applications in which A is the set of inputs of a circuit and B is the set of outputs, or where A is a selected half of the inputs and B is the other half. Meanwhile thanks for your paper, which we will examine—I’ve added to the post that my paper appeared at the 1997 “Structures” conference.

Text: “Groups with finitely many generators are called finitely presented.” Quick correction: groups generated by a finite set are called finitely *generated*. Finitely presented means that there exists a finite presentation, ie a finite set of generators and a finite set of relations.

There exist groups that are finitely generated, but have no finite presentation.

Thanks!—we were unaware of the difference. I decided to leave the statement about which kinds of groups computer scientists could “handle” as-is.

gromov’s thm does appear very significant and I immediately wonder if there is some kind of analogy of generating elements of a group to the low-level gates in a circuit building up functions used by the higher-level gates…. yeah group theory just does not seem to be that much tied into TCS yet & keep thinking there must be some “killer app” still lurking, waiting to be uncovered, imminently on the horizon, a tip of an iceberg….