Equations over groups and their relationship to complexity theory

Denis Thérien is an expert on low-level complexity, especially the computational theory of groups and related algebraic structures. In a 2005 press release announcing his promotion to an administrative position at McGill, it was stated that he is:

an accomplished mathematician, whose research interests include complexity theory, classifying problems in terms of resources required to compute their solution.

Indeed. I like the phrase, “classifying problems in terms of resources required to compute their solution”—very well said.

Today I want to talk about solving equations over groups, and its relationship to complexity theory.

I have two reasons for discussing this. As I discussed previously, there is a chance to solve one of the main open problems in low-level complexity theory, if we could show certain equations over groups have solutions. The question is the famous one left open by David Barrington: can bounded-width computations over solvable groups compute ${\mathsf{NC^1}}$? David’s famous theorem showed that such bounded computations can do this over non-abelian simple groups. Barrington, Straubing, and Thérien worked on this problem proving some partial results, but the main question is unresolved. If solvable groups could, indeed, work in place of simple groups, then the following surprising result would be true:

$\displaystyle \mathsf{NC^1} \subseteq \mathsf{TC^0}.$

This is the last theorem proved in Barrington’s paper: if solvable groups can compute ${\mathsf{NC^1}}$, then the above is true. Second, the theory of equations over groups is potentially useful to computational complexity in other ways. The theory is quite difficult, since groups can behave badly in many ways. But I hope this short introduction to the area will make you want to learn more, and perhaps solve some open problems. Either resolve Barrington’s question conclusively rather than conditionally, or find more applications of the theory of equations. Or something else.

Equations Over A Group

Let’s start to look at equations over a group. As usual we will use ${xy}$ to denote the product of ${x}$ and ${y}$ in a group. Also the interesting cases are almost always when the groups are non-commutative, so ${xy}$ is not the same as ${yx}$ in general.

What is an equation over a group ${G}$? An example of an equation is:

$\displaystyle xaxb = 1.$

Note, ${a,b}$ are in ${G}$ and ${x}$ is the unknown. The equation has a solution over ${G}$ provided there is an element ${c}$ in ${G}$ so that

$\displaystyle cacb = 1.$

Pretty natural.
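To make this concrete, here is a tiny brute-force check, a sketch of my own over ${S_3}$ (all function names are mine, not from any library): for one choice of constants the equation ${xaxb=1}$ has a solution inside ${S_3}$, and for another it does not.

```python
from itertools import permutations

# Elements of S3 as tuples: p[i] is the image of i under p.
S3 = list(permutations(range(3)))
E = (0, 1, 2)                      # the identity

def mul(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def solve_xaxb(a, b):
    """Return some c in S3 with c*a*c*b = 1, or None if no such c exists."""
    for c in S3:
        if mul(mul(mul(c, a), c), b) == E:
            return c
    return None

t = (1, 0, 2)                      # a transposition, t*t = 1
r = (1, 2, 0)                      # a 3-cycle

print(solve_xaxb(r, E))            # a 3-cycle solves x r x = 1
print(solve_xaxb(t, E))            # None: x t x = 1 needs a square root of t
```

The failing case needs an element whose square is a transposition, and ${S_3}$ has none; whether some larger group can repair this is exactly the question taken up next.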

Many times such an equation will not have a solution: that is, for all ${c}$ in ${G}$ the above is false. This does not stop us. A natural idea is to ask: is there a group ${\widehat G}$ with ${G \subset {\widehat G}}$ and some ${c}$ in ${\widehat G}$ so that

$\displaystyle cacb = 1.$

In a sense we have extended the group ${G}$ and added enough elements to it so that the equation has a solution. Note, this is exactly the issue that bothered early mathematicians: what do you do with the equation

$\displaystyle x^2 = 2$

over the rationals? There is no solution, but there is one in a larger world: you need to adjoin the square root of ${2}$.

The obvious question is: can we always find the larger group ${\widehat G}$ so this is possible?

No Fundamental Theorem Of Groups

The answer, you may have guessed, is that there is no analog of the Fundamental Theorem of Algebra (FTA). There are group equations that cannot be solved even if we can extend the group and add new elements. This is the reason the theory is so different from solving equations over fields.

Pick any finite group ${G}$ with elements ${a}$ and ${b}$ so that ${a^2=1}$ and ${b^2 \neq 1}$. To make it interesting, let’s also assume that ${a}$ is not the identity element. This is nothing more than asking for an element ${a}$ of order exactly ${2}$ and an element ${b}$ whose order is neither ${1}$ nor ${2}$. Now consider the equation:

$\displaystyle xax^{-1}b^{-1} = 1.$

The unknown is, as usual, the ${x}$. I claim that this equation has no solution in any group that contains ${G}$. Assume there was an ${x}$ in some larger group. Rewrite the equation as

$\displaystyle xax^{-1} = b.$

Then, square both sides of the equation,

$\displaystyle b^2 = xax^{-1} \cdot xax^{-1} = xa^2x^{-1} = xx^{-1} = 1.$

But ${b^2 \neq 1}$, a contradiction. Thus, there is no FTA for group equations.
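The argument can be sanity-checked by machine in any particular group. Below is a small sketch of mine over ${S_3}$, taking ${a}$ a transposition and ${b}$ a ${3}$-cycle: no ${x}$ in ${S_3}$ conjugates ${a}$ to ${b}$, since conjugation preserves order, and the squaring argument above shows that no extension of the group can help either.

```python
from itertools import permutations

# S3 as tuples: p[i] is the image of i under p.
S3 = list(permutations(range(3)))
E = (0, 1, 2)

def mul(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    """Inverse permutation."""
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

a = (1, 0, 2)                      # order 2: a*a = 1
b = (1, 2, 0)                      # order 3: b*b != 1

# x a x^{-1} = b has no solution, since conjugation preserves order.
hits = [x for x in S3 if mul(mul(x, a), inv(x)) == b]
print(hits)                        # []
```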

A Partial Fundamental Theorem

A pretty case where there always is a solution is given by the following theorem, now almost fifty years old, proved in 1962 by Frank Levin:

Theorem: Suppose that ${G}$ is a finite group, and let

$\displaystyle g_1 x g_2 x g_3 \dots g_m x g_{m+1} = 1$

be an equation in the variable ${x}$, where each ${g_i}$ is in ${G}$. Then, there is a finite group ${\widehat G}$ that contains ${G}$ as a subgroup, and in ${\widehat G}$ the equation has a solution.

Even better, the proof is quite constructive, and actually proves much more. Besides giving ${\widehat G}$ explicitly, it preserves some properties of ${G}$. For example, if ${G}$ is a solvable group, then so is ${\widehat G}$.

I will not have time to explain the proof, but it is interesting that the proof uses the wreath product of groups. Recall the wreath product of groups is very close to the famous zig-zag product of graphs; the wreath product is the older construction. See the great survey by Shlomo Hoory, Nathan Linial, and Avi Wigderson for an introduction to how such products are used in expander graphs and other wonderful things.

Note, Levin’s theorem does not allow the “bad” equation that has no solution. It does this by simply not allowing ${x^{-1}}$ anywhere. The following is fine:

$\displaystyle x^6ax^7bx = 1,$

since we can get positive powers of ${x}$ simply by repeating it, with identity constants in between. But we cannot get negative powers of ${x}$.
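As an illustration (a sketch of mine, with my own word encoding), a positive-power equation can still fail inside ${G}$ itself; Levin’s theorem only promises a solution in some finite extension ${\widehat G}$, which the wreath-product construction supplies but which we do not build here.

```python
from itertools import permutations

# S3 as tuples: p[i] is the image of i under p.
S3 = list(permutations(range(3)))
E = (0, 1, 2)

def mul(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def eval_word(word, x):
    """Evaluate a word over S3: each token is the symbol 'x' or a group element."""
    acc = E
    for tok in word:
        acc = mul(acc, x if tok == 'x' else tok)
    return acc

def solutions(word):
    """All x in S3 making the word equal 1."""
    return [x for x in S3 if eval_word(word, x) == E]

t = (1, 0, 2)                         # a transposition
r = (1, 2, 0)                         # a 3-cycle

print(solutions(['x', r, 'x']))       # x = r solves x r x = 1 inside S3
print(solutions(['x', t, 'x']))       # empty: S3 alone is not enough here
```

Both words use only positive powers of ${x}$, so Levin guarantees the second has a solution in some finite group containing ${S_3}$, even though ${S_3}$ itself has none.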

Toward The General Case

I am not an expert, not even close, on equations over groups. I do hope the above shows some of the issues that arise in even the most basic cases. I really need theorems that handle systems of equations and more complex situations.

There is one further general comment that I would like to make. There is a beautiful conjecture that is called the Kervaire-Laudenbach Conjecture. It attempts to handle inverses; as we have seen, this is not trivial.

The conjecture looks at the following equations:

$\displaystyle g_1 x^{\epsilon_1} g_2 x^{\epsilon_2} g_3 \dots g_m x^{\epsilon_m} g_{m+1} = 1$

where the ${\epsilon_i}$ are in ${\{-1,1\}}$. The equation is conjectured to be solvable, in some group extending ${G}$, provided

$\displaystyle \epsilon_1 + \epsilon_2 + \cdots + \epsilon_m \neq 0.$

Note that our earlier bad equation,

$\displaystyle xax^{-1}b^{-1} = 1,$

violates this constraint, since ${\epsilon_1 + \epsilon_2 = 1 + (-1) = 0}$.
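The condition is easy to mechanize. Here is a trivial sketch of mine (my own encoding: ${\pm 1}$ tokens for ${x^{\pm 1}}$, strings for group constants) computing the exponent sum:

```python
def exponent_sum(word):
    """Sum the exponents of x in a word; tokens are +1 for x, -1 for x^{-1},
    or a string naming a constant from the group."""
    return sum(tok for tok in word if tok in (+1, -1))

bad  = [+1, 'a', -1, 'b^{-1}']        # x a x^{-1} b^{-1}: sum is 0
good = [+1, 'a', +1, 'b']             # x a x b: sum is 2, Levin-style

print(exponent_sum(bad))              # 0 -> the conjecture promises nothing
print(exponent_sum(good))             # 2 -> conjectured solvable in an extension
```

Note that any word with only positive powers of ${x}$, the case covered by Levin’s theorem, automatically has a positive exponent sum.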

Universal Groups

Let’s call a group ${G}$ universal if Barrington’s theorem can be proved over this group. We know the following key insights:

1. If ${G}$ is a finite non-abelian simple group, then ${G}$ is universal.
2. If ${G}$ is nilpotent, then it is not universal.
3. If ${G}$ is solvable and universal, then ${ \mathsf{NC^1}}$ is contained in ${\mathsf{TC^0}. }$

Perhaps we should call such a group a Barrington group, but I hope universal is fine.
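To give a feel for why non-abelian structure computes, here is the standard commutator trick behind item 1, sketched by me in ${S_3}$ for a single AND gate (the function names are mine). The catch, and the heart of Barrington’s question, is that iterating the trick in a solvable group drops you one level down the derived series each time, while a non-abelian simple group has ${[G,G]=G}$ and never runs out.

```python
from itertools import permutations

# S3 as tuples: p[i] is the image of i under p.
S3 = list(permutations(range(3)))
E = (0, 1, 2)

def mul(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    """Inverse permutation."""
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def power(p, n):
    acc = E
    for _ in range(n):
        acc = mul(acc, p)
    return acc

u = (1, 0, 2)                      # a transposition
v = (1, 2, 0)                      # a 3-cycle; [u, v] != 1

def AND(p, q):
    """The commutator [u^p, v^q] is the identity unless p = q = 1."""
    up, vq = power(u, p), power(v, q)
    return mul(mul(up, vq), mul(inv(up), inv(vq)))

for p in (0, 1):
    for q in (0, 1):
        print(p, q, AND(p, q) != E)   # True only for p = q = 1
```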

The relationship between universal solvable groups and equations over groups is this: there are systems of equations such that if they have a solution over some solvable group, then there is a universal solvable group. I have many such families and will discuss them in detail another time. The general idea is to encode an “AND” gate by the equations. Here is an example of a system ${\cal S}$: Let ${x_1,x_2,\dots,x_n}$ be the variables, and let ${A \neq 1}$ be an element of a solvable group ${G}$. There are four equations:

$\displaystyle \begin{array}{rcl} x_1 x_2 \dots x_n &=& 1 \\ x_1 A x_2 A \dots x_k A x_{k+1} x_{k+2} \dots x_n &=& 1 \\ x_1 x_2 \dots x_m x_{m+1} A x_{m+2} A \dots x_n A &=& 1 \\ x_1 B_1 x_2 B_2 \dots x_n B_n &=& A, \end{array}$

where ${B_i}$ is equal to ${A}$, or ${A^2}$, based on the rule: if ${i}$ is in ${[1,\dots,m]}$ or ${[k+1,\dots,n]}$, then ${B_i = A}$; otherwise, it is ${A^2}$. Note, we assume that ${m < k}$, so the positions ${m+1,\dots,k}$ fall outside both intervals. If ${\cal S}$ can be solved over some solvable group, then that group is universal.

I probably made an error in the indices in the above system ${\cal S}$. The idea is this: the product ${x_1 \dots x_n}$ is ${1}$. If ${A}$ is placed in the first ${k}$ positions or in the last ${n-m}$ positions, the product is still ${1}$. However, if both are done, then the product is equal to ${A}$. The tricky part is working out the indices to reflect the overlap, since there the two ${A}$’s yield ${A^2}$.
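For what it is worth, here is a brute-force sketch of mine of the system ${\cal S}$, following my reading of the indices (the author warns they may be off) with the assumption taken as ${m < k}$, over the abelian group ${\mathbb{Z}_6}$. Abelian groups sit at the bottom of the solvable world, and there the second and third equations already force ${kA = 0}$ and ${(n-m)A = 0}$, so no solution can exist; the search confirms this.

```python
from itertools import product

# Z6 written additively; A is an element with A != 0.
MOD, A = 6, 2
n, k, m = 4, 2, 1                  # assuming m < k; the overlap is positions m+1..k

def B(i):
    """B_i = A except for 2A on the overlap positions m+1..k."""
    return A if (1 <= i <= m or k + 1 <= i <= n) else 2 * A

def satisfies(xs):
    e1 = sum(xs) % MOD == 0                            # x_1 ... x_n = 1
    e2 = (sum(xs) + k * A) % MOD == 0                  # A after each of x_1..x_k
    e3 = (sum(xs) + (n - m) * A) % MOD == 0            # A after each of x_{m+1}..x_n
    e4 = (sum(xs) + sum(B(i) for i in range(1, n + 1))) % MOD == A
    return e1 and e2 and e3 and e4

sols = [xs for xs in product(range(MOD), repeat=n) if satisfies(xs)]
print(sols)                        # [] -- no solution over this abelian group
```

Of course the interesting question is whether some non-abelian solvable group fares better; the comments below this post bear on that.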

Open Problems

The main issue, I believe, is to understand the theory of group equations better. I feel that this would help us in complexity theory a great deal. As noted it could even solve some of the main open questions in the area of low level complexity.

I would like to point out that there is very nice work on the complexity of solving equations over groups. See the paper of Mikael Goldmann and Alexander Russell for a number of results. For example, they prove that the problem of determining whether a (single) equation has a solution is NP-complete for every non-solvable group ${G}$. Perhaps I will discuss these and related results in the future.

November 3, 2010 11:00 pm

This reminds me of something I thought of after one of your previous posts about this subject. If I thought it through correctly, then this method is not going to work. Consider the more general equations

$x_1 y_1 \ldots x_n y_n = 1$

$x_1 A_1 y_1 \ldots x_n A_n y_n = 1$

$x_1 y_1 B_1 \ldots x_n y_n B_n = 1$

$x_1 A_1 y_1 B_1 \ldots x_n A_n y_n B_n = A'$

Now define

$C_1 = x_1 A_1 x_1^{-1}$

$D_1 = (x_1 y_1) B_1 (x_1 y_1)^{-1}$

$C_2 = (x_1 y_1 x_2) A_2 (x_1 y_1 x_2)^{-1}$

$D_2 = (x_1 y_1 x_2 y_2) B_2 (x_1 y_1 x_2 y_2)^{-1}$

$C_n = (x_1 y_1 \ldots x_n) A_n (x_1 y_1 \ldots x_n)^{-1}$

$D_n = (x_1 y_1 \ldots x_n y_n) B_n (x_1 y_1 \ldots x_n y_n)^{-1}$

Using this we can rewrite the equations as

$C_1 C_2 \ldots C_n = 1$

$D_1 D_2 \ldots D_n = 1$

$C_1 D_1 C_2 D_2 \ldots C_n D_n = A'$

From this it is clear that A’ is in the commutator subgroup of G.

Now if the $A_i$ and $B_i$ are constructed from powers of A’, then the $C_i$ and $D_i$ will themselves be in the commutator subgroup of G, and so A’ is in [G,G], and by iterating and using that G is solvable we eventually get that A’ must be 1.

More generally, if all $A_i$ and $B_i$ are in $G^{(n)}$ then A’ will be in $G^{(n+1)}$.

(Crossing fingers for all this LaTeX :D)
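The derived-series bookkeeping in this argument is easy to check by machine for small groups. A quick sketch (the helper names are my own), computing the derived series of ${S_3}$:

```python
from itertools import permutations

# S3 as tuples: p[i] is the image of i under p.
S3 = set(permutations(range(3)))
E = (0, 1, 2)

def mul(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    """Inverse permutation."""
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def closure(gens):
    """Subgroup generated by gens (the group is finite, so products suffice)."""
    H = {E}
    changed = True
    while changed:
        changed = False
        for a in list(H):
            for g in gens:
                p = mul(a, g)
                if p not in H:
                    H.add(p)
                    changed = True
    return H

def derived(H):
    """Commutator subgroup [H, H]."""
    return closure([mul(mul(a, b), mul(inv(a), inv(b))) for a in H for b in H])

G1 = derived(S3)                      # the 3-cycles: A3
G2 = derived(G1)                      # trivial: S3 is solvable, derived length 2
print(len(G1), len(G2))               # 3 1
```

So any element forced into every term of the derived series of a solvable group, as A' is in the comment above, must be trivial.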

November 4, 2010 2:59 pm

Ørjan Johansen

I will check but looks right…back to ideas

November 9, 2010 2:21 pm

Ørjan, it is tremendously rude to pretend to be alive when one is patently dead. Please cease this nonsense immediately.

November 3, 2010 11:50 pm

We can add that Denis Thérien himself and his students, in more recent work, have proved results on the even messier topic of equations over algebraic structures that aren’t even groups, namely monoids and groupoids. They give the Goldmann-Russell paper as major inspiration. We also haven’t even touched on Denis et al.’s classification of complexity results for these weaker structures.

November 4, 2010 7:34 am

http://mathoverflow.net/questions/41288/can-a-soluble-group-compute-or

After asking the question I realised such a system of equations does not exist for any soluble group. So one needs to come up with a more general setup. Unfortunately I suspect S is (equivalent to) a special case of my question.

Essentially, we are trying to devise a ‘reusable’ binary AND gate, that is, one where we can feed the output into another AND gate and so on to an unlimited depth. I suspect this is impossible to do with equations of this kind over a soluble group; the best we can hope for is an AND gate where the output appears lower down the derived series than at least one of the inputs.

November 4, 2010 11:11 pm

It looks like we came to the same conclusion in slightly different ways. I think that my argument above essentially generalizes your MathOverflow result to the case where each of the inputs also is a vector. This means using and mixing several different input representations won’t work. (I think false can always be assumed to be 1 since that is just adjusting by multiplying with a constant, which is absorbed by the $x_i$ and $y_i$.)

I had another idea, which this doesn’t immediately rule out, but which I couldn’t really get my head around, namely to let false and true be represented by several possible values in each spot, as long as they don’t overlap, at least for the final result. But if you have several representations for false in one spot then you cannot adjust them all to 1 and it all becomes a lot more complicated.

November 5, 2010 5:15 am

I think the same basic problem remains:

We can assume input (0,0) gives output the trivial element. In this case, if (1,0) gives a and (0,1) gives b, then (1,1) gives ab modulo M’, where M is a normal subgroup containing the inputs (such an M inevitably contains a and b). Perhaps there is a way to process both a and b as ‘false’ while ab is ‘true’. If there isn’t though, any subsequent step will give answers that differ only by an element of M’, and then only by an element of M” at the next stage, and so on.

A different question: what happens if we try to construct a ternary AND gate, or more generally an n-ary one? This is certainly possible, but the best known constructions (AFAIK) use an equation whose length is exponential in n.

November 5, 2010 8:34 am

Colin Reid:

Ouch, and here this morning I got to thinking if one could simplify this by letting the false elements form a subgroup (necessarily non-normal if you were to gain anything). Your note that ab needs to be treated as true while a and b are false seems to destroy that possibility.

I’m wondering what happens if you combine having more than one false/true element with using vectors for each input (obvious answer: much more complication).

November 4, 2010 4:11 pm

If {G} is solvable and universal, then { \mathsf{NC^1}} is contained in {\mathsf{TC^0}. }

should be something like:

If EVERY solvable {G} is universal, then { \mathsf{NC^1}} is contained in {\mathsf{TC^0}. }

November 4, 2010 4:24 pm

Gabriel,

NO. I think it suffices to have ONE such group. Besides, some solvable groups are known not to be universal.

November 5, 2010 8:49 am

I’m still not an expert, but I think I saw somewhere a comment by Barrington that solvable group elements can be multiplied in $ACC^0$, so doesn’t that mean $TC^0$ can be weakened to that?

Googling now seems to point to a theorem by Barrington and Thérien implying this.

November 7, 2010 2:46 pm

Ørjan,

I believe even more is true: one can use CC^0. But I was quoting the original paper.

November 5, 2010 8:55 am

One other thing I wanted to mention, I don’t think it’s just simple non-abelian groups that are universal, but all nonsolvable finite ones. Because you can always pass to the repeating term in the derived series, which has the necessary [H,H] = H property, and use only elements from that subgroup.

November 10, 2010 11:54 am

Dick is correct — multiplication of elements in any fixed solvable group is in CC^0, a class that had not yet been named at the time of my 1989 JCSS paper where I proved this.

On the subject of naming, Ryan Williams’ new paper uses “ACC” rather than “ACC^0”, remarking that since no one ever looks at “ACC^i” for positive i, the superscript is not needed. I named “ACC” in that 1989 paper, as an abbreviation for “AC^0 with counting”, but I switched to calling it “ACC^0” when I first saw the idea, because I like uniform naming schemes.

November 5, 2010 1:21 pm

There is a lot of exciting work regarding equations in groups (and sometimes also in semigroups) in “combinatorial group theory,” and equations in groups play an important role in this field and in related low-dimensional topology. For example (among many, many others), there is the famous Makanin-Razborov algorithm for solving equations over free groups. This was the starting point for Sela’s proof of Tarski’s conjecture asserting that all free groups with 2 or more generators are elementarily equivalent.