# Can Group Theory Save Complexity Theory?

*Can solvable groups be used in Barrington’s theorem?*

Eric Allender is one of the world’s experts on computational complexity. He is especially a master at the care and feeding of the complexity classes at the weaker end of the spectrum. These include classes like $\mathsf{AC}^0$ or $\mathsf{TC}^0$, among many others. Just because these classes seem to be weak does not mean they give out their secrets easily. Proving almost anything new about them seems well beyond all the techniques that are available to us.

Today I want to talk about a pretty result of Eric’s and how it connects to our last discussion on partial results.

I have never had the honor of working on a project with Eric, even though he was at Rutgers and I was nearby at Princeton for many years. We did however have an interesting situation arise where Eric and his colleagues wiped out Anastasios Viglas and me.

It was back in 1999, and for some reason the question of the computational cost of deciding the primality of a number occurred to both groups: one at Rutgers and one at Princeton. Proving lower bounds on general Turing machines is too hard, so both groups asked: could it be proved that testing whether a number is prime at least cannot be too easy? In particular, could we prove that primality does not lie in some low-level complexity class?

The answer from Rutgers was “yes;” the answer from Princeton was “maybe.” Eric, along with Michael Saks and Igor Shparlinski, proved that primality could not be in $\mathsf{AC}^0[p]$ for any prime $p$. Viglas and I proved a slightly weaker result, and worse, our result needed an assumption about the structure of primes. It was one of those assumptions that must be right, but is probably hard to prove.

Unfortunately for us, both papers were submitted to exactly the same conference: the 14th Annual IEEE Conference on Computational Complexity. Theirs got in—we did not. Clearly the right decision, but I always wondered what would have happened if we had submitted alone. Oh well.

**Allender On Permanent**

Now I will talk about the particular pretty result, an even more notable theorem of Eric’s from the late 1990s. The actual results proved in his paper are stronger, but for our purposes the following simple statement is sufficient:

Theorem: The permanent is not in uniform $\mathsf{TC}^0$.

Recall that $\mathsf{TC}^0$ is a constant-depth circuit model that allows arbitrary threshold gates. In the theorem the circuits must be *uniform*: this means roughly that there is a simpler-than-logspace computation that tells which wires connect to which gates. There is some care needed in getting the model just right, so please look at Allender’s paper for all the details.
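To make the model concrete, here is a tiny Python sketch of a depth-2 threshold circuit computing PARITY via the standard alternating-weights trick. This is my own toy illustration, not anything from Allender’s paper, and the function names are made up for the example:

```python
from itertools import product

def threshold(weights, inputs, t):
    # one threshold gate: fires iff the weighted sum reaches the threshold t
    return int(sum(w * x for w, x in zip(weights, inputs)) >= t)

def parity_tc0(bits):
    # Depth 2: the first layer computes T_k = [x_1 + ... + x_n >= k] for
    # k = 1..n; the output gate weights these +1, -1, +1, ... so the sum
    # telescopes to 1 exactly when the number of ones is odd.
    n = len(bits)
    layer1 = [threshold([1] * n, bits, k) for k in range(1, n + 1)]
    alternating = [(-1) ** k for k in range(n)]  # +1, -1, +1, ...
    return threshold(alternating, layer1, 1)

# sanity check against the direct definition of parity
assert all(parity_tc0(list(x)) == sum(x) % 2
           for x in product([0, 1], repeat=4))
```

The only point of the exercise is that constant depth plus threshold gates is surprisingly expressive; the uniformity condition that Allender’s theorem leverages does not show up in such a small example.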

The proof uses a number of facts from complexity theory, but shows that a contradiction would follow if the permanent were in the threshold class. A very pretty and clever argument. I think it is well known, but I do not think Eric gets as much credit as he should for this result. It is definitely one of the strongest results about the complexity of the permanent.

**A Result To Dream About**

I decided that after the last discussion on partial results I would put out a partial result. The goal is to state a conjecture—actually a family of conjectures—so that if one is proved, then we can prove the following theorem:

Theorem?: The complexity class $\mathsf{NC}^1$ is properly contained in the class $\mathsf{PP}$.

The high-level issue is whether there is a loophole in the work by Barrington and others, namely whether an efficient group-algebra simulation of Boolean gates **requires** a non-solvable group. The issue involves the so-called *commutator subgroup* $[H,H]$ of a subgroup $H$ of a group $G$. This is the group generated by the *commutators* $x^{-1}y^{-1}xy$ where $x, y \in H$. The commutators themselves need not form a group—the question is, what group do they generate, and is it all of $H$? Well, if $H$ is an abelian subgroup then $[H,H] = \{e\}$, the trivial identity subgroup, and we’ve figuratively “hit bottom.” If $[H,H]$ is not $\{e\}$ and not all of $H$, then one can iterate and ask the same question about $[H,H]$, and so on. A group is solvable provided you always hit bottom. Put another way, $G$ is *non-solvable* provided it has a nontrivial subgroup $H$ such that $[H,H] = H$. Barrington’s proof employs such a subgroup, indeed $H = A_5$, where $A_5$ is simple.
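The “hit bottom” iteration is easy to make concrete. Here is a small Python sketch of the derived-series test for solvability; the helper names are mine, and the brute-force closure is only sensible for tiny permutation groups:

```python
from itertools import product

def compose(p, q):
    # product p*q of permutations in one-line notation: (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generate(gens):
    # closure of a generating set; fine for tiny groups only
    n = len(gens[0])
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = compose(p, g)
                if q not in elems:
                    elems.add(q)
                    new.append(q)
        frontier = new
    return elems

def commutator_subgroup(elems):
    # [H,H]: the subgroup generated by all commutators x^{-1} y^{-1} x y
    comms = {compose(compose(inverse(x), inverse(y)), compose(x, y))
             for x, y in product(elems, repeat=2)}
    return generate(list(comms))

def is_solvable(gens):
    # walk the derived series; solvable iff we reach the trivial group
    H = generate(gens)
    while len(H) > 1:
        H2 = commutator_subgroup(H)
        if len(H2) == len(H):   # [H,H] = H: stuck above bottom
            return False
        H = H2
    return True

print(is_solvable([(1, 0, 2), (1, 2, 0)]))              # S_3: True
print(is_solvable([(1, 2, 0, 3, 4), (1, 2, 3, 4, 0)]))  # A_5: False
```

$S_3$ hits bottom in two steps, while $A_5$ is perfect: its commutator subgroup is all of $A_5$, which is exactly the non-solvability Barrington’s construction exploits.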

Of course, most of us probably believe that solvable groups cannot replace the simple groups in Barrington’s theorem, but that is an open problem. Suppose they can. Then the later work by Barrington and others on the solvable-group case would make $\mathsf{NC}^1$ equal to (a subclass of) $\mathsf{ACC}^0$. Allender’s theorem would then make $\mathsf{NC}^1$ properly contained in $\mathsf{\#P}$ (which is equivalent for our purposes to the language class $\mathsf{PP}$). The partial work has thus been to establish a connection between a family of existential questions about solvable groups and results that would then fit together to prove an open separation theorem.

Moreover, if you don’t believe this can happen, i.e. if you believe Barrington’s characterization is tight, then you can read the connection the other way: results in complexity theory imply the impossibility of some pure statements in group theory. Maybe you can refute the particular statements we’ll propose by direct algebra, but if they stay open then we gain a meaningful dialogue between complexity and algebra.


I will state one of the questions in the family. They all imply, as I stated, that $\mathsf{NC}^1$ is properly contained in $\mathsf{PP}$.

The following is the question. Some group theory notation is needed. If $x$ and $g$ are in a group, then

$$x^g = g^{-1}xg$$

is the conjugate of $x$ by $g$. Also $\langle x, y \rangle$ denotes the group generated by the elements $x$ and $y$. Finally, the order of an element $x$ in $G$ is denoted by $o(x)$.

Is there **some** solvable group $G$ with four elements $a, b, c, d$ in the group so that the following are true? Each of the elements is not the identity.

- Let $K$ be the subgroup $\langle a, b \rangle$. Then we require that some conjugate of $c$ and some conjugate of $d$ lie in $K$.
- Let $L$ be the subgroup $\langle c, d \rangle$. Then we require that some conjugate of $a$ and some conjugate of $b$ lie in $L$.
- And the following two numbers are relatively prime: $o(a)o(b)$ and $o(c)o(d)$.
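Since the conditions are finitely checkable in any fixed group, one can hunt for a witness by brute force. The sketch below is my own code, not part of the post’s argument; it assumes permutations in one-line notation and is only feasible for tiny groups. It tests every nontrivial quadruple in the solvable group $S_4$:

```python
from itertools import product
from math import gcd

def compose(p, q):
    # product p*q of permutations in one-line notation: (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generate(gens, n):
    # closure of a generating set; fine for tiny groups only
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = compose(p, g)
                if q not in elems:
                    elems.add(q)
                    new.append(q)
        frontier = new
    return elems

n = 4
S4 = generate([(1, 0, 2, 3), (1, 2, 3, 0)], n)  # S_4: solvable, order 24
e = tuple(range(n))

def order(p):
    q, k = p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

ords = {p: order(p) for p in S4}
conj = {p: {compose(compose(inverse(g), p), g) for g in S4} for p in S4}
nontrivial = [p for p in S4 if p != e]

hits = 0
for a, b, c, d in product(nontrivial, repeat=4):
    if gcd(ords[a] * ords[b], ords[c] * ords[d]) != 1:
        continue                         # clause 3 fails
    K = generate([a, b], n)              # K = <a,b>
    if not (conj[c] & K and conj[d] & K):
        continue                         # clause 1 fails
    L = generate([c, d], n)              # L = <c,d>
    if conj[a] & L and conj[b] & L:      # clause 2
        hits += 1
print(hits)  # prints 0: no quadruple in S_4 meets all three clauses
```

No quadruple survives, which is consistent with the refutation that emerges in the comments below.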

The last clause may force that $G$ must equal its commutator subgroup, which would violate $G$ being a solvable group. That would shoot down this member of the family, though we have others that might survive…

**Open Problems**

Prove the conjecture, or disprove it. If you do that then I will put out some even weaker conditions. Let me know.

Finally, I wish to thank Ken Regan for his special help with this post—as usual any mistakes are mine.



ITYM “2. Let L be the subgroup ⟨c,d⟩. Then we require some conjugate of a and some conjugate of b lie in L.”

(c is a conjugate of c and thus as written, 2 is always true.)

Andrew typo…thanks

Would SOLVE also imply PSPACE falls into the counting hierarchy, and thus you would actually separate Logspace from PP?

It seems to me that the statement about groups is easily false for abelian finite groups (which are of course solvable). Do I miss something?

Let G be a finite abelian group.

For all H and K subgroups of G, define HK:={hk | h is in H and k is in K}. It is a very basic fact of group theory that

|HK|=(|H||K|)/(|HnK|)

where by HnK I denote the intersection of H and K. (see for example a random link: http://www.math.admu.edu.ph/~banjo/Ma250/lesson5_250.pdf)

So suppose that, wlog, c is not the identity e.

Since any conjugate of c is c itself, by 1., c belongs to ⟨a,b⟩. But G is abelian, hence ⟨a,b⟩ = ⟨a⟩⟨b⟩. So o(c) = |⟨c⟩| divides |⟨a⟩⟨b⟩| by Lagrange’s theorem. But

|⟨a⟩⟨b⟩| = (|⟨a⟩||⟨b⟩|)/(|⟨a⟩n⟨b⟩|) = (o(a)o(b))/(|⟨a⟩n⟨b⟩|)

Therefore, o(c) divides o(a)o(b), for o(c) divides a divisor of o(a)o(b). But since o(c)>1, it cannot be that o(a)o(b) and o(c)o(d) are relatively prime.

Federico,

The question is: does SOME such group exist?

It looks like the second half of my post is not displayed correctly. I use the notation #a# for the sub group generated by a.

“So suppose that, wlog, c is not the identity e.

Since any conjugate of c is c itself, by 1., c belongs to #a,b#. But G is abelian, hence #a,b#=#a##b#. So o(c)=|#c#| divides |#a##b#| by Lagrange theorem. But

|#a##b#|=(|#a#||#b#|)/(|#a#n#b#|)=(o(a)o(b))/(|#a#n#b#|)

Therefore, o(c) divides o(a)o(b), for o(c) divides a divisor of o(a)o(b). But since o(c)>1, it cannot be that o(a)o(b) and o(c)o(d) are relatively prime.”

I am probably missing something, but aren’t conditions 1 and 2 vacuous, since the element $1c1^{-1}$ is a conjugate of c and is also automatically contained in any subgroup containing c? (Or was there a condition somewhere that I missed which prevents this?)

I am sorry, I seem to have had trouble making the conjecture clear. I fixed it again.

The point is a,b,c,d must be non-trivial elements. The last condition is really that the orders of the elements a,b must be relatively prime to the orders of c,d. And the elements must all be non-trivial.

The key to using this conjecture is that a,b generate a subgroup that has the elements c,d in it. But the order of c, for example, must be different from that of a and b. Even stronger: if the order of a was 6 and b was 12, then c could have order 15, but not order 36.

I will probably restate the conjecture again, but hope this helps.

This seems to be a stronger version of 1./2. (since you require c,d in ⟨a,b⟩ instead of only conjugates of them) and a weaker version of 3., since you seem to require that (o(a),o(b)) does not divide o(c), but not even that they are relatively prime.

Can you post all your conditions in their weakest form?

if group theory doesn’t save it, will complexity theory eventually become a dead subject?

Suppose we have an example G. Consider the derived series of G: G_0 = G, G_{i+1} = [G_i,G_i]. As G is soluble, there is some i such that G_i contains all of a,b,c and d, but G_{i+1} does not; without loss of generality, G_{i+1} does not contain a. Let H = G_i, and let K = G_{i+1}.

Now ⟨a,b⟩ contains a G-conjugate c^g of c, so if we look in the quotient group H/K, ⟨aK,bK⟩ contains c^gK. The orders of aK and bK divide o(a) and o(b) respectively, and aK and bK commute, so |⟨aK,bK⟩| divides o(a)o(b). On the other hand, the order of c^gK divides o(c), which is coprime to o(a)o(b). So by Lagrange’s theorem o(c^gK)=1, that is, c^g is contained in K. As K is normal in G, we conclude that c is contained in K. By a similar argument, d is contained in K. Hence ⟨c,d⟩ is a subgroup of K. Now ⟨c,d⟩ contains a G-conjugate of a, so in particular K contains a G-conjugate of a. Normality of K ensures that K contains every G-conjugate of a, in particular a is contained in K. Contradiction.

No such group exists. I tried to type out a proof but it doesn’t seem to have appeared. Basically, you go down the derived series and apply Federico’s argument at each level to show that a,b,c and d all have to be in the next term of the derived series, until you hit the bottom.

This basically means you show that [H,H] = H. Since H need not be abelian, I do not see how to apply Federico’s argument, though.

It is clear that the conditions imply that neither K nor L can be abelian (so neither can H or G be). So if [H,H] is not H, you can assume that, wlog, [a,c] = e. But this does not necessarily imply that every conjugate of c is c itself, since, for instance, c and d do not commute.

Colin has sent me his proof and it seems right. So we have to go to more complex versions of the group theory conjectures. Will get a version of his argument out soon.

Could you also give some motivation for the restrictions (including why G needs to be solvable)? Maybe there is some path from A to C that only slightly misses B ;)

I’ll try again:

Suppose we have an example G. Consider the derived series of G: G_0 = G, thereafter G_{i+1} = [G_i,G_i]. Since G is soluble, there is some i such that G_i contains all of a,b,c and d but G_{i+1} does not contain all of them; wlog G_{i+1} does not contain a (using the symmetry in the conditions). Write G_i = H and G_{i+1} = K.

By condition 2, ⟨c,d⟩ contains a conjugate a^g of a, so as a subgroup of H/K, ⟨cK,dK⟩ contains a^gK. The orders of cK and dK divide o(c) and o(d) respectively, and cK and dK commute, so |⟨cK,dK⟩| divides o(c)o(d), and hence o(a^gK) divides o(c)o(d) by Lagrange’s theorem. But o(a^gK) also divides o(a^g), which is equal to o(a) and hence coprime to o(c)o(d) by condition 3. Thus o(a^gK)=1, that is, a^g is contained in K. K is normal in G, so a is contained in K. Contradiction.



Prof Dick Lipton,

If you can pass through the “prejudice” of a”regular” academic background, then you should care to read my paper, under 40 pages. (No I don’t have a background resembling Deolalikar) —

I have used group enumeration technique to count all the perfect matchings in a bipartite graph– that is I have a (claimed) proof that #P=FP.

http://arxiv.org/pdf/0812.1385v11

As it happens nobody has been willing to even read my paper– I submitted to TOC, prof Babai, the chief editor said no editor was willing to read.

But a group of undergrad students at UOR did a good work on finding a bug. I have almost fixed the bug in : http://arxiv.org/pdf/0906.5112v1

I have yet another version– far more clear and simpler proof just about to be posted on arxiv.org

However, the basic framework for perfect matching enumeration remains the same as in the very first version of the paper.

Here is the abstract:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The Collapse of the Polynomial Hierarchy: NP = P (A Summary)

Authors: Javaid Aslam

(Submitted on 7 Dec 2008 (v1), last revised 7 Apr 2009 (this version, v11))

Abstract: We present a novel extension to the permutation group enumeration technique which is well known to have polynomial time algorithms. This extended technique allows each perfect matching in a bipartite graph on 2n nodes to be expressed as a unique directed path in a directed acyclic graph on O(n^3) nodes.

Thus it transforms the perfect matching counting problem into a directed path counting problem for directed acyclic graphs.

We further show how this technique can be used for solving a class of #P-complete counting problems by NC-algorithms, where the solution space of the associated search problems spans a symmetric group. Two examples of the natural candidates in this class are Perfect Matching and Hamiltonian Circuit problems.

The sequential time complexity and the parallel (NC) processor complexity of counting all the solutions to these two problems are shown to be O(n^{19}\log(n)) and O(n^{19}) respectively.

And thus we prove a result even more surprising than NP = P, that is, #P = FP, where FP is the class of functions, f: \{0, 1\}* –> N, computable in polynomial time on a deterministic model of computation such as a deterministic Turing machine or a RAM. It is well established that NP \subseteq P^{#P}, and hence the Polynomial Time Hierarchy collapses to P.

There’s a pretty easy way to prove that you’re right. Implement your algorithm and run it against some very difficult problem. You don’t need the entire power of #P to factor the largest RSA challenge pseudo-primes, because factorization is contained in NP, for example. Should be a very simple task. That is, unless you claim the constant is enormous or something…

When you’ve got the factorization (or a witness of some other very challenging task), I will read your paper. (Note: Not Dick Lipton, but someone you’d want to read your paper.)

RSA Challenge & Integer factorization—the bottleneck: The polynomial time complexity turns out to be very un-intuitive and elusive.

The time complexity, T(n) = O(n^15) = O(10^13) YEARS!! [ for n=100]

[For n=10, T(n) = 3.65 days]

This calculation is based on the following parameters:

Number of clock cycles in the smallest step = 1

(This is the absolute lower bound on the number of clock cycles, ignoring any constants)

Clock frequency = 5.6 GHz, CPU: IBM 5.6GHZ, Z196 with 92 instructions/sec.– released Sept 2010. [1 year = O(10^7) seconds].

Although, w/o knowing an actual reduction (Factor to an NP-hard problem) it would be difficult for me to estimate the complexity, O(n^15) is perhaps the most optimistic time bound I can expect in my algorithm framework.

Clearly, RSA factorization must have used some very clever heuristics, and it factors only a fixed integer—i.e., they can’t possibly have an algorithm for any arbitrary integer.

Parallelism could help—if we could someday integrate millions of CPUs on one chip or component, and have an NC-algorithm, with a low log-exponent, one could achieve almost a linear speedup (O(log N/N) with the number, N, of CPUs. But eventually the range of n remains bounded.

BTW, I do not know any polynomial reduction from “Factor to NP-hard problem”. I have seen some non-constructive proofs saying how it must be do-able in polynomial time, but no real algorithm for the transformation. Could you provide me a reference for that “pretty easy” technique?
