
Can Group Theory Save Complexity Theory?

September 28, 2010


Can solvable groups be used in Barrington’s theorem?

Eric Allender is one of the world’s experts on computational complexity. He is especially a master at the care and feeding of the complexity classes that are at the weaker end of the spectrum. These include classes like {\mathsf{ACC^0}} or {\mathsf{TC^0}} among many others. Just because these classes seem to be weak does not mean they give out their secrets easily. Proving almost anything new about them seems well beyond all the techniques that are available to us.

Today I want to talk about a pretty result of Eric’s and how it connects to our last discussion on partial results.

I have never had the honor of working on a project with Eric, even though he was at Rutgers and I was nearby at Princeton for many years. We did however have an interesting situation arise where Eric and his colleagues wiped out Anastasios Viglas and me.

It was back in 1999, and for some reason the question of the computational cost of deciding the primality of a number occurred to both groups: one at Rutgers and one at Princeton. Proving lower bounds on general Turing machines is too hard, so both groups asked: could we at least prove that testing whether a number is prime cannot be too easy? In particular, could we prove that primality does not lie in some low-level complexity class?

The answer from Rutgers was “yes;” the answer from Princeton was “maybe.” Eric, along with Michael Saks and Igor Shparlinski, proved that primality could not be in {\mathsf{ACC^0}[p]} for any prime {p}. Viglas and I proved a slightly weaker result, and worse, our result needed an assumption about the structure of primes. It was one of those assumptions that must be right, but is probably hard to prove.

Unfortunately for us, both papers were submitted to exactly the same conference: the 14{^{th}} Annual IEEE Conference on Computational Complexity. Theirs got in—we did not. Clearly the right decision, but I always wondered what would have happened if we had submitted alone. Oh well.

Allender On Permanent

Now I will talk about that pretty result, an even more notable theorem of Eric’s from the late 1990s. The actual results proved in his paper are stronger, but for our purposes the following simple statement is sufficient:

Theorem: The permanent is not in uniform {\mathsf{TC^0}}.

Recall that {\mathsf{TC^0}} is a constant-depth circuit model that allows arbitrary threshold gates. In the theorem the circuits must be uniform: this means, roughly, that there is a simpler-than-logspace computation that tells which wires connect to which gates. There is some care needed in getting the model just right, so please look at Allender’s paper for all the details.

The proof uses a number of facts from complexity theory, and shows that a contradiction would follow if the permanent were in the threshold class. A very pretty and clever argument. I think it is well known, but I do not think Eric gets as much credit as he should for this result. It is definitely one of the strong results about the complexity of the permanent.

A Result To Dream About

I decided that after the last discussion on partial results I would put out a partial result. The goal is to state a conjecture—actually a family of conjectures—so that if one is proved, then we can prove the following theorem:

Theorem?: The complexity class {\mathsf{NC^1}} is properly contained in the class {\mathsf{\#P}}.

The high-level issue is whether there is a loophole in the conclusion, from the work of Barrington and others, that an efficient group-algebra simulation of Boolean gates requires a non-solvable group. The issue involves the so-called commutator subgroup {[H,H]} of a subgroup {H} of a group {G}. This is the group generated by the commutators {[a,b] = aba^{-1}b^{-1}} where {a,b \in H}. The commutators themselves need not form a group—the question is, what group {I} do they generate, and is it all of {H}? Well, if {H} is an abelian subgroup then {I = \{e\}}, the trivial identity subgroup, and we’ve figuratively “hit bottom.” If {I} is not {\{e\}} and not all of {H}, then one can iterate and ask the same question about {[I,I]}, and so on. A group {G} is solvable provided you always hit bottom. Put another way, {G} is non-solvable provided it has a subgroup {H \neq \{e\}} such that {[H,H] = H}. Barrington’s proof employs such a subgroup {H}, indeed one where {H} is simple.
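
The derived series is easy to experiment with on tiny permutation groups. Here is a minimal sketch in Python, my own illustration rather than anything from Barrington’s or Allender’s papers; the representation (permutations as tuples) and the helper names are assumptions made just for the demo. It checks that {S_3} hits bottom while the simple group {A_5} satisfies {[A_5,A_5] = A_5}:

# A minimal sketch: permutations of {0,...,n-1} as tuples, where p[i] is the image of i.

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def generate(gens, n):
    # Naive closure of a generating set under composition (fine for tiny groups).
    group, frontier = {tuple(range(n))}, set(gens)
    while frontier:
        new = {compose(g, h) for g in frontier for h in group | set(gens)} - group
        group |= frontier | new
        frontier = new
    return group

def commutator_subgroup(group, n):
    # The subgroup generated by all commutators [a,b] = a b a^-1 b^-1.
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return generate(comms, n)

def is_solvable(gens, n):
    # Walk the derived series H, [H,H], [[H,H],[H,H]], ... until it stalls or hits {e}.
    H = generate(gens, n)
    while len(H) > 1:
        K = commutator_subgroup(H, n)
        if len(K) == len(H):   # [H,H] = H with H != {e}: not solvable
            return False
        H = K
    return True                # the series hit bottom at the trivial group

# S_3 = <(0 1), (0 1 2)> is solvable; A_5 = <(0 1 2), (0 1 2 3 4)> is simple, hence not.
print(is_solvable([(1, 0, 2), (1, 2, 0)], 3))              # True
print(is_solvable([(1, 2, 0, 3, 4), (1, 2, 3, 4, 0)], 5))  # False

Nothing here is specific to these two groups; any small permutation group can be plugged in to watch its derived series either terminate at {\{e\}} or stall.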

Of course, most of us probably believe that solvable groups cannot replace the simple groups in Barrington’s theorem, but that is an open problem. Suppose they can. Then the later work by Barrington and others on the solvable-group case would make {\mathsf{NC^1}} equal to (a subclass of) {\mathsf{TC^0}}. Allender’s theorem would then make {\mathsf{NC^1}} properly contained in {\mathsf{\#P}} (which is equivalent for our purposes to the language class {\mathsf{PP}}). The partial work has thus been to establish a connection between a family {{\cal SOLVE}} of existential questions about solvable groups and results that would then fit together to prove an open separation theorem.

Moreover, if you don’t believe {\mathsf{NC^1} = \mathsf{TC^0}}, i.e., if you believe Barrington’s characterization is tight, then you can read the connection the other way: results in complexity theory imply the impossibility of some pure statements in group theory. Maybe you can refute the particular statements we’ll propose by direct algebra, but if they stay open, then we gain a meaningful dialogue between complexity and algebra.

{\cal SOLVE}

I will state one of the questions in the family {\cal SOLVE}. As I said above, each of them implies that

\displaystyle  \mathsf{NC^1} \subsetneq \mathsf{\#P}.

The following is the question. Some group theory notation is needed. If {x,y} are in a group, then

\displaystyle  yxy^{-1}

is the conjugate of {x} by {y}. Also { \langle a,b \rangle} denotes the group generated by the elements {a} and {b}. Finally, the order of an element {x} in {G} is denoted by {o(x)}.
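
For example, in the symmetric group {S_3}, taking {x = (1\,2)} and {y = (1\,2\,3)} gives {yxy^{-1} = (2\,3)}, {\langle x,y \rangle = S_3}, and {o(y) = 3}.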

{\bullet } Is there some solvable group {G} with four elements {a,b,c,d}, none of them the identity, so that the following are true?

  1. Let {K} be the subgroup {\langle a,b \rangle}. Then we require that some conjugate of {c} and some conjugate of {d} lie in {K}.
  2. Let {L} be the subgroup {\langle c,d \rangle}. Then we require that some conjugate of {a} and some conjugate of {b} lie in {L}.
  3. And the following two numbers are relatively prime: {o(a)o(b)} and {o(c)o(d)}.

The last clause may force {H = \langle a,b,c,d\rangle} to equal its own commutator subgroup, which would contradict {G} being a solvable group. That would shoot down this member of {{\cal SOLVE}}, though we have others that might survive…
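
To make the three conditions concrete, here is a small hypothetical checker in the same style as the earlier sketch, again my own code and not part of the post; the helper names, the choice of {S_4}, and the particular quadruple below are purely illustrative, and whether any solvable group can pass such a test is exactly what this member of {{\cal SOLVE}} asks:

# Hypothetical checker for the three conditions, on permutations written as tuples.
from math import gcd
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def generate(gens, n):
    # Naive closure of a generating set under composition.
    group, frontier = {tuple(range(n))}, set(gens)
    while frontier:
        new = {compose(g, h) for g in frontier for h in group | set(gens)} - group
        group |= frontier | new
        frontier = new
    return group

def order(x, n):
    # Order o(x) of a permutation x.
    e, y, k = tuple(range(n)), x, 1
    while y != e:
        y, k = compose(y, x), k + 1
    return k

def conjugates(x, G):
    # All conjugates y x y^-1 of x by elements y of G.
    return {compose(compose(y, x), inverse(y)) for y in G}

def meets_solve_conditions(G, n, a, b, c, d):
    e = tuple(range(n))
    if e in (a, b, c, d):
        return False                        # all four elements must be non-identity
    K = generate([a, b], n)                 # K = <a, b>
    L = generate([c, d], n)                 # L = <c, d>
    cond1 = conjugates(c, G) & K and conjugates(d, G) & K
    cond2 = conjugates(a, G) & L and conjugates(b, G) & L
    cond3 = gcd(order(a, n) * order(b, n), order(c, n) * order(d, n)) == 1
    return bool(cond1 and cond2 and cond3)

# Demo in the solvable group S_4: two transpositions against two 3-cycles.
n = 4
S4 = set(permutations(range(n)))
a, b = (1, 0, 2, 3), (2, 1, 0, 3)           # (0 1) and (0 2), orders 2 and 2
c, d = (1, 2, 0, 3), (0, 2, 3, 1)           # (0 1 2) and (1 2 3), orders 3 and 3
print(meets_solve_conditions(S4, n, a, b, c, d))
# False: condition 3 holds (gcd(4, 9) = 1) and conjugates of c and d do land in <a, b>,
# but <c, d> = A_4 contains no conjugate of a transposition, so condition 2 fails.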

Open Problems

Prove {\cal SOLVE}. Or disprove it. If you do that, then I will put out some even weaker conditions. Let me know.

Finally, I wish to thank Ken Regan for his special help with this post—as usual any mistakes are mine.

[fixed typo]

Comments
  1. Andrew Hunter
    September 28, 2010 2:30 pm

    ITYM “2. Let L be the subgroup ⟨c,d⟩. Then we require some conjugate of a and some conjugate of b lie in L.”

    (c is a conjugate of c and thus as written, 2 is always true.)

    • rjlipton
      September 28, 2010 2:46 pm

      Andrew typo…thanks

  2. Lance
    September 28, 2010 3:42 pm

    Would SOLVE also imply PSPACE falls into the counting hierarchy, and thus you would actually separate Logspace from PP?

  3. Federico
    September 28, 2010 4:41 pm

    It seems to me that the statement about groups is easily false for abelian finite groups (which are of course solvable). Am I missing something?

    Let G be a finite abelian group.

    For all H and K subgroups of G, define HK:={hk | h is in H and k is in K}. It is a very basic fact of group theory that

    |HK|=(|H||K|)/(|HnK|)

    where by HnK I denote the intersection of H and K. (see for example a random link: http://www.math.admu.edu.ph/~banjo/Ma250/lesson5_250.pdf)

    So suppose that, wlog, c is not the identity e.

    Since any conjugate of c is c itself, by 1., c belongs to ⟨a,b⟩. But G is abelian, hence ⟨a,b⟩ = ⟨a⟩⟨b⟩. So o(c) = |⟨c⟩| divides |⟨a⟩⟨b⟩| by Lagrange’s theorem. But

    |⟨a⟩⟨b⟩| = (|⟨a⟩||⟨b⟩|)/(|⟨a⟩n⟨b⟩|) = (o(a)o(b))/(|⟨a⟩n⟨b⟩|)

    Therefore, o(c) divides o(a)o(b), for o(c) divides a divisor of o(a)o(b). But since o(c)>1, it cannot be that o(a)o(b) and o(c)o(d) are relatively prime.

    • rjlipton
      September 28, 2010 5:07 pm

      Federico,

      The question is: does SOME such group exist?

  4. Mikola
    September 28, 2010 5:10 pm

    I am probably missing something, but aren’t conditions 1 and 2 vacuous, since the element $c1c^{-1}$ is a conjugate of c and is also automatically contained in any subgroup? (Or was there a condition somewhere that I missed which prevents this?)

  5. Anonymous
    September 28, 2010 6:07 pm

    if group theory doesn’t save it, will complexity theory eventually become a dead subject?

  6. Colin Reid
    September 29, 2010 5:09 am

    No such group exists. I tried to type out a proof but it doesn’t seem to have appeared. Basically, you go down the derived series and apply Federico’s argument at each level to show that a,b,c and d all have to be in the next term of the derived series, until you hit the bottom.

      • Carsten Milkau
      September 29, 2010 8:33 am

        This basically means you show that [H,H] = H. Since H need not be abelian, I do not see how to apply Federico’s argument, though.

        It is clear that the conditions imply that neither K nor L can be abelian (so neither can H nor G). So if [H,H] is not H, you can assume that, wlog, [a,c]=0 (:={e}). But this does not necessarily imply that every conjugate of c is c itself, since, for instance, c and d do not commute.

    • rjlipton
      September 29, 2010 8:34 am

      Colin has sent me his proof and it seems right. So I have to go to more complex versions of the group theory conjectures. I will get a version of his argument out soon.

        • Carsten Milkau
        September 29, 2010 9:11 am

          Could you also give some motivation for the restrictions (including why G needs to be solvable)? Maybe there is some path from A to C that only slightly misses B ;)

  7. Colin Reid
    September 29, 2010 8:17 am

    I’ll try again:

    Suppose we have an example G. Consider the derived series of G: G_0 = G, thereafter G_{i+1} = [G_i,G_i]. Since G is soluble, there is some i such that G_i contains all of a,b,c and d but G_{i+1} does not contain all of them; wlog G_{i+1} does not contain a (using the symmetry in the conditions). Write G_i = H and G_{i+1} = K.

    By condition 2, ⟨c,d⟩ contains a conjugate a^g of a, so as a subgroup of H/K, ⟨cK,dK⟩ contains a^gK. The orders of cK and dK divide o(c) and o(d) respectively, and cK and dK commute, so |⟨cK,dK⟩| divides o(c)o(d), and hence o(a^gK) divides o(c)o(d) by Lagrange’s theorem. But o(a^gK) also divides o(a^g), which is equal to o(a) and hence coprime to o(c)o(d) by condition 3. Thus o(a^gK)=1, that is, a^g is contained in K. K is normal in G, so a is contained in K. Contradiction.

  8. Javaid Aslam
    October 1, 2010 6:52 pm

    Prof Dick Lipton,
    If you can pass through the “prejudice” of a “regular” academic background, then you should care to read my paper, under 40 pages. (No, I don’t have a background resembling Deolalikar.) —

    I have used a group enumeration technique to count all the perfect matchings in a bipartite graph–that is, I have a (claimed) proof that #P=FP.
    http://arxiv.org/pdf/0812.1385v11

    As it happens, nobody has been willing to even read my paper. I submitted to TOC; Prof. Babai, the chief editor, said no editor was willing to read it.
    But a group of undergrad students at UOR did good work in finding a bug. I have almost fixed the bug in: http://arxiv.org/pdf/0906.5112v1

    I have yet another version, with a far clearer and simpler proof, just about to be posted on arxiv.org.
    However, the basic framework for perfect matching enumeration remains the same as in the very first version of the paper.
    Here is the abstract:
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    The Collapse of the Polynomial Hierarchy: NP = P (A Summary)
    Authors: Javaid Aslam
    (Submitted on 7 Dec 2008 (v1), last revised 7 Apr 2009 (this version, v11))

    Abstract: We present a novel extension to the permutation group enumeration technique which is well known to have polynomial time algorithms. This extended technique allows each perfect matching in a bipartite graph on 2n nodes to be expressed as a unique directed path in a directed acyclic graph on O(n^3) nodes.
    Thus it transforms the perfect matching counting problem into a directed path counting problem for directed acyclic graphs.
    We further show how this technique can be used for solving a class of #P-complete counting problems by NC-algorithms, where the solution space of the associated search problems spans a symmetric group. Two examples of the natural candidates in this class are Perfect Matching and Hamiltonian Circuit problems.
    The sequential time complexity and the parallel (NC) processor complexity of counting all the solutions to these two problems are shown to be O(n^{19}\log(n)) and O(n^{19}) respectively.
    And thus we prove a result even more surprising than NP = P, that is, #P = FP, where FP is the class of functions, f: \{0, 1\}* –> N, computable in polynomial time on a deterministic model of computation such as a deterministic Turing machine or a RAM. It is well established that NP \subseteq P^{#P}, and hence the Polynomial Time Hierarchy collapses to P.

    • Anonymous
      October 1, 2010 9:42 pm

      There’s a pretty easy way to prove that you’re right. Implement your algorithm and run it against some very difficult problem. You don’t need the entire power of #P to factor the largest RSA challenge semiprimes, because factorization is contained in NP, for example. Should be a very simple task. That is, unless you claim the constant is enormous or something…

      When you’ve got the factorization (or a witness of some other very challenging task), I will read your paper. (Note: Not Dick Lipton, but someone you’d want to read your paper.)

      • Javaid Aslam
        October 11, 2010 2:24 pm

        RSA Challenge & Integer factorization—the bottleneck: The polynomial time complexity turns out to be very unintuitive and elusive.

        The time complexity, T(n) = O(n^15) = O(10^13) YEARS!! [ for n=100]
        [For n=10, T(n) = 3.65 days]
        This calculation is based on the following parameters:
        Number of clock cycles in the smallest step = 1
        (This is the absolute lower bound on the number of clock cycles, ignoring any constants)
        Clock frequency = 5.6 GHz, CPU: IBM 5.6GHZ, Z196 with 92 instructions/sec.– released Sept 2010. [1 year = O(10^7) seconds].

        Although, w/o knowing an actual reduction (Factor to an NP-hard problem) it would be difficult for me to estimate the complexity, O(n^15) is perhaps the most optimistic time bound I can expect in my algorithm framework.
        Clearly, RSA factorization must have used some very clever heuristics, and it factors only a fixed integer—i.e., they can’t possibly have an algorithm for any arbitrary integer.

        Parallelism could help—if we could someday integrate millions of CPUs on one chip or component, and have an NC-algorithm with a low log-exponent, one could achieve almost a linear speedup (O(log N/N)) with the number, N, of CPUs. But eventually the range of n remains bounded.

        BTW, I do not know any polynomial reduction from “Factor to NP-hard problem”. I have seen some non-constructive proofs saying how it must be do-able in polynomial time, but no real algorithm for the transformation. Could you provide me a reference for that “pretty easy” technique?
