Looking at some of its components


Laci Babai’s first talk a week ago Tuesday is now a webcast here. There is also a great detailed description of his talk by Jeremy Kun, including background on the problem and how the proof builds on Eugene Luks’s approach.

Today we talk some more about ingredients of Laci’s algorithm.

He has one more scheduled talk this coming Tuesday on its key vehicle of progress: Either the current graph (and associated groups) in the recursion can be split up for divide-and-conquer, or it embeds a special kind of graph named for Selmer Johnson that can be handled specially. This strikes us as following another Chicago idea—the Geometric Complexity Theory programme of Ketan Mulmuley and Milind Sohoni—of looking for objects that obstruct the progress of algorithms; this post by Joshua Grochow discusses an example for matrix multiplication.

Rather than try to anticipate these further details and Laci’s forthcoming preprint, we thought we would give a high-level discussion of some of the ideas and elements that we recognize as making the algorithm work. We could be way off, but we believe these ideas are key to his breakthrough result. In any event they are useful insights that we hope you enjoy.

## All Aboard

Laci’s talk is old style, a chalk talk. Most talks these days are PowerPoint-style, but for details and complex mathematics, there is nothing like seeing a master at work using chalk. The virtues of such a talk are many. It slows down the rate of information to the audience. This is good. Slow is good. Flipping rapidly through slides can be hard to follow. Watching Laci write out in his own handwriting:

Graph Isomorphism (GI)

is wonderful. You can follow it, internalize it, and perhaps understand it.

The second virtue is the use of many boards. This is wonderful. It gives the audience a larger visible state, so they are more likely to follow and understand.

## Overview

The key ideas of the algorithm for GI are really classic ones from design of algorithms. The genius is getting them all to work together. The ideas break into two types: those that are general methods from computer science and those that are special to the GI problem.

## Domain-General Points

${\bullet }$ Small Steps. One of the most important principles in mathematics, and also in the design of complex algorithms, is that we rarely can just write down the answer. That is, if we wish to construct something, we rarely can write it down in closed form. The quadratic formula for the roots of ${ax^{2} + bx +c = 0}$ is a classic exception:

$\displaystyle x = \frac{-b \pm \sqrt{b^{2}-4ac}}{2a}.$

Of course there are no such formulas for polynomials of degree five or higher, nor for most non-polynomial equations. Enter Newton’s method.
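As a toy illustration of building the answer in small steps, here is a minimal Python sketch of Newton’s method on a hypothetical degree-5 polynomial (the polynomial and starting point are our choices for illustration, not anything from the algorithm):

```python
# Newton's method: no closed form for the root, so we approach it step by step.
def f(x):
    return x**5 - 3*x + 1

def f_prime(x):
    return 5*x**4 - 3

def newton(x, tol=1e-12, max_steps=100):
    # Each iteration is one "small step" toward the answer.
    for _ in range(max_steps):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(1.0)  # converges to the root lying between 1 and 1.5
```

Nothing here is special to Newton’s method per se; the point is that the answer emerges from iterated small corrections rather than a formula.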

The idea of building the answer in small steps is the cornerstone of most important algorithms. Linear programming uses simplex or interior point methods, matching uses path-augmentation, and so on.

${\bullet }$ Recursion. The reason the small step idea is so powerful is that it plays well with running time. If the small steps can be bounded by a nice equation, then the algorithm in question will run fast. For Laci’s case the key is

$\displaystyle T(n) \le q(n)\,T(0.9n),$

where ${q(n)}$ is a quasi-polynomial factor. This type of recurrence yields a quasi-polynomial running time. If we had a constant ${c}$ in place of ${q(n)}$, then we would get a factor ${c^{O(\log n)}}$, which is polynomial; this typifies how one gets polynomial-time algorithms by divide-and-conquer. Here ${q(n)}$ is not constant, but quasi-polynomial terms are closed under raising to powers of ${\log}$, so quasi-polynomial is the time we get. The recursion can be cut off when the problem size is polylogarithmic.
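To see where the quasi-polynomial bound comes from, one can unroll the recurrence down to the polylog cutoff; writing ${k}$ for the number of levels, a sketch is:

$\displaystyle T(n) \;\le\; q(n)\,q(0.9n)\cdots q(0.9^{k}n)\,T(\text{polylog}) \;\le\; q(n)^{k+1}, \qquad k = O(\log n).$

If ${q(n) = 2^{(\log n)^{c}}}$, this gives ${T(n) \le 2^{O((\log n)^{c+1})}}$, which is again quasi-polynomial.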

${\bullet }$ This-or-That. As Laci says, “split-or-Johnson.” This is the hardest element to convey simply but may pay the highest dividends if we can recognize when it can be used. Here is an example: For every ${\epsilon > 0}$ there exists ${\delta > 0}$ such that for every ${n}$-vertex graph ${G}$:

Either ${G}$ has at least ${\delta n^3}$ triangles, or one can remove ${\epsilon n^2}$ edges to leave no triangles.

This enables a randomized algorithm to distinguish graphs that are triangle-free from those that are ${\epsilon}$-far from being triangle-free: it can try (say) ${3/\delta}$ random triples of vertices and say “probably triangle-free” if none is a triangle. The randomized primality test of Gary Miller and Michael Rabin can also be phrased as an either-or in which the “or” branch finds a factor. To be sure, Laci’s algorithm isn’t randomized.
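A minimal Python sketch of such a tester, with a hypothetical trial count standing in for ${3/\delta}$ (edges are stored as frozensets of endpoints, our own toy encoding):

```python
import random
from itertools import combinations

def probably_triangle_free(edges, n, trials):
    # Sample random triples; a triangle is hard evidence of non-freeness.
    for _ in range(trials):
        u, v, w = random.sample(range(n), 3)
        if (frozenset((u, v)) in edges and
                frozenset((v, w)) in edges and
                frozenset((u, w)) in edges):
            return False   # definitely not triangle-free
    return True            # "probably triangle-free"

square = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}  # no triangles
k6 = {frozenset(e) for e in combinations(range(6), 2)}             # many triangles
assert probably_triangle_free(square, 4, 100)
assert not probably_triangle_free(k6, 6, 100)
```

The one-sided guarantee is the point: a “no” answer comes with a witness triangle, while a “yes” is only probabilistic, backed by the either-or statement above.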

## Domain-Specific Elements

${\bullet }$ Graph Marking. Perhaps the main problem with GI has always been that looking locally at vertices of some graph ${G}$ may yield no information about the graph. Locally all the neighborhoods of vertices may look alike. This is terrible from the point of view of trying to tell one vertex from another, which is critical to be able to solve the GI problem.

But there is an expensive way to change this. We simply pick a set of vertices of ${G}$ and mark them in a unique manner. Now using these we can start to break up the structure of the graph and start to label vertices differently. Obviously a vertex near one of these marked vertices is different from one that is farther away. This is a critical “trick” that allows one to make progress on GI for the graph.
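A toy Python illustration of how one mark breaks local symmetry (the 6-cycle is our hypothetical example, not a graph from the algorithm): in a cycle every vertex looks locally alike, but after marking vertex 0, BFS distance from the mark labels vertices differently.

```python
from collections import deque

def distances_from(mark, adj):
    # Standard BFS from the marked vertex.
    dist = {mark: 0}
    queue = deque([mark])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

n = 6
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
dist = distances_from(0, cycle)
# The previously indistinguishable vertices now split into classes
# {0}, {1,5}, {2,4}, {3} by distance from the mark.
assert sorted(dist.values()) == [0, 1, 1, 2, 2, 3]
```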

There is a cost. If we pick ${t}$ such special vertices in a graph, then we must try all ways to pick these in the other graph. That is expensive. If the graph has ${n}$ vertices then it costs ${n \choose t}$. So to avoid taking more than quasi-polynomial time, we must keep ${t}$ of size at most poly-logarithmic. It also needs ${t}$ of order at least ${(\log n)^3}$ to be effective, which is a reason the technique cannot simply be improved to run in polynomial time.
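A quick numeric sanity check of this cost bound, with a hypothetical ${n}$ and a polylog-sized ${t}$ of our choosing:

```python
from math import comb

# Hypothetical sizes: n vertices, t marked vertices with t ~ (log2 n)^2.
n = 2**20
t = 400

# The cost of trying all markings is binom(n, t) <= n^t = 2^(t * log2 n).
# Here the exponent is t * log2(n) = 400 * 20 = 8000 ~ (log2 n)^3:
# quasi-polynomial in n, but far from polynomial.
assert comb(n, t) <= n**t
```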

${\bullet }$ Groups Are Nice. One of the most useful properties of groups is the “subgroup size trick.” In most structures, including graphs, a substructure can be very big. Suppose that ${G}$ is a graph on ${n}$ vertices. It has subgraphs with ${n-1}$ vertices: just remove any vertex. This cannot happen with groups. If ${H}$ is a group with ${n}$ elements, then its largest proper subgroup has at most ${n/2}$ elements. This is a simple consequence of the famous theorem of Joseph-Louis Lagrange that for any finite group ${H}$, the size of every subgroup of ${H}$ divides the size of ${H}$. If you keep taking subgroups, each proper in the last, then you quickly make great progress in reducing the size.
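A sketch of the resulting progress bound: since each proper subgroup has index at least ${2}$, any strictly descending chain of subgroups has length at most ${\log_2 |H|}$. In Python (the worst-case model of halving at every step is our simplification):

```python
from math import factorial

def max_chain_length(order):
    # By Lagrange, each proper subgroup has at most half the elements,
    # so model the worst case as halving the order at every step.
    k = 0
    while order > 1:
        order //= 2
        k += 1
    return k

# |S_4| = 24, and the chain S_4 > A_4 > V_4 > C_2 > {e} realizes length 4.
assert max_chain_length(factorial(4)) == 4
assert max_chain_length(2**10) == 10
```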

The worst case is when the factor is ${2}$, as with the alternating group ${A_m}$ inside the symmetric group ${S_m}$. Babai calls a homomorphism ${\phi}$ from a group ${G}$ into ${S_m}$ giant if its image is either ${A_m}$ or all of ${S_m}$. An element ${x}$ of the set on which ${G}$ acts is unaffected if ${\phi}$ restricted to the stabilizer subgroup ${G_x}$ of permutations that fix ${x}$ is still giant. Subject to conditions about the setting that we don’t yet fully understand, his key new group-theoretic theorem is:

Unless ${m}$ is tiny, the restriction of a giant homomorphism to the intersection of ${G_u}$ over all unaffected elements ${u}$ is still giant.

This looks like it should be either easily true or easily false, but Babai’s precise statement depends on the classification of finite simple groups.

The key inner point of the algorithm is to try subsets ${A}$ of the elements where ${|A|}$ is polylog but not tiny so the theorem works for ${S_{|A|}}$. The theorem helps decide in quasipolynomial time whether the resulting homomorphism into ${S_{|A|}}$ is giant. This distinguishing power in turn is used to make progress.

## Open Problems

What is the full pattern of progress in the algorithm, with “split-or-Johnson” evidently directing the outermost loop and the above elements the inner loops? This may become clearer as more details unfold.
