# Babai’s Result: Still a Breakthrough

*Even after today’s retraction of quasi-polynomial time for graph isomorphism*


László Babai is famous for many things, and has made many seminal contributions to complexity theory. Last year he claimed that Graph Isomorphism (GI) is in quasi-polynomial time.

Today Laci posted a retraction of this claim, conceding that the proof has a flaw in the timing analysis, and Ken and I want to make a comment on what is up. **Update 1/10:** He has posted a 1/9 update reinstating the claim of quasi-polynomial time with a revised algorithm. As we’ve noted, he is currently speaking at Georgia Tech, and we hope to have more information soon.

Laci credits Harald Helfgott with finding the bug after “spending months studying the paper in full detail.” Helfgott’s effort and those by some others have also confirmed the mechanism of Laci’s algorithm and the group-theoretic analysis involved. Only the runtime analysis was wrong.

Helfgott is a number theorist whose 2003 thesis at Princeton was supervised by Henry Iwaniec with input from Peter Sarnak. Two years ago we discussed his claimed proof of the Weak Goldbach Conjecture, which is now widely accepted.

## The Claim

In December 2015, Laci posted to ArXiv an 89-page paper whose title claimed that GI can be solved in quasi-polynomial time. Recall that means that the algorithm runs in time $e^{(\log n)^{c}}$ for some constant $c$. This is an important time bound that lies above polynomial time, but seems to be the right time bound for many problems. For example, *group* isomorphism has long been known to be in quasi-polynomial time. But the case of graphs is much more complex, and this was the reason that Babai's claimed result was so exciting. We covered it here and here plus a followup about the string isomorphism problems that were employed.

He also chose to give a series of talks on his result. Some details of the talks were reported by Jeremy Kun here.

## The Amendment

Retracting a claim is one of the hardest things that any researcher can do. It is especially hard to say when to stop looking for a quick-fix and make an announcement. It may not help Laci feel any better, but we note that Andrew Wiles’s original proof of Fermat’s Last Theorem was also incorrect and took 15 months to fix. With help from Richard Taylor he repaired his proof and all is well. We wish Laci the same outcome—and we hope it takes less time.

In particular, his algorithm still runs faster than $e^{n^{\epsilon}}$ for any $\epsilon > 0$ you care to name. For comparison, for more than three decades before this paper, the best worst-case time bound was essentially $e^{O(\sqrt{n \log n})}$, due to Eugene Luks in 1983. The new bound in full is

$$e^{e^{c\sqrt{\log n}}}$$

for some fixed $c > 0$ that will emerge in the revised proof.

The important term is the $e^{c\sqrt{\log n}}$ in the exponent. The function $e^{e^{\sqrt{\log n}}}$ is exponential in $n^{o(1)}$, since $e^{\sqrt{\log n}} = n^{1/\sqrt{\log n}}$. We previously encountered a recursion involving $\sqrt{\log n}$ in the running time of space-conserving algorithms for undirected graph connectivity (see this paper) before Omer Reingold broke through by getting the space down to $O(\log n)$ and (hence) the time down to polynomial. So there is some precedent for improving it.
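To get a rough quantitative feel for these bounds, here is a small Python sketch. It is our own illustration, not anything from the paper: all hidden constants are set to 1 (the parameter `c` is an assumption), and it compares the natural logarithms of a quasi-polynomial bound, the amended bound, a Luks-style bound, and plain exponential time.

```python
import math

def log_quasipoly(n, c=2):
    # log of e^{(log n)^c}: a quasi-polynomial running time
    return math.log(n) ** c

def log_amended(n, c=1.0):
    # log of e^{e^{c sqrt(log n)}}: the shape of the amended bound
    # (the constant c here is purely illustrative)
    return math.exp(c * math.sqrt(math.log(n)))

def log_luks(n):
    # log of e^{sqrt(n log n)}: a Luks-1983-style bound, constants omitted
    return math.sqrt(n * math.log(n))

def log_exponential(n):
    # log of e^n
    return n

for n in (10**3, 10**6, 10**9):
    print(n, log_quasipoly(n), log_amended(n), log_luks(n), log_exponential(n))
```

Note that for concrete values of $n$ reachable in practice, the amended bound's logarithm can even sit below the quasi-polynomial one with these toy constants; the asymptotic separation only kicks in for astronomically large $n$.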

## Some Intermediate Thoughts

As things stand, however, GI remains in the “extended neighborhood” of exponential time. Here is how to define that concept: Consider numerical functions given by formulas built using the operations $+$, $\cdot$, $\exp$, and $\log$. Assign each formula a *level* by the following rules:

- The identity function $f(n) = n$ has level $0$.
- If $f$ has level $a$ and $g$ has level $b \leq a$ then $f+g$ and $f \cdot g$ have level $a$.
- If $f$ has level $a$ then $e^{f}$ has level $a+1$.
- If $f$ has level $a$ then $\log f$ has level $a-1$.

Note that if $f$ has level $a$ then so does the power $f^{k}$ for any fixed $k$ because $f^{k} = f \cdot f \cdots f$ ($k$ times). The functions of level $0$ include not only all the polynomials but also all quasi-polynomial functions and ones such as $e^{(\log n)^{\log\log n}}$, which is higher than quasi-polynomial since its exponent outgrows every fixed power of $\log n$.
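Rules 1–4 are mechanical enough to compute by machine. Here is a minimal Python sketch of the level calculus, under our own toy formalization (not anything canonical): the identity sits at level 0, and constants are treated as level $-\infty$ so that rule 2 ignores them in sums and products.

```python
# Represent a function formula as a nested tuple and compute its level
# under rules 1-4: the identity has level 0, + and * take the max of
# their arguments' levels, exp adds 1, and log subtracts 1.

NEG_INF = float("-inf")

def level(expr):
    if expr == "n":                    # rule 1: the identity function
        return 0
    if isinstance(expr, (int, float)):
        return NEG_INF                 # constants never raise a level
    op, *args = expr
    if op in ("+", "*"):               # rule 2
        return max(level(a) for a in args)
    if op == "exp":                    # rule 3
        return level(args[0]) + 1
    if op == "log":                    # rule 4
        return level(args[0]) - 1
    raise ValueError(f"unknown operation {op!r}")

# n^3 = n * n * n has level 0, like all polynomials
poly = ("*", "n", "n", "n")
# quasi-polynomial e^{(log n)^2} also has level 0
qp = ("exp", ("*", ("log", "n"), ("log", "n")))
# the amended GI bound e^{e^{c sqrt(log n)}}, writing
# sqrt(log n) as e^{(1/2) log log n}
sqrt_log = ("exp", ("*", 0.5, ("log", ("log", "n"))))
amended = ("exp", ("exp", ("*", 1.0, sqrt_log)))

print(level(poly))     # 0
print(level(qp))       # 0
print(level(amended))  # 1, the same level as exponential time e^n
```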

The amended bound on GI, however, belongs to level $1$, the same level as exponential time $e^{n}$, which is what we mean by its staying in the extended neighborhood of exponential time. This is the limit on regarding the amended algorithm as “sub-exponential.”

It also makes us wonder about why it is so difficult to find natural problems with *intermediate* running times. We can define this notion by expanding the notion of “level” with a new rule for functions that are sufficiently well behaved:

- If $h = g \circ f$ and all but one of $f$, $g$, $h$ have well-defined levels $a$, $b$, and/or $c$, respectively, then the other level is well-defined and satisfies $c = a + b$.

Rule 5 subsumes rules 3 and 4 given that $e^{x}$ has level $1$ and $\log x$ has level $-1$. A special case is that when $g = f$ and $f$ has level $a$, then $f \circ f$ has level $2a$.

We wonder when and where rule 5 might break down, but we note that careful application of rule 2 for multiplication when expanding a power makes it survive the fact that $n$, $n^{2}$, $n^{3}$, and so on all have the same level. It enables defining functions of intermediate levels $\ell$ where $0 < \ell < 1$.
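Under our reading of rule 5, levels simply add under composition, and that additivity is exactly where fractional levels come from. A tiny Python sketch of the arithmetic (again our own illustration, under the assumption that the composed functions are well behaved enough for rule 5 to apply):

```python
# Rule 5, as we read it: level(g o f) = level(g) + level(f).
# With exp at level 1 and log at level -1, rules 3 and 4 fall out,
# and a half-exponential function h with h(h(n)) = e^n must sit at
# level 1/2, since level(h) + level(h) must equal 1.

def compose_level(level_g, level_f):
    return level_g + level_f

EXP, LOG, IDENTITY = 1, -1, 0

# rule 5 recovers rules 3 and 4:
assert compose_level(EXP, 0) == 1   # e^f lifts a level-0 function to level 1
assert compose_level(LOG, 1) == 0   # log of a level-1 function drops to level 0

# solving level(h) + level(h) == EXP gives the half-exponential level
half_exp_level = EXP / 2
assert compose_level(half_exp_level, half_exp_level) == EXP
print(half_exp_level)  # 0.5
```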

Can the GI algorithm be improved to a level $\ell < 1$?

We note one prominent instance of level $\frac{1}{2}$ in lower bounds: Alexander Razborov and Steven Rudich proved unconditionally in their famous “Natural Proofs” paper that no natural proof can show a level higher than $\frac{1}{2}$ for the discrete logarithm problem.

## Open Problems

The obvious open problems are dual. Is the amended result fully correct? And can the original quasi-polynomial time be restored in the near future, or at least some intermediate level achieved? We hope so.

[fixed discussion of terms related to $e^{e^{\sqrt{\log n}}}$, added to the intro an **update about the claim being reinstated**]

Note: the function e^(e^(sqrt(log n))) is exponential in n^(o(1)), not e^(sqrt(log n)) as you wrote

Thanks, Ryan. I wonder if the amended “latex {e^{e^{\sqrt{\log n}}}}” is the first ever case of four consecutive braces in paragraph mode on this blog. I also needed to change something in the following sentence—I had short-circuited my original intent.

On “Only the runtime analysis was wrong.” – not to be pedantic, but this is not completely correct, in the most technical of senses. In order to make the algorithm work in time $e^{e^{O(\sqrt{\log n \log \log n})}}$, Babai had to change one of the internal parameters and add an iteration.

Thanks. We meant that the original runtime analysis was wrong and that “the mechanism of” the algorithm was confirmed.

The notion of levels came up in discussion of the result in Nov. 2015 and I think it stems from Hardy-Littlewood’s proof that the log/exp functions are totally ordered by little-oh up to Theta equivalence. The “Rule 5” may be going out on a limb, however.

Is this definition of complexity levels novel? I feel like this kind of coarse-grained approach to complexity would be quite useful for judging how significant an algorithmic improvement is.

Another example of where the same sort of bound appears is Dvir and Gopi’s construction of 2-server PIRs: https://arxiv.org/abs/1407.6692. The length of the 2-query LDC obtained is $$\exp(\exp(\sqrt{\log n \log \log n}))$$.

And of course, also the same bound for the best 3-query binary LDC length: https://www.cs.cmu.edu/~odonnell/hits09/efremenko-subexponential-ldcs.pdf

😳 ❗ ⭐ there are some interesting (“meta”) angles here. there are many incorrect papers posted to arxiv which are generally/ simply ignored by the wider community, although there is a helpful/ useful page of tracking of P vs NP claims by Woeginger. why would so much effort for analysis be put on this one? its a bold claim but there are many other bold claims.

a simple answer is that a research community functions somewhat like a social graph where some individuals have both high reputation and accomplishments, generally correlated. Babai has high credentials/ bkg in this area so he attracts peer review by other high reputation/ credentialed individuals in this case Helfgott. however, these social graphs are largely/ to some degree invisible outside the communities. so, thanks for shedding some light on it.

it would seem that Babais clear writing helped in this effort. another recent case is the abc conjecture work by Mochizuki which has undergone years of analysis by top experts of the community with no conclusive results so far, and arguably this is somewhat related to Mochizukis obscure style.

another angle is that papers seem to be getting more complex in a sort of arms race. mochizuki has written hundreds of pages. a 60 page proof in complexity theory is also something of an outlier. unf this is probably a trend.

Very interesting points. But why do you write with no capitals and with weird abbreviations (bkg=background?). Perhaps this is also one reason why the majority of researchers don’t follow people outside academia: they simply don’t write clearly enough.

Very interesting points. Why did I put the nice paper On the Group and Color Isomorphism Problems by François Le Gall and David J. Rosenbaum on the “read later” stack, despite the fact that I would probably have been able to read it, and despite knowing that they treat exactly the questions that bothered me too after gaining a basic understanding of László Babai’s result? Why did I spend so much time reading László’s paper and watching videos of his lectures, despite the fact that I didn’t even expect to understand his proof in detail?

It was the math I hoped to learn from it. I wanted to learn about Johnson graphs, Johnson groups, Johnson schemes, Johnson ???, because it seemed like an important concept that would improve my understanding of graphs, permutation groups, hypergraphs (schemes), ??? And I wanted to learn more about Weisfeiler-Leman, because it was apparently the one combinatorial approach to rule all other combinatorial approaches. And I did learn everything I wanted to learn about Johnson … and I did learn a lot of new stuff about Weisfeiler-Leman. I still don’t know everything about Weisfeiler-Leman I wanted to learn, but I now have at least a solid base.

I also did learn something unexpected, namely what a professional reaction to email inquiries about details of a paper can look like: just a short initial reply that you received the mail and will look at it in detail later, but already thanking the inquirer for his interest in your paper. (+ later follow up.) This is one thing that authors of the “many incorrect papers posted to arxiv” have much more trouble with, if I may say so. I have to admit that even I myself fail badly in this respect. This means that the “great response I initially want to write” typically never gets written, and after two or three weeks of silence, I then write some “low quality” reply just to get it off my plate.

Where in the natural proofs paper do we have level $\frac12$?

It comes from the remark after the definition of “half exponential” in section 4 of the paper, then noting that levels > 1/2 minimally satisfy it.

I predict this complexity will be brought below quasipoly in under a year.

**Well below** quasipoly.

It seems that your prediction may already be correct. 🙂

Link: http://people.cs.uchicago.edu/~laci/update.html

Excellent post. Laci Babai’s amended result is still one of the greatest breakthroughs in computational complexity in decades. It is remarkable to note that the development regarding running time has no bearing on the many great ideas involved in the proof and, in particular, the central place of Johnson graphs is absolutely mind-boggling. Understanding and verifying the proof is important on its own and may lead to further progress towards better algorithms for GI, and of course, I admire Harald Helfgott for his careful examination of the proof in connection with his Bourbaki report.

As for the growth function involved, I encountered some similar functions in the study of the diameter of graphs of d-dimensional polytopes with 2d facets. (Usually the number of facets is a different parameter n, but to simplify the description I let n=2d.) I had an upper bound of roughly $2^{d}$ and then I found an improvement to roughly $2^{\sqrt{d}}$. I fantasized that the next improvement will be to $2^{d^{1/3}}$ and so on. (All level-1 functions.) However, the next step was to the quasi-polynomial bound $n^{\log d + 2}$, which is where things have stood since the early 90s. For the related question on (randomized) pivot rules for LP, the best running time since the early 90s is $e^{O(\sqrt{d \log d})}$. An improvement to $e^{o(\sqrt{d})}$ and even to $e^{O(d^{1/3})}$ would be fantastic. (Of course, it is known that LP is in P via other algorithms.)

Well, it was way quicker than Fermat’s.

http://people.cs.uchicago.edu/~laci/update.html