# Beating A Forty Year Old Result: Hamilton Cycles

*A new exponential algorithm for finding a Hamilton cycle in a graph*

Andreas Björklund is a complexity theorist who has many interesting results on the complexity of hard problems. He is a master at algorithms for hard problems, such as computing the permanent and counting the number of solutions to #P-complete problems.

Today I will discuss his recent paper that will soon appear at FOCS 2010. I think it is a beautiful result, with a clever proof. It solves a decades-old problem.

I also think that Björklund’s result is an example of a trend among theory results. Most researchers believe that NP-complete problems not only lack polynomial-time algorithms, but require exponential time in the worst case. Hence there is greater attention to algorithms with running time

$$O(c^n),$$

where $c > 1$ is a constant and $n$ is a parameter of the problem. Note that usually $n$ is not just the number of bits required to represent the problem. An example is SAT itself: there the parameter is the number of variables. The conjecture, in this case, that the best algorithm is exponential in $n$ is called the Exponential Time Hypothesis. I must add that I find this conjecture very hard to believe. Yet all algorithms to date for SAT do indeed take exponential time—perhaps I am wrong.
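To make the parameterization concrete, here is a minimal sketch (mine, purely illustrative, not from the post) of the brute-force SAT algorithm whose running time is $2^n$ in the number of variables $n$, no matter how long the formula itself is; the function name and the DIMACS-style clause encoding are my own choices.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """Try all 2^n truth assignments. A clause is a tuple of nonzero ints
    in the DIMACS style: literal v means variable v is true, -v means false."""
    for bits in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False
```

For instance, the formula $(x_1 \vee x_2) \wedge (\neg x_1 \vee x_2) \wedge (x_1 \vee \neg x_2)$, encoded as `[(1, 2), (-1, 2), (1, -2)]`, is satisfied by setting both variables true.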

**The Result**

A *Hamiltonian cycle* is a cycle in an undirected graph that visits each vertex exactly once and also returns to the starting vertex. A graph is said to be Hamiltonian if it contains a Hamiltonian cycle.

The obvious algorithm is to try all cycles: this would take at least $n!$ time. One principle I have always told my students is:

No “real” problem’s best algorithm runs in $n!$ time.

I have many times seen initial algorithms that take this time, but there always seems to be a way to get a better bound. The classic result, with the much better bound, is due to Michael Held and Richard Karp in 1970:

**Theorem:** There is a deterministic algorithm that takes an undirected graph on $n$ vertices as input and decides if the graph is Hamiltonian in $O^*(2^n)$ time.

The key insight of their famous result is to make clever use of dynamic programming. The induction essentially keeps track not of ordered sequences of vertices visited, but of sets of vertices visited. This can be used to reduce $n!$ to $O^*(2^n)$.
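To see the set-based dynamic program in action, here is a short bitmask sketch of the idea (my own code, not Held and Karp's): each subset of vertices is a bitmask, and for each subset we remember only the possible endpoints of a path from vertex 0 covering exactly that subset.

```python
def held_karp_hamiltonian(n, adj):
    """Held-Karp style decision in O*(2^n) time: adj[v] is the neighbor set
    of vertex v. end[mask] = set of vertices v such that some path starting
    at vertex 0 visits exactly the vertices in `mask` and ends at v."""
    end = [set() for _ in range(1 << n)]
    end[1] = {0}                      # the trivial path: just vertex 0
    for mask in range(1 << n):
        if not (mask & 1):            # every path must contain vertex 0
            continue
        for v in end[mask]:
            for w in adj[v]:
                if not mask & (1 << w):
                    end[mask | (1 << w)].add(w)
    full = (1 << n) - 1
    # a Hamilton cycle exists iff some spanning path from 0 ends next to 0
    return any(v in adj[0] for v in end[full] if v != 0)
```

On the 4-cycle (adjacency `{0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}`) this returns true; on the path 0-1-2-3 it returns false. The point of the theorem is visible in the code: the table is indexed by $2^n$ sets rather than $n!$ orderings.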

Björklund’s new result is a major improvement over this result—the first in forty years.

**Theorem:** There is a Monte Carlo algorithm detecting whether an undirected graph on $n$ vertices is Hamiltonian or not running in $O^*(1.657^n)$ time, with false positives and false negatives occurring with probability exponentially small in $n$.

The major point is that his algorithm still runs in exponential time, but the base of the exponent is now $1.657$ rather than $2$. As I stated earlier, I believe that we will see many more results like this; results that improve the constant on an exponential algorithm.

**An Overview of The Proof**

Ryan Williams was kind enough to supply a summary of the proof. I have made some changes, so any errors are all mine. As usual look at the paper for the details.

But first, here is my summary of Ryan’s summary:

- Replace each $1$ in the graph’s adjacency matrix by a different variable, except that symmetric positions get the same variable.
- Then notice that the determinant of this matrix is the same as the permanent over any finite field of characteristic $2$. This follows since $-1 = 1$ in such a field.
- This permanent is, of course, a very complex polynomial with an exponential number of terms. However, it is easy to tell by evaluation at a random point whether or not the polynomial is trivial, that is, whether or not it has any terms at all.
- The claim we would like to make, but cannot, is that this polynomial is trivial if and only if there is **no** Hamilton cycle. Of course this would imply a randomized polynomial time algorithm, so it is too much to hope for.
- The claim is therefore false: the polynomial can be trivial for the wrong reasons. The fix to this is the most clever part of the proof. The idea is to add extra labels to the variables to keep the polynomial from being trivial for the wrong reason. The cost of this last step is exponential. But, it is not too exponential.
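The determinant-equals-permanent observation above is easy to sanity-check by brute force. This little script (mine, purely illustrative) computes both quantities by their exponential Leibniz expansions and confirms they agree modulo 2, which is exactly agreement over a field of characteristic two.

```python
from itertools import permutations
import random

def perm_sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def perm_and_det(M):
    """Permanent and determinant by the (exponential) Leibniz expansions."""
    n = len(M)
    per = det = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        per += prod
        det += perm_sign(p) * prod
    return per, det

# The two differ only in the signs, and -1 = 1 in characteristic two,
# so they always agree modulo 2.
random.seed(1)
for _ in range(50):
    M = [[random.randrange(10) for _ in range(4)] for _ in range(4)]
    per, det = perm_and_det(M)
    assert (per - det) % 2 == 0
```

For example, on the matrix `[[1, 2], [3, 4]]` the permanent is $1\cdot4 + 2\cdot3 = 10$ and the determinant is $-2$: different integers, but equal mod 2.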

Here is Ryan’s summary of the proof. We will first start with a broken polynomial time algorithm for Hamiltonian cycle, then over a series of steps we will try to fix it.

Let $G = (V, E)$ be an undirected graph on vertices $1, \dots, n$. Define a matrix $M$ over variables as follows.

Set $M(i,j) = M(j,i) = x_{ij}$ when $i, j > 1$ and $\{i,j\} \in E$. Let $M(1,j) = y_j$ when $\{1,j\} \in E$, and let $M(j,1) = z_j$ when $\{j,1\} \in E$. All other entries are $0$.

That is, the matrix is similar to an adjacency matrix, and it is symmetric except for the first row and first column. Suppose we plug in random nonzero values for the variables in $M$, drawn from a field of characteristic two such as $GF(2^k)$, and evaluate the permanent of the resulting matrix. If the permanent is zero, say “no Hamilton cycle.” If nonzero, say “Hamilton cycle.”

This is a polynomial time algorithm, since permanent = determinant over characteristic two. And, as the paper states, the permanent sums over all oriented cycle covers in the graph, and for each cover we have the product of all $M(u,v)$ where $(u,v)$ is an arc in the cycle cover. That is, it enumerates all cycle covers along with all possible orientations of the cycles in the cover. One might expect that, over characteristic two, the “bad” cycle covers might all cancel, and so we only get a nonzero evaluation with high probability (whp) if there’s a Hamiltonian cycle. Note that if there *is* a Hamiltonian cycle, then we *do* get a nonzero evaluation whp, because for each Hamiltonian cycle we have two orientations of it which contribute an expression of the form

$$y_u \cdot P \cdot z_v + y_v \cdot P \cdot z_u$$

to the sum, where $P$ is the product of the variables on all other arcs of the cycle, and $u$ and $v$ are the two neighbors of vertex $1$ on the cycle. By the Schwartz-Zippel lemma, the sum of these will evaluate to nonzero whp, regardless of what the rest of the polynomial looks like (or by another result).

However the algorithm fails, for one reason: these directed cycle covers can have “2-cycles” which go from some vertex $u$ to $v$, then back to $u$. If the cycle cover is a collection of 2-cycles plus a cycle involving vertex 1, then when you “re-orient” one of these 2-cycles as in Lemma 1 of the paper, you get the *same* cycle cover back, not a different one. Hence you cannot “pair up” the cycle covers that have these 2-cycles so that their corresponding monomials all cancel out modulo 2. All other cycle covers which are not Hamiltonian actually *do* cancel in the evaluation, as the proof of Lemma 1 shows.
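As a concrete illustration, here is a small implementation of this broken test (my own sketch; I use $GF(2^8)$ with the AES reduction polynomial as the field, whereas the paper would use larger fields of characteristic two, and all function names are mine). On the 4-cycle it correctly answers “Hamiltonian”; on a 4-vertex path, the only surviving cycle cover is a pair of 2-cycles, so it wrongly answers “Hamiltonian” too, which is exactly the failure just described.

```python
import random

def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_inv(a):
    """Inverse of a nonzero element via a^(2^8 - 2) = a^254."""
    r, base, e = 1, a, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

def gf_det(m):
    """Determinant over GF(2^8) by Gaussian elimination. In characteristic
    two, row swaps do not change the sign, and determinant = permanent."""
    m = [row[:] for row in m]
    n = len(m)
    det = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c]), None)
        if piv is None:
            return 0
        m[c], m[piv] = m[piv], m[c]
        det = gf_mul(det, m[c][c])
        inv = gf_inv(m[c][c])
        for r in range(c + 1, n):
            if m[r][c]:
                f = gf_mul(m[r][c], inv)
                for k in range(c, n):
                    m[r][k] ^= gf_mul(f, m[c][k])  # subtraction = XOR
    return det

def broken_ham_test(n, edges, trials=20):
    """The 'broken' randomized test: build the matrix that is symmetric
    except in row/column 1, plug in random nonzero field values, and check
    whether the permanent (= determinant) is ever nonzero."""
    for _ in range(trials):
        M = [[0] * n for _ in range(n)]
        for u, v in edges:
            u, v = u - 1, v - 1
            if 0 in (u, v):          # arcs touching vertex 1: two fresh values
                M[u][v] = random.randrange(1, 256)
                M[v][u] = random.randrange(1, 256)
            else:                    # other edges: one shared value
                M[u][v] = M[v][u] = random.randrange(1, 256)
        if gf_det(M):
            return True
    return False
```

Trying it on small graphs: the 4-cycle `[(1,2),(2,3),(3,4),(4,1)]` tests as Hamiltonian, and the star `[(1,2),(1,3),(1,4)]` correctly tests as non-Hamiltonian since it has no cycle cover at all, but the path `[(1,2),(2,3),(3,4)]` is a false positive: its cycle cover by the 2-cycles $\{1,2\}$ and $\{3,4\}$ contributes the monomial $y_2 z_2 x_{34}^2$, which nothing cancels.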

The key insight is to assign distinct “labels” to all the arcs in the cycle cover, so that when you reverse the directions of the arcs in a 2-cycle, you do get a different 2-cycle. If you have the 2-cycle with label “a” on $(u,v)$ and label “b” on $(v,u)$, then the reversed cycle now has label “b” on $(u,v)$ and label “a” on $(v,u)$, which is a “different” assignment of labels.

The good news is this works. The bad news is there are potentially too many labels needed. A naive method would use about $n$ labels, and this would yield an algorithm for Hamiltonian cycle that runs in $O^*(2^n)$ time, which is too slow. But at least we can compute this “labeled cover” problem in $O^*(2^k)$ time, where $k$ is the number of labels needed.

The next important idea is to figure out a way to use fewer than $n$ labels. We will pick a random bisection of the graph into roughly equal parts $V_1$ and $V_2$. Fix a Hamilton cycle. With decent probability, there are at most about $n/4$ edges of this cycle where both endpoints of the edge are in $V_1$. The paper explains how to use this partition to reduce the number of labels needed.
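A quick simulation (mine, not from the paper) makes the bisection step plausible: each of the $n$ edges of a Hamilton cycle has both endpoints in $V_1$ with probability roughly $1/4$, so the expected number of such edges is about $n/4$.

```python
import random

def avg_edges_inside_V1(n, trials=10000):
    """Average, over random bisections, of the number of edges of the
    Hamilton cycle 0-1-...-(n-1)-0 with both endpoints in the half V1."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    verts = list(range(n))
    total = 0
    for _ in range(trials):
        random.shuffle(verts)
        V1 = set(verts[:n // 2])
        total += sum(u in V1 and v in V1 for u, v in edges)
    return total / trials
```

The exact expectation for a single edge is $\frac{(n/2)(n/2 - 1)}{n(n-1)}$, so for $n = 40$ the average count over all $40$ cycle edges comes out near $20 \cdot 19 / 39 \approx 9.7$, just under $n/4 = 10$.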

**Open Problems**

As I said earlier, I believe there is a “trend” in research that echoes a quote of Alan Perlis:

“for every polynomial-time algorithm you have, there is an exponential algorithm that I would rather run.”

I am quite interested in work that is trying to improve the exponential on the running time. There have already been several neat papers, including this one, and I expect to see more.

As always a great post, thanks. However, I’m pretty sure the photo is not of Andreas Björklund (at least not the Andreas Björklund that wrote the paper).

Is it really Andreas in the photo? In real life he looks more like this:

http://picasaweb.google.com/lh/photo/10qEM42eOmY4GCm2H9_zXg

Fedor,

I got the picture wrong. I just clicked on wrong one. It is fixed now—I believe.

Sorry for any confusion.

I’m sorry, I can’t understand the labeling part of the proof sketch. What is a label of a variable? Do we assign it to variables or arcs of cycle covers?

Thanks for writing about this wonderful and important result!

You did get Andreas’s picture wrong, however. You can see him chatting with Anna Östlin Pagh at the Reykjavik ICALP 2008 at http://picasaweb.google.com/lh/photo/10qEM42eOmY4GCm2H9_zXg .

Hi Prof. Lipton, could you express why you find the ETH difficult to believe?

Josephina,

Why do I not believe ETH? Well, even a proof that P is not equal to NP only requires that you prove SAT requires more than polynomial time for an infinite number of inputs. It says nothing about whether we can solve it in slightly non-polynomial time. SAT could be doable in n^log n time and that would not contradict P not equal to NP. So why think that the best algorithm we will ever find for SAT is going to be c^n? That is what I have trouble believing.

As one of the introducers of at least the name ETH, let me point out that this is labelled a “hypothesis” in our paper, not a “conjecture”. I am not sure whether ETH holds, and really wouldn’t want to give more than 50/50 odds either way (unlike P vs. NP, however irrational my belief). However, I think either a proof or a disproof would be very interesting, so it is a useful hypothesis to relate to other plausible complexity statements. When we prove that ETH implies that some other problem is hard, we are saying that to give a better algorithm for the new problem, you have to give a better algorithm for 3-SAT that is of a different order of improvement than people have succeeded at so far. So it is some evidence that major improvement for the problem shown ETH-hard is at least quite difficult, if not necessarily evidence that no such algorithm exists.

By the way, improved exponential time algorithms for standard NP-complete problems have been a mini-sub-field since at least Tarjan and Trojanowski in the 70s, and there has been progress all along. Hamiltonian path was an example of a type of problem where 2^n seemed a natural barrier, so this is indeed a big result. But I don’t think it came out of nowhere, and we could point to many other significant improved algorithms over the last five years. So I think your prediction that this will be a productive area in the future is the safe kind: it’s already being fulfilled before it’s made….

I remember a version of linked lists from Knuth, where a head node was inserted. It made the code simpler, but presumably also made it more efficient. Some years later I optimized another algorithm by adding in ‘virtual’ entities. They didn’t really exist, but they bypassed the corner cases nicely, again saving time.

Perhaps this is the key to some optimizations: it’s not what you take out or reuse, but instead it’s what you add in without altering the results that makes it faster?

Paul, an elegant algebraic/geometric instantiation of this principle is the specification of quaternionic (4D) coordinate functions on a rotation group (3D) manifold … the redundant coordinate induces metric and symplectic isomorphisms (i.e., the familiar “♭” and “♯” operations on vector arrays) that are purely algebraic and singularity-free.

At any given moment, a pretty substantial chunk of all the computer cycles on the planet are integrating molecular trajectories via this insight … almost any modern synthetic biology group accumulates many terabytes of integrated dynamical trajectories every week, and conditions their overall research objectives and strategy largely upon them.

Unsurprisingly, as engineers become more-and-more focussed upon autonomous dynamical systems (ranging from dynamical molecules to dynamical vehicles), the union of algebraic efficiency with geometric naturality—gee, maybe this field ought to be called “algebraic geometry”?—emerges as not only central to mathematics, but central to the entire STEM enterprise.

The paucity of undergraduate textbooks that espouse this naturality-driven point of view arguably holds-back the global STEM enterprise … and this is where STEM-centric weblogs (like this one) provide a vital service in providing the advanced mathematical tools that enable engineers to envision a 21st century that is secure, prosperous, free, and “interesting” … which (from a purely engineering point-of-view) is a mighty challenging exercise.

That Alan Perlis quote was terrific:

“For every polynomial-time algorithm you have, there is an exponential algorithm that I would rather run.”

I parsed it as

“For every *strictly* polynomial-time algorithm you have, there is a *worst-case* exponential algorithm that I would rather run *because in practice it works so much better*.”

This parsing was fun because so many of the greatest algorithms fit this principle pretty accurately. In the domain of optimization, the simplex algorithm (of course). In the quantum domain, density functional formalisms. In information theory, turbo codes. In linear algebra, preconditioners that in practice almost always make convergence faster, yet in worst-case instances destroy convergence completely.

So it seemed pretty clear that Perlis had articulated a profoundly general “Great Truth”. Then I followed Dick’s link, and discovered that (seemingly?) Alan Perlis had in mind an abstract principle … not concrete instantiations.

Perhaps someday we might have a post that discussed the broader history and implications of the Perlis quote? With case histories of algorithm development, please … we engineers *LOVE* case histories!

By the way, digging for this quote uncovered a new book:

*The P=NP Question and Gödel’s Lost Letter* … by some guy named Richard Lipton … yah can find it on Amazon.com … and it looks TERRIFIC! Dick, my sincere appreciation, thanks, and congratulations are extended! You have solved everyone’s holiday gift-giving problem! :)

Now if only you and Terry Tao would join forces, and write a joint column for Scientific American … the glory days of the STEM enterprise might return … :)

I am sad that there’s no kindle version of the book.

John,

You are very kind.

Hi Dick

Maybe I’m mistaken, but isn’t the Held-Karp paper that does dynamic programming on the TSP the one from 1962? http://www.jstor.org/stable/2098806 (the 1970 papers were the ones that gave the linear relaxation for TSP, IIRC)

That’s what I thought. I would be surprised if this new result can be made to work for the TSP.

Suresh,

You are correct. My mistake. So the result is even better. Thanks for the great correction.

dick

Just a note in passing. When I have this article open in a tab the last part of the title is cropped out and it reads “Beating A Forty Year Old.” Perhaps you want to alter the wording a little…

Does this O*(.) notation mean “ignore polynomial factors” like \tilde{O}(.) means ignore logarithmic factors?

Tyson,

Yes, it is exactly that in this case. It allows us to be even a bit more approximate. But here the key term so dominates that it is safe.

Open problem?: How to quantum computerize Björklund’s Monte Carlo algorithm. Maybe achieve quadratic speed up?

rrtucci,

A very nice question. I guess you are thinking about using Grover? That would be a cool result.

For this problem, I have a polynomial time algorithm and its proof. But, either smart men do not trust a pig, and do not even want to pay a little time, a little attention, or pigs do not trust a smart man and do not even want to pay a little time, a little attention!

I have no way.

Is it not true that each HC problem can be mapped to an Exact Cover problem of size c·n·log(n)·m, where n is the number of vertices and m the number of edges? The paper “Faster Solutions for Exact Hitting and Exact SAT” gives an algorithm running in O(2^{0.2985n}) time. Would that not be enough to beat the O(2^n) boundary?

I’ve seen the term “STEM disciplines” frequently used in the last few weeks. Can anyone clarify what it means?

dv,

S.T.E.M. stands for science, technology, engineering, and mathematics; check this.

I don’t understand why we can say that this “beats” the Held-Karp algorithm. There is a non-zero probability that the new algorithm gets the wrong answer, correct? If I must guarantee whether a given graph has a Hamiltonian cycle, I cannot use this new algorithm.

The deterministic algorithm takes O(2^n) time, while the randomized one takes O(1.657^n) time. As a back-of-the-envelope calculation, let’s ignore constant factors. Then for n=100, the randomized algorithm runs about 100,000,000 times faster than the deterministic one.

Running the algorithm 300 times and taking the majority answer yields an error probability on the order of 1/[# of particles in the universe]. And we still net a speed boost of 300,000.

By comparison, the computer running the algorithm is far more likely to err due to being repeatedly bombarded by cosmic radiation.

“There have already been several neat papers, including this one, and I expect to see more.”

Could someone please list a few more such papers that use pretty fundamental stuff, and yet produce great results.

Thanks

Hello,

Excuse me for intervening like this! I believe that the most efficient algorithm is the one that links a Hamiltonian with a Lagrangian, since it would describe all the mechanisms of human consciousness.

Regards,

Clovis Simard