
A Matchless Result

March 28, 2014


A famous algorithm, a new paper, a full correctness proof

[Photo of Vijay Vazirani]

Vijay Vazirani is one of the world's experts on algorithmic theory, and is especially known for his masterful book on approximation algorithms. Among his many results in computational complexity, two have been Wikified: his theorem with Les Valiant on unique SAT, and the subsequent generalized Isolation Lemma with Ketan Mulmuley and his brother Umesh Vazirani. Lately his focus has been more on computational aspects of auctions, economic systems, and game theory.

Today Ken and I wish to discuss a recent paper by Vijay on matching.

His paper is quite unique—is “quite” redundant?—well, it is an isolated case of a top researcher taking the time and energy to explain the correctness of one of his famous algorithms, which appeared originally in joint work with Silvio Micali. As our field matures, we should see more of this.

Given an undirected graph {G=(V,E)} a matching, of course, is a set of pairwise non-adjacent edges: no two edges share a common vertex. A maximum matching is a matching that contains the largest possible number of edges. If {|V|} is even then a matching with {|V|/2} edges is perfect. The notorious Petersen graph

[Figure: the Petersen graph]

has six different perfect matchings: the set of five “spokes” is one, and each of the other five uses exactly one spoke. Although matchings are often associated with bipartite graphs, note that the Petersen graph is not bipartite. This matters because the matching algorithm we will discuss works for general graphs, not just bipartite ones.
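For the skeptical reader, the count of six is easy to confirm by brute force over the graph's 15 edges. Here is a quick plain-Python check of ours; the vertex labeling (outer 5-cycle on 0–4, inner pentagram on 5–9, spokes {i} to {i+5}) is one standard convention, not anything taken from the papers discussed here.

    from itertools import combinations

    # one standard labeling of the Petersen graph's 15 edges
    outer  = [(i, (i + 1) % 5) for i in range(5)]          # outer 5-cycle
    spokes = [(i, i + 5) for i in range(5)]                # the five "spokes"
    inner  = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
    edges  = outer + spokes + inner

    def is_perfect_matching(es):
        # 5 pairwise-disjoint edges must cover 10 distinct vertices
        return len({v for e in es for v in e}) == 10

    perfect = [m for m in combinations(edges, 5) if is_perfect_matching(m)]
    print(len(perfect))   # prints 6

Only 3003 five-edge subsets need checking, so the naive search is instant; one of the six is the all-spokes matching, as claimed.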

The Importance Of Matching

Finding a maximum matching is one of the foundational problems at the intersection of graph theory and algorithmic theory. There are several reasons for this:

  • The problem is useful in practice. Unlike many problems that are studied mainly in theory, matching is one that people really do need to solve on general graphs. See this for a detailed list.

  • The problem is basic to the theory of graphs. László Lovász and Michael Plummer wrote an entire book called Matching Theory on—you guessed right—matchings. They shared the Niveau Prize of the Hungarian Academy of Sciences for this book in 1991. We especially like the Amazon review of the book:

    The subject has obviously advanced a lot since the book was published, but the overview provided is still unmatched.

    Cool pun.

  • The problem is basic to the theory of algorithms. Euclid may have given the first non-trivial algorithm, his algorithm for finding the gcd of two numbers, but algorithms for matching started over fifty years ago. This is not the place to trace out the long and complex history of who devised which algorithm for finding matchings. Suffice it to say that much of the theory of algorithms can be traced to the history of matching algorithms: various tricks and methods that are used throughout computing often first appeared in matching algorithms. András Frank has written a wonderful piece called On Kuhn’s Hungarian Method—A tribute from Hungary about the bipartite case—see it for some interesting history.

Here is a picture of some of the applications of matching algorithms.

[Figure: some applications of matching algorithms]

Let’s turn to look at Vijay’s new paper.

The Paper’s Tracks

Actually we need to first look at the original paper, called An {O(\sqrt{|V|}|E|)} algorithm for finding maximum matching in general graphs, by Silvio Micali and Vijay Vazirani. It was published in the proceedings of the 21st Annual Symposium on Foundations of Computer Science (FOCS 1980).

We will call this paper the MV paper, and its algorithm the MV algorithm. Note the running time is {O(m\sqrt{n})} using “modern” internationally agreed-upon notation: we use {n} always to denote the number of vertices and {m} the number of edges. Paul Halmos was always careful about notation, and once said that his worst nightmare was that someone would type

\displaystyle  \lim_{\epsilon \rightarrow \infty} n_{\epsilon} = 0.

The title of the original paper was created before these agreements, so it is understandable that it uses the “old” style notation. That style is also less prone to the problem that if you write {O(\sqrt{n}m)} it may be hard to see that the square-root sign does not also cover the {m}.

This paper presented the MV algorithm with the given running time. The major advance was that it efficiently found a maximum matching in general graphs. There is an interesting cliff in the theory of algorithms for finding maximum matchings: bipartite graphs are just much easier for matching algorithms. Of course they are easier for many other problems too: it is trivial to find a 2-coloring of a bipartite graph, but NP-hard to find a 3-coloring of a general graph. One of the achievements of the MV paper is that everything works for general graphs.

The new paper is solely authored by Vijay. It has several goals, but the main one is to give a clear but full proof of the correctness of the MV algorithm. The algorithm remains the same, but the correctness proof is new. The previous proof had a flawed case analysis. The new proof avails itself of information that the previous proof bypassed, to make the analysis tighter and more manageable.

We will not give the whole proof, but we will give an algorithmic idea highlighted also by Vijay, which has independent interest.

Dueling Depth-First Searches

Let {G = (V,E)} be a directed acyclic graph whose vertices are in layers numbered {0} (for sinks) through {\ell} (sources). Edges from any non-sink node go to nodes in lower layers, not necessarily the next layer, and every node has a path to a sink in layer {0}. Let {g} and {r} be any two non-sink nodes. By Menger’s Theorem, either

  1. there are vertex-disjoint paths from {r} and {g} to sinks {s_r} and {s_g}, or

  2. the set {B_{g,r} = \{v:} every path from {g} or {r} to a sink goes through {v\}} is nonempty.

In the latter case, let {b} be the element of {B_{g,r}} in the highest layer. The algorithm must find this highest bottleneck node {b} within time proportional to the total number of edges in paths from {g} or {r} to {b}. In the former case the algorithm must output paths to {s_r} and {s_g} within time proportional to the total number of edges on all paths from {g} or {r} to either of {s_r} or {s_g}.
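Before seeing how the algorithm meets these time bounds, it may help to pin down the dichotomy itself in code. The following checker is a naive reference sketch of ours, written straight from the definition of {B_{g,r}}: it deletes one candidate vertex at a time and re-runs a whole reachability search, so it blows the time budget completely, but it specifies what must be computed. The names down, level, and highest_bottleneck are our own.

    def highest_bottleneck(down, level, g, r):
        """Return the highest-layer vertex lying on every sink path from
        both g and r, or None when vertex-disjoint sink paths exist."""
        def reaches_sink(src, banned):
            # DFS from src that is forbidden to step on `banned`
            seen, todo = set(), [src]
            while todo:
                v = todo.pop()
                if v == banned or v in seen:
                    continue
                seen.add(v)
                if level[v] == 0:          # reached a sink
                    return True
                todo.extend(down.get(v, ()))
            return False

        # B_{g,r}: deleting v must cut BOTH g and r off from every sink
        B = [v for v in level
             if not reaches_sink(g, v) and not reaches_sink(r, v)]
        return max(B, key=level.get) if B else None

A None result corresponds to case 1 above, though this naive checker makes no attempt to construct the two disjoint paths.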

The main puzzle is, how can we avoid searching past {b} and taking more than the time allotted to the latter case, if we don’t even know that it holds? For a single local search, this does indeed seem to be impossible. However, what Vijay called “Double Depth-First Search” (DDFS) solves it. Two search posses led by rangers Green and Red start respectively at {g} and {r}, marking both nodes and edges “green” and “red” as they go. Neither may tread on the other’s ground. The rules are:

  • Whichever posse is on a higher-level node gets to move; if they are on the same level, Red moves.

  • If one posse’s depth-first search comes to an already-marked but unoccupied node {v}—it could have been marked by either posse on an earlier visit—it goes back and tries another down-edge from its previous node. If there are no more down-edges, it backtracks up from its previous node as in normal depth-first search.

  • If the other posse is on node {v}, however, Red and Green fight a duel at 20 paces.

[Comic artwork of the Green Hornet and the Lone Ranger]
By permission of ComicArtFans gallery owner Mark Geier: source commissioned from artist Terry Beatty

The duel follows gentlemen’s rules. Red shoots in the air. Green takes the hint and backtracks looking for another node {w} at the same level as {v} or lower. If Green succeeds, he sends a Western Union telegram to Red, who puts his marker on {v} and moves on. If Green gets stuck, then Red has to keep the same bargain: Red backtracks until he either finds another {w} at the same level as {v} or lower, whereupon he telegrams Green to come back and claim {v} as his, or he gets stuck too. If they are both stuck they both come back to {v}. Each claims just the (necessarily different) edges he followed into {v}, but they identify {v} itself as the highest bottleneck {b} and share a bottle of hooch to celebrate.

They can celebrate because they have met the time guarantee: Neither searched below {b}, and all the edges they traversed in their wanderings had to be on paths to {b}, since they did not find any open trails going elsewhere. And in the case where they reach separate destinations {s_g} and {s_r}, any backtracking done by one posse because of hitting the other’s marked node is inductively on a path to the other’s goal, so at most the various routes between {\{g,r\}} and {\{s_g,s_r\}} get searched.

Making a search posse on a lower level wait for the other ensures that nodes below a bottleneck don’t get searched. Many other parts of the algorithm similarly rely on co-ordination with delicate timing considerations; this is just a taste of them. The new paper also has many other figurative concepts, with illustrations.
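Here is how the whole posse story might look in code. The sketch below is our plain-Python reconstruction of the rules as narrated; it is not the routine from Vijay's paper. In particular, the retreat rule is simplified (a retreating posse accepts any new unclaimed node, without insisting on one at the level of {v} or lower), the names ddfs, down, and level are ours, and no attempt is made to certify the edge-proportional time bound.

    def ddfs(down, level, g, r):
        """Return ('paths', green_path, red_path) or ('bottleneck', b)."""
        stack = {"green": [g], "red": [r]}    # each posse's current DFS path
        owner = {g: "green", r: "red"}        # who has claimed each node
        edges = {v: list(down.get(v, ())) for v in level}  # unscanned down-edges

        def advance(side, rival_head):
            """One DFS step for `side`: claim and descend to one new node,
            backtracking as needed; 'collide' means we ran into rival_head."""
            while stack[side]:
                v = stack[side][-1]
                while edges[v]:
                    w = edges[v].pop()
                    if w == rival_head:
                        return "collide"      # the other posse stands on w
                    if w not in owner:        # unclaimed: claim it and move
                        owner[w] = side
                        stack[side].append(w)
                        return "moved"
                    # already claimed: skip this edge and try another
                stack[side].pop()             # no edges left here: backtrack
            return "stuck"

        while True:
            hg, hr = stack["green"][-1], stack["red"][-1]
            if level[hg] == 0 and level[hr] == 0:
                return "paths", stack["green"], stack["red"]
            # the posse on the higher node moves; Red moves on ties
            side = "green" if level[hg] > level[hr] else "red"
            rival = "red" if side == "green" else "green"
            res = advance(side, stack[rival][-1])
            if res == "collide":
                v = stack[rival][-1]          # the contested node
                snapshot = list(stack[side])  # remember the arriving path
                if advance(side, None) == "moved":
                    continue                  # found an alternative; rival keeps v
                stack[rival].pop()            # arriving posse stuck: rival steps off v
                if advance(rival, None) == "moved":
                    owner[v] = side           # rival found one, so v changes hands
                    stack[side] = snapshot + [v]
                    continue
                return "bottleneck", v        # both stuck: v is the bottleneck b
            if res == "stuck":
                raise ValueError("search exhausted with no duel; not handled here")

The sketch behaves as the story says: when every escape route funnels through one node, both posses get stuck there and it is reported as the bottleneck {b}; when each can reach its own sink, two vertex-disjoint paths come back. The one situation the narration leaves open, a posse exhausting its search without ever meeting the other, is simply reported as unhandled.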

Open Problems

Do read Vijay’s paper. It is one of the clearest expositions of a graph algorithm—a model for others. Also the idea that Vijay went back to an ancient paper—over thirty years old—solely to write the definitive proof of its correctness is something we all should applaud.

Of course the central problem in matching theory is still: what is the best running time for an algorithm that finds a maximum matching? Given the interest today in huge graphs arising from social networks, we would like a linear-time algorithm. Is this possible? See this for some fast approximate algorithms.

[added words “on all paths” to DDFS time requirement, name and word fixes and adds, permission for vintage Green Hornet/Lone Ranger artwork]

11 Comments
  1. G. G. permalink
    March 28, 2014 12:52 pm

    “A matching M in an undirected graph G = (V,E) is a set of edges no two of which meet at a vertex. Our problem is to find a matching of maximum cardinality in the given graph. All definitions henceforth are w.r.t. a fixed matching M in G.

    An alternating path is a simple path whose alternate edges are matched and unmatched. An alternating path that starts and ends at unmatched vertices is called an augmenting path.”

    I can infer the definition of unmatched edges, but what the heck are unmatched vertices?

    I shouldn’t be stuck on the definitions in a paper that supposedly has a “lot to be desired in terms of correctness, completeness and clarity”.

  2. March 28, 2014 5:07 pm

    “VV” recently started blogging over at Turing’s Invisible Hand…. what is this cyberspace neighborhood coming to? 🙄

  3. March 28, 2014 5:14 pm

    another interesting factoid about matching, from John Savage’s book Models of Computation (p. 457): hopes of using monotone circuits to prove major lower bounds were somewhat dashed by the following:

    Having shown that monotone circuit complexity can lead to exponential lower bounds, Razborov [271] then cast doubt on the likelihood that this approach would lead to exponential non-monotone circuit size bounds by proving that the matching problem on bipartite graphs, a problem in P, has a super-polynomial monotone circuit size.

  4. Curious Theorist permalink
    March 28, 2014 6:08 pm

    How does this relate to

    Norbert Blum: A New Approach to Maximum Matching in General Graphs. ICALP 1990: 586-597

    and

    Paul A. Peterson, Michael C. Loui: The General Maximum Matching Algorithm of Micali and Vazirani. Algorithmica 3: 511-533 (1988) ?

    My outsider impression is that there were other works that had already fixed the issue, but I couldn’t find them cited in the new paper.

  5. March 29, 2014 8:34 am

    In a sense, every algorithm is an approximation algorithm, since the notion of correctness rests on psychology, and therefore on probability at the bottom line. The same holds true for proofs. My theory consists in saying that the algorithms you can prove correct with a high degree of certainty are precisely those you can’t prove efficient with high certainty, and vice versa. The easy problems are those where this indetermination is hardly noticeable, while the hard problems are those where the notion of polynomial behavior doesn’t even make sense, because the algorithms are all galactic and can’t be run on any physical processor.

    Of course this is a theory of physics, not of mathematics. I’m the first to regret it, because I would otherwise have thus proved the undecidability of the very undecidability of P=NP. Indeed, by just setting the Planck constant either positive – implying P=NP is undecidable – or null – implying P=NP is decidable – you get the expected result. Nonetheless, I still feel that algorithms behave like particles under the analogy “proof of correctness” = “position measurement”, “proof of efficiency” = “speed measurement”, undecidability = indetermination.

    • March 30, 2014 12:54 pm

      In fact, the P versus NP problem looks like a logical Möbius strip.

      On the one side, it’s purely mathematical and P!=NP appears to be the right hypothesis – though probably unprovable. Ladner’s theorem then explains why there are three distinct kinds of hardness in NP – polynomial, neither polynomial nor NP-complete, NP-complete.

      On the other side, the opposite hypothesis P=NP can be described with arguments of pure physics, with quantum leaps between the discrete levels of hardness – e.g. between the hardness of SORTING, FACTORING and SAT. Unfortunately, these arguments are probably unfalsifiable as well.

      My conclusion is that we should give up any hope of settling this problem. Blame it on the metaphysics…

      • March 31, 2014 3:53 pm

        The physics level represents the hardware of computer science, whereas the mathematics level represents its software. Making this distinction might lead to new progress. These two levels of computer science should indeed be named differently.

        It’s the same distinction as that which is made between logic and mathematics – which is actually software developed upon logical hardware. But logic – the other name of theoretical computer science – lies on an intermediate level between physics and mathematics.

        The third level is – in my humble opinion – “the third man” who passes unnoticed much too often…

  6. March 31, 2014 1:16 pm

    I was very happy to see Vijay and Vijay’s beautiful paper featured here. I also share the compliments for this unique and important endeavor. Making such an effort to give a clear, correct and inspiring proof for this classic algorithm (well, it was a type of algorithm that became classic instantaneously, but now it is also a few decades old) seems almost a thankless task. But in science, tasks that seem thankless often turn out to bear much fruit and recognition. And, of course, matching is such a great topic: a wonderful, never-ending mine of beautiful mathematics and computer science (and economics and more).

Trackbacks

  1. A Little More on the Graph Isomorphism Algorithm | Gödel's Lost Letter and P=NP
  2. TECNOLOGÍA » A Little More on the Graph Isomorphism Algorithm
  3. select refs and big tribute to empirical CS/ math | Turing Machine
