When are counting constraint problems easy and when are they hard

Richard Ladner is a complexity theorist who, after working in pure theory, later moved into more practical areas. In both areas he brought his expertise at problem solving to bear: in the former he proved seminal theorems, and in the latter he created systems that have enhanced the lives of many with sight or hearing impairments.

Today I want to discuss some exciting developments in the classification of the complexity of counting problems. These results are related to the early work of Richard.

Ladner started his career in complexity theory, where he made important and lasting contributions. He later shifted his focus to creating accessible technologies for disabled persons. For example, he has worked on making web browsing possible for sight-impaired people, on making the pictures in textbooks accessible to them as well, and on making cell phones usable for hearing-impaired people via real-time sign language translation.

I have had the pleasure of meeting Ladner’s parents, who are both hearing impaired and who were both wonderful and kind to me. I have no doubt that they played some large role in helping him refocus his energies. While a theorem may be forever, the work that he has done in accessible technology has changed the lives of countless people. Thank you Richard.

One of Ladner’s beautiful theorems is the following from 1975:

Theorem: Suppose that ${\mathsf{P} \neq \mathsf{NP}}$. Then there is a language ${L}$ in ${ \mathsf{NP} - \mathsf{P}}$ so that ${L}$ is not ${\mathsf{NP}}$-complete.

Possible conjectured candidates for the language ${L}$ are the decision version of factoring integers and graph isomorphism. The beauty of Ladner’s theorem is that, as long as ${\mathsf{P} \neq \mathsf{NP}}$, there must be such intermediate languages. Whether or not they are natural problems of the kind we expect is another story, but they will always exist.

Decision and Counting Problems

A decision problem asks: is a given set of objects X empty or not? A counting problem asks: how many objects are in the set X? These sets are syntactically defined, such as the set of satisfying assignments of a Boolean formula.

The decision problem and the counting problem are not always related in an obvious manner. There are cases where the decision problem is trivial, but the counting problem is hard: for example 2-${\mathsf{SAT}}$. There are other cases where the decision problem and the counting problem are both easy. One of my favorite examples of this latter phenomenon is the problem of counting the number of spanning trees in an undirected graph. This follows from the famous Matrix Tree Theorem of Gustav Kirchhoff, which reduces the counting problem to the evaluation of a determinant.
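Kirchhoff’s reduction can be sketched in a few lines: the number of spanning trees of a connected graph equals any cofactor of its Laplacian matrix ${L = D - A}$. Here is a minimal illustration (the function name and the choice of deleting the first row and column are my own; any cofactor works):

```python
import numpy as np

def spanning_tree_count(adj):
    """adj: symmetric 0/1 adjacency matrix as a list of lists."""
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian: degrees minus adjacency
    minor = L[1:, 1:]                   # delete first row and column
    return round(np.linalg.det(minor))  # any cofactor gives the tree count

# The complete graph K4: Cayley's formula gives 4^{4-2} = 16 spanning trees.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(spanning_tree_count(K4))  # → 16
```

The determinant is computable in polynomial time, so the counting problem is easy even though the number of spanning trees can be exponential in the size of the graph.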

Do Not Give Up Too Easily

A meta question is: how can one tell whether a decision problem or a counting problem is easy, meaning solvable in polynomial time, or complete?

At this level of generality the question is completely hopeless: if a problem of either type can be defined by an arbitrary procedure, there is no hope of telling whether the problem is hard or easy. More precisely, Ladner’s theorem shows that there will be intermediate problems: some problems will lie outside polynomial time, yet still not be complete. The same behavior can occur for counting problems.

Rule two of research is not to give up too easily—pretty cool to have all three “two’s” in the same sentence? If the general unrestricted question is impossible, then a good idea is to restrict the class of problems. This is exactly what has been done for both decision problems and counting problems. The high level idea is:

1. Create a restrictive language for expressing decision and counting problems.
2. Then show that it is possible to look at any problem that is expressed in this limited language and determine its computational complexity: whether it is in polynomial time or complete.

The exciting part of this research is that there is a fine line. If the language used to express the problems is too strong, then one will fall into the tarpit—there will be no way to separate easy problems from hard ones. If the language used is too weak, then one may succeed, but the results will not be very interesting: a weak language will fail to capture enough interesting problems. This makes the research non-trivial and interesting.

Classification Theorems

A classification theorem consists of a language for defining a family of decision or counting problems, and a rule for deciding which problems are hard and which are easy. All the classification theorems I will discuss are stronger: they show that each problem is either in polynomial time or complete, where complete means complete for ${\mathsf{NP}}$ for decision problems and complete for ${\#\mathsf{P}}$ for counting problems. Such theorems are sometimes called dichotomy theorems—no Ladner intermediate problems are allowed.

You can use these theorems as follows:

1. Suppose you wish to know the complexity of counting the number of objects X that are in a certain family.
2. You check whether the property that defines the family can be expressed in the language allowed by the classification theorem.
3. If the answer is yes, then you immediately have an answer: you just consult the theorem and see whether the problem is easy or hard.

Pretty neat, pretty simple. Makes determining the complexity of a problem like looking up the value of a logarithm in a table. Almost mechanical.

Well, almost. Some of the classification theorems give mathematical criteria, but these criteria are not always effective. Indeed, one of the contributions of recent work has often been to show that the criteria can be made effective.

A Language

What is a suitable language for properties? Suppose that we are only interested in graphs and their properties. One idea is to define the properties by using the notion of graph homomorphism.

The notion of homomorphism is fundamental to mathematics. Let’s look at homomorphisms on other structures, like groups. If ${G,H}$ are groups, then a homomorphism is a mapping ${\phi: G \rightarrow H}$ so that

$\displaystyle \phi(x \circ_G y) = \phi(x) \circ_H \phi(y),$

for all ${x,y}$ in ${G}$. Here ${\circ_G}$ denotes the operation for the group ${G}$ and ${\circ_H}$ for the group ${H}$.
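The homomorphism condition can be checked mechanically for small groups. A minimal sketch, using the illustrative map ${\phi(x) = x \bmod 3}$ from ${\mathbb{Z}_6}$ to ${\mathbb{Z}_3}$ (both with addition modulo the group order; the helper name is my own):

```python
def is_group_hom(phi, G, op_G, op_H):
    """Check phi(x ∘_G y) == phi(x) ∘_H phi(y) for all x, y in G."""
    return all(phi(op_G(x, y)) == op_H(phi(x), phi(y)) for x in G for y in G)

Z6 = range(6)
phi = lambda x: x % 3  # reduction mod 3: a homomorphism from Z6 onto Z3

print(is_group_hom(phi, Z6,
                   lambda x, y: (x + y) % 6,    # operation in Z6
                   lambda x, y: (x + y) % 3))   # operation in Z3
# → True
```

Note that this ${\phi}$ is not one-to-one: it collapses ${\{0,3\}}$, ${\{1,4\}}$, ${\{2,5\}}$, revealing the quotient structure of ${\mathbb{Z}_6}$.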

These mappings are central to understanding the structure of groups. They are especially interesting when the mapping is not one-to-one; in this case they reveal many wonderful properties of the group. Graph homomorphisms play the same role in revealing important information about graphs as algebraic homomorphisms play for other structures.

The idea is that homomorphisms for graphs allow many interesting properties to be defined. While homomorphisms were well studied for groups, rings, and other structures, it was not as obvious that they could play a key role in graph theory. A graph homomorphism from a graph ${G= (V, E)}$ to a graph ${H = (V',E')}$ is defined by a mapping ${\phi: V \rightarrow V'}$ so that

$\displaystyle (u,v) \in E \implies (\phi(u),\phi(v)) \in E'.$

At first it may seem that the definition is not really of the same type as the group one—the difference is that in groups the operation ${\circ}$ is a function, while in graphs the “operation” is a relation. We could have defined group homomorphisms by the rule:

$\displaystyle x \circ y = z \implies \phi(x) \circ \phi(y) = \phi(z).$

This would make them both look alike.

As you probably expect, many properties of graphs can be expressed as follows: a graph ${G}$ has property X provided there is some homomorphism ${\phi}$ from ${G}$ to a particular graph ${H}$. In this way it is easy to define, for example, the property of being “3-colorable.” A graph ${G}$ is 3-colorable if and only if there is a homomorphism from ${G}$ to the triangle graph. Pavol Hell and Jaroslav Nešetřil have a book with many more details.
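The 3-coloring example can be made concrete. A homomorphism into the triangle ${K_3}$ must map adjacent vertices to distinct “colors,” which is exactly a proper 3-coloring. A brute-force sketch (exponential, of course—3-colorability is NP-complete; the function name is my own):

```python
from itertools import product

def has_hom_to_triangle(n, edges):
    """Is there a homomorphism from the graph on vertices 0..n-1 to K3?

    Every map into K3 sending adjacent vertices to distinct vertices
    lands on an edge of K3, i.e. is a proper 3-coloring.
    """
    for phi in product(range(3), repeat=n):          # all vertex maps into K3
        if all(phi[u] != phi[v] for u, v in edges):  # adjacent pairs hit K3 edges
            return True
    return False

# A 4-cycle is bipartite, hence 3-colorable.
print(has_hom_to_triangle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → True
# K4 needs 4 colors, so there is no homomorphism into the triangle.
K4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(has_hom_to_triangle(4, K4_edges))  # → False
```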

One of the reasons the notion of graph homomorphism is so powerful is the following beautiful result, which László Lovász proved in 1967:

Theorem: Let ${G}$ and ${H}$ be finite graphs. Then ${G}$ and ${H}$ are isomorphic if and only if for all graphs ${F}$,

$\displaystyle N(F,G) = N(F,H).$

Here ${N(F,G)}$ counts the number of homomorphisms from ${F}$ to ${G}$. Thus counting the number of graph homomorphisms can be used to define a graph up to isomorphism.
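The quantity ${N(F,G)}$ can be computed by brute force for small graphs, which at least makes the statement tangible (the function below is a sketch of mine, exponential in the number of vertices of ${F}$):

```python
from itertools import product

def count_homs(F_n, F_edges, G_n, G_edges):
    """N(F, G): the number of homomorphisms from F to G (undirected)."""
    # Store G's edges symmetrically so (u,v) and (v,u) are both adjacent.
    G_adj = set(G_edges) | {(v, u) for (u, v) in G_edges}
    total = 0
    for phi in product(range(G_n), repeat=F_n):      # all maps V(F) -> V(G)
        if all((phi[u], phi[v]) in G_adj for u, v in F_edges):
            total += 1
    return total

# Homomorphisms from a single edge K2 into the triangle K3:
# 3 choices for one endpoint, 2 for the other, so 6.
print(count_homs(2, [(0, 1)], 3, [(0, 1), (1, 2), (0, 2)]))  # → 6
```

Lovász’s theorem says that the infinite vector of such counts, indexed by all graphs ${F}$, pins ${G}$ down up to isomorphism.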

Classification For Homomorphism

One basic problem is to determine when a property defined by counting the number of graph homomorphisms is either easy or complete. Let’s call these problems counting Constraint Satisfaction Problems (#CSP)—the name is selected because it really stands for a much larger class of properties. More on that later on.

For example, Martin Dyer, Leslie Goldberg and Mike Paterson proved a dichotomy when ${H}$ is restricted to be a directed acyclic graph. They showed that if ${H}$ is layered and satisfies a condition called Lovász goodness, then the corresponding restricted #CSP is computable in polynomial time; otherwise it is #P-hard. Their paper gives examples of both a good graph and a bad graph; check the paper for the definition of what is good and what is bad.

Classification For Generalized Homomorphism

There is a natural way, using weights and more, to generalize the notion of graph homomorphism. I will not define this notion of a full-fledged #CSP; see here for the details. The excitement is that there are three recent complexity dichotomy theorems in this area. They completely classify the worst-case complexity of every possible counting #CSP problem. It is interesting to note that the corresponding decision problem remains open: this is a case where we know more about counting problems than decision problems.

The first big result is due to Andrei Bulatov. Over the years, Bulatov and others have built a formidable machinery which has been called the Algebraic Approach. The distinguishing feature is the heavy reliance on Universal Algebra. In a 2008 paper Bulatov gave a dichotomy theorem for all #CSP problems where the constraint functions take 0-1 values. This proof uses deep structural theorems from Universal Algebra, in particular the so-called commutator theory and tame congruence theory.

Then Martin Dyer and David Richerby in their paper gave an alternative proof of Bulatov’s dichotomy theorem for unweighted #CSP. Notably, their proof uses no universal algebra other than the notion of Mal’tsev polymorphisms.

The newest dichotomy theorem is by Jin-Yi Cai, Xi Chen and Pinyan Lu and is available here. It gives a complete classification for all non-negatively weighted #CSP’s. A feature of their theorem is that, while it is more general than previous theorems, surprisingly the criterion for tractability becomes simpler than the previous ones, and the proof uses virtually no universal algebra.

The new results on #CSP have powerful consequences. They completely characterize the complexity of counting directed graph homomorphisms, and they give us new tractable cases. For example, one graph in their paper defines a binary predicate on a domain of size 8, and it is neither layered nor acyclic. The fact that it is solvable in polynomial time is far from obvious, and is implied by the new dichotomy theorem.

Open Problems

I have written in the past on the beautiful paper by Cai, Chen and Lu that proves an earlier complexity dichotomy theorem. That theorem is not superseded by the new results. That tour de force (which is over 100 pages) is intimidatingly weighty; it dealt with undirected graph homomorphisms with arbitrary complex weights. When the constraint functions are no longer restricted to taking only non-negative values, the problem is even more challenging and also more fascinating. Just think of monotone circuit complexity versus general circuit complexity.

The main open problem is to unify all this information about counting problems into one framework. I expect this will be hard, but that we will see progress on it.

February 14, 2011 10:02 am

The link to the new paper by Cai, Chen and Lu seems to be wrong.

February 14, 2011 3:08 pm

The link to the Cai et al. paper is wrong. Here is a link from Cai’s webpage:
http://pages.cs.wisc.edu/~jyc/papers/weighted-CSP.pdf

• February 14, 2011 3:16 pm

Thanks to both of you—I have fixed it in the main text.

3. February 14, 2011 7:18 pm

The theorem of Lovász you mention looks awfully similar to (one form of) the Yoneda lemma!

February 14, 2011 10:37 pm

I am a student majoring in computer science and very interested in your blog.
All NP problems can be reduced to the Integer Linear Programming problem (ILP). Some cases of ILP have an efficient algorithm: for example, the Maximum Network Flow problem and the Maximum Matching problem can be reduced to ILPs that can be solved efficiently. Other cases of ILP are hard, such as the ILP corresponding to SAT.
Is it possible to determine whether an instance of ILP is easy or hard? Is there some theory about this?

February 15, 2011 2:19 pm

The problem is that if P = NP, all instances of ILP are “easy.” Thus, it is difficult to characterize ILP instances as “harder” or “easier,” given that such a characterization would be as hard to develop as a proof that P != NP.

• February 15, 2011 3:08 pm

There are also decision dichotomies (as opposed to the counting dichotomies discussed in this post), such as Schaefer’s famous dichotomy theorem for SAT. However, I am not aware of any dichotomy theorems for the ILP framework.

February 14, 2011 11:49 pm

Do dichotomy theorems typically also give an efficient algorithm for solving the problems that they classify as solvable in polynomial time?

• February 15, 2011 9:08 am

Of course we always try to do this, but it does not always happen. For example, the criterion in the newest dichotomy theorem by Jin-Yi Cai, Xi Chen and Pinyan Lu is decidable in NP. Some of the previous graph homomorphism dichotomies were either not known to be decidable, or decidable but without any bound on efficiency. So showing decidability in NP is a great improvement.

• February 15, 2011 10:23 am

Oh, sorry. I misunderstood your question.

The answer to your question is yes. The cases that are not #P-complete are proven to be in polynomial time by giving a polynomial time algorithm for them.

6. February 16, 2011 1:10 am

Dick,

Thanks for the kind words and recognition of my early work and current work in the accessibility area. I wish I could say that I solved the sign language translation problem, but, alas, that problem is still open. Some progress has been made, but it is a long way from being solved. What I did with my colleagues is work on video compression suitable for real-time sign language video on cell phones.

I do recall you meeting my parents when I visited you on my sabbatical leave at Yale in winter 1978. My visit to Yale was memorable; you, David Dobkin, Larry Snyder, and others made it very stimulating intellectually.

7. February 17, 2011 6:45 am

Thanks for another very interesting post! “A classification theorem consists of a language for defining a family of problems, and a rule for deciding which problems are hard and which are easy” – isn’t there some similarity to the problem of deciding whether a program is, say, polynomial-time or not? For Turing-complete languages, this is undecidable, but there are various restricted formalisms which make it decidable (the various workshops on Implicit Computational Complexity are a good source for such results). Interestingly, often when there is a decidable classification, there is also a dichotomy. For example, the programs described in
my own DICE 2010 paper will be either polynomial or at least exponential.