
Our Three Body Problem

August 13, 2014


The three body problem, computer theory style


Ellis Horowitz is one of the founders of the theory of algorithms. His 1969 thesis, written under George Collins, had the word “algorithm” in the title: “Algorithms for Symbolic Integration of Rational Functions.” He is known for many things, including an algorithm that after forty years is still the best known.

Today I want to talk about this algorithm, and one of the most annoying open problems in complexity theory.

We have just recently talked about the three body problem from physics, but now it is our turn in computer theory. Our version is called the 3SUM problem. The problem—I guess you probably know it already—is given three sets {A,B,C} of integers, are there {a \in A}, {b \in B}, and {c \in C} so that

\displaystyle  a + b + c = 0?

Suppose the sets each contain at most {n} elements and the elements are bounded by a polynomial in {n}. Then there is a trivial algorithm that takes time {O(n^{3})}: just try all triples. Our friends at Wikipedia say

3SUM can be easily solved in {O(n^2)} time {\dots}

This is unfair. The algorithm may be simple, but it is in my opinion very clever. It is due essentially to Ellis and his then PhD student Sartaj Sahni, who in 1974 gave a related algorithm for knapsack. More on knapsack in a moment. They called their result a “square root improvement.”
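For concreteness, here is the trivial cubic baseline that their square root improvement beats, as a minimal Python sketch (the function name and interface are mine, just for illustration):

```python
def three_sum_brute(A, B, C):
    """Trivial O(n^3) algorithm: try all triples."""
    for a in A:
        for b in B:
            for c in C:
                if a + b + c == 0:
                    return (a, b, c)
    return None
```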

The Quadratic Upper Bound

Here is the general idea. Build a table of all possible values {a+b} and then check to see if {-c} is in the table. With some care this can be done in time {O(n^2)}. I once talked about this result—it is one of my favorite algorithms. For example it can be used to solve the knapsack problem in time {2^{n/2}}. Recall this is the problem of finding {x_{1},\dots,x_{n}} that are {0-1} so that

\displaystyle  \sum_{i=1}^{n} a_{i}x_{i} = b.

Here {a_{1},\dots,a_{n}} and {b} are given integers. This is what Ellis and Sahni did in 1974.
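To see the table idea in action on knapsack, here is a minimal sketch of the meet-in-the-middle approach, in the spirit of Horowitz and Sahni; note that their original algorithm uses sorted lists and merging rather than a hash set, which keeps the {2^{n/2}} bound deterministic.

```python
def knapsack_meet_in_middle(a, b):
    """Decide whether some 0-1 choice x_1..x_n gives sum(a_i * x_i) == b.
    Splitting the items in half brings the cost from 2^n down to roughly
    2^(n/2): tabulate all subset sums of one half, then for every subset
    sum s of the other half look up b - s in the table."""
    half = len(a) // 2
    left, right = a[:half], a[half:]

    def subset_sums(items):
        sums = {0}
        for v in items:
            sums |= {s + v for s in sums}  # extend each sum by item v
        return sums

    left_sums = subset_sums(left)          # all 2^(n/2) sums of one half
    return any(b - s in left_sums for s in subset_sums(right))
```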

In ancient times, before 3SUM was called 3SUM, I thought about how to improve the above algorithm. The idea of breaking things into three or more sets was natural. I worked pretty hard on trying to get a better method. I failed. Here is what I said in the previous discussion:

A natural problem is what happens with a set like {A+B+C}? Is there a way to tell if {x} is in this set in time {|A|+|B|+|C|}, for example? If this is possible, then the knapsack problem has an algorithm whose exponential term is {2^{n/3}}. I have thought about this and related approaches quite often. I have yet to discover anything, I hope you have better luck.

The 3SUM Problem

Actually the usual version now is that we are given one set {S}. Then we must determine whether there are three elements {x,y,z} in {S} so that

\displaystyle  x+y+z=0.

It is not hard to show that having one set instead of three is essentially no change: just code the set name into the number, so that a zero sum is forced to use one element from each original set.
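Here is a minimal sketch of the standard quadratic method for this one-set version (my illustration; it assumes the three elements come from distinct positions): sort the set, then for each first element sweep two pointers inward over the remainder.

```python
def three_sum(S):
    """Find x, y, z in S (distinct positions) with x + y + z == 0,
    or return None. Classic O(n^2) method after an O(n log n) sort."""
    a = sorted(S)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1     # total too small: move the left pointer up
            else:
                hi -= 1     # total too big: move the right pointer down
    return None
```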

There are lower bounds on the 3SUM problem in various restricted models of computation. Jeff Erickson showed a quadratic lower bound, provided the elements are real numbers and the computation is a special type of decision tree. The result is interesting because it is an adversary argument that makes essential use of infinitesimals. See the paper for details.

Back to the 3SUM problem for integers. If the set {S} has all elements less than {M} in absolute value, then 3SUM can be computed in time

\displaystyle  O(n + M\log M).

The trick is to cheat: use more than the comparisons allowed in Erickson’s model.

Suppose that {A} is a set of positive integers less than {M}—this restriction is not needed, but I hope it makes the idea clearer. The problem is to see if {A+A} contains an element from another set {B}. Define the polynomial {f(x)} by

\displaystyle  f(x) = \sum_{i \in A} x^{i}.

Note its degree is at most {M}. Using the Fast Fourier Transform we can compute {f(x)^{2}} in time {O(M\log M)}. The critical point is that {f(x)^{2}} is equal to {g(x)} where

\displaystyle  g(x) = \sum_{k} a_{k}x^{k},

and {a_{k}} is equal to the number of pairs {i,j} in {A} so that {i+j=k}. Then we are almost done: just check each element {k} from {B} to see if its coefficient {a_{k}} in {g(x)} is positive.
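Here is a minimal sketch of this counting trick using numpy’s FFT (my illustration; rounding the inverse transform back to integers is safe because the true coefficients are integers, up to floating-point error for modest {M}):

```python
import numpy as np

def a_plus_a_hits_b(A, B, M):
    """Given a set A of positive integers below M and a set B, decide
    whether A + A contains an element of B, in O(M log M) time.
    Squares f(x) = sum_{i in A} x^i via FFT; the coefficient of x^k
    in f(x)^2 counts the pairs (i, j) in A x A with i + j = k."""
    f = np.zeros(M)
    f[list(A)] = 1.0
    size = 1 << (2 * M - 1).bit_length()    # FFT length >= deg(f^2) + 1
    g = np.fft.irfft(np.fft.rfft(f, size) ** 2, size)
    counts = np.rint(g).astype(np.int64)    # counts[k] = #{(i,j): i+j=k}
    return any(0 <= k < 2 * M - 1 and counts[k] > 0 for k in B)
```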

Modern View

I wish I could report that the problem has been solved, that there is now a sub-quadratic algorithm for 3SUM. That we have finally beaten Horowitz and Sahni. But that is not the case. We do have some results that are additional evidence that either the old algorithm is best, or that it will be hard to improve it.

The modern view is to make a failure to find an algorithm into a useful tool. Modern cryptography is based on this: factoring has been studied forever and no polynomial-time algorithm is known, so we posit that it is hard and use it as the basis of cryptosystems. Think RSA.

The same has been done with 3SUM. Since in forty years we have not beaten quadratic, let’s assume that there is no sub-quadratic algorithm and use this as a tool to “prove” conditional lower bounds on other problems. It’s turning lemons into lemonade.

Well, to be careful, there are better algorithms. The work of Ilya Baran, Erik Demaine, and the late Mihai Pǎtraşcu gives slightly better times on RAM machines. These results are interesting, but do not get {O(n^{1.99})}, for example. See their paper for details. Pǎtraşcu also showed a tight reduction from 3SUM to listing 3-cliques.

There is other work on why the 3SUM problem is hard, by Amir Abboud, Kevin Lewi, and Ryan Williams, in the paper “Losing Weight by Gaining Edges.” They prove many interesting results; here is one that I especially like:

Theorem 1 For any {c > 2}, if {k}-Clique can be solved in time {O(n^{c})}, then {k}-SUM can be solved in time {n^{c+o(1)}}.

Of course {k}-SUM is the generalization of 3SUM where we seek {k} numbers in a set that sum to zero.

Open Problems

Is there a sub-quadratic algorithm for 3SUM? I would love to see one. The problem seems hard, very hard. It has been open for forty years, and perhaps it will be open for another forty. Who knows. As always I believe that there could very well be a better algorithm—I am always optimistic about the existence of new clever algorithms.

23 Comments
  1. atri permalink
    August 13, 2014 1:52 pm

    A small nitpick: Mihai’s work showed a lower bound that listing O(m) triangles in a graph with m edges should take time m^{4/3-o(1)}. The matching upper bound was proven in this year’s ICALP by Bjorklund, Pagh, Vassilevska Williams and Zwick, who showed that one can list O(m) triangles in time O(m^{4/3}) assuming \omega=2.

  2. Craig permalink
    August 13, 2014 1:54 pm

    There is no exact and deterministic algorithm that solves 3SUM in $o(n^2)$ time.

    Proof: We have the equation $a+b+c=0$, where $a \in A$, $b \in B$, $c \in C$. When we try to manipulate the equation to minimize the total number of possible values on each side, we find that the best we can do is $a+b= -c$, so there are possibly $n^2$ values on the LHS and $n$ values on the RHS. $n^2+n=\Theta(n^2)$, so we get a lower-bound of $\Theta(n^2)$ for the running-time of an exact and deterministic algorithm that solves 3SUM. QED

    In other words, since you can’t solve an equation with a total of $\Theta(n^2)$ possible values on each side in $o(n^2)$ time and the 3SUM equation has a total of $\Theta(n^2)$ possible values on each side (any way you manipulate it), the 3SUM equation cannot be solved in $o(n^2)$ time.

    A similar argument can be used to show the Horowitz-Sahni algorithm is the fastest deterministic and exact algorithm for knapsack and Erickson’s algorithm in his paper is the fastest deterministic and exact algorithm for r-SUM.

    • Craig permalink
      August 13, 2014 1:56 pm

      Correction: $\Theta$ should be replaced with $\Omega$ in the 3rd paragraph.

    • MrX permalink
      August 13, 2014 4:38 pm

      Those kinds of proofs don’t mean a whole lot. In computing, it often happens that you circumvent traditional proofs by using a completely different approach.

      Question about the article. Are the sets sorted/ordered or not? If they’re sorted, there are often different approaches you can take. But if they’re not sorted, you often end up with an O(n log n) lower bound, and it’s not really worth looking into because you still have n operations left, making it n^2 log n if you even try to sort (aside from bucket sort).

      • Craig permalink
        August 14, 2014 10:31 am

        I’m now not sure if the argument would be valid for Erickson’s algorithm, given the paper mentioned below.

      • Craig permalink
        August 14, 2014 7:38 pm

        It appears that the argument isn’t valid at all, given the paper below. Where is the flaw?

      • Craig permalink
        August 16, 2014 8:01 pm

        Of course, it is also possible that the paper mentioned below is flawed.

      • Craig permalink
        August 21, 2014 10:13 am

        The paper mentioned below is not flawed as far as I can see. And my argument above is not flawed as far as I can see. The reason why they give contradictory results is because we are assuming two different models of computation. My argument above assumes that the computer takes O(log n) time to read O(log n) bits. The paper mentioned below assumes that the computer can take O(1) time to read O(log n) bits.

    • gowers permalink
      August 22, 2014 8:07 am

      I know from bitter experience that it is very hard to make you understand why this kind of argument is wrong, so what I’m writing here is not for your benefit but for the benefit of anyone else who might find it slightly convincing. I’m just going to make this one comment — it’s not the start of a long discussion.

      Consider the example where A=B=C is the set of all odd numbers between -n and n. Then the best we can do by rearranging the equation a+b+c=0 is a+b=-c, which results in roughly 2n values on the left-hand side. But we don’t need to check all these values, since we know that the numbers in question are even while the numbers on the other side are odd.

      Of course, that’s just a trick that works for one particular example. But maybe the following situation holds.

      1. For every triple of sets A, B and C there is a clever trick that enables one to bypass the need to check through every element of A+B.

      2. That clever trick can be discovered in subquadratic time given the sets A, B and C.

      Your “proof” doesn’t address this possibility, so it’s not a proof.

      • Craig permalink
        August 22, 2014 9:08 am

        I don’t want to get into a long discussion either. We can agree to disagree. I’ll also just make one comment for the benefit of anyone who is convinced by gowers’ counter-argument.

        If there were such a trick for every possible triple of sets A,B,C that could be discovered in subquadratic time, then it would be possible to show how this could be done with the general equation, a+b+c=0. But we have already seen that for general A,B,C that the best we can do is rearrange the equation to get a+b=-c. Hence, for general A,B,C, you can’t get o(n^2) time on a realistic computer (not RAM, which is convenient but not realistic).

  3. August 13, 2014 4:38 pm

    There’s a paper about to appear in STOC 2014 (in fact I thought your post was about this one). It’s by Allan Grønlund and Seth Pettie, called “Threesomes, Degenerates, and Love Triangles” and has two interesting results: firstly, that there’s an UPPER BOUND that’s strictly subquadratic in the decision tree model (in fact it’s slightly worse than n^{3/2}), and secondly, that there’s an upper bound for 3SUM that is a teeny bit better than existing results for both real numbers and in the integer case (specifically, they improve the result you mention by Baran et al).

    http://arxiv.org/abs/1404.0799

    • Craig permalink
      August 13, 2014 10:05 pm

      In the paper by Allan Grønlund and Seth Pettie, they give a subquadratic algorithm for 3SUM when the sets A,B,C are all equal. Can the algorithm be adjusted for general A,B,C?

      • Craig permalink
        August 14, 2014 8:21 am

        The algorithm can be adjusted for general A,B,C via the reduction described here: http://web.stanford.edu/~rrwill/scribe6.pdf

        However, this reduction yields only a probabilistic algorithm. Let me rephrase my question: Can the deterministic algorithm for 3SUM in the article by Grønlund and Pettie be modified to deterministically solve 3SUM for general A,B,C in o(n^2) time?

        This would be a miracle.

      • Craig permalink
        August 14, 2014 10:37 am

        Correction: The probabilistic algorithm can be derandomized, as is mentioned in the notes. But at what cost?

      • Craig permalink
        August 14, 2014 3:11 pm

        Oops, I looked at the wrong reduction in the lecture notes. The correct reduction is not random at all. Very interesting.

    • August 14, 2014 1:31 am

      I am really shocked how the post could miss this recent breakthrough.

    • Mrz permalink
      August 14, 2014 8:49 am

      It is to appear at FOCS not STOC.

  4. Serge permalink
    August 13, 2014 6:07 pm

    I have nothing particular to say about 3SUM, except that I feel 3SAT has more to do with the usual three body problem. In both cases, 3 is the right number of variables from which things become unpredictable…

  5. August 13, 2014 8:48 pm

    Can people comment on the correctness of the following paper?

    http://arxiv.org/abs/1407.4640

    A new algorithm for solving the rSUM problem
    Valerii Sopin

    • August 19, 2014 5:58 am

      To implement the suggested algorithm for 3SUM correctly, one needs n^2 time.

      I think Valerii will post an updated version soon, where he tries to claim that the algorithm is subquadratic on certain classes of inputs.

      • August 19, 2014 11:52 am

        I don’t know, it seems difficult to choose certain classes of inputs. Maybe Valerii just proposed a useful tool? He wrote:

        “As time and memory complexity of suggested algorithm is of the
        sub-quadratic order, it seems to be useful to perform it at the beginning of any other known algorithm.”

        I think:
        As this algorithm doesn’t affect the order of the time complexity, one can run it and sometimes get the answer. The main question is how often that happens. What about average-case complexity?

  6. August 30, 2014 3:03 pm

    Reblogged this on rvalue and commented:
    The 3SUM problem sounds interesting. I wish to learn more about it.

Trackbacks

  1. The 3SUM Assumption Is Wrong? | Gödel's Lost Letter and P=NP
