# Our Three Body Problem

*The three body problem, computer theory style*

Ellis Horowitz is one of the founders of the theory of algorithms. His thesis with George Collins in 1969 had the word “algorithm” in the title: “Algorithms for Symbolic Integration of Rational Functions.” He is known for many things, including an algorithm that after forty years is still the best known.

Today I want to talk about this algorithm, and one of the most annoying open problems in complexity theory.

We have just recently talked about the three body problem from physics, but now it is our turn in computer theory. Our version is called the 3SUM problem. The problem—I guess you probably know it already—is: given three sets $A$, $B$, $C$ of integers, are there $a \in A$, $b \in B$, and $c \in C$ so that
$$a + b + c = 0?$$

Suppose the sets each contain at most $n$ elements and the elements are bounded in absolute value by a polynomial in $n$. Then there is a trivial algorithm that takes $O(n^{3})$ time: just try all triples. Our friends at Wikipedia say
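Concretely, the trivial cubic algorithm is just three nested loops—a minimal Python sketch (the function name is mine):

```python
from itertools import product

def three_sum_brute(A, B, C):
    """Trivial O(n^3) check: try every triple (a, b, c)."""
    for a, b, c in product(A, B, C):
        if a + b + c == 0:
            return (a, b, c)
    return None

# Example: 1 + 2 + (-3) == 0
print(three_sum_brute([1, 5], [2, 7], [-3, 9]))  # -> (1, 2, -3)
```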

> 3SUM can be easily solved in $O(n^{2})$ time

This is unfair. The algorithm may be simple, but it is in my opinion very clever. It is due essentially to Ellis and his then PhD student Sartaj Sahni, who in 1974 gave a related algorithm for knapsack. More on knapsack in a moment. They called their result a “square root improvement.”

## The Quadratic Upper Bound

Here is the general idea. Build a table of all possible values $a + b$ and then check to see if $-c$ is in the table for some $c$. With some care this can be done in time $O(n^{2})$. I once talked about this result—it is one of my favorite algorithms. For example it can be used to solve the knapsack problem in time $O(2^{n/2})$, up to polynomial factors. Recall this is the problem of finding $x_{1}, \dots, x_{n} \in \{0, 1\}$ so that
$$a_{1}x_{1} + \cdots + a_{n}x_{n} = b.$$

Here $a_{1}, \dots, a_{n}$ and $b$ are given integers. This is what Ellis and Sahni did in 1974.
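The table idea can be sketched in a few lines of Python (my own illustration of the trick, not Horowitz and Sahni's code): store all pairwise sums $a+b$ in a hash set, then scan for a $c$ with $-c$ present, for quadratic expected time.

```python
def three_sum_quadratic(A, B, C):
    """Check for a + b + c == 0 in O(n^2) expected time:
    table all pairwise sums a + b, then look up -c for each c."""
    sums = {a + b for a in A for b in B}   # n^2 sums, hashed
    return any(-c in sums for c in C)

print(three_sum_quadratic([1, 5], [2, 7], [-3, 9]))  # -> True
```

Sorting the table and binary-searching instead of hashing makes the bound deterministic, at the cost of a logarithmic factor.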

In ancient times, before 3SUM was called 3SUM, I thought about how to improve the above algorithm. The idea of breaking things into three or more sets was natural. I worked pretty hard on trying to get a better method. I failed. Here is what I said in the previous discussion:

> A natural problem is what happens with a set like $X + Y + Z = \{x + y + z \mid x \in X, y \in Y, z \in Z\}$? Is there a way to tell if $b$ is in this set in time close to $N$, where $N$ is the size of the sets, for example? If this is possible, then the knapsack problem has an algorithm whose exponential term is $2^{n/3}$. I have thought about this and related approaches quite often. I have yet to discover anything; I hope you have better luck.

## The 3SUM Problem

Actually the usual version now is that we are given one set $S$ of integers. Then we must determine whether there are three elements $a$, $b$, $c$ in $S$ so that
$$a + b + c = 0.$$

It is not hard to show that having one set instead of three is essentially no real change: just code the set name into the number.
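One direction of that equivalence can be made concrete in Python (the scale 10 and the tags 1, 2, −3 are my choice of encoding, not from the post): multiply every element by 10 and add a per-set tag, so a triple from the combined set can sum to zero only if its tags sum to zero, which forces one element from each original set.

```python
def to_one_set(A, B, C):
    """Encode a three-set 3SUM instance as a single set.
    Tags 1, 2, -3 are the only way to pick three tags summing to 0,
    and the factor 10 keeps tags and scaled values from mixing."""
    return {10*a + 1 for a in A} | {10*b + 2 for b in B} | {10*c - 3 for c in C}

def has_zero_triple(S):
    """One-set 3SUM by brute force, just to check the encoding."""
    L = list(S)
    n = len(L)
    return any(L[i] + L[j] + L[k] == 0
               for i in range(n) for j in range(n) for k in range(n)
               if len({i, j, k}) == 3)

S = to_one_set([1, 5], [2, 7], [-3, 9])
print(has_zero_triple(S))  # 11 + 22 + (-33) = 0, so True
```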

There are lower bounds on the 3SUM problem in various restricted models of computation. Jeff Erickson showed a quadratic lower bound provided the elements are real numbers and the computation is a special type of decision tree. The result is interesting because it is an adversary argument where it is useful to use *infinitesimals* in the argument. See the paper for details.

Back to the 3SUM problem for integers. If the set has all elements less than $M$ in absolute value, then 3SUM can be computed in time
$$O(n + M \log M).$$

The trick is to cheat: use operations other than the comparisons allowed in Erickson’s model.

Suppose that $S$ is a set of positive numbers less than $M$—this is not needed but I hope makes the idea clearer. The problem is to see if $S + S = \{a + b \mid a, b \in S\}$ contains an element from another set $T$. Define the polynomial $p(x)$ by
$$p(x) = \sum_{a \in S} x^{a}.$$

Note its degree is at most $M$. Using the Fast Fourier Transform we can compute $q(x) = p(x)^{2}$ in time $O(M \log M)$. The critical point is that $q(x)$ is equal to $\sum_{c} n_{c} x^{c}$, where $n_{c}$ is equal to the number of pairs $(a, b)$ in $S \times S$ so that $a + b = c$. But we are almost done. Just check each element of $T$ to see if its coefficient in $q(x)$ is positive.
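A pure-Python sketch of this counting argument (my own; the naive double loop below plays the role of the polynomial multiplication—replacing it with an FFT-based product is what yields the $O(M \log M)$ bound):

```python
def poly_square_counts(S, M):
    """Coefficients of p(x)^2 where p(x) = sum over a in S of x^a.
    Done by direct convolution for clarity; an FFT-based multiply
    computes the same vector in O(M log M) time."""
    p = [0] * M
    for a in S:
        p[a] = 1
    q = [0] * (2 * M - 1)
    for i in range(M):
        if p[i]:
            for j in range(M):
                if p[j]:
                    q[i + j] += 1   # one pair (i, j) with i + j as sum
    return q

def sums_hitting(S, T, M):
    """Which elements of T lie in S + S?"""
    q = poly_square_counts(S, M)
    return [t for t in sorted(T) if t < len(q) and q[t] > 0]

print(sums_hitting({1, 4, 6}, {3, 7}, 7))  # -> [7], since 1 + 6 = 7
```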

## Modern View

I wish I could report that the problem has been solved, that there is a sub-quadratic algorithm now for 3SUM. That we have finally beaten Horowitz and Sahni. But that is not the case. We do have some results that are additional evidence that either the old algorithm is best, or that it will be hard to improve it.

The modern view is to make a failure to find an algorithm into a useful tool. Modern cryptography is based on this: factoring has been studied forever and no polynomial-time algorithm is known, so we posit that it is hard and use it as the basis of crypto-systems. Think RSA.

The same has been done with 3SUM. Since in forty years we have not beaten quadratic, let’s assume that there is no sub-quadratic algorithm and use this as a tool to “prove” conditional lower bounds on other problems. It’s lemons into lemonade.

Well, to be careful, there are slightly better algorithms. The work of Ilya Baran, Erik Demaine, and the late Mihai Pǎtraşcu gives modestly better times on RAM machines, shaving polylogarithmic factors off the quadratic bound. These results are interesting, but do not reach $O(n^{2-\epsilon})$ for any $\epsilon > 0$, for example. See their paper for details. Pǎtraşcu also showed a tight reduction from 3SUM to listing 3-cliques.

There is other work on why the 3SUM problem is hard by Amir Abboud, Kevin Lewi, and Ryan Williams, in the paper *Losing Weight by Gaining Edges*. They prove many interesting results; here is one that I especially like:

**Theorem 1** (informally) For any $k$, a fast enough algorithm for $k$-Clique yields a correspondingly fast algorithm for $k$-SUM.

Of course $k$-SUM is the generalization of 3SUM where we seek $k$ numbers in a set that sum to zero.

## Open Problems

Is there a sub-quadratic algorithm for 3SUM? I would love to see one. The problem seems hard, very hard. It has been open for forty years, and perhaps it will be open for another forty. Who knows. As always I believe that there could very well be a better algorithm—I am always optimistic about the existence of new clever algorithms.

A small nitpick: Mihai’s work showed a lower bound that listing $m$ triangles in a graph with $m$ edges should take time $m^{4/3 - o(1)}$, assuming the 3SUM conjecture. The matching upper bound was proven in this year’s ICALP by Björklund, Pagh, Vassilevska Williams, and Zwick, who showed that one can list the triangles in time $\tilde{O}(m^{4/3})$ assuming $\omega = 2$, where $\omega$ is the matrix multiplication exponent.

There is no exact and deterministic algorithm that solves 3SUM in $o(n^2)$ time.

Proof: We have the equation $a+b+c=0$, where $a \in A$, $b \in B$, $c \in C$. When we try to manipulate the equation to minimize the total number of possible values on each side, we find that the best we can do is $a+b= -c$, so there are possibly $n^2$ values on the LHS and $n$ values on the RHS. Since $n^2+n=\Theta(n^2)$, we get a lower bound of $\Theta(n^2)$ for the running time of an exact and deterministic algorithm that solves 3SUM. QED

In other words, since you can’t solve an equation with a total of $\Theta(n^2)$ possible values on each side in $o(n^2)$ time and the 3SUM equation has a total of $\Theta(n^2)$ possible values on each side (any way you manipulate it), the 3SUM equation cannot be solved in $o(n^2)$ time.

A similar argument can be used to show that the Horowitz–Sahni algorithm is the fastest deterministic and exact algorithm for knapsack, and that Erickson’s algorithm in his paper is the fastest deterministic and exact algorithm for $r$-SUM.

Correction: $\Theta$ should be replaced with $\Omega$ in the 3rd paragraph.

Those kinds of proofs don’t mean a whole lot. In computing, it often happens that you circumvent traditional proofs by using a completely different approach.

Question about the article: are the sets sorted/ordered or not? If they’re sorted, there are often different approaches you can take. But if they’re not sorted, you often end up with an $O(n \log n)$ lower bound, and it’s not really worth looking into because you still have $n$ operations left, making it $n^2 \log n$ if you even try to sort (aside from bucket sort).

I’m now not sure if the argument would be valid for Erickson’s algorithm, given the paper mentioned below.

It appears that the argument isn’t valid at all, given the paper below. Where is the flaw?

Of course, it is also possible that the paper mentioned below is flawed.

The paper mentioned below is not flawed as far as I can see. And my argument above is not flawed as far as I can see. The reason why they give contradictory results is because we are assuming two different models of computation. My argument above assumes that the computer takes O(log n) time to read O(log n) bits. The paper mentioned below assumes that the computer can take O(1) time to read O(log n) bits.

I know from bitter experience that it is very hard to make you understand why this kind of argument is wrong, so what I’m writing here is not for your benefit but for the benefit of anyone else who might find it slightly convincing. I’m just going to make this one comment — it’s not the start of a long discussion.

Consider the example where A=B=C is the set of all odd numbers between -n and n. Then the best we can do by rearranging the equation a+b+c=0 is a+b=-c, which results in roughly 2n values on the left-hand side. But we don’t need to check all these values, since we know that the numbers in question are even while the numbers on the other side are odd.

Of course, that’s just a trick that works for one particular example. But maybe the following situation holds.

1. For *every* triple of sets A, B and C there is a clever trick that enables one to bypass the need to check through every element of A+B.
2. That clever trick can be discovered in subquadratic time given the sets A, B and C.

Your “proof” doesn’t address this possibility, so it’s not a proof.

I don’t want to get into a long discussion either. We can agree to disagree. I’ll also just make one comment for the benefit of anyone who is convinced by gowers’ counter-argument.

If there were such a trick for every possible triple of sets A,B,C that could be discovered in subquadratic time, then it would be possible to show how this could be done with the general equation, a+b+c=0. But we have already seen that for general A,B,C the best we can do is rearrange the equation to get a+b=-c. Hence, for general A,B,C, you can’t get o(n^2) time on a realistic computer (not a RAM, which is convenient but not realistic).

There’s a paper about to appear in STOC 2014 (in fact I thought your post was about this one). It’s by Allan Grønlund and Seth Pettie, called “Threesomes, Degenerates, and Love Triangles,” and it has two interesting results: first, there’s an UPPER BOUND that’s strictly subquadratic in the decision tree model (in fact it’s only slightly worse than $n^{3/2}$); and second, there’s an upper bound for 3SUM that is a teeny bit better than existing results, both for real numbers and in the integer case (specifically, they improve the result you mention by Baran et al.).

http://arxiv.org/abs/1404.0799

In the paper by Allan Grønlund and Seth Pettie, they give a subquadratic algorithm for 3SUM when the sets A,B,C are all equal. Can the algorithm be adjusted for general A,B,C?

The algorithm can be adjusted for general A,B,C via the reduction described here: http://web.stanford.edu/~rrwill/scribe6.pdf

However, this reduction yields only a probabilistic algorithm. Let me rephrase my question: Can the deterministic algorithm for 3SUM in the article by Grønlund and Pettie be modified to deterministically solve 3SUM for general A,B,C in o(n^2) time?

This would be a miracle.

Correction: The probabilistic algorithm can be derandomized, as is mentioned in the notes. But at what cost?

Oops, I looked at the wrong reduction in the lecture notes. The correct reduction is not random at all. Very interesting.

I am really shocked that the post could miss this recent breakthrough.

It is to appear at FOCS not STOC.

I have nothing particular to say about 3SUM, except that I feel 3SAT has more to do with the usual three body problem. In both cases, 3 is the right number of variables from which things become unpredictable…

Can people comment on the correctness of the following paper?

http://arxiv.org/abs/1407.4640

A new algorithm for solving the rSUM problem

Valerii Sopin

To implement the suggested algorithm for 3SUM correctly, one needs n^2 time.

I think Valerii will post an updated version soon, where he tries to claim that the algorithm is subquadratic on certain classes of inputs.

I don’t know, it seems difficult to choose certain classes of inputs. Maybe Valerii just proposed a useful tool? He wrote:

“As time and memory complexity of suggested algorithm is of the sub-quadratic order, it seems to be useful to perform it at the beginning of any other known algorithm.”

I think:

As this algorithm doesn’t affect the order of time complexity, use it and sometimes you will get the answer. The main question is how often that would happen. What about average-case complexity?
