
The Knapsack Problem is Not Too Easy

February 17, 2009


a decision tree complexity lower bound


David Dobkin is currently the Dean of the Faculty at Princeton University. Quite a fancy title. As Dean he is in charge of many things: for example, his office runs all the tenure decisions at Princeton. However, his biggest claim to fame, besides being Dean, is his huge snow-globe paperweight collection. That’s right: those little plastic half globes with a scene inside, like the Statue of Liberty, where it snows inside the globe when you shake them.

David is extremely proud of his collection. However, he is also proud of his many wonderful papers. One of those papers was a joint one that we wrote together almost 30 years ago. It proves a {1 \over 2}n^{2} lower bound for the knapsack problem. Recall the knapsack problem is the question of whether or not, given the positive integers a_{1}, \dots, a_{n}, there is a subset I of the indices 1 to n
so that
      \sum_{i \in I} a_{i} = b
where b is another given positive integer. This is a famous NP-complete problem, so according to the conventional wisdom (CW) it should not have an efficient algorithm. Note, our result shows only that it cannot be really easy: it cannot be done in linear time, for example.
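
For concreteness, here is a tiny brute-force checker for the decision problem; the code and its names are mine, purely for illustration, and it of course takes exponential time.

    from itertools import combinations

    def knapsack_decision(a, b):
        """Return True if some nonempty subset of the positive integers in a sums to b."""
        n = len(a)
        # Try every nonempty subset of indices; exponential time, for illustration only.
        for k in range(1, n + 1):
            for subset in combinations(range(n), k):
                if sum(a[i] for i in subset) == b:
                    return True
        return False

    # Example: 3 + 8 = 11, so the first call answers True; no subset sums to 12.
    print(knapsack_decision([3, 5, 8], 11))  # True
    print(knapsack_decision([3, 5, 8], 12))  # False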

Of course we could not prove this in a fully general model of computation. As I have said before, we have no way to prove such theorems. An algorithm can start out by first computing the value of \sqrt{b+a_{5}}, for example. It could then compute other strange values. And so on. Good luck proving that such algorithms cannot work.

Our Result

This is our theorem:

Theorem: Any linear decision tree that decides whether or not there is a solution to the knapsack problem must take at least {1 \over 2}n^{2} steps.

Here a step is any linear test on the inputs. Thus, a linear decision tree can perform any test on the inputs a_{1}, \dots, a_{n}, b that is a linear function. Then, depending on the outcome of the test, the decision tree can decide what test to perform next. Eventually the algorithm stops and answers whether or not the problem instance is solvable. A linear decision tree is a natural model for many problems. Certainly, we thought that any “reasonable” approach to the knapsack problem would probably need to compute the value of such linear tests.
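
To make the model concrete, here is a minimal sketch of a linear decision tree and its evaluator; the class names and the three-way branching are my own choices for illustration, not anything taken from the paper.

    from dataclasses import dataclass
    from typing import Sequence, Union

    @dataclass
    class Leaf:
        answer: bool  # the final verdict: solvable or not

    @dataclass
    class Node:
        coeffs: Sequence[float]        # coefficients of the linear test
        constant: float                # constant term of the linear test
        negative: Union["Node", Leaf]  # next step if the test value is < 0
        zero: Union["Node", Leaf]      # next step if the test value is = 0
        positive: Union["Node", Leaf]  # next step if the test value is > 0

    def evaluate(tree, inputs):
        """Run a linear decision tree on the inputs (a_1, ..., a_n, b).

        Each loop iteration performs one linear test; the worst-case number of
        iterations is the depth of the tree, the quantity the theorem bounds.
        """
        node = tree
        while isinstance(node, Node):
            value = sum(c * x for c, x in zip(node.coeffs, inputs)) + node.constant
            node = node.negative if value < 0 else node.zero if value == 0 else node.positive
        return node.answer

    # A depth-1 tree for n = 1: test a_1 - b, and answer "solvable" exactly when it is 0.
    tree = Node(coeffs=[1, -1], constant=0,
                negative=Leaf(False), zero=Leaf(True), positive=Leaf(False))
    print(evaluate(tree, [7, 7]))  # True
    print(evaluate(tree, [7, 9]))  # False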

The proof of the theorem is perhaps more important than the actual result. It is based on the following simple insight. Suppose that a collection of hyperplanes in n-dimensional space forms an arrangement. Suppose we remove all the points of the arrangement from the n-dimensional space. What’s left will consist of M disconnected regions. Then, any linear decision tree algorithm for deciding which region a point is in requires at least \log M linear tests. I believe that we were the first to use this simple idea. Previously a special case had been used to prove lower bounds on sorting, but our result was the first that used the more general observation. Of course, there is one more part to our proof. We had to calculate how many regions there were for the arrangement that arises from the knapsack problem. It turned out that this could be computed quite tightly, and we had our lower bound.
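
In a nutshell, here is why \log M tests are needed; this is my paraphrase, in slightly loose notation. Each leaf of the tree outputs a single region, and every one of the M regions must be the output of some leaf, so the tree has at least M leaves. A tree of depth d with binary tests has at most 2^{d} leaves, and hence
      2^{d} \ge M, \text{ so } d \ge \log_{2} M.
For the knapsack problem the relevant hyperplanes are, roughly speaking, those of the form
      \sum_{i \in I} x_{i} = b
for nonempty index sets I \subseteq \{1, \dots, n\}, and the counting shows that the resulting arrangement has enough regions that \log_{2} M is at least {1 \over 2}n^{2}.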

A Surprise

We felt that the bound we got could be vastly improved. Indeed, we conjectured that our quadratic bound was just the start. We thought it could likely be the case that someone might be able to prove that our bound of {1 \over 2}n^{2} could be replaced by n^{100}, for example. We were wrong. Meyer auf der Heide proved in 1984 that there is a linear decision tree algorithm that runs in
      O(n^{5}\log(n))
for the knapsack problem. We, and many others, were quite surprised. Of course, the decision of which linear test to perform next is not easy to compute, so his result does not imply anything about P=NP. Rather, it destroyed our conjecture, and the whole approach to lower bounds based on decision trees.

Further Results

Following our work there have been two types of improvements. The first was started by Andy Yao, who allowed non-linear tests in the decision trees. The second was allowing the decision tree algorithms to use “randomness”; this work was done by Dima Grigoriev and Marek Karpinski in 1997.

When using non-linear tests one must be careful to “charge” correctly for each test. The key idea is that a test of degree d now costs d. Note, without this restriction there would be a way to solve the problem with a constant number of tests for inputs of any fixed size.
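
To see why such a charge is needed, note that for inputs of a fixed size n a single polynomial test of enormous degree already decides the whole problem:
      \prod_{\emptyset \neq I \subseteq \{1, \dots, n\}} \Big( \sum_{i \in I} a_{i} - b \Big) = 0.
The product vanishes exactly when some nonempty subset of the a_{i} sums to b, and its degree is 2^{n} - 1, so charging by degree makes this single test suitably expensive.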

Open Problems

I still believe that there is a pair of interesting open questions based on this approach. First, there is a serious gap between our quadratic lower bound and the known upper bound. Can one improve the lower bound, or the upper bound?

One idea that we recently looked at is to combine the cost function that Yao uses with the one that we used. So a linear test of the form a + b + c \ge 0 will cost 3: two additions and one comparison. It seems clear that in this model one should be able to beat the bound of {1 \over 2}n^{2}. However, the best I have been able to do is to replace the {1 \over 2} by 1. I think this approach should at least yield a lower bound of order cn^{2} for some large constant c.

Comments
  1. Pascal Koiran
    March 2, 2009 8:08 am

    As you point out, “the decision which linear test to perform next is not easy to compute”.
    Hervé Fournier and I showed a few years ago that for a large class of problems, this decision becomes easy with access to an NP oracle.
    As a result, proving lower bounds in a linear decision tree model where the cost of decisions is properly accounted for is equivalent to proving P ≠ NP.
    An optimist might say that it “just remains” to prove a superpolynomial lower bound in this improved model, and we’ll be done with P=NP.
    I guess this point of view would be overly optimistic, though.

    • rjlipton
      March 2, 2009 9:08 am

      I was aware of your nice work, and I am sorry I did not point it out. I will probably do a whole post on decision tree complexity sometime in the near future. It is a very interesting model and there are many open questions.

      By the way, here is their citation: “Are lower bounds easier over the reals?” by Hervé Fournier and Pascal Koiran, in STOC ’98: Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing.

  2. Pascal Koiran
    March 2, 2009 9:32 am

    If I may indulge in a second self-serving comment, Hervé and I also have a follow-up paper in ICALP 2000 where the equivalence with P=NP is established.
    The STOC ’98 paper only established a connection with the “easier” (according to the conventional wisdom) P=PSPACE problem.

  3. Craig
    September 12, 2010 12:32 pm

    It is not a total surprise that you got a lower bound on the order of n^2. The hard instances of knapsack are known to involve integers of magnitude around 2^n; that is, n positive integers, each needing n bits of storage, for a total input size of about n^2 bits.

