The Knapsack Problem is Not Too Easy
a decision tree complexity lower bound
David Dobkin is currently the Dean of the Faculty at Princeton University. Quite a fancy title. As Dean he is in charge of many things: for example, his office runs all the tenure decisions at Princeton. However, his biggest claim to fame, besides being Dean, is his huge snow-globe paperweight collection. That's right: those little plastic half globes that hold a scene, like the Statue of Liberty, and when you shake them it snows inside the globe.
David is extremely proud of his collection. However, he is also proud of his many wonderful papers. One of those papers was a joint one that we wrote together almost 30 years ago. It proves a lower bound on the knapsack problem. Recall the knapsack problem is the question of whether or not, given the positive integers $a_1, \dots, a_n$, there is a subset $S$ of the indices $1$ to $n$ so that $\sum_{i \in S} a_i = b$, where $b$ is another given positive integer. This is a famous NP-complete problem, so according to CW it should not have an efficient algorithm. Note, our result shows only that it cannot be really easy: it cannot be done in linear time, for example.
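To fix notation, here is a minimal brute-force sketch of the decision problem just stated: try every nonempty subset of the given integers and check whether one sums to the target (the function name and the exponential-time search are my own illustration, not the algorithm discussed in this post):

```python
from itertools import combinations

def knapsack_decision(a, b):
    """Decide whether some nonempty subset of the positive integers a
    sums to b. Brute force over all 2^n subsets, for illustration only."""
    n = len(a)
    for r in range(1, n + 1):
        for subset in combinations(a, r):
            if sum(subset) == b:
                return True
    return False

print(knapsack_decision([3, 5, 7], 12))  # 5 + 7 = 12 -> True
print(knapsack_decision([3, 5, 7], 11))  # no subset sums to 11 -> False
```

The exhaustive search takes exponential time, of course; the question the post addresses is how cheaply the same yes/no answer can be reached in a restricted model.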
Of course we could not do this in a completely general model of computation. As I have said before, we have no way to prove such theorems. An algorithm can start out by first computing some nonlinear quantity, the product of two of the inputs, for example. It could then compute other strange values. And so on. Good luck proving that such algorithms cannot work.
This is our theorem:
Theorem: Any linear decision tree that decides whether or not there is a solution to the knapsack problem must take at least $n^{2}/2$ steps.
Here a step is any linear test on the inputs. Thus, a linear decision tree can perform any test on the inputs that is a linear function. Then, depending on the outcome of the test, the decision tree decides which test to perform next. Eventually the algorithm stops and answers whether the problem instance is solvable. A linear decision tree is a natural model for many problems. Certainly, we thought that any "reasonable" approach to the knapsack problem would probably need to compute the value of such linear tests.
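To make the model concrete, here is a minimal sketch of a linear decision tree for the case $n = 2$, where the three relevant hyperplanes can simply be tested one after another (the function name and the branching order are my own illustration):

```python
def knapsack_n2_tree(x1, x2, b):
    # Each step evaluates one linear function of the inputs and branches
    # on its outcome -- exactly the moves a linear decision tree may make.
    if x1 - b == 0:        # hyperplane x1 = b
        return True
    if x2 - b == 0:        # hyperplane x2 = b
        return True
    if x1 + x2 - b == 0:   # hyperplane x1 + x2 = b
        return True
    return False

print(knapsack_n2_tree(2, 3, 5))  # 2 + 3 = 5 -> True
print(knapsack_n2_tree(2, 4, 5))  # no subset of {2, 4} sums to 5 -> False
```

For general $n$ there are $2^{n} - 1$ subset hyperplanes, so the interesting question is how cleverly a tree can interleave its tests rather than checking them all.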
The proof of the theorem is perhaps more important than the actual result. It is based on the following simple insight. Suppose that a collection of hyperplanes in $n$-dimensional space forms an arrangement. Suppose we remove all the points of the arrangement from the $n$-dimensional space. What's left will consist of some number $R$ of disconnected regions. Then, any linear decision tree algorithm for deciding which region a point lies in requires at least $\log R$ linear tests: each leaf of the tree corresponds to a convex set of inputs, so a shallow tree simply has too few leaves to separate all $R$ regions. I believe that we were the first to use this simple idea. Previously a special case had been used to prove lower bounds on sorting. But our result was the first that used the more general observation. Of course, there is one more part to our proof. We had to calculate how many regions there are in the arrangement that arises from the knapsack problem, namely the hyperplanes $\sum_{i \in S} x_i = b$ over all nonempty subsets $S$ of the indices. It turned out that this count could be estimated quite tightly, and we had our lower bound.
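One can watch the region count grow experimentally. Every point not on any hyperplane has a sign pattern, one sign per subset hyperplane, and points in the same region share a pattern, so the number of distinct patterns seen among random points is a lower bound on $R$. The sketch below does this sampling for small $n$ (the sampling box, sample count, and function names are my own choices):

```python
import random
from itertools import combinations

def sign(t):
    return (t > 0) - (t < 0)

def subset_hyperplane_signs(x, b, subsets):
    # Sign pattern of point x against every hyperplane sum_{i in S} x_i = b.
    # All points of one open region of the arrangement share a pattern.
    return tuple(sign(sum(x[i] for i in S) - b) for S in subsets)

def region_lower_bound(n, b=1.0, samples=20000, seed=0):
    """Distinct sign patterns among random points: a lower bound on the
    number of regions the knapsack hyperplanes cut R^n into."""
    rng = random.Random(seed)
    subsets = [S for r in range(1, n + 1)
               for S in combinations(range(n), r)]
    seen = set()
    for _ in range(samples):
        x = [rng.uniform(0.0, 2.0 * b) for _ in range(n)]
        seen.add(subset_hyperplane_signs(x, b, subsets))
    return len(seen)

for n in (2, 3, 4):
    print(n, region_lower_bound(n))
```

Even these tiny cases show the count exploding with $n$, which is exactly what feeds the $\log R$ bound.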
We felt that the bound we got could be vastly improved. Indeed we conjectured that our quadratic bound was just the start. We thought it likely that someone might eventually replace our bound of $n^{2}$ by a superpolynomial one. We were wrong. Meyer auf der Heide proved in 1984 that there is a linear decision tree of depth $O(n^{4}\log n)$ for the knapsack problem. We, and many others, were quite surprised. Of course, deciding which linear test to perform next is not easy to compute, so his result does not imply anything about P=NP. Rather, it destroyed our conjecture, and with it the whole approach to lower bounds based on decision trees.
Following our work there have been two types of improvements. The first was started by Andy Yao, who allowed non-linear tests in the decision trees. The second was allowing the decision tree algorithms to use "randomness"; this work was done by Dima Grigoriev and Marek Karpinski in 1997.
When using non-linear tests one must be careful to "charge" correctly for each test. The key idea is that a test of degree $d$ now costs $d$. Note, without this restriction there would be a way to solve the problem with a constant number of tests for all inputs of any bounded size.
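The constant-number-of-tests trick can be made explicit. The single polynomial $\prod_{\emptyset \neq S} \left( \sum_{i \in S} x_i - b \right)$ vanishes exactly when some subset sums to $b$, so one test of degree $2^{n} - 1$ decides the whole problem; this is why unbounded-degree tests must be charged for their degree. A sketch (the function name is my own):

```python
from itertools import combinations

def single_test_decides(a, b):
    """One polynomial 'test' of degree 2^n - 1 that decides knapsack:
    the product over all nonempty subsets S of (sum_{i in S} a_i - b)
    is zero exactly when some subset sums to b."""
    n = len(a)
    P = 1
    for r in range(1, n + 1):
        for S in combinations(a, r):
            P *= sum(S) - b
    return P == 0

print(single_test_decides([3, 5, 7], 12))  # 5 + 7 = 12 -> True
print(single_test_decides([3, 5, 7], 11))  # no subset sums to 11 -> False
```

Evaluating the product of course still takes exponential work; the point is only that a cost model counting "one test, one unit" would be fooled by it.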
I still believe that there is a pair of interesting open questions based on this approach. First, there is a serious gap between our quadratic lower bound and Meyer auf der Heide's upper bound. Can one improve the lower bound, or the upper bound?
One idea that we recently looked at is to combine the cost function that Yao uses with the one that we used. So a linear test of the form $x_{i} + x_{j} + x_{k} > b$ will cost $3$: two additions and one comparison. It seems clear that in this model one should be able to beat the bound of $n^{2}$. However, the best I have been able to do is to replace the $n^{2}$ by $n^{2}\log n$. I think this approach should at least yield a lower bound of order $n^{c}$ for some large $c$.