
Computing Very Large Sums

June 9, 2009

Computing exponential size sums is sometimes easy and sometimes hard


Jin-Yi Cai is one of the top researchers in complexity theory. He is one of the greatest pure problem solvers that I have had the pleasure to work with. Cai has solved a number of famous open problems, starting with his thesis work under the great Juris Hartmanis, continuing with his work on several long-standing complexity conjectures, and more recently extending the beautiful work of Leslie Valiant on holographic computation. I will discuss this work in the future.

Today I will talk about “doing sums.” The problem is computing the sum of elements from an exponential size set in polynomial time. These problems arise in many parts of computer science, yet we still do not completely understand when they are easy to compute and when they are hard to compute.

Cai is a joy to work with and do research with. Over the years I have worked with many people and he is one of the nicest. I am trying to think of a funny or strange story, but I cannot think of anything that unusual about him. I wonder what that says? Is my memory failing, or, more likely, is he just so solid a person that I cannot think of anything…

There is one story of how we worked hard and solved a problem, and then got completely passed by a much more clever result. Recall that Barrington’s Theorem shows that polynomial length bounded width computations can compute all of {\mathsf{NC^{1}}}. David Barrington did not care about the exact polynomial length: he only cared that it was polynomial. Jin-Yi and I tried to improve his construction. The issue is this: suppose that {F} is a formula of size {|F|} with all paths to leaves the same length. Barrington shows that this can be computed by a bounded width computation of length order {|F|^{2}}.

Cai and I tried to reduce the exponent {2} on {|F|^{2}}. After a great deal of work, we were able to show how to improve Barrington’s initial bound from {2} to {1.81}. Our method used a complex “gadget” and some other ideas.

As soon as our result was published, others started to work on the problem. Richard Cleve wiped us out. He proved that the exponent could be {1.7} or {1.6} or even better {1 + \epsilon} for any {\epsilon>0}. He saw much more deeply into the problem than we did; he set up a pretty framework that avoided any ad hoc gadgets, allowing him to essentially solve the problem. Oh well, at least we raised the question.

Summation Problems

The general summation problem over a field {F} is: find the value of

\displaystyle  S = \sum_{x \in A} f(x)

where {A} is an arbitrary finite subset of {F^{n}} and {f(x)} is an arbitrary function from {F^{n}} to {F}. When the field is a subset of the complex numbers, it makes sense to talk about computing the value {S} with either additive or multiplicative error.

I find these problems interesting for several reasons:

{\bullet} Summations arise naturally in many settings. The classic approach in mathematics, however, is to get bounds on summations; as computer scientists, we want algorithms that compute their values. That does not mean that general bounds are unrelated to computing values, but bounds alone are not enough.

{\bullet} Summations arise as a model of quantum computation. If we could compute certain summations efficiently, then we could simulate quantum computations. This is likely too much to ask, but the question then becomes: when can a sum be computed fast?

{\bullet} They yield some interesting mathematics, and I like the questions that arise. For a variety of reasons, summation type questions seem not to have received sufficient attention in complexity theory. Yet I would argue that they are a natural set of questions.

{\bullet} Lastly, they have nothing to do with variance. Sorry I could not avoid saying something today about the “storm” of wonderful comments that the recent post on variance caused. I will get back to that in the near future.

A Hard Family of Summations

Let me start by showing the following “folklore theorem.” I have known it for a while, but cannot find a reference. I am happy to include a reference, if someone knows who first proved it.

Notation: Hereafter, {I} and {J} will denote intervals of integers, i.e. {I} could be {[a,b]}, which is equal to,

\displaystyle  \{ a, a+1, \dots, b-1, b \}.

When I say {\sum_{x\in I}}, where {I} is an interval, I mean the summation to be over all integers {x} in the interval {I}. Also, the notation {|I|} is the number of integers in the interval.

Theorem: Let {r(x,y)} be a degree at most {2} rational function. Computing the sum {S}

\displaystyle  S = \sum_{x \in I}\sum_{y \in J} r(x,y)

is at least as hard as integer factoring.

In this and all future results, algorithms may, of course, take time polynomial in the number of bits needed to describe the given functions. I will not cheat and encode information by using coefficients of more than exponential size.

Lemma: Consider the following function {S(I,J,M,T)} defined by,

\displaystyle  S(I,J,M,T) = \sum_{x \in I}\sum_{y \in J} \frac{1}{T(xy-M) + 1}.

Assume {T>3}. Then, {S(I,J,M,T)} is equal to

\displaystyle  \# \left\{ (x,y) \mid x \in I \text{ and } y \in J \text{ and } xy=M \right\} + O(|I| \cdot |J|/T).

As usual {\#A} is the cardinality of the set {A}.

Proof: Define {r(x,y)} to be

\displaystyle  \frac{1}{T(xy-M) + 1}.

Then, if {xy=M}, we have {r(x,y) = 1}. I next claim that if {xy \neq M}, then {|r(x,y)| \le 2/T}. Suppose that {xy-M = k}, a non-zero integer. If {k > 0}, then the denominator is at least {T+1}, so the claim is clear. Thus, assume that {k<0} and that the inequality is false. Then,

\displaystyle  |r(x,y)| = \frac{1}{|k|\cdot T-1} > \frac{2}{T}

which implies that

\displaystyle  T > 2|k|T-2 \ge 2T-2

Recall that {T > 3}; but the display gives {T > 2T-2}, i.e. {T < 2}, which is a contradiction. This proves the claim.

It remains to calculate the contribution of each {x,y} to the sum {S(I,J,M,T)}: Each {x,y} with {xy=M} contributes {1}; each other {x,y} contributes at most {2/T}. The total number of {x,y} is clearly bounded by {|I| \cdot |J|}.

This proves the lemma. \Box
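The lemma can be checked numerically; here is a small sketch of my own, with made-up parameters. For {M=15} the sum picks out the factor pairs {(3,5)} and {(5,3)}, up to the {O(|I|\cdot|J|/T)} error.

```python
from fractions import Fraction

def S(I, J, M, T):
    """Exact value of sum over x in I, y in J of 1/(T*(x*y - M) + 1).
    The denominator is never zero: for integer k = x*y - M != 0, |T*k + 1| >= T - 1."""
    a, b = I
    c, d = J
    total = Fraction(0)
    for x in range(a, b + 1):
        for y in range(c, d + 1):
            total += Fraction(1, T * (x * y - M) + 1)
    return total

# M = 15 has exactly two factor pairs, (3,5) and (5,3), inside [2,14] x [2,14].
M, T = 15, 2**12
val = S((2, M - 1), (2, M - 1), M, T)
pairs = sum(1 for x in range(2, M) for y in range(2, M) if x * y == M)
print(pairs, float(val))   # the sum equals pairs + O(|I|*|J|/T)
```

With {T = 2^{12}} the error term is already below {0.1}, so the count of factor pairs is read off by rounding.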

Lemma: Factoring of {M} can be reduced to a polynomial number of evaluations of the function {S(I,J,M,T)}.

Proof: Let {M} be an {n}-bit number, and let {T = 2^{3n}}. Define the function {g(a,b) =S([a,b],[2,M-1],M,T)}. By the lemma, {g(a,b)} is equal to the number of divisors {x} of {M} with {a \le x \le b}, plus an error term that is {o(1)}, since {|I| \cdot |J| \le 2^{2n}} while {T = 2^{3n}}.

Start with {g(2,M-1)}. If this is {o(1)}, then {M} is prime. So assume that {g(2,M-1) \ge 1 +o(1)}. Then, by a binary search we can find a factor of {M}. \Box
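The binary search in this proof can be sketched as follows. Since we of course cannot evaluate {S(I,J,M,T)} efficiently, the sketch below substitutes an exact divisor-count oracle for {g(a,b)}; that stand-in is mine, not the post's.

```python
def g(a, b, M):
    """Stand-in oracle: number of divisors x of M with a <= x <= b and 2 <= x <= M-1.
    In the reduction this count would come from rounding the sum S([a,b],[2,M-1],M,T)."""
    return sum(1 for x in range(max(a, 2), min(b, M - 1) + 1) if M % x == 0)

def find_factor(M):
    """Binary search for a nontrivial factor of M using O(log M) oracle calls."""
    if g(2, M - 1, M) == 0:
        return None          # no nontrivial divisor: M is prime (or tiny)
    lo, hi = 2, M - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if g(lo, mid, M) >= 1:   # a divisor lies in the lower half
            hi = mid
        else:
            lo = mid + 1
    return lo

print(find_factor(91))   # → 7
```

Each halving step asks one summation-style query, so factoring costs polynomially many evaluations.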

Two remarks about this theorem. First, we can weaken the assumption on computing {S=S(I,J,M,T)} exactly. The argument actually shows that either {S \ge 1 + O(|I|\cdot|J|/T)} or {S \le O(|I|\cdot|J|/T)}. For {T} large enough, this shows that we can handle either additive error of almost {1} or multiplicative error of exponential size.

Second, the theorem shows that the summation problem for very simple rational functions is hard. Using a result of Joe Kilian and others, which I believe is still unpublished, we can do much better. Kilian proved the following:

Theorem: Given an integer {N}, the following question is NP-complete: Is there a factor of {N} in the range {[a,b]}?

The intuition for this result is that even if one is given the factorization of {N}, it could be hard to find a subset of the prime factors whose product lies in the given interval. Kilian’s insight is that the problem is essentially a knapsack type question, where the values to be packed are the logarithms of the prime factors of {N}.
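To illustrate the knapsack flavor, here is a toy brute force of my own: even knowing the prime factors, deciding whether some subset product lands in {[a,b]} naively means searching all subsets, and the search space is exponential in the number of prime factors.

```python
from itertools import combinations
from math import prod

def factor_in_interval(primes, a, b):
    """Brute force: is some subset product of the given prime factors in [a, b]?
    The search space has size 2^len(primes) -- exactly the knapsack-like hardness."""
    n = len(primes)
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if a <= prod(primes[i] for i in subset) <= b:
                return True
    return False

# N = 2*3*5*7 = 210: the subset products near [11,13] are 10, 14, 15, ...
print(factor_in_interval([2, 3, 5, 7], 11, 13))  # → False
print(factor_in_interval([2, 3, 5, 7], 14, 15))  # → True (14 = 2*7, 15 = 3*5)
```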

This result and the above methods yield the following stronger theorem:

Theorem: Let {r(x,y)} be a degree at most {2} rational function. Computing the sum {S}

\displaystyle  S = \sum_{x \in I}\sum_{y \in J} r(x,y)

is NP-hard. This remains true even for computing the sum within an additive error of {2/3}.

As remarked earlier we can allow any additive error less than 1 or even an exponential multiplicative error.

The proof is similar to that of the previous lemma:

Proof: Let {M} be an {n}-bit number, and let {T = 2^{3n}}. Define the function {g(a,b) =S([a,b],[2,M-1],M,T)}. By the lemma, {g(a,b)} is equal to the number of divisors {x} of {M} with {a \le x \le b}, plus an error term that is {o(1)}. Thus, determining if there is a factor in an interval reduces to one summation evaluation. \Box

There is some interesting earlier work on related problems. For example, Leonid Gurvits and Warren Smith show that computing the integral of rational functions that are given by straight-line arithmetic circuits is #P-hard. The key to their result is the following pretty formula for the permanent of an {n} by {n} matrix {A},

\displaystyle  \frac{1}{2\pi}\int_{0}^{2\pi} \prod_{j=1}^{n}h_{j}(p,\theta)\, d\theta

where {p \ge 2} is an integer and {h_{j}(p,\theta)} is equal to

\displaystyle  \sum_{k=1}^{n-1} a_{jk} \exp(ip^{k-1}\theta) + a_{jn}\exp(-i\frac{p^{n}-1}{p-1}\theta).

An Easy Family of Summations

An interesting family of “easy” summations arises from a recent paper by Jin-Yi Cai, Xi Chen, and Pinyan Lu (CCL): “Graph Homomorphisms with Complex Values: A Dichotomy Theorem.” László Lovász first defined the notion of graph homomorphism, which generalizes many interesting graph properties, such as vertex cover, independent set, and graph colorings, to name a few. Thus, the paper of CCL, by allowing complex valued graph homomorphisms, covers many important graph properties. A natural question is: which properties are easy and which are hard?

They answer this question and prove a dichotomy type theorem. Roughly, such theorems show that all problems of a certain type can be divided into two classes: problems in the first class are all solvable in polynomial time, and problems in the second class are all NP-hard. The proof that the first class is in polynomial time relies on an interesting family of summations, which we will discuss in a moment.

You have probably seen dichotomy type theorems before. A very simple example is this: consider SAT problems. Those with only two-literal clauses are always easy to solve, while in general those with three-literal clauses are, of course, NP-complete.

The work of CCL is a much harder, much deeper, and much more powerful result, though it is in the same spirit as the simple example I just gave. I will not be able to do justice to their beautiful paper, since it is a mere 111 pages: yes, that’s not a typo, nor is it in binary. Read their paper for the details, or at least browse through it; the paper contains many interesting ideas.

CCL must be able to classify certain summations that arise from finite rings, and decide which are easy to compute and which are not. The main result on easy ones is this beautiful theorem:

Theorem: Let {N} be any positive integer, and {\omega_{N} = e^{2 \pi i/N}}. Let {f \in \mathbb{Z}_N [x_1,\ldots, x_n]} be a quadratic polynomial in {n} variables {x_1, \ldots, x_n} over {\mathbb{Z}_N}. Then the following sum

\displaystyle Z_{N}(f)=\sum_{x_1, \ldots, x_n \in \mathbb{Z}_N }\omega_{N}^{f(x_1, \ldots, x_n)}

can be evaluated in polynomial time in {n} and {\log N}.

Note, when {N} is a prime, the result follows from known results–see this. However, the full general case when {N} is an arbitrary number with unknown factorization is new.

These sums are quite interesting, and it is not obvious that they are easy to compute, for several reasons. First, there are, of course, an exponential number of terms in the sum: {N^{n}}. Second, raising a root of unity to the value of a quadratic polynomial gives a term that varies in a complex manner. Finally, the problem is an exact evaluation: no error–the exact value is computed. The bottom line is that it is not at all clear why such a sum should be easy to compute.
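To make the object concrete, here is a brute-force evaluator for {Z_N(f)} (my own sketch; the encoding of {f} by coefficient arrays is mine). It enumerates all {N^{n}} assignments, which is exactly the exponential cost that the theorem's algorithm avoids.

```python
import cmath
from itertools import product

def Z(N, quad, lin, c0, n):
    """Brute-force Z_N(f) for f = sum_{i<=j} quad[i][j]*x_i*x_j + sum_i lin[i]*x_i + c0.
    Enumerates all N**n assignments -- exponential, unlike the theorem's algorithm."""
    omega = cmath.exp(2j * cmath.pi / N)
    total = 0
    for xs in product(range(N), repeat=n):
        f = c0
        for i in range(n):
            f += lin[i] * xs[i]
            for j in range(i, n):
                f += quad[i][j] * xs[i] * xs[j]
        total += omega ** (f % N)
    return total

# f(x1, x2) = x1^2 + x1*x2 over Z_3; the 9-term sum collapses to 3.
quad = [[1, 1], [0, 0]]
val = Z(3, quad, [0, 0], 0, 2)
print(val)   # ≈ 3 (up to floating-point noise)
```

Already at {N=100} and {n=10} this loop has {10^{20}} terms, while the theorem promises time polynomial in {n} and {\log N}.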

Sketch of the Proof

The proof of the above theorem is fairly involved. The main idea is to show that it can be reduced to a class of sums that Gauss knew about. I will, with Cai’s kind permission, include a sketch of the proof for the special case of {N} an odd prime power. This, I believe, gives the flavor of how such theorems are proved.


Define the Gauss sum as

\displaystyle G(a, N) = \sum_{t \in \mathbb{Z}_{N}} e^{\frac{2 \pi i}{N} a t^2}.

Gauss knew the value of these sums in closed form. The story of how he did this is quite interesting–see this.
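A quick computation of my own of a few Gauss sums, checking Gauss's classical evaluation {|G(1,p)| = \sqrt{p}} for an odd prime {p} (the value is {\sqrt{p}} for {p \equiv 1 \bmod 4} and {i\sqrt{p}} for {p \equiv 3 \bmod 4}):

```python
import cmath, math

def gauss_sum(a, N):
    """G(a, N) = sum over t in Z_N of e^{2*pi*i*a*t^2 / N}, by direct enumeration."""
    return sum(cmath.exp(2j * cmath.pi * a * t * t / N) for t in range(N))

for p in (5, 7, 13):
    print(p, gauss_sum(1, p), math.sqrt(p))
```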

The main idea of the proof is to use induction on the number of variables. Let {p} be an odd prime, and {N=q=p^k}. We are trying to compute {Z_q(f)}. We assume {f(x_1,x_2,\ldots,x_n)} has the following form:

\displaystyle  f(x_1,\ldots,x_n)=\sum_{i\le j\in [n]} c_{i,j}x_ix_j+\sum_{i\in [n]} c_ix_i+c_0,

where all the {c_{i,j}} and {c_i} are elements in {\mathbb{Z}_q}. We can assume the constant term in {f} is zero since it can be taken out of {Z_{q}(f)} as a constant factor.

Note, if {f} were purely linear, then the sum could easily be written as a product of a simple sum and a sum over {n-1} variables. Even though {f} is not linear, the strategy is to achieve this same goal. Thus, we can assume that {f} has non-linear terms.

For any non-zero {a \in \mathbb{Z}_q}, we can write it as {a=p^t a'}, where {t} is a unique non-negative integer, such that {p\nmid a'}. We call {t} the order of {a} (with respect to {p}). Since {f} has non-zero quadratic terms, let {t_0} be the smallest order of all the non-zero quadratic coefficients {c_{i,j}} of {f}. We consider the following two cases: there exists at least one square term with coefficient of order {t_0}, or not. We call the former the first case (I) and the latter the second case (II). Note, all this complexity is due to the fact that we are working over a ring and not a field.
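The order decomposition {a = p^{t}a'} with {p \nmid a'} is computed by repeated division; a trivial helper of my own:

```python
def order(a, p):
    """Write a = p**t * a2 with p not dividing a2; return (t, a2). Assumes a != 0."""
    t = 0
    while a % p == 0:
        a //= p
        t += 1
    return t, a

print(order(50, 5))   # → (2, 2), since 50 = 5**2 * 2
```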

In the first case (I), without loss of generality, we assume {c_{1,1}=p^{t_0}c} and {p\nmid c} (so {c} is invertible in {\mathbb{Z}_q}). Then by the minimality of {t_0}, every non-zero coefficient of a quadratic term has a factor {p^{t_0}}. Now we factor out {c_{1,1}} from every quadratic term involving {x_1}, namely from {x_1^2, x_1 x_2, \ldots, x_1 x_n}. We can write

\displaystyle  f(x_1,x_2,\ldots, x_n) = c_{1,1} \big(x_1+g(x_2,\ldots,x_n)\big)^2+ c_1 x_1 + \Delta

where {g} is a linear form over {x_2,\ldots,x_n} and {\Delta} is a quadratic polynomial in {(x_2,\ldots,x_n)}. By adding and then subtracting {c_1 g(x_2,\ldots,x_n)}, we get

\displaystyle  f = c_{1,1}\big(x_1+g(x_2,\ldots,x_n)\big)^2+c_1\big(x_1+g(x_2,\ldots,x_n)\big) +f'(x_2,\ldots,x_n),

where {f'(x_2,\ldots, x_n) \in \mathbb{Z}_q[x_2,\ldots,x_n]} is a quadratic polynomial over {x_2,\ldots, x_n}.

For any fixed {x_2, \ldots,x_n \in \mathbb{Z}_q}, when {x_1} takes all the values in {\mathbb{Z}_q}, {x_1+g(x_2, \ldots,x_n)} also takes all the values in {\mathbb{Z}_q}. Thus,

\displaystyle \sum_{x_1, \ldots, x_n \in \mathbb{Z}_q }\omega_{q}^{f(x_1, \ldots, x_n)}=\left(\sum_{x \in \mathbb{Z}_q} \omega_{q}^{c_{1,1} x^2+c_1x}\right)\left(\sum_{x_2,\ldots,x_n \in \mathbb{Z}_q } \omega_{q}^{f'(x_2,\ldots, x_n )}\right)

\displaystyle = \sum_{x \in \mathbb{Z}_q} \omega_{q}^{c_{1,1} x^2+c_1x} \cdot Z_{q}(f').

The first term can be evaluated in polynomial time using Gaussian sums, and the second term is reduced to {Z_q(f')} in which {f'} has at most {n-1} variables.
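Here is a numerical check (my own worked example) of this splitting for {q=9} and {f = x_1^2 + x_1 x_2}: completing the square with {2^{-1} = 5 \pmod 9} gives {f = (x_1+5x_2)^2 + 2x_2^2 \pmod 9}, so {Z_9(f)} should equal {G(1,9)\cdot G(2,9)}.

```python
import cmath
from itertools import product

q = 9
omega = cmath.exp(2j * cmath.pi / q)

# Left side: Z_9(f) for f(x1, x2) = x1^2 + x1*x2, by brute force over all 81 pairs.
Zf = sum(omega ** ((x1 * x1 + x1 * x2) % q)
         for x1, x2 in product(range(q), repeat=2))

# Right side: for fixed x2, x1 + 5*x2 ranges over all of Z_9 as x1 does,
# so the sum factors into two one-variable Gauss sums.
def G(a):
    return sum(omega ** ((a * t * t) % q) for t in range(q))

print(abs(Zf - G(1) * G(2)))   # ≈ 0
```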

Now we compute the sum {\sum_{x \in \mathbb{Z}_q} \omega_{q}^{c_{1,1} x^2+c_1x}} using Gauss sums. If {c_1 =0}, then this is already {G(c_{1,1}, q)}. Suppose {c_1 \neq 0}, and write {c_{1,1} = c p^a} and {c_1 = d p^b}, where {p \nmid c, d}. Then we can show that if {a \le b}, we can “complete” the square and the sum becomes (a constant multiple of) a Gauss sum. On the other hand, if {a > b}, we can show that the sum is {0}.

Finally, the second case (II) is reduced to the first case by a change of variable transformation.

Open Problems

I believe that the following should be easy to compute, but I cannot prove it. Compute the sum {S}

\displaystyle  S = \sum_{x \in I} \frac{1}{rx+s}

within an additive error of {2^{-n}} where {I =[a,b]} and {a,b,r,s} are at most {n} bit numbers. Note, it is easy to compute the sum

\displaystyle  \sum_{x \in I} p(x)

where {p(x)} is a polynomial of degree {n} in time polynomial in {n}.
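The polynomial case can be sketched as follows (my own code): the prefix sum {P(m) = \sum_{x=0}^{m} p(x)} is a polynomial of degree {n+1}, so it is determined by its values at {n+2} points and can be recovered exactly by Lagrange interpolation, then evaluated at the endpoints.

```python
from fractions import Fraction

def interval_sum(coeffs, a, b):
    """Exactly compute sum of p(x) for x = a..b (with a >= 0), where
    p(x) = coeffs[0] + coeffs[1]*x + ..., in time polynomial in deg(p).
    Key fact: P(m) = sum_{x=0}^{m} p(x) is a polynomial of degree deg(p)+1."""
    n = len(coeffs) - 1
    pts = list(range(n + 2))

    def p(x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    vals, acc = [], Fraction(0)
    for m in pts:
        acc += p(m)
        vals.append(acc)                       # vals[m] = P(m)

    def P(m):                                  # Lagrange interpolation at m
        total = Fraction(0)
        for i, xi in enumerate(pts):
            term = Fraction(vals[i])
            for j, xj in enumerate(pts):
                if j != i:
                    term *= Fraction(m - xj, xi - xj)
            total += term
        return total

    return P(b) - P(a - 1)

print(interval_sum([0, 0, 1], 1, 10))   # → 385, the sum 1^2 + 2^2 + ... + 10^2
```

No such closed-form prefix polynomial exists for {1/(rx+s)}, which is what makes the open problem interesting.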

Another set of questions comes from the work of CCL. Their algorithm runs in polynomial time. Can we improve it to a smaller complexity class? What about, for instance, the class {\mathsf{NC}}?

19 Comments
  1. Qiaochu Yuan permalink
    June 9, 2009 5:39 pm

    What reason do you have to suspect that these 1/(rx+s) sums are easy to compute? There’s a big difference between sums of polynomials (which are morally other polynomials) and sums of reciprocals of linear polynomials (which are morally logarithms). It might be that estimating such sums is as difficult as estimating certain sums related to the Stirling numbers (which appear as the numerators of partial sums of the harmonic series).

  2. Pascal Koiran permalink
    June 10, 2009 6:57 am

    Have you tried to reduce open problem 1 (sum of reciprocals) to a summation of polynomials, which as you point out can be evaluated efficiently?
    One natural attempt would be to use the approximation of the function 1/(1+x) by its Taylor series on e.g. the interval [0,1/2].

    • rjlipton permalink*
      June 10, 2009 7:49 am

      Yes, that is an idea I have thought about. The trouble is this: consider 1/(x+a), where a is very large. Then for an interval [a-b, a+b] where b << a, the Taylor series does not converge very fast.

      The reason I raise this question is that if a single summation were too easy to compute, then one might get close to computing the double summation. That would hit the P=NP wall? Or perhaps solve it?

      Anyway, that's why I raised this issue.

  3. Pascal Koiran permalink
    June 10, 2009 1:30 pm

    Fast convergence of the Taylor series is the key to this approach (and thus maybe to P=NP!), if it can be made to work.
    Here’s a sketch of how it might work on a simple example:

    sum_{x=1}^{3b} 1/x. (*)

    This sum can be rewritten as :

    sum_{x=1}^{2b} 1/x + sum_{x=1}^b 1/(x+2b).
    Ignoring the “detail” that 2b might not be divisible by 3, the first of the two sums is of the same form as (*);
    but the length of the summation was multiplied by 2/3. So by recursion we will be done, if only we can approximate the second sum.
    That sum can be rewritten as (1/2b)*sum_{x=1}^b 1/(1+x/2b).
    To approximate it, we use the Taylor series for 1/(1+y). It converges quickly since the argument y = x/(2b) is in the interval [0,1/2].
    Is there something wrong with this argument ?

    • rjlipton permalink*
      June 10, 2009 2:14 pm

      The problem is this: think of a sum that runs from 2^n to 2^n + 2^n/2, say. Also, let the function be \frac{1}{x + m} where m is about 2^n. Then, when we factor out m, we get a sum of terms \frac{1}{x/m + 1}. But this does not converge fast, since x/m is almost 1.

      • Pascal Koiran permalink
        June 10, 2009 3:46 pm

        You are right that when m is factored out in your example, there is a problem with the speed of convergence. But the method that I suggested in my previous comment is a bit different, and avoids this problem (as far as I can see).
        To apply it to your example, you can for instance rewrite the sum that you want to compute :

        sum_{x=2^n+m}^{2^n+2^n/2+m} 1/x

        as the difference of the 2 sums:

        sum_{x=1}^{2^n+2^n/2+m} 1/x

        and sum_{x=1}^{2^n+m-1} 1/x.
        Now each sum can be approximated with the method from my previous comment.
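A numerical check of the Taylor step in this exchange (my own sketch, with made-up parameters): since y = x/(2b) <= 1/2, truncating the series 1/(1+y) = sum_k (-y)^k after K terms costs at most about 2^{-K} per term.

```python
def taylor_tail(b, K):
    """Approximate sum_{x=1}^{b} 1/(x + 2b) = (1/(2b)) * sum_x 1/(1 + x/(2b))
    by truncating 1/(1+y) = sum_{k>=0} (-y)^k after K terms; here y <= 1/2,
    so each dropped tail is at most about 2**(-K)."""
    total = 0.0
    for x in range(1, b + 1):
        y = x / (2 * b)
        total += sum((-y) ** k for k in range(K)) / (2 * b)
    return total

b = 1000
exact = sum(1.0 / (x + 2 * b) for x in range(1, b + 1))
approx = taylor_tail(b, 40)
print(abs(exact - approx))   # tiny: truncation error on the order of 2**(-40)
```

Each power sum sum_x x^k appearing in the expansion is itself a polynomial summation, which is the reduction Koiran describes.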

  4. June 10, 2009 2:29 pm

    I seem to remember sums like this coming up when I took complex analysis. Maple seems to think that these things can be expressed in closed form using the Digamma function. Namely if you ask it to compute
    sum(1/(a*x+b), x=0..c);
    then it gives back
    (Psi(c + 1 + b/a)-Psi(b/a))/a
    where Psi denotes the Digamma function. My naive guess is that Digamma is approximable in polynomial time, but I don’t rightly know.

    • rjlipton permalink*
      June 10, 2009 2:47 pm

      Nice idea. Will check it out.

      • Eric Towers permalink
        February 15, 2011 1:02 am

        One could run with
        Sum[1/(r x+s), {x,a,b}] == (1/r)(Digamma(b + s/r + 1) – Digamma(a + s/r)).
        Digamma(z) = Gamma'(z)/Gamma(z)
        and continue. But perhaps more satisfying to you is
        Digamma(z+1) = -\gamma + Sum[z/(n (n+z)), {n,1,\inf}], (z != -1, -2, -3, …)

        Gamma[z] is the gamma function (Euler’s integral of the second kind).
        Gamma'[z] is its first derivative.
        \gamma is the Euler-Mascheroni constant.
        \inf is (positive) infinity.

        The a, b, s/r that you are thinking of are likely all positive, so z+1 > -1, so the sum is of positive terms, decreasing to zero, like n^-2 (which itself quickly converges). So estimating your error is pretty easy and doesn’t seem to require an exponentially large number of terms in the Digamma sum.

        I don’t see any obstruction to computing your sum S in polynomial time.
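The Digamma route discussed in this thread can be sanity-checked numerically; in my sketch below a crude central difference of log-Gamma stands in for a real digamma implementation (a serious version would use the recurrence plus an asymptotic series, or a library routine).

```python
from math import lgamma

def digamma(z, h=1e-5):
    """Crude numerical digamma via a central difference of log-Gamma.
    Good to roughly 1e-7 here; only meant to check the identity, not for production."""
    return (lgamma(z + h) - lgamma(z - h)) / (2 * h)

# Check: sum_{x=a}^{b} 1/(r*x + s) == (1/r) * (psi(b + s/r + 1) - psi(a + s/r))
r, s, a, b = 3, 2, 5, 500
direct = sum(1.0 / (r * x + s) for x in range(a, b + 1))
closed = (digamma(b + s / r + 1) - digamma(a + s / r)) / r
print(abs(direct - closed))   # tiny
```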

  5. jmount permalink
    June 11, 2009 4:46 pm

    Looks like some really great work.

    Some references I like for using special functions and Gosper-like methods to compute sums are:

    “A=B” Marko Petkovsek, Herbert S Wilf, Doron Zeilberger (1997)

    “Lattice Sums” M. L Glasser, I. J Zucker Theoretical Chemistry: Advances and Perspectives (1980) vol. 5 pp. 67-139.

    “Special Functions” George E. Andrews, Richard Askey and Ranjan Roy (1999).

    For sums to count points in polytopes:

    “An Algorithmic Theory of Lattice Points in Polyhedra” Alexander Barvinok, James E Pommersheim. (2007)

    And my own small contribution:

    “Fast Unimodular Counting” John A Mount. Probability and Computing (2000) vol. 9 (3) pp. 277-285

  6. none permalink
    June 13, 2009 1:16 am

    Dick, do you think you could turn off the annoying “snap shots” on this blog? They have been driving me nuts on other wordpress blogs for a while (primarily Terry Tao’s) and I finally found instructions for turning them off:

    I left a note requesting this at Terry Tao’s blog too.

    • rjlipton permalink*
      June 13, 2009 8:10 am

      I turned them (the snap shots) OFF. I hope that most agree with this; I also found them of marginal value.

  7. Tyson Williams permalink
    July 21, 2009 6:16 pm

    For the first lemma, I am trying to prove to myself that the “claim is easy to see” when k > 0. However, this does not seem to be true for all possible values for M.

    Further down the page, we begin talking about factoring M and whether or not it is prime. Are we assuming that M is an element of the integers? Does the same go for T? Are these questions answered in the text but I didn’t see them?

    • subruk permalink
      July 21, 2009 7:03 pm

      x and y are picked from an interval of integers as noted by Lipton. Even though it is not made explicit, I think M and T can be assumed to be integers. This is because in the only application of the lemma, M and T are defined to be integers. If you assume that x, y, M and T are integers in the lemma, the small claim follows quite easily when k>0.


