
Polynomials Are Easier Than Integers

February 26, 2011

Recent results on the sum of square roots problem

David Parnas is not a theorist, but is one of the experts on software design, software construction, and software engineering in general.

Today I would like to talk about a terrific recent result on an old favorite problem of mine: the sum of square roots problem. It is authored by Neeraj Kayal and Chandan Saha and will appear in CCC 2011.

David Parnas was my Ph.D. advisor when I was a graduate student at CMU years ago. He taught me two really important things—two things about the world at large. Here are Parnas’ dictums:

  • People usually begin a research area for a solid reason, but sometimes the reason for the creation of the area is forgotten by later researchers.
  • People sometimes say “X is true.” A rule for discovering whether such a statement makes any sense is to consider the statement “X is false.” If the latter is silly or obvious, then the original statement made no sense.

I especially like the second rule: if I say that “we should definitely work hard on getting our proofs correct,” consider the negation: “we should work hard on getting our proofs incorrect.” Silly. So the first statement had no meaning. You will be surprised how powerful this rule is. Essentially, it is the following theorem:

Theorem: Let {X} be a binary valued random variable. Then {H(X) = H(\neg X)}, where {H} is the entropy function.
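The theorem is just the symmetry of the binary entropy function, {H(p) = H(1-p)}. A quick numerical check (a throwaway sketch, not from the post itself):

```python
from math import log2

def H(p):
    """Binary entropy in bits: H(X) for a binary variable with Pr[X=1] = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Negating a binary variable swaps p and 1-p, so the entropy is unchanged.
symmetric = abs(H(0.3) - H(0.7)) < 1e-12
```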

There is a link between the sum of square roots problem and Parnas’ first dictum—please read on to see what it is.

The Problem

The problem that Kayal and Saha study is the computational complexity of determining whether one sum of square roots is greater than another: is

\displaystyle  \sum_{i=1}^n \delta_i \cdot \sqrt{a_i} > 0,

where each {a_i} is bounded by {N}, the {a_i}‘s are positive, and each {\delta_i} is in {\{-1,+1\}}. This is an important open problem, called the sum of square roots problem (SSRP) in computational numerical analysis.

If {>} is replaced by {=}, then the problem is computationally easy. See this for details.

If you have not seen the problem before, you might think the following simple idea should work: Let

\displaystyle  S = \sum_{i=1}^n \delta_i \cdot \sqrt{a_i}.

Then compute {S} to high precision: say some {\mathsf{poly}(n,\log N)} bits of precision. This is possible thanks to an old result of Richard Brent, which shows that square roots can be computed to polynomially many bits of precision in polynomial time. He proved much more; see his paper for details.

The obstacle to this approach is: how many bits of precision do we need to compute {S} to? Put another way, can {S} be nonzero but very, very small? If we knew that {|S|>0} implied {|S|} is not too small, then the simple compute-to-high-precision approach would work. Currently there is no general bound proving this; actually, I am not sure that I even believe it is true. I do think that SSRP should be a feasible problem, but that does not mean that {|S|} must be either zero or large.
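Here is the naive approach in code: a minimal sketch using Python's `decimal` module, with the precision picked arbitrarily since, as just noted, no one knows how many digits suffice in general.

```python
from decimal import Decimal, getcontext

def ssr_value(terms, digits):
    """Evaluate S = sum(delta * sqrt(a)) to `digits` decimal digits.

    If a lower bound on |S| for S != 0 were known, choosing `digits`
    accordingly would decide the sign of S. Without one, no fixed
    precision is provably enough.
    """
    getcontext().prec = digits
    return sum(Decimal(d) * Decimal(a).sqrt() for d, a in terms)

# sqrt(2) + sqrt(3) - sqrt(10) is negative (3.146... < 3.162...)
S_neg = ssr_value([(1, 2), (1, 3), (-1, 10)], 50)
# sqrt(4) - sqrt(1) - sqrt(1) is exactly zero
S_zero = ssr_value([(1, 4), (-1, 1), (-1, 1)], 50)
```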


There are results that are close in spirit to the hoped-for bounds on sums of square roots. The most famous are the amazing results of Alan Baker on sums of logarithms; he won the 1970 Fields Medal for this work. Baker considered sums of the form

\displaystyle  \Lambda = \sum_{i=1}^n \beta_i \log \alpha_i,

where {\alpha_i} and {\beta_i} are algebraic numbers. His results are of the form: if {\Lambda \neq 0}, then

\displaystyle  | \Lambda | \ge F(n,\alpha,\beta).

Here {F} is an explicit function that depends on {n}, the {\alpha_i}‘s, and the {\beta_i}‘s. Not surprisingly it is exponentially tiny, but how it is exponential is very important. See this for the detailed definition of the function. There are several points that are important about {F}:

  1. The function {F} is explicit, so the results are effective. This is very important for applications.
  2. The exponent’s dependence on {n} is roughly of order {n!} times {O(d)^n}, where {d} is the degree of the number field generated by the {\alpha_i}‘s. Note that if they are integers, then this term is dominated by the factorial term.
  3. The function’s dependence on the {\beta_i}‘s is only logarithmic.

If we had a result like this for the square root function instead of the logarithm function, then SSRP would be tractable. But we do not.
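The phenomenon Baker controls is easy to observe experimentally. The toy search below (not Baker's method; the function {F} itself is far more delicate) looks for integer combinations {p \log 2 - q \log 3} that are small but nonzero; Baker's theorem guarantees every such nonzero value is bounded below by an explicit quantity.

```python
from math import log

# For each q, take p as the nearest integer to q * log2(3); record the
# smallest |p*log(2) - q*log(3)| seen. The winners are the continued
# fraction convergents of log2(3).
best = min(
    (abs(round(q * log(3) / log(2)) * log(2) - q * log(3)),
     round(q * log(3) / log(2)), q)
    for q in range(1, 400)
)
val, p, q = best
```

For {q < 400} the minimum lands on the convergent {485/306} of {\log_2 3}, with {|485 \log 2 - 306 \log 3| \approx 10^{-3}}: small, but not too small.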

Kayal and Saha

Their main theorem is based on an extremely clever idea: they work with univariate polynomials instead of integers. I have recently talked about this method; often changing from integers to polynomials makes a problem tractable. The brilliance here is that the notion of the square root of a polynomial is well defined:

\displaystyle  \sqrt{ x^2 + 1}

is a well-defined complex-valued function (okay, it is multi-valued, but that can be handled). The next question, what {>0} should mean for polynomials, is the really clever part. Kayal and Saha show how to turn an inequality over the integers into a meaningful notion for polynomials. They prove:

Theorem: Given a sum

\displaystyle  S = \sum_{i=1}^n g_i(x) \sqrt{f_i(x)},

where {f_i(x)} and {g_i(x)} are univariate polynomials with degree bounded by {d} and {f_i(0) \neq 0}, for all {i}, either {S=0}, or the minimum exponent of {x} that has a nonzero coefficient in the power series {S} is bounded above by {dn^2 + n}.
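The valuation bound can be checked on a tiny instance. The sketch below (a hand-rolled power-series square root over the rationals, not their proof technique) expands {S = 2\sqrt{1+x} - (2+x)}, where {n=2} and {d=1}, so the bound is {dn^2 + n = 6}:

```python
from fractions import Fraction

def series_sqrt(f, n):
    """Power-series sqrt of f (a coefficient list with f[0] == 1), mod x^n.

    Coefficient recursion from s*s = f:
      s[0] = 1,  s[k] = (f[k] - sum_{i=1}^{k-1} s[i]*s[k-i]) / 2.
    """
    s = [Fraction(0)] * n
    s[0] = Fraction(1)
    for k in range(1, n):
        fk = Fraction(f[k]) if k < len(f) else Fraction(0)
        s[k] = (fk - sum(s[i] * s[k - i] for i in range(1, k))) / 2
    return s

N = 8
# S = 2*sqrt(1+x) - (2+x): n = 2 terms, degrees d <= 1, bound d*n^2 + n = 6.
S = [2 * c for c in series_sqrt([1, 1], N)]   # 2*sqrt(1+x)
S[0] -= 2
S[1] -= 1
valuation = next(k for k, c in enumerate(S) if c != 0)
```

The first nonzero coefficient of {S} is {-x^2/4}, so the valuation is {2}, comfortably within the bound of {6}.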

This is an interesting fact on its own. However, it allows them to resolve the following notable special case of SSRP:

Theorem: Suppose that

\displaystyle  S = \sum_{i=1}^n \delta_i \sqrt{a_i},

where each {\delta_i \in \{-1,+1\}}, and every integer {a_i} is of the form

\displaystyle  a_i = X^{d_i} + b_{i1}X^{d_i-1} + \cdots + b_{id_i},

where {X} is a positive real number and {b_{ij}} are integers. If

\displaystyle  X > (B+1)^{p_1(n,d)},

where {B} is maximum over the {|b_{ij}|}‘s and {p_1(n,d)} is a fixed polynomial in {n} and {d}, then either {S=0}, or its absolute value is lower-bounded by

\displaystyle  1/X^{p_2(n,d)},

where {p_2(n,d)} is another fixed polynomial in {n} and {d}.

The statement is technical, but really is quite simple in principle. Their idea is to solve the SSRP for what they call polynomial integers. A number of the form

\displaystyle  X^{100} - 156X^{89} + 31

is an example of such an integer—provided {X} is large enough. If they restricted {X} to be a natural number, then these would just be sparse numbers in a high radix.
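Concretely, when {X} is a natural number far larger than every coefficient, the sparse polynomial shape can be read back off the integer itself. A hypothetical example (with {X = 10^6} chosen for illustration), using balanced base-{X} digits so that negative coefficients survive:

```python
# Recover the sparse "digits" of a polynomial integer, assuming X dwarfs
# every coefficient (hypothetical X = 10**6 here).
X = 10**6
n = X**100 - 156 * X**89 + 31

digits = {}
pos = 0
while n:
    n, r = divmod(n, X)
    if r > X // 2:      # balanced digits: a large residue becomes a
        r -= X          # negative coefficient, so -156 comes back intact
        n += 1
    if r:
        digits[pos] = r
    pos += 1
```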

Not Exactly {\dots}

Kayal and Saha point out that the rationale for their study of SSRP is the famous paper of Mike Garey, Ron Graham, and David Johnson (GGJ) on the Euclidean Traveling Salesman Problem. I have talked about that at length before here—I did say that it is one of my favorite open problems. Here is how it looks in their paper:

GGJ point out in the introduction to their paper that they can only prove that the TSP is {\mathsf{NP}}-hard, not {\mathsf{NP}}-complete, for a very simple reason: they cannot prove that it lies in {\mathsf{NP}}. The problem is that a tour obviously has a length that is a sum of square roots, but comparing that length to a given {L} is what they did not know how to do. Their work was in 1976, and now, 35 years later, we still do not know the answer.

The exact problem they need to solve is a bit different from SSRP. Their problem is to decide whether

\displaystyle  S = \sum_{i=1}^n \sqrt{a_i} \ge r,

where {r} is a rational number. Further, they actually do not need a polynomial time algorithm. They only need that the problem of deciding this inequality is in {\mathsf{NP}}.

This is an example of the first dictum of Parnas: do not forget why we are trying to solve the problem. Yes SSRP is a beautiful problem, and I really like the work Kayal and Saha have done on it. But the real problem that has been hanging open is a bit different:

  • The sum of square roots is compared to a rational, not another sum of square roots.
  • We only need a proof system to resolve the status of TSP. That is, the algorithm can be nondeterministic.

Polynomial Numbers

While such numbers are far from those we would like to understand, I conjecture that this class of numbers could prove more important than the result itself. Can we use the idea of polynomial numbers to attack other problems? For example, can we prove that linear programming has a strongly polynomial algorithm when the numbers are polynomial over a large enough {X}? I have thought about a version of this before, and would not be too surprised if this were feasible to prove.

Open Problems

Prove that Euclidean TSP belongs to {\mathsf{NP}}, so that it is {\mathsf{NP}}-complete. Finally. Please.

[Added two hyperlinks to earlier GLL articles, on “game-changing” methods and on linear programming.]

14 Comments
  1. February 26, 2011 1:42 pm

    The negation of “we should definitely work hard on getting our proofs correct,” is not “we should work hard on getting our proofs incorrect.” A ‘better’ negation would be “we should not work hard on getting our proofs correct”, but I guess the ‘most correct’ negation would be “it is questionable if we should work hard on getting our proofs correct”.

  2. JeffE permalink
    February 26, 2011 3:27 pm

    It is questionable whether we should not work hard on not getting our non-proofs incorrect.

    Makes perfect sense!

  3. February 27, 2011 11:34 am

    One small correction: In order to put Euclidean-TSP into NP, you have to show that you can decide in NP whether $\sum_{i=1}^n\sqrt{a_i}\leq r$ instead of $\sum_{i=1}^n\sqrt{a_i}\geq r$, or equivalently (since equality can be checked in polynomial time) that you can decide in coNP whether $\sum_{i=1}^n\sqrt{a_i}\geq r$.

    Actually, as far as I know, this problem is not even known to lie inside the polynomial hierarchy, and the current best upper bound is the fourth level of the counting hierarchy, see this paper for more details:

  4. Martin Cohen permalink
    February 27, 2011 3:22 pm

    Another example of polynomials being easier than integers is Goldbach’s conjecture. I recently read a proof (in an MAA journal) that not only shows that every monic polynomial is the sum of two irreducible monic polynomials but also counts the number of such representations.

    • rjlipton permalink*
      February 27, 2011 3:27 pm


      Good point. I have recently talked about this here.

  5. dorkusmonkey permalink
    March 1, 2011 12:56 am

    Yet another example of polynomials being easier than integers is factoring. LLL gives a (deterministic) polynomial time algorithm to factor polynomials with integer coefficients over the integers whereas it’s obviously still unknown whether integer factoring has a polynomial time algorithm.

    • Mahfoud null belhadj permalink
      March 1, 2011 3:11 pm

      Who is LLL?

      • rjlipton permalink*
        March 1, 2011 8:05 pm

        LLL stands for the initials of the inventors of a key algorithm: the Lenstra–Lenstra–Lovász lattice basis reduction algorithm. See this.

  6. March 2, 2011 3:16 am

    I’m probably just missing something, but what’s preventing one from just choosing X=2, writing the integer expanded out in binary where the b_ij values form the bitstrings of each integer? Oh, is p_1 a polynomial “fixed” by the mathematics, or can it be “fixed” by the “user”? I think I just answered my own question.

    • rjlipton permalink*
      March 2, 2011 8:42 am


      They need that X is very large. That is the issue.

  7. Max permalink
    November 3, 2011 9:32 am

    The link is broken


