# Polynomials Are Easier Than Integers

*Recent results on the sum of square roots problem*

David Parnas is not a theorist, but he is one of the experts on software design, software construction, and software engineering in general.

Today I would like to talk about a terrific recent result on an old favorite problem of mine: the sum of square roots problem. It is authored by Neeraj Kayal and Chandan Saha and will appear in CCC 2011.

David Parnas was my Ph.D. advisor when I was a graduate student at CMU years ago. He taught me two really important things—two things about the world at large. Here are Parnas’ dictums:

- People begin a research area usually for a solid reason, but sometimes the reason for the creation of the area is forgotten by later researchers.
- People sometimes say “X is true.” A rule to discover whether this statement makes any sense is to think about the statement “X is false.” If the latter is silly or obvious, then the original statement made no sense.

I especially like the second rule: if I say that “we should definitely work hard on getting our proofs correct,” consider the negation: “we should work hard on getting our proofs incorrect.” Silly. So the first statement had no meaning. You will be surprised how powerful this rule is. Essentially, it is the following theorem:

**Theorem:** Let $X$ be a binary valued random variable. Then $H(X) = H(\neg X)$, where $H$ is the entropy function.
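As a toy illustration of the tongue-in-cheek theorem above, here is a minimal Python check of the binary entropy function (the function name `H` is mine):

```python
import math

def H(p):
    """Binary entropy in bits: -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A statement true with probability p has a negation true with probability 1-p,
# and H(p) == H(1-p): the statement and its negation carry the same information.
# If the negation is obvious (p near 0 or 1), both entropies are near zero.
```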

There is a link between the sum of square roots problem and Parnas’ first dictum; please read on to see what it is.

**The Problem**

The problem that Kayal and Saha study is the complexity needed to determine whether or not a sum of square roots is greater than another sum of square roots: is

$$\sum_{i=1}^{n} \delta_i \sqrt{a_i} > 0,$$

where each $a_i$ is bounded by $2^{n}$, the $a_i$’s are positive integers, and each $\delta_i$ is in $\{+1, -1\}$? This is an important open problem, called the *sum of square roots problem* (SSRP) in computational numerical analysis.

If $\sqrt{x}$ is replaced by $x^{2}$, then the problem is computationally easy. See this for details.

If you have not seen the problem before, you might think the following simple idea should work: Let

$$S = \sum_{i=1}^{n} \delta_i \sqrt{a_i}.$$

Then compute $S$ to high precision: say polynomially many bits of precision. This is possible thanks to an old result of Richard Brent which shows that square roots can be computed in polynomial time to polynomially many bits of precision. He proved much more, see his paper for details.
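As a small illustration (not Brent’s algorithm, just Python’s built-in exact integer square root), one can get any desired number of bits of a square root in time polynomial in the number of bits, by scaling:

```python
from math import isqrt

def sqrt_fixed(a, bits):
    """Return floor(sqrt(a) * 2**bits): sqrt of the nonnegative integer a
    with `bits` fractional bits, via an exact integer square root of a << (2*bits)."""
    return isqrt(a << (2 * bits))

# sqrt_fixed(2, 10) == 1448, since sqrt(2) * 1024 = 1448.15...
```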

The obstacle to this approach is: how many bits do we need to compute to? Put another way, can $S$ be non-zero but very, very small? If we knew that $S \neq 0$ implied that $|S|$ was not too small, then the simple compute-to-high-precision approach would work. Currently there is no general bound that proves this; actually I am not sure that I believe that this is even true. I do think that SSRP should be a feasible problem, but that does not mean that $S$ must be zero or large.
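To see the obstacle concretely, here is a sketch using Python’s `decimal` module (the helper name is mine): the sign of a tiny nonzero difference is detectable once the precision is high enough, but no finite precision by itself certifies that a difference is exactly zero.

```python
from decimal import Decimal, getcontext

def sqrt_sum_diff(a_list, b_list, digits=50):
    """Approximate sum(sqrt(a)) - sum(sqrt(b)) to `digits` decimal digits."""
    getcontext().prec = digits
    s = sum(Decimal(a).sqrt() for a in a_list)
    t = sum(Decimal(b).sqrt() for b in b_list)
    return s - t

# sqrt(1000001) + sqrt(1000003) vs 2*sqrt(1000002): nonzero, about -2.5e-10.
# sqrt(2) + sqrt(8) vs sqrt(18): exactly zero, yet the computed value is only
# "zero up to rounding" -- precision alone cannot prove equality.
```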

**Baker**

There are results that are close in spirit to the hoped-for bounds on sums of square roots. The most famous are the amazing results of Alan Baker on sums of logarithms; he won the 1970 Fields Medal for this work. Baker considered sums,

$$\Lambda = \beta_1 \log \alpha_1 + \cdots + \beta_n \log \alpha_n,$$

where the $\alpha_i$ and $\beta_i$ are algebraic numbers. His results are of the form: if $\Lambda \neq 0$, then

$$|\Lambda| > F(n, \alpha_1, \dots, \alpha_n, \beta_1, \dots, \beta_n).$$

Here $F$ is an explicit function that depends on $n$ and the $\alpha_i$’s, and the $\beta_i$’s. Not surprisingly it is exponentially tiny, but how it is exponential is very important. See this for the detailed definition of the function. There are several points that are important about $F$:

- The function $F$ is explicit, so the results are effective. This is very important for applications.
- The exponent’s dependence on $n$ is roughly of order $n!$ times $d^{O(n)}$, where $d$ is the degree of the number field generated by the $\alpha_i$’s. Note if they are integers, then $d = 1$ and this term is dominated by the factorial term.
- The function’s dependence on the heights of the $\beta_i$’s is only logarithmic.

If we had a result like this for the square root function instead of the logarithm function, then SSRP would be tractable. But we do not.

**Kayal and Saha**

Their main theorem is based on an extremely clever idea. They work with univariate polynomials instead of integers. I have recently talked about this method: often changing from integers to polynomials makes a problem tractable. The brilliance here is that the notion of the square root of a polynomial is well defined:

$$\sqrt{f(x)}$$

is a well defined complex valued function (okay, it is many valued, but that can be handled). The next part, on what an inequality means for polynomials, is the really clever part. Kayal and Saha are able to show that they can make an inequality over integers into a meaningful notion for polynomials. They prove:

**Theorem:** Given a sum

$$S(x) = \sum_{i=1}^{m} c_i \sqrt{f_i(x)},$$

where the $c_i$ and $f_i(x)$ are univariate polynomials with degree bounded by $d$ and $f_i(0) \neq 0$, for all $i$, either $S(x) = 0$, or the minimum exponent of $x$ that has a nonzero coefficient in the power series $S(x)$ is bounded above by a fixed polynomial in $m$ and $d$.
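A quick sanity check of this phenomenon on a toy example of my own, using SymPy: for $S(x) = \sqrt{1+x} + \sqrt{1-x} - 2\sqrt{1}$, which is nonzero, the power series vanishes only to order $2$.

```python
import sympy as sp

x = sp.symbols('x')
# Three terms c_i * sqrt(f_i(x)) with deg(f_i) <= 1 and f_i(0) != 0:
S = sp.sqrt(1 + x) + sp.sqrt(1 - x) - 2 * sp.sqrt(1)

# Expand as a power series around x = 0 and find the least exponent
# with a nonzero coefficient (the order of vanishing).
poly = sp.Poly(sp.series(S, x, 0, 8).removeO(), x)
order = min(e for (e,) in poly.monoms())
```

Here the series begins $-x^2/4 - \cdots$, so `order` is small even though the first few coefficients conspire to cancel.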

This is an interesting fact on its own. However, it allows them to resolve the following notable special case of SSRP:

**Theorem:** Suppose that

$$S = \sum_{i=1}^{m} \delta_i \sqrt{a_i},$$

where each $\delta_i \in \{+1, -1\}$, and every integer $a_i$ is of the form

$$a_i = \sum_{j=0}^{t} b_{ij} X^{j},$$

where $X$ is a positive real number and the $b_{ij}$ are integers. If

$$X > p_1(B, m, t),$$

where $B$ is the maximum over the $|b_{ij}|$’s and $p_1$ is a fixed polynomial in $m$ and $t$, then either $S = 0$, or its absolute value is lower-bounded by

$$X^{-p_2(m, t)},$$

where $p_2$ is another fixed polynomial in $m$ and $t$.

The statement is technical, but really is quite simple in principle. Their idea is to solve the SSRP for what they call *polynomial* integers. A number of the form

$$b_t X^{t} + b_{t-1} X^{t-1} + \cdots + b_1 X + b_0$$

is an example of such an integer, provided $X$ is large enough. If they restricted $X$ to be a natural number, then these would just be sparse numbers in a high radix.
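For instance (a sketch of mine), evaluating the coefficient list at $X$ gives the number, and taking $X = 2$ recovers ordinary binary notation:

```python
def poly_integer(coeffs, X):
    """Evaluate b_t*X^t + ... + b_1*X + b_0 by Horner's rule.
    `coeffs` lists b_t, ..., b_0 from high degree to low."""
    value = 0
    for b in coeffs:
        value = value * X + b
    return value

# With X = 2 the coefficients are just the bits: [1, 0, 1, 1] -> 0b1011 = 11.
# With a huge X, a short coefficient list is a "sparse number in a high radix".
```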

**Not Exactly**

Kayal and Saha point out that the rationale for their study of SSRP is the famous paper of Mike Garey, Ron Graham, and David Johnson (GGJ) on the Euclidean Traveling Salesman Problem. I have talked about that at length before here—I did say that it is one of my favorite open problems. Here is how it looks in their paper:

GGJ point out in the introduction to their paper that they can only prove that the TSP is $\mathsf{NP}$-hard and not $\mathsf{NP}$-complete, for a very simple reason: they cannot prove that it lies in $\mathsf{NP}$. The problem is that a tour will obviously have a length that is a sum of square roots. But comparing that to a given length is what they did not know how to do. Their work was in 1976, and now 35 years later, we still do not know the answer.

The exact problem they need to solve is a bit different from SSRP. Their problem is to decide whether

$$\sum_{i=1}^{n} \sqrt{a_i} \geq r,$$

where $r$ is a rational number. Further, they actually do **not** need a polynomial time algorithm. They only need that the problem of deciding this inequality is in $\mathsf{NP}$.
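To make the connection concrete (a sketch of mine, not from GGJ): for points with integer coordinates, each edge of a tour has length $\sqrt{\text{integer}}$, so deciding “tour length $\leq r$” is exactly a sum-of-square-roots-versus-rational question.

```python
def tour_squared_distances(points, order):
    """Squared Euclidean lengths of the edges of the closed tour visiting
    `points` (integer coordinate pairs) in the given order. Each entry is
    an integer; the tour length is the sum of their square roots."""
    n = len(order)
    sq = []
    for i in range(n):
        (x1, y1) = points[order[i]]
        (x2, y2) = points[order[(i + 1) % n]]
        sq.append((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return sq

# Unit square visited in order: four edges of squared length 1, tour length 4.
```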

This is an example of the first dictum of Parnas: do not forget why we are trying to solve the problem. Yes SSRP is a beautiful problem, and I really like the work Kayal and Saha have done on it. But the real problem that has been hanging open is a bit different:

- The sum of square roots is compared to a rational, not another sum of square roots.
- We only need a proof system to resolve the status of TSP. That is, the algorithm can be nondeterministic.

**Polynomial Numbers**

While such numbers are far from those that we would like to understand, I conjecture that this class of numbers could be more important than their result. Can we use the idea of polynomial numbers to attack other problems? For example, can we prove that linear programming has a strongly polynomial algorithm when the numbers are polynomial over a large enough $X$? I have thought about a version of this before, and would not be too surprised if this was feasible to prove.

**Open Problems**

Prove that Euclidean TSP belongs to $\mathsf{NP}$, so that it is $\mathsf{NP}$-complete. Finally. Please.

[Added two hyperlinks to earlier GLL articles, on “game-changing” methods and on linear programming.]

The negation of “we should definitely work hard on getting our proofs correct” is not “we should work hard on getting our proofs incorrect.” A ‘better’ negation would be “we should not work hard on getting our proofs correct,” but I guess the ‘most correct’ negation would be “it is questionable whether we should work hard on getting our proofs correct.”

It is questionable whether we should not work hard on not getting our non-proofs incorrect.

Makes perfect sense!

One small correction: In order to put Euclidean-TSP into NP, you have to show that you can decide in NP whether $\sum_{i=1}^n\sqrt{a_i}\leq r$ instead of $\sum_{i=1}^n\sqrt{a_i}\geq r$, or equivalently (since equality can be checked in polynomial time) that you can decide in coNP whether $\sum_{i=1}^n\sqrt{a_i}\geq r$.

Actually, as far as I know, this problem is not even known to lie inside the polynomial hierarchy, and the current best upper bound is the fourth level of the counting hierarchy, see this paper for more details: http://dx.doi.org/10.1137/070697926

Another example of polynomials being easier than integers is Goldbach’s conjecture. I recently read a proof (in a MAA journal) that not only shows that every monic polynomial is the sum of two irreducible monic polynomials but counts the number of such representations.

Martin,

Good point. I have recently talked about this here.

Yet another example of polynomials being easier than integers is factoring. LLL gives a (deterministic) polynomial time algorithm to factor polynomials with integer coefficients over the integers whereas it’s obviously still unknown whether integer factoring has a polynomial time algorithm.

Who is LLL?

LLL stands for the initials of the inventors of a key algorithm: the Lenstra–Lenstra–Lovász lattice basis reduction algorithm. See this.

I’m probably just missing something, but what’s preventing one from just choosing X=2, writing the integer expanded out in binary where the b_ij values form the bitstrings of each integer? Oh, is p_1 a polynomial “fixed” by the mathematics, or can it be “fixed” by the “user”? I think I just answered my own question.

Neil,

They need that X is very large. That is the issue.

The link w.cs.uni-paderborn.de/uploads/tx_sibibtex/ComputingSumsOfRadicals.pdf is broken