
Arithmetic Complexity and Symmetry

July 10, 2009


Understanding the arithmetic complexity of symmetric polynomials is enough


Isaac Newton is of course Sir Isaac Newton. He did many, many wonderful things, including proving some basic results about symmetric polynomials.

Today, I want to talk about the arithmetic complexity of polynomials. There is a folklore principle that the study of arithmetic complexity of polynomials can be reduced to the study of symmetric polynomials. I will show that this is almost true, and explain some of its consequences.

Since Isaac Newton was born in 1643, I have few, actually no, personal stories about him. However, I think that reading about his life is very enlightening. He blended theory and practice in ways that almost no one does today. Who today could invent a whole branch of mathematics, calculus, yet also invent a new type of telescope? I think that we need more scientists today who can blend theory and practice; too often theorists and practitioners are on completely separate wavelengths.

This was brought out clearly in the recent NSF workshop, especially the discussion of SAT. The design automation experts kept saying, “But SAT is easy.”

I am not suggesting that one can be a Newton. Some have argued that he had the greatest mind ever, ahead of even Albert Einstein. I am suggesting that we can profit from reading about him and understanding his approach to the great problems of his day. There are a large number of readable books on his life. My suggestion is to pick one up for reading at the “beach” or whenever you have time this summer.

I cannot resist giving two examples from his life, based on various books I have read about him. Here is the first. Perhaps the most famous quote of Newton is:

“If I have seen further than others, it is by standing upon the shoulders of giants.”

The usual interpretation of this quote is that Newton is thanking those who came before him, and humbly at that. That is the classic view of this famous quote. However, there is a strong argument that he was actually making fun of his arch-enemy Robert Hooke, who was a famous scientist in his own right. Newton was involved in a number of bitter disputes in his life over who knew what when, as we say today. His feud with Hooke lasted for years, was over the creation of the theory of gravity, and only ended when Hooke died. Hooke was short, not a “giant.”

Newton was also famous for being a terrible lecturer. As part of the rules of his Lucasian chair he had to give periodic lectures. After his first lecture, which was sparsely attended, no one came again. Ever.

“Not a single student showed up for Newton’s second lecture, and throughout almost every lecture for the next seventeen years Newton talked to an empty room, listening merely to his own voice bouncing back at him.”

Let’s turn now to study some basic properties of symmetric polynomials and their arithmetic complexity.

Elementary Symmetric Polynomials

Suppose that {f(z)} is the polynomial:

\displaystyle  \prod_{k=1}^{n}(z-r_{k}).

Then, {f(z) = z^{n} - e_{1}z^{n-1} + e_{2}z^{n-2} - \cdots + (-1)^{n}e_{n}} after expanding the polynomial. The mapping from {r_{1},\dots,r_{n}} to {e_{k}} defines in a natural way the {k^{th}} elementary symmetric function. For example,

\displaystyle  e_{1} = r_{1} + \cdots + r_{n}

and

\displaystyle  e_{n} = r_{1} \cdot r_{2}\cdots r_{n}.

This way of looking at the elementary functions has two advantages. First, it shows that they are easy to compute. Second, it shows that they are not too hard to “invert.” More on inversion in a moment. For now it should be clear that they are easy to compute: if {r_{1},\dots,r_{n}} are given, then to compute the values of all the elementary functions, just form the polynomial

\displaystyle  \prod_{k=1}^{n}(z-r_{k})

and expand. This clearly takes at most polynomial time.
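To make this concrete, here is a minimal sketch in Python; the helper name elementary_symmetric is my own, not notation from the post. It multiplies in one factor {(z-r_{k})} at a time, so it uses {O(n^{2})} arithmetic operations.

    # A minimal sketch: compute e_1(r), ..., e_n(r) by expanding the
    # product prod_k (z - r_k) one factor at a time.
    def elementary_symmetric(r):
        n = len(r)
        coeffs = [1]  # coefficients of the expanded polynomial, highest degree first
        for root in r:
            # Multiply the current polynomial by (z - root).
            new = [0] * (len(coeffs) + 1)
            for j, c in enumerate(coeffs):
                new[j] += c             # the z * (c z^{...}) term
                new[j + 1] -= root * c  # the -root * (c z^{...}) term
            coeffs = new
        # After expansion, coeffs[k] = (-1)^k e_k; strip the alternating signs.
        return [(-1) ** k * coeffs[k] for k in range(1, n + 1)]

    # Example: (z-1)(z-2)(z-3) = z^3 - 6z^2 + 11z - 6, so e = (6, 11, 6).
    print(elementary_symmetric([1, 2, 3]))  # [6, 11, 6]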

Arithmetic Complexity and Symmetric Polynomials

Let {x = (x_1, x_2, \dots, x_n)}. Suppose that {f(x)} is a polynomial; for us the coefficients will always be integers. The arithmetic complexity of {f} is the length of the shortest straight-line computation that computes {f}. Denote this as {\mathsf{C}(f)}. Define {f_{\text{sym}}(x) = f(e_{1}(x),\dots,e_{n}(x))}. Note, {f_{\text{sym}}(x)} is always a symmetric polynomial, since each {e_{k}(x)} is invariant under permutations of the {x_{i}}.

Lemma: For any {f}, {\mathsf{C}(f_{\text{sym}}) \le \mathsf{C}(f) + n^{O(1)}}.

The proof of this is very simple: the key is that the elementary functions are easy to compute, as I have already pointed out.
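In code the proof is a one-liner; this hedged sketch reuses the elementary_symmetric helper from above (again, the names are my own illustration, not from the post):

    # Sketch of the lemma: a program for f_sym is just a program for f
    # preceded by the polynomial-cost computation of e_1(x), ..., e_n(x).
    def make_f_sym(f):
        return lambda x: f(elementary_symmetric(x))

    # Example with f(y) = y_1^2 + y_2:
    f = lambda y: y[0] ** 2 + y[1]
    f_sym = make_f_sym(f)
    print(f_sym([1, 2]))  # e_1 = 3, e_2 = 2, so the value is 3^2 + 2 = 11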

We would like the reverse inequality to hold: { \mathsf{C}(f) \le \mathsf{C}(f_{\text{sym}}) + n^{O(1)}}. If this were true, then

\displaystyle  \mathsf{C}(f) = \mathsf{C}(f_{\text{sym}}) + n^{O(1)}

would follow. This would mean that when trying to prove lower bounds on an arbitrary polynomial, one could replace that polynomial by a related symmetric one. Then, the complexity of the original and the symmetric polynomials would almost be the same. I do not know if this is true, but the following approximation to it is:

Theorem: Suppose that {f(x)} is a polynomial where {x=(x_{1},\dots,x_{n})}. Then, there is an algorithm that computes {f(x)} within {\epsilon} in time

\displaystyle \mathsf{C}(f_{\text{sym}}) + \mathsf{poly}(\log ||x||,n,\log 1/\epsilon).

The algorithm is not a circuit: it needs to perform branching operations, for example. It does not “cheat” in any sense, but it is not an arithmetic circuit. The point, however, is that for lower bounds we can restrict our attention to symmetric polynomials, if we are happy with exponentially close answers.

Note, the extra term has gone from {n^{O(1)}} to {\mathsf{poly}(\log ||x||,n,\log 1/\epsilon)}. The extra dependencies are really not unreasonable. For example, the fact that the algorithm’s running time depends on the number of bits of the input {x} is expected.

The Algorithm

Suppose that we can compute {f_{\text{sym}}(x)} in {S} steps, for any {x=(x_{1},\dots,x_{n})}. We must show how to compute {f(x)} in {S} plus {\mathsf{poly}(\log ||x||,n,\log 1/\epsilon)} steps. So let {a=(a_{1},\dots,a_{n})} be a given input; we will show how to compute {f(a)}.

I will just outline the main idea, which is really quite simple. Suppose that we could find a {b=(b_{1},\dots,b_{n})} so that {e_{k}(b) = a_{k}} for all {k}. Then,

\displaystyle  		f(a) = f(e_{1}(b),\dots,e_{n}(b)) = f_{\text{sym}}(b) \ \ \ \ \ (1)

by definition of {f_{\text{sym}}}. Thus, the cost of evaluating {f(a)} is reduced to the cost of evaluating {f_{\text{sym}}(b)} plus the cost of finding {b}.

But, finding {b} can be reduced to finding the roots of a univariate polynomial. This is the “inversion step.” Consider the polynomial equation,

\displaystyle  	z^{n} - a_{1}z^{n-1} + a_{2}z^{n-2} - \cdots + (-1)^{n}a_{n} = 0. \ \ \ \ \ (2)

Call the roots {r_{1},\dots,r_{n}}. Then, by the definition of the elementary symmetric functions, if we set {b = r}, we are done.

Therefore, the algorithm for evaluating {f(a)} operates as follows. It solves equation (2) to high precision using any of the well-known root-finding algorithms; see for example Victor Pan’s survey paper. The running time of the root finder is polynomial in {\log ||a||}, {n}, and {\log 1/\epsilon}. Then, the algorithm uses (1) to get the approximate value of {f(a)} to high precision.
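Here is a minimal numerical sketch of the whole algorithm in Python, with numpy.roots (a companion-matrix eigenvalue method) standing in for the high-precision root finders of Pan’s survey; the function names are my own.

    import numpy as np

    def evaluate_f(a, f_sym):
        # Build the monic polynomial of equation (2): the coefficient of
        # z^{n-k} is (-1)^k a_k, with leading coefficient 1.
        n = len(a)
        coeffs = [1.0] + [(-1) ** k * a[k - 1] for k in range(1, n + 1)]
        b = np.roots(coeffs)  # the roots b satisfy e_k(b) = a_k for all k
        return f_sym(b)       # equation (1): f(a) = f_sym(b)

    # Example: f(x_1, x_2) = x_1 * x_2, so f_sym(b) = e_1(b) * e_2(b).
    f_sym = lambda b: (b[0] + b[1]) * (b[0] * b[1])
    print(evaluate_f([5.0, 6.0], f_sym))  # approximately 30 = f(5, 6)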

Note, if {a \in \mathbb{Z}^{n}}, then the algorithm can get the exact value of {f(a)} by rounding.

Applications

If you wish to prove a lower bound on the permanent of an {n} by {n} matrix, then you can convert it into a symmetric function. Can being symmetric make the lower bound easier to prove? Another consequence is that determining the arithmetic complexity of a symmetric polynomial is as hard as determining the complexity of an arbitrary polynomial.

Open Problems

For the boolean case the “equivalence” between general functions and symmetric ones is clearly false. The proof of the theorem fails because the Galois field with two elements is too small: the univariate polynomial (2) will not always have its roots in the field. An obvious question is what happens if we allow the field to be finite, but let it be large enough that the required roots exist?

Note, it is reasonable that the theorem fails in the boolean case. In this case the value of a symmetric boolean function depends only on the weight of the input, i.e. the number of {1}’s. Thus, all symmetric boolean functions are easy to compute. If the theorem were true, then all boolean functions would be easy, which is of course false.
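To see how easy, here is a hedged sketch; the table representation is my own illustration:

    # A symmetric boolean function is determined by the Hamming weight of
    # its input, so a table with n + 1 entries suffices to evaluate it.
    def symmetric_boolean(weight_table):
        return lambda bits: weight_table[sum(bits)]

    # Example: majority on 3 bits depends only on the weight.
    majority3 = symmetric_boolean([0, 0, 1, 1])  # values for weights 0..3
    print(majority3([1, 0, 1]))  # weight 2, so the majority is 1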

Another question is can we make the algorithm into a circuit? Or must we leave the domain of circuits to get the theorem?

6 Comments
  1. none
    July 11, 2009 1:49 am

    Besides inventing calculus and a new type of telescope, Newton also was a law enforcement honcho who busted up counterfeiting rings. There is a book about his work as master of the British mint, reviewed here:

    http://www.balloon-juice.com/?p=23521

    It is definitely on my want-to-read list.

  2. rjlipton
    July 11, 2009 8:21 am

    I recall he was a very tough enforcer of those who defaced coins or counterfeited them.

  3. July 12, 2009 10:04 am

    Hi Dick,

    Before making my technical comment, a related quip by Peter Winkler: “If I have been able to see far, it is by standing on the shoulders of Hungarians” 🙂

    In computing the symmetric polynomials, you say “form the polynomial prod_k (z – r_k) and expand”. I suppose you mean “form this polynomial and get the coefficients you want by polynomial interpolation”? Also, there is a simple dynamic program that can compute the e_k in polynomial time, but the advantage of polynomial interpolation is that it can also be done in NC; this observation was useful in a paper of Noga Alon and myself.

    • rjlipton
      July 12, 2009 12:02 pm

      Love Peter’s quote. Yes, also there are many ways to get the elementary fast as you know.

      Great quote.

  4. Josh
    July 14, 2009 4:28 pm

    Regarding your “Application” to the permanent: isn’t the permanent already a symmetric function?
