The Accidental Astrophysicists

October 28, 2010


Polynomial equations with conjugates: how many roots?

Albert Einstein is, well, Albert Einstein. He is perhaps the best known, most recognizable, iconic scientist of all time. He once said,

Do not worry about your problems with mathematics, I assure you mine are far greater.

Today I want to talk about some mathematical problems we have, in particular the problem of solving equations of a kind that is not as well known as polynomial equations.

There is something fundamental about solving equations. Einstein faced the problem of solving equations, especially in his theory of General Relativity. Many early results in mathematics were driven by the desire to solve an equation of one kind or another. Even simple linear equations were hard for early mathematicians: consider the equation,

\displaystyle  2x + 7 = 3.

For us the solution is clear, {x= -2}, but negative numbers were not accepted as “real” numbers initially. See the fun book by Alberto Martínez on Negative Math for a history of negative numbers.

The Greeks had even more difficult times with quadratic equations: what is the solution to

\displaystyle  x^2 = 2?

The root is irrational, again a “real number” for us, but apparently not for the early Greek mathematicians.

For equations of one variable, in modern terms, there is the famous Fundamental Theorem of Algebra (FTA):

Theorem: Any polynomial equation with {n \ge 1}

\displaystyle  x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 = 0,

where the coefficients {a_{n-1},\dots,a_0} are complex numbers, has a solution over the complex numbers.
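
For a quick numerical illustration of the theorem, the short sketch below uses numpy.roots on a sample cubic with complex coefficients (the particular coefficients are an arbitrary choice) and checks that it has exactly three complex roots.

    # A minimal sketch of the FTA in action: a degree-n polynomial over the
    # complex numbers has n complex roots, counted with multiplicity.
    import numpy as np

    coeffs = [1, 2 - 1j, -3, 1 + 1j]    # x^3 + (2-i)x^2 - 3x + (1+i), highest degree first
    roots = np.roots(coeffs)

    print(roots)                                       # three complex roots
    print(np.allclose(np.polyval(coeffs, roots), 0))   # residuals are numerically zero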

This ends the story of equations in one variable—or does it? Not quite. There are countless kinds of more-general equations to consider, indeed many generalizations of the notion of “polynomial” itself. I will discuss several such generalizations.

Polynomials Plus

The FTA has been generalized to include functions well beyond polynomials. For which classes of functions {f} is the equation

\displaystyle f(x) = 0

guaranteed to have a solution over the real or complex numbers? Note that we need to have some restriction on {f} in order to get a reasonable theory.

One interesting family of functions is the harmonic functions. Some of them can be written as polynomials in the variable {z = x + iy} and its complex conjugate {\overline{z} = x - iy}; in fact, a polynomial in the two real variables {x,y} with complex coefficients is harmonic exactly when it can be written as {p(z) + \overline{q(z)}} for ordinary (analytic) polynomials {p} and {q}. In particular, for an ordinary complex polynomial {p(z)} we can form the harmonic polynomial {p^*(z) = p(z) + \overline{z}}. Questions then arise such as: how many roots can the equation

\displaystyle  p^*(z) = 0

have? The theory here is still incomplete, because the behavior of such equations is much more involved than the behavior of pure polynomials. Yet there are some quite pretty results that are known, and there are many interesting open questions.

Dmitry Khavinson and Genevra Neumann have written a general introduction to this wonderful theory here. Right away the behavior of such polynomials is much more complex than that of ordinary polynomials. For example, each of the harmonic polynomials

\displaystyle  p_n(z) = z^n - \overline{z}^n

has infinitely many zeroes. The equation {p_n(z) = 0} holds if and only if {z^n} is a real number, so the zero set is the star formed by {2n} equally spaced rays from the origin.
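
One way to see this is to write {z = re^{i\theta}}; then

\displaystyle  z^n - \overline{z}^n = 2i\,\mathrm{Im}(z^n) = 2ir^n \sin(n\theta),

which vanishes exactly when {r = 0} or {\sin(n\theta) = 0}, that is, when {\theta} is a multiple of {\pi/n}. These {2n} angles trace out the rays of the star.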

Given such infinite cases, it is remarkable that one can find broad conditions under which there are only finitely many distinct zeroes, though possibly more of them than the degree, in contrast with ordinary polynomials. There is the following pretty theorem due to Alan Wilmshurst:

Theorem: If {h(z) = p(z) - \overline{q(z)}} is a harmonic polynomial of degree {n} such that { \lim_{z \rightarrow \infty} |h(z)| = \infty}, then {h} has at most {n^2} zeroes.

The intuition behind this is to look at the real and the imaginary parts of {h(z)} separately. Each is a real polynomial of degree at most {n} in {x} and {y}, so its zero set is an algebraic curve of degree at most {n}. Of course both the real and the imaginary parts must be zero at a zero of {h}, so the issue is how many intersections the two curves can have; by Bézout's theorem there are at most {n^2 = n \cdot n}, which is where the bound comes from. The black curves are where the real part is zero, and the red curves are where the imaginary part is zero.
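
As a concrete sanity check, here is a small numerical sketch for the sample harmonic polynomial {h(z) = z^2 + \overline{z}}; the polynomial, the grid of starting points, and the tolerances are arbitrary choices for illustration. It applies SciPy's fsolve to the real and imaginary parts of {h}. Here {n = 2}, so Wilmshurst's bound is {n^2 = 4}, and the four zeroes {0}, {-1}, and {e^{\pm i\pi/3}} attain it.

    # Numerically count the zeroes of the harmonic polynomial h(z) = z^2 + conj(z).
    # Wilmshurst's theorem bounds the number of zeroes by n^2 = 4; this example attains it.
    import numpy as np
    from scipy.optimize import fsolve

    def h(z):
        return z**2 + np.conj(z)

    def real_system(v):
        # split h into its real and imaginary parts, as in the intuition above
        w = h(v[0] + 1j * v[1])
        return [w.real, w.imag]

    zeros = []
    for x0 in np.linspace(-2, 2, 9):        # grid of starting points (arbitrary choice)
        for y0 in np.linspace(-2, 2, 9):
            sol, _, ier, _ = fsolve(real_system, [x0, y0], full_output=True)
            z = sol[0] + 1j * sol[1]
            if ier == 1 and abs(h(z)) < 1e-9 and all(abs(z - w) > 1e-6 for w in zeros):
                zeros.append(z)

    print(len(zeros), "distinct zeroes found:", np.round(zeros, 4))
    # expected: 4 zeroes, namely 0, -1, and exp(+/- i*pi/3)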

Polynomials Plus Plus

Khavinson and Neumann proved a related result concerning rational harmonic functions—polynomials plus plus. They proved:

Theorem: Let {r(z) = p(z)/q(z)}, where {p} and {q} are relatively prime polynomials in {z}, and let {n} be the degree of {r}. If {n > 1}, then the number of solutions of

\displaystyle  r(z) -\overline{z} = 0

is at most { 5 n - 5 }.

It turns out that this mathematical upper bound is closely related to the physics of gravity lenses, and for this reason Khavinson and Neumann have been called accidental astrophysicists. I will explain why in the next section.

Gravity Lenses

According to Einstein’s theory of General Relativity, light is bent in the presence of gravity. This leads to the phenomenon of gravity lenses: a single object in the presence of a powerful source of gravity can appear as multiple objects. Here is a picture of the formation of such images:

[Figure: formation of multiple images by a gravity lens]

One of the critical questions is: how many images can be formed? This is of great importance to astrophysicists.

The surprise is that it is possible to model the number of images not with a polynomial equation, but with a harmonic one. Actually, the astrophysicist Sun Rhie had already constructed lenses producing {5n-5} images, a lower bound, and she conjectured that her construction was optimal. Thus, when Khavinson and Neumann proved the matching upper bound, they were actually settling her conjecture. Apparently they did not know of this connection before they proved their theorem. Science, especially mathematics, can be surprising; there are often unforeseen connections. Hence the name “accidental astrophysicists.”
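
To see where the harmonic equation comes from: in complex notation (in suitably scaled units and with no external shear), a standard form of the lens equation for {n} point masses {m_j} at positions {z_j}, with the source at {w}, says that an image appears at {z} exactly when

\displaystyle  w = z - \sum_{j=1}^{n} \frac{m_j}{\overline{z} - \overline{z_j}}.

Taking conjugates turns this into {r(z) - \overline{z} = 0} with {r(z) = \overline{w} + \sum_{j} m_j/(z - z_j)}, a rational function of degree {n}. Each solution {z} is an image, so the theorem of Khavinson and Neumann caps the number of images at {5n-5}, matching Rhie’s construction.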

Open Problems

There are many complexity questions that arise from these theorems. Can we decide efficiently how many roots a harmonic polynomial has? Can we find them, at least approximately? And are there applications of these results to complexity theory?

12 Comments
  1. October 28, 2010 9:56 am

    A simpler(?) topic for which I can’t find a reference is polynomials over the reals with absolute-value terms. For example, for any n >= 1,

    x^n - |x|^n

    has uncountably many zeroes, and is identically zero if n is even. The degree-2 “absonomial”

    p(x) = 2 - |x^2 - 3x|

    has four roots: x = 1, x = 2, and x = (3 +/- \sqrt{17}) / 2. One issue is that |.| does not “commute” with addition the way complex-conjugation does, and I don’t see a simple tie-in to harmonic polynomials.

    I thought I had a conversion from an “absonomial” p of degree n into a polynomial q of degree (at most) 2n such that whenever something like the Wilmshurst non-degenerateness condition holds, all zeroes of p are zeroes of q. This would bound the number of zeroes by 2n whenever the zero-set is finite—for which there might be a simpler algebraic-geometric proof. Does anyone know of a good reference?
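
    One concrete way to see a 2n bound for the example above: squaring away the absolute value shows that every root of p(x) = 2 - |x^2 - 3x| is a root of the ordinary degree-4 polynomial q(x) = 4 - (x^2 - 3x)^2. This may or may not be the conversion meant above, but here is a small sketch of it on that one example:

        # Sketch for the quadratic "absonomial" p(x) = 2 - |x^2 - 3x|: every root of p
        # is a root of q(x) = 4 - (x^2 - 3x)^2, an ordinary polynomial of degree 2n = 4.
        import numpy as np

        q_coeffs = [1, -6, 9, 0, -4]      # (x^2 - 3x)^2 - 4 expanded; same roots as q
        candidates = np.roots(q_coeffs)

        # keep the real candidates that satisfy the original absolute-value equation
        roots = sorted(x.real for x in candidates
                       if abs(x.imag) < 1e-9 and abs(2 - abs(x.real**2 - 3 * x.real)) < 1e-9)
        print(roots)    # expect 1, 2, and (3 +/- sqrt(17))/2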

  2. October 28, 2010 10:51 pm

    No comments besides the above for a whole day—either everyone is off busily trying to solve my “absonomial” problem, or I’m a comment buzzkill :-).

  3. October 29, 2010 7:25 am

    This seems like a good opportunity for a lurker like me to add a comment :)
    Allow me to introduce the Boolinomials: with whole numbers as input, they are made of the arithmetic operations (+,-,×) and the bitwise operations (using C notation: &, |, ^). I once came up with this notion because I knew nice lower bounds for programs using the arithmetic operations: these lower bounds were proved by considering the polynomials computed by the program. I thought that the theory of Boolinomials could help in finding lower bounds that withstand bit-trickery.
    Unfortunately, I did not know any good theory for those. Maybe it exists out there?

    (You may notice that there is a certain inconvenience regarding negative numbers; a possible simplification is to consider subtraction x-y to yield zero when x<y, and to remove negative numbers from all consideration; another is to consider an "infinite-precision 2's-complement notation"; it is well-defined, and then you can allow the bitwise-not as well.)

    • October 29, 2010 3:59 pm

      Nice comment, so here’s a good question: Can we extend Strassen’s order n log n lower bound to boolinomials computing, say, x_1^n + … + x_n^n?

      Or for families of boolinomials with bounded coefficients, can we extend Morgenstern’s order n log n lower bound for the FFT?

      The FFT problem might be better for the bit-operations territory, relating not only to your papers with Zvi G. from the 1990s, but also to Soren Riis’s FOCS’96 joint paper. Perhaps you’ve answered it?

      • October 30, 2010 4:02 am

        “Perhaps you’ve answered it?”

        If you mean the FFT question, no, I haven’t thought about it at all. Let me tell you all that I found regarding boolinomials: they respect congruence modulo 2^n, for any n. This can be used to prove that sorting takes Omega(n log n) on an idealized (infinite-precision) RAM with the instructions in question, and some similar results which I do not recall in detail.

        I think that I was nurturing a kind of hope that there might be a clever algebraic-geometry-style theory that could be used to prove other kinds of results.
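
        A quick randomized sanity check of that congruence claim on a made-up boolinomial (Python integers behave like infinite-precision two's complement under &, |, ^, which matches the convention suggested earlier):

            # Randomized check that a sample boolinomial respects congruence mod 2^n.
            # The function f is an arbitrary made-up example mixing +, -, * with &, |, ^.
            import random

            def f(x, y):
                return (x * y + 3) ^ (x & y) - (y | 7) * x

            for _ in range(10_000):
                n = random.randint(1, 60)
                x = random.randint(-10**9, 10**9)
                y = random.randint(-10**9, 10**9)
                x2 = x + random.randint(-10**6, 10**6) * (1 << n)   # x2 is congruent to x mod 2^n
                assert (f(x, y) - f(x2, y)) % (1 << n) == 0
            print("no counterexample found")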

    • October 29, 2010 4:02 pm

      Incidentally, “kwregan” is forced to be my name when I’m logged in as a WordPress user. I wish at least I could make the KWR uppercase, which is the handle I use everywhere else.

    • Sniffnoy permalink
      October 29, 2010 10:16 pm

      I have to wonder if boolinomials with ^ as the only bitwise operation would be nicer than general boolinomials…

  4. Tom permalink
    October 31, 2010 2:55 pm

    Amusing video: “So you want to get a PhD in theoretical computer science?” http://www.xtranormal.com/watch/7520547/

    • November 2, 2010 7:44 pm

      Tom, this is (at least) the third TCS blog in which commentators have linked to this video … and up until today, not a single person of leading stature has commented upon it.

      As of today, however, that has changed … I have just come from a CSE Colloquium by Stanford University president John Hennessey titled The Future of Research Universities … in which Hennessey sounded pretty much the same themes … to a standing-room-only audience of several hundred UW faculty and CSE students.

      Now, Hennessey’s lecture as-given was pretty dry … but the facts he reviewed were sufficiently sobering, that WLOG Hennessey’s could have looped Tom’s So you want to get a PhD in theoretical computer science animation, in alternation with Bob Dylan’s classic Hard Rain’s Gonna Fall.

      Eventually the TCS blogosphere too will grapple more directly with these sobering realities … in recognition that there is increasing pressure from below … accompanied by authorization from above … to regard these concerns seriously.

      Obviously, what’s in short supply is not concern for these sobering TCS/STEM/global realities … but rather, good ideas for grappling with them.

