
Could Euler Have Solved This?

April 20, 2011

A new problem about the unit circle

Leonhard Euler needs no introduction. But let’s give him one anyway. He can be called the first truly modern polymath, one man who made foundational contributions to what are today considered widely disparate areas: graph theory, number theory, power series, complex analysis, harmonic numbers, and more. He made the first key step toward the Riemann Hypothesis, proving that

\displaystyle  \sum_{n=1}^{\infty} \frac{1}{n^s} = \prod_{\mbox{primes }p}\frac{1}{1 - p^{-s}},

which converts the distribution of primes, a discrete problem, into a problem of analysis. He and Nicolaus Copernicus are the only mathematicians or scientists commemorated in the Lutheran Calendar of Saints.

And of course the number 2.71828… represented by {e} is named for him, although it was implicitly known to John Napier and used under different names by several others. Euler may have helped his own Matthew Effect by using the letter {e} for it.

Instead of presenting some of Euler’s work, today we will do the opposite. We will pose an open problem that we feel could have been explained to him, and ask how he would have attacked it. Could he have solved it? We will try to find out, even though we currently cannot solve it ourselves. It is a simple problem about points on the unit circle, nothing more.

Why Euler? Well, he connected the unit circle and complex analysis to five fundamental constants of mathematics in his famous formula

\displaystyle  e^{\pi i} + 1 = 0.

The Chordal Product

Suppose we have a set {S = \{x_1,x_2,\dots,x_n\}} of {n} points on the unit circle. There are {\binom{n}{2} = (n^2 - n)/2} chords connecting the pairs of points. Define the chordal product

\displaystyle  cp(S) = \prod_{1 \leq i < j \leq n} |x_i - x_j|,

which is just the product of the lengths of the chords. How can you maximize this product? When there are no constraints on {S}, your intuition is correct: space the points equally as far apart as possible.

Theorem 1
The maximum value of {cp(S)} is {n^{n/2}}, and is achieved by locating the points at the complex {n}th roots of unity. This optimum is unique up to rotations.

To see that this value is achieved, we show that the product {P} of the lengths of the {n-1} chords emanating from any one point is {n}. Taking this product over all {n} points gives {n^n}, but since each chord is thereby counted twice, the chordal product is {n^{n/2}}. (That no other configuration does better follows from the complex Hadamard inequality applied to the Vandermonde determinant in the second proof below.)

Now consider the chords coming from the point {1} in the complex plane. Take {\omega = e^{2\pi i/n}} to be the principal {n}th root of unity, so that the other points in {S} are {\omega^i} for {i = 1} to {n-1}. Those {n-1} points are the roots of the polynomial {q(z) = \frac{z^n - 1}{z-1}}, so by the fundamental theorem of algebra,

\displaystyle  \prod_{i=1}^{n-1} (z - \omega^i) = q(z) = z^{n-1} + z^{n-2} + \cdots + z + 1.


Setting {z = 1} gives

\displaystyle  P = \prod_{i=1}^{n-1} |1 - \omega^i| = |q(1)| = n.

(One can also observe that {q(1)} is the derivative of {z^n - 1} evaluated at {z = 1}.)
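These identities are easy to check numerically. Here is a small Python sketch of our own (not code from the post) that multiplies chord lengths directly, confirming both {P = n} and {cp(S) = n^{n/2}} for the {n}th roots of unity:

```python
import cmath
import math

def chordal_product(points):
    """Product of |x_i - x_j| over all pairs of complex points."""
    n = len(points)
    prod = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            prod *= abs(points[i] - points[j])
    return prod

n = 7
roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]

# Product of the n-1 chord lengths emanating from the point 1:
P = 1.0
for k in range(1, n):
    P *= abs(1 - roots[k])

print(P)                       # ≈ n = 7
print(chordal_product(roots))  # ≈ n**(n/2)
```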

A second proof notes that the chordal product for the {n}th roots of unity is the absolute value of

\displaystyle  \prod_{1 \leq i < j \leq n} (\omega^i - \omega^j) = \det(F_n) =  \det\left(\begin{array}{ccccc}  1 & 1 & 1 & \dots & 1\\  1 & \omega & \omega^2 & \dots & \omega^{n-1}\\  1 & \omega^2 & \omega^4 & \dots & \omega^{n-2}\\  1 & \omega^3 & \omega^6 & \dots & \omega^{n-3}\\  \vdots & \vdots & \vdots & \ddots & \vdots\\  1 & \omega^{n-1} & \omega^{n-2} & \dots & \omega  \end{array}\right).

This is because the DFT matrix {F_n} is a Vandermonde matrix whose {(i,k)} entry, indexing from {0} to {n-1}, is {\omega^{ik}}. The matrix {M_n = \frac{1}{\sqrt{n}}F_n} is unitary, meaning {M_n M^*_n = M^*_n M_n = I}. Thus {|\det(M_n)| = 1}, so

\displaystyle  cp(S) = |\det(F_n)| = (\sqrt{n})^n = n^{n/2}.
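The determinant identity can likewise be sanity-checked; here is a quick sketch of ours (it assumes NumPy is available, which is not mentioned in the post):

```python
import numpy as np

n = 6
omega = np.exp(2j * np.pi / n)

# DFT matrix F_n: the (i, k) entry is omega^{ik}, indices running 0..n-1
F = np.array([[omega ** (i * k) for k in range(n)] for i in range(n)])

print(abs(np.linalg.det(F)))  # ≈ n**(n/2) = 216.0
```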

The New Problem

Now suppose we bar any point in {S} from the open interval {(0,\epsilon)} in radians. When {n > 2\pi/\epsilon}, this constraint prevents the equal-spread solution. Thus subject to this constraint, {cp(S) < n^{n/2}}.

Problem. Find the maximum value {cp(n,\epsilon)} of {cp(S)} when {|S| = n} and {S \cap (0,\epsilon) = \emptyset}, and find an {S} that achieves it.

We can also ask, what is the asymptotic growth of {cp(n,\epsilon)} for fixed {\epsilon}? Although {cp(n,\epsilon) < n^{n/2} = 2^{n\log(n)/2}}, it could still have {2^{\Theta(n\log n)}} growth. What may be surprising is that it does not have any “growth” at all:

Theorem 2
For any fixed {\epsilon > 0} and all {c > 0}, {cp(n,\epsilon) = o(\frac{1}{2^{cn\log n}})}.

Note that this means the chordal products crash to zero even faster than the reciprocal of the unconstrained answer {n^{n/2}}. “Dude—what happened to my nice exponential growth?” We believe, but haven’t proved, that in fact they approach {0} at the fastest possible rate:

Theorem 3 (?)
For any fixed {\epsilon > 0} there is a constant {c > 0} such that {cp(n,\epsilon)} goes to {0} at the rate {\frac{1}{2^{cn^2}}}.
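One can watch the crash begin even for modest {n}. The sketch below (ours, an illustration only) evaluates a simple feasible configuration, equal spacing on the allowed arc {[\epsilon, 2\pi]}; this merely lower-bounds the optimum, but already the gap below the unconstrained value {\frac{n}{2}\log_2 n} grows superlinearly in {n}:

```python
import math

def log2_cp_arc(n, eps):
    """log2 of the chordal product for n points equally spaced on the
    arc [eps, 2*pi] -- a feasible configuration, not claimed optimal."""
    angles = [eps + (2 * math.pi - eps) * k / n for k in range(n)]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # chord between angles a < b has length 2*sin((b - a)/2)
            total += math.log2(2 * math.sin((angles[j] - angles[i]) / 2))
    return total

# Gap below the unconstrained optimum (n/2)*log2(n), for eps = 1.0:
gaps = []
for n in (20, 40, 80):
    gap = (n / 2) * math.log2(n) - log2_cp_arc(n, 1.0)
    gaps.append(gap)
    print(n, gap)
```

The gaps roughly quadruple as {n} doubles, consistent with the {2^{-\Theta(n^2)}} behavior asserted above.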

Complexity-Theory Excursion

Now let {\epsilon} itself vary with {n}. That is, consider {cp(n,\epsilon_n)} where {\epsilon_n \longrightarrow 0}. Maurice Jansen, who obtained his PhD under me in 2006 and has since enjoyed postdocs in Aarhus, Beijing, and currently Edinburgh with Rahul Santhanam, worked out with me that if we could keep {cp(n,\epsilon_n) \geq 1/2^{O(n\log n)}} whenever {\epsilon_n \longrightarrow 0}, then we could improve some lower bounds by Ran Raz and by Peter Bürgisser and Martin Lotz. These bounds concern arithmetical circuit families whose numerical constants are bounded in size.

We had thirty pages of work all set to go, pending the needed estimates. Unfortunately we guessed wrong—or at least I did, while Maurice proved:

Theorem 4
When {\epsilon_n = \frac{1}{n^{1/5}}}, {cp(n,\epsilon_n) = o(\frac{1}{2^{cn\log n}})} for any constant {c > 0}.

However, when {\epsilon_n = \frac{1}{n^{1/4}}}, there exists {c > 0} such that for all {n}, {cp(n,\epsilon_n) \geq \frac{1}{2^{cn\log n}}}.

Likewise unfortunately, having {\epsilon_n = \frac{1}{n^{1/4}}} was no good for our complexity-bound machinery. We have not been able to close the gap between {1/5} and {1/4} in the exponent, that is, to pinpoint exactly where the chordal product starts approaching {0} faster than {\frac{1}{2^{O(n \log n)}}}. What is so special about {1/4} versus {1/5} here, anyway?

What Would Euler Do?

If we had a closed-form expression for {cp(n,\epsilon)} then all this asymptotic mystery would be lifted. Or it might be—it would depend on how nasty the expression came out to be.

How would Euler have attacked this problem? We can make facile guesses that he would have tried to express {cp(n,\epsilon)} as a sum-of-products, since {cp(n,0) = n^{n/2}} has that form as a determinant. Or he might have used power series or infinite sums like {\sum_{m=1}^{\infty} \frac{f(m)}{m^s}}, where {s} might depend on {n} and {\epsilon} in some hopefully-simpler manner than {cp(n,\epsilon)} itself. He might have treated it as a problem in analysis, regarding the constraint as first breaking the circle and then deforming it.

I have been unable to locate anything like this simple problem in the literature, and have wondered why. The closest known problem that has been suggested to me is the Thomson problem of arranging electrons on a sphere. This also sounds simple and turns out to be complex, even though it does not have the “missing bite” feature. That feature may be the historical cause of the problem being overlooked. People have been motivated by problems that are natural, but Nature does not arbitrarily exclude matter from some region of space. Perhaps it is only computation theory that brings “unnatural” motivations.

We have tried one thing that Euler couldn’t do: unleash computers on the problem. One can try the standard heuristic of starting with a given {S_0} and perturbing its points, retaining sets {S_1,S_2,\dots} that increase the chordal product. However, a student who did so reported to me that numerical-precision issues quickly prevented him from getting interesting results even for {n} about {20}. He tried writing in C with the GNU MP library for arbitrary (as-needed) precision, but computation quickly became too slow. This speaks to why the problem may ultimately be difficult.
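For concreteness, here is a minimal sketch of such a perturbation heuristic; it is our illustration, not the student's code. Working with the logarithm of the chordal product sidesteps some overflow/underflow trouble, though double precision still limits how far this can go, and the step size and iteration count are arbitrary choices:

```python
import math
import random

def log_cp(angles):
    """Natural log of the chordal product for points e^{i*theta}."""
    n = len(angles)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # chord between angles a, b has length 2*sin(|a - b|/2)
            total += math.log(2 * abs(math.sin((angles[i] - angles[j]) / 2)))
    return total

def hill_climb(n, eps, steps=20000, scale=0.05, seed=0):
    """Randomly perturb one point at a time, keeping moves that increase
    log cp(S) while barring the forbidden arc (0, eps)."""
    rng = random.Random(seed)
    angles = [eps + (2 * math.pi - eps) * k / n for k in range(n)]
    best = log_cp(angles)
    for _ in range(steps):
        i = rng.randrange(n)
        old = angles[i]
        new = (old + rng.gauss(0, scale)) % (2 * math.pi)
        if 0 < new < eps:
            continue  # the perturbed point would land in the forbidden arc
        angles[i] = new
        val = log_cp(angles)
        if val > best:
            best = val
        else:
            angles[i] = old  # revert a non-improving move
    return best

best = hill_climb(10, 1.0)
print(best)  # strictly below the unconstrained optimum 5*ln(10)
```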

The DFT: An Incredible Balancing Act?

Let’s go back to the original unconstrained problem, and ask how we might estimate {cp(n,0)} without using either of the above proofs of {2^{(n\log n)/2}} growth. Here is a rough “Fermi-style” estimate: the angles subtended by the chords range from {0} to {\pi} and average {\pi/2}, and a chord subtending an angle of {\pi/2} has length {\sqrt{2}}. Hence estimate {cp(n,0)} as:

\displaystyle  (\sqrt{2})^{n(n-1)/2} = 2^{\Theta(n^2)}.

Of course this is a great over-estimate, and the flaw is easy to see: While long chords have length at most {2}, short chords can have lengths approaching {0}.

This does mean, however, that a tendency of long chords to produce {2^{\Theta(n^2)}} is balanced by a cabal of short chords that produce {2^{-\Theta(n^2)}}. When a slice of the circle is excluded, however thin, the points in {S} are driven closer together, creating enough shorter chords to tip the balance of power to the cabal, so that {2^{-\Theta(n^2)}} dominates. If the circle is inflated slightly, then we can expect the positive {2^{\Theta(n^2)}} growth tendency to prevail. Thus the actual nearly-linear growth in the exponent reflects an exquisitely fine balance between the two tendencies.

The effect shows even better upon taking logarithms. For large {n} and any {S = \{x_1,\dots,x_n\}},

\displaystyle  \log cp(S) = \sum_{1 \leq i < j \leq n} \log(|x_i - x_j|)

can be grouped into sub-expressions that contribute {\pm \tilde{C}n^a} for all {a}, {0 \leq a \leq 2}. Here {\tilde{C}} means that the “constant” {C} may contain factors of {\log n}. The upshot is that when {S = \{\omega^i\}}, for all {a} other than {a = 1}, every term {+\tilde{C}n^a} is magically canceled by another term {-\tilde{C}n^a}. All of the large magnitudes involving long and short chords cancel, leaving a residue of {\,\frac{n}{2}\log_2 n}.
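The cancellation is easy to witness numerically: split {\log_2 cp(S)} for the roots of unity into the positive contributions of long chords and the negative contributions of short chords. Each side runs to hundreds of bits, yet the total collapses to exactly {\frac{n}{2}\log_2 n}. A quick sketch (ours, for illustration):

```python
import math

n = 64
angles = [2 * math.pi * k / n for k in range(n)]

pos = neg = 0.0
for i in range(n):
    for j in range(i + 1, n):
        # log2 of the chord length 2*sin((b - a)/2)
        t = math.log2(2 * math.sin((angles[j] - angles[i]) / 2))
        if t > 0:
            pos += t   # long chords: lengths in (1, 2]
        else:
            neg += t   # short chords: lengths in (0, 1]

print(pos, neg)   # large positive and negative totals
print(pos + neg)  # residue: (n/2)*log2(n) = 192
```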

We would be interested to hear if—and how—other literature has presented the Discrete Fourier Transform as a study in huge cancellations, or has traced this as the “cause” of numerical-instability issues that are associated to it. Are there cases in Nature described by formulas that can be said to result from huge cancellations? We speculate on this in connection with renormalization in quantum field theory. Even more speculatively, we note that “rough” estimation from quantum-field theory gives a value for the energy content of our vacuum that is more than 120 orders of magnitude higher than the measured value reflected in our tiny-but-definitely-positive cosmological constant. Is there a chance that this value reflects the built-in nature of the Fourier Transform rather than mere multiversal chance?

Open Problems

Would Euler have solved the problem of computing {cp(n,\epsilon)}? Can you solve it? Can you at least approximate it asymptotically for values {(n,\epsilon_n)} as {n \longrightarrow \infty}?

[fixed a jagged line]

23 Comments
  1. April 20, 2011 5:00 pm

    This is my first solo-everything post; Dick is ill at home this week and was only able to read an earlier draft of this and suggest a few edits. He is OK—we just spoke—but e-mail to him may take awhile to answer; mine is [my-lowercased-surname] [at] buffalo [point] edu. I ran the Python scripts to translate LaTeX source into WordPress’ HTML-with-LaTeX-environment, cleaning up some things manually afterwards. The way symbols drop below the baseline pertains to the WordPress environment, and I see it is worsened by having superscripts and things like tildes stacked on top. I don’t know if that can be worked around.

  2. eqnets permalink
    April 20, 2011 5:41 pm

    Trivial comment: the chordal product itself is generically the determinant of a Vandermonde matrix, not just for roots of unity.

    Highly speculative and probably useless comment: do you think there may be a way to crack this nut using random matrix theory?

    • April 20, 2011 5:52 pm

      Thanks—I could have mentioned the general Vandermonde fact. We have indeed tried some approaches related to work by Tao et al. involving random matrix theory, but maybe haven’t gone full-bore with it.

      • eqnets permalink
        April 20, 2011 10:08 pm

        Also there’s the whole Coulomb gas avenue, which is related and which you’ve probably tried but I didn’t think of until the update to this post.

  3. Mitchell Porter permalink
    April 20, 2011 8:54 pm

    Dramatic cancellations in quantum field theory often involve supersymmetry.

  4. April 21, 2011 7:22 am

    You emulated Dick’s style quite well (whether intentionally or not)… I had no idea it was a second author until I got to the comments. Thanks for a nice post!

    • April 21, 2011 7:37 am

      Thank you in return. There’s actually some empirical evidence: I ran this post thru Typealyzer, and it came out “J” like other posts by Dick, whereas I’m generally a strong “P” :-). I did try to keep my sentences shorter.

  5. Craig permalink
    April 21, 2011 12:50 pm

    It seems to me that Euler would have tried easier problems first, like what if $\epsilon=\pi$? Is the answer to this known?

  6. Anon permalink
    April 21, 2011 3:04 pm

    Didn’t you forget to prove the upper bound in Theorem 1, that is, that one cannot do better than equal spacing? This probably follows from Hadamard inequality.

  7. Tom permalink
    April 21, 2011 10:29 pm

    Unless my maths is spectacularly terrible, the stochastic case is quite different. E.g. choose a distribution over the circle to maximise the expectation of the chord product when you draw n points from that distribution. From Jensen’s inequality you have that the expected chord product in the epsilon=0 case is bounded below by exp(2n(n-1)/pi) and in the positive epsilon case, even preserving a sub-optimal uniform distribution you have that it’s bounded below by something proportional to exp(2n(n-1)/pi) provided epsilon’s less than 2. (It’s also a lot easier to work out the optimal distribution here than in the deliberate case, you basically just use the convolution theorem on the first order condition for the density (assuming it exists).)
    Is this result obvious for some reason? It seems a bit odd that just by randomising you can switch the expected chordal product from decreasing in n to increasing in n.
    Interestingly, if you look at expected log chordal products then things swap round again. In the epsilon = 0 case you then get a zero expected log chordal product for all n (and presumably a negative and declining one for epsilon > 0, though I haven’t checked.)

    • Tom permalink
      April 21, 2011 10:47 pm

      Oops the claims in the first paragraph above are wrong. Late night maths really is beyond me. It doesn’t have the high rate of growth I claimed, as is obvious from the fact that via Jensen’s you are just comparing it to the expected log chordal product case…

      So in the stochastic case from Jensen’s you just get a zero lower bound on the growth rate, which is rather less interesting. You could probably do better.

  8. April 21, 2011 11:57 pm

    This can be analyzed using classical potential theory in the plane. Taking the negative of the logarithm of the chordal product gives the logarithmic energy for the point set (i.e., the energy with respect to the pair potential given by r -> – log |r|). On any reasonable curve, such as the circle minus an arc, there is a unique continuous unit mass distribution that minimizes the logarithmic energy. As n tends to infinity, the optimal points will converge to that distribution, and its energy will govern the asymptotics for the n-point energy. Specifically, if you divide the energy for n points by n^2, it will converge to the limiting energy. See Section 2 of or other papers by Ed Saff and his collaborators.

    This is enough to prove Theorem 3. The chordal product behaves like e^(-(C+o(1)) n^2), where C is the logarithmic energy for the optimal (continuous) distribution, so all we have to prove is that C>0. For the entire circle we have C=0, with the uniform distribution being the unique optimizer. For fixed epsilon>0, we must have C>0, since any distribution on the circle minus an arc can also be viewed as one on the entire circle (which must be strictly worse than the uniform distribution).

    The case where epsilon varies as a function of n is trickier, and I don’t know offhand how to deduce it immediately from results in the literature (although I’m far from an expert in potential theory). However, I strongly suspect that known techniques could say a lot about it.

    • April 22, 2011 4:53 pm

      Thank you very much! This also supplies the proof asked for by Anon 3:04pm above, which I left out from thinking I’d known and then not. I have printed off the paper and will scrutinize the methods.

      The review paper linked by John Sidles below will also be useful for something else we are doing…

      • April 23, 2011 10:00 am

I don’t think you can actually deduce Theorem 1 from the potential theory approach (which gives only asymptotics), but the upper bound follows from the complex Hadamard inequality, the way Anon thought it would. Incidentally, the earliest reference for this I know is a 1918 paper by Schur, which gives the Hadamard inequality proof. He thanks Polya for pointing the result out to him.

      • April 23, 2011 10:02 am

        Oops, I meant to specify the page in that article: bottom of page 385.

      • April 23, 2011 3:15 pm

        Ken, I’m glad you found useful material in Charles van Loan’s article “The Ubiquitous Kronecker product“. Our QSE Group regards van Loan’s article as being like fine Hungarian paprika … this article’s “spicy” ideas are likely to improve pretty much any line of research! 🙂

A striking example of the “paprika power” that Kronecker products possess was presented at last week’s ENC by a young researcher named Ilya Kuprov, who with his Oxford collaborators began their recent article “Spinach–a software library for simulation of spin dynamics in large spin systems” with the amusing assertion:

        It is a truth universally acknowledged that a brute force solution of the Liouville–von Neumann equation for a general system with over 20 spins is not practically feasible because the dimension of the matrices involved scales exponentially with the number of spins…

        As with Jane Austen’s Pride and Prejudice, the intent of the Oxford group’s first sentence is openly ironic: pretty much no-one attending the ENC believes (any more) that large-dimension quantum spin systems are infeasible to simulate; moreover this “can do” attitude is becoming prevalent in fields as diverse as quantum chemistry, condensed matter physics, synthetic biology and (more broadly) compressive sampling and image processing.

Nowadays many lines of computational research seek to evade the “curse of dimensionality” by embracing representations that are sparse and/or factored—and thus efficient in terms of storage and computation—and yet faithfully respect the symmetries and conservation laws of the parent systems. According to our engineer’s reading of van Loan’s article, it provides us with a Rosetta Stone that illuminates the Kronecker-centric algebraic and geometric concepts (including even chordal sampling theorems?) that are providing broader and stronger foundations for many modern lines of research.

  9. April 22, 2011 5:46 am

    Charles van Loan discusses the intimate relation between the FFT, the Perfect Shuffle, and the Kronecker product in his enjoyably thought-provoking review article The ubiquitous Kronecker product.

    One wonders whether the problem of computing chordal products can be associated to the Nyquist–Shannon sampling theorem? That is, suppose our sampling of a bandwidth-limited function almost satisfies the sampling theorem? Then is the chordal product hypothesis trying to tell us about the error bound on our reconstruction of the function?

Without knowing the answer myself, van Loan’s review provides a good entrée into the vast literature on sampling theorems … it would be mighty cool/fun/neat if a connection to sampling theory could be established.

    By the way, Harry Cohn’s post (above) was terrific.

  10. July 9, 2011 3:23 am

    A typo: The first two products in the proof of Theorem 1 shall have n – 1 elements, otherwise they equal 0.

    • July 11, 2011 10:33 pm

      You are correct—it seems the upper bounds should be n-1 not n. Thank you very much. I’ll double-check that and then edit the post; I’ve been traveling and have a wonky connection.


  1. Fourth Linkfest
  2. Who’s your Doktorvater? « Gödel’s Lost Letter and P=NP
  3. The Halting Problem For Linear Machines | Gödel's Lost Letter and P=NP
  4. 20,000 Comments and More | Gödel's Lost Letter and P=NP
