High School Trig Is Powerful

August 31, 2012


Trying to cheat a decision problem

Leopold Kronecker proved many great theorems, including one on Diophantine Approximation theory. This is the theory of how closely real numbers {r} can be approximated by rational numbers {a/b}. It has many beautiful results, with many applications, and is an early type of complexity theory.

Today I want to talk about a decision question that arose in reading a paper on number theory.

It’s a simple-to-state problem concerning equations involving cosine functions, yet seems hard to resolve. A partial solution involves—surprise—Kronecker’s famous theorem. I will explain his theorem in a moment, but first I must talk about trig functions.

Let me state my feelings about such functions: I love and hate trigonometry. Let me explain why I love it, then we can move on to why I hate it. Okay?

Trig functions arise all over mathematics, have many interesting properties, and are critical to Fourier methods. In particular they play a major role in the analysis of some important number theory problems. The Prime Number Theorem, the statement that the number of primes less than {x} is approximately {x/\ln x}, is a deep theorem. We now have many proofs of this key fact about the behavior of the primes. Some use complex analysis, some are elementary, some use a combination of both.

A Motivating Example

Let me give an example of an innocent-looking identity:

\displaystyle \cos(2x) = 2\cos^2(x) - 1,

which “factors out” the multiple of {2}. Substituting this proves the identity (*):

\displaystyle  3 +4\cos(x) + \cos(2x) = 2(1+\cos(x))^{2}.
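Indeed, substituting the double-angle formula and completing the square gives

\displaystyle  3 +4\cos(x) + \cos(2x) = 2 + 4\cos(x) + 2\cos^{2}(x) = 2(1+\cos(x))^{2}.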

It may seem amazing that deep theorems can be based on trig results like this, yet it is true. The reason it is so useful is that the identity implies the inequality,

\displaystyle  3 +4\cos(x) + \cos(2x) = 2(1+\cos(x))^{2} \ge 0.

One of the shortest proofs that the Riemann zeta function has no roots at {s = 1 + it} for real {t} was due to Franz Mertens, and uses the identity (*). He used the identity to prove that {\zeta(s)} satisfies, for {\sigma > 1},

\displaystyle  \zeta(\sigma)^{3}\,|\zeta(\sigma + it)|^{4}\,|\zeta(\sigma + 2it)| \ge 1.

The connection is that the real part of {1/n^{s}} can be expressed using cosine functions—see here for more details.
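For a taste of the connection—this is the standard argument, with convergence details omitted—for {\sigma > 1} one has

\displaystyle  \mathrm{Re}\,\log \zeta(\sigma+it) = \sum_{n\ge 2} \frac{\Lambda(n)}{\ln n}\, n^{-\sigma}\cos(t\ln n),

where {\Lambda} is the von Mangoldt function. Applying {3 + 4\cos(\theta) + \cos(2\theta) \ge 0} term by term shows that {3\log\zeta(\sigma) + 4\,\mathrm{Re}\log\zeta(\sigma+it) + \mathrm{Re}\log\zeta(\sigma+2it) \ge 0}, and exponentiating gives Mertens' inequality.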

Schooling Me on Equations

I can imagine that I was once asked to prove such trig identities back in high school—this is pretty basic. Speaking of high school, that is why I hate trig. When I took it in school, it seemed like a lot of memorization—what are the definitions of all those strangely named functions? Why not just use sine and cosine everywhere?

I also learned trig in summer school from a Mr. Ditka—I do not know his first name. He was like his name—tough—and later when I saw the professional football coach Mike Ditka on television I wondered if they might have been related. Both Ditkas were tough. Mr. Ditka, I can call him nothing else, taught me an important lesson about math. During that summer class I remember getting a problem wrong on an exam. I walked up to him after class with my exam and asked why I got zero points. The question was about writing down the equation for something, and the answer I had written was

\displaystyle  x^{2} -x + 1.

I asked him why I got zero points, since this was the correct expression. Mr. Ditka looked at me and said,

I asked for an equation. You did not write an equation. Zero points.

He was right, if I had only added the equality sign and written

\displaystyle  x^{2} -x + 1 = 0 ,

I would have scored full points. Oh well. Lesson learned.

Not The Question

I once tried to see if there were more general equations like the identity (*). I first thought about this question: Suppose

\displaystyle  a_{1}\cos(b_{1}x) + \cdots + a_{m}\cos(b_{m}x) \ge 0,

where the {a_{i}} and {b_{i}} are all positive. Is there such an inequality?

The answer is no: any correct inequality of this kind must have a constant term, and ours has none. Here is the proof:

Suppose for all {x},

\displaystyle  f(x) = a_{1}\cos(b_{1}x) + \cdots + a_{m}\cos(b_{m}x) \ge 0,

where the {a_{i}} and {b_{i}} are positive reals.

The key is to look at the integral {F(t) = \int_{0}^{t} f(x)dx}—are we allowed to use integrals in theory? Integration flips the cosine functions to sine functions and, since {\sin(0) = 0}, yields

\displaystyle  F(t) = \frac{a_{1}}{b_{1}}\sin(b_{1}t) + \cdots + \frac{a_{m}}{b_{m}}\sin(b_{m}t).

Because {f \ge 0} everywhere and {F(0) = 0}, the function {F} is nondecreasing, so {F(t) \ge 0} for all {t \ge 0}. Now suppose first that the {b_{i}} are integers. For {t} sufficiently close to, but below, {2\pi}, each {\sin(b_{i}t) = -\sin(b_{i}(2\pi - t))} is negative, so {F(t) < 0}. This is a contradiction. Rescaling {x} handles any commensurable {b_{i}}; the fully general case needs an approximation argument in the spirit of Kronecker's theorem below.
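Here is a minimal numerical check of that step, for a made-up integer-frequency example {f(x) = 2\cos(x) + \cos(3x)}:

import math

# Made-up example with integer frequencies: f(x) = 2cos(x) + cos(3x).
a = [2.0, 1.0]
b = [1.0, 3.0]

def F(t):
    # Antiderivative of f with F(0) = 0: sum of (a_i / b_i) * sin(b_i * t).
    return sum(ai / bi * math.sin(bi * t) for ai, bi in zip(a, b))

t = 2 * math.pi - 0.01   # just below 2*pi
print(F(t))              # prints a negative value, so f(x) < 0 somewhere on [0, t]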

The Question

The real question concerns determining the possible values of a sum of cosine functions. This is a natural problem, and it also has important applications in number theory. Define a function {f} by

\displaystyle  f(t) = \sum_{k=1}^{m} a_k \cos(t \ln k).

We want to determine whether or not this function can be equal, even approximately, to some given value. Since the {a_k} are fixed, it is a question in one variable—how hard can that be? The initial idea might be to just use calculus to solve the problem. But there is an issue. The function is defined for all real {t}, so the search is over an infinite interval, and it seems to me that no standard search method works there. If we could bound where we have to look for {t}, then we could solve the problem.

The issue is that the arguments to the cosines have different periods, and we do not know in advance where or how their values might reinforce or cancel each other. The key insight is that Diophantine approximation theory allows us to subtract multiples {2\pi p}, for integers {p}, from the arguments—leaving the cosines unchanged—so that the remaining variation is restricted to a finite domain that can be searched. We would also like to iron out dependencies of the terms on each other, so that we get an easier-to-analyze system, such as a linear system.

An Example

Let’s look at an example first. Let

\displaystyle  f(t) = a \cos(t\ln 2) + b \cos(t \ln 3) + c \cos(t \ln 5).

Suppose we want to know whether {f(t)} is approximately equal to some value {v}. How do we do this? The trouble, as we stated already, is that {t} is not restricted to a bounded interval, so standard methods do not apply. If we could restrict {t} to say {[0,1]}, then at worst we could try {t} at all the following values:

\displaystyle  0, \frac{1}{N}, \frac{2}{N}, \dots, \frac{N-1}{N},1,

for some large {N}.
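In code, that brute-force idea is just a grid scan. Here is a sketch with made-up coefficients {a,b,c} and target {v}, granting the (so far unjustified) restriction of {t} to {[0,1]}:

import math

a, b, c = 1.0, 2.0, 3.0   # made-up coefficients
v = 0.5                   # made-up target value
N = 10000                 # grid resolution

def f(t):
    return (a * math.cos(t * math.log(2)) + b * math.cos(t * math.log(3))
            + c * math.cos(t * math.log(5)))

# Try t = 0, 1/N, 2/N, ..., 1 and keep the best approximation to v.
best_err, best_t = min((abs(f(k / N) - v), k / N) for k in range(N + 1))
print(best_t, best_err)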

The key observation is this: the numbers {\ln 2}, {\ln 3}, and {\ln 5} are linearly independent over the rationals. This follows from unique factorization, since {2}, {3}, and {5} are distinct primes. But now we can use Kronecker's theorem. It is perfect for this situation.

Theorem: Let {1,\alpha_{1},\dots,\alpha_{\ell}} be real numbers that are linearly independent over the rationals. Let {\nu_{1},\dots,\nu_{\ell}} be arbitrary real numbers, let {\epsilon>0}, and let {N>0} be an integer. Then there exist integers {q_{1},\dots,q_{\ell}} and an integer {t > N} such that for all {i},

\displaystyle  | t\alpha_{i} - \nu_{i} - q_{i} | < \epsilon.

The theorem seems a bit complex at first glance, but we can use it to solve our problem—here is how. We have {\ell = 3} and {\alpha_1 = \frac{\ln 2}{2\pi}}, {\alpha_2 = \frac{\ln 3}{2\pi}}, and {\alpha_3 = \frac{\ln 5}{2\pi}}. Also let {\nu_{1},\nu_{2},\nu_{3}} be arbitrary real numbers. The theorem shows there are integers {t} and {q_{1},q_{2},q_{3}} so that

\displaystyle  | t\alpha_{i} - \nu_{i} - q_{i} | < \epsilon,

for {i=1,2,3}. This implies that

\displaystyle  \left| t \frac{\ln 2}{2\pi} -\nu_{1} - q_{1} \right| < \epsilon,

and clearly it follows that

\displaystyle  \left| t \ln 2 -2\pi \nu_{1} - 2\pi q_{1} \right| < 2\pi\epsilon.

Since cosine has period {2\pi}, it follows that {\cos(t\ln2)} is approximately equal to {\cos(2\pi\nu_{1})}. Since {\nu_{1}} was arbitrary, this means that we can make {\cos(t\ln2)} as close as we like to any value in {[-1,1]}. Even more, we can make each of the other cosines simultaneously take any values we like.
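To see this in action, here is a small numerical sketch—the target values are made up—that scans for an integer {t} pushing all three cosines near prescribed values at once:

import math

targets = {2: 0.9, 3: -0.5, 5: 0.0}   # made-up targets, each in [-1, 1]
eps = 0.05

def max_error(t):
    return max(abs(math.cos(t * math.log(p)) - v) for p, v in targets.items())

# Kronecker's theorem guarantees arbitrarily large integers t work;
# a brute-force scan finds one.
t = next(t for t in range(1, 10**7) if max_error(t) < eps)
print(t, max_error(t))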

Thus, {f(t)} is equal, to within some small error, to {ax+by+cz} where {x,y,z} are only constrained to be in the interval {[-1,1]}. This constraint follows since cosine's range is that interval. The point is that we can reduce the search for {t} to the solution of a linear system, which is easy. Done.
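Concretely, since {x,y,z} range independently over {[-1,1]}, the achievable values of {ax+by+cz} fill exactly the interval

\displaystyle  \left[\,-(|a|+|b|+|c|),\ |a|+|b|+|c|\,\right],

so {f(t)} can be made arbitrarily close to a value {v} if and only if {v} lies in this interval.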

A Tougher Example

Note, this method works for any number of terms, provided the logarithms are linearly independent over the rationals. What happens if they are not? Things get more difficult—the above method fails, but not all is lost.

Let’s modify the previous example. Let

\displaystyle  f(t) = a \cos(t\ln 2) + b \cos(t \ln 3) + c \cos(t \ln 4) + d \cos(t \ln 5).

We can try to proceed as before, but the problem we hit is that

\displaystyle  \ln2, \ln 3, \ln 4, \ln 5

are not linearly independent over the rationals: {2\ln 2 - \ln 4 = 0}.

All is not lost. We note that the method from the last section can be used on the three logarithms that are independent, and we can then express {\cos(t \ln 4)} in terms of {\cos(t\ln2)}. We get

\displaystyle  ax + by +cw +dz,

where as before {x,y,z} must lie in {[-1,1]}. The value of {w} is {2x^{2}-1}, since

\displaystyle  \cos(2\theta) = 2\cos^{2}(\theta)-1.

We can now solve our problem, since we have a polynomial question over a bounded domain. Done—again.
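Here is a sketch of that bounded-domain check, with made-up coefficients. The dependent variable {w = 2x^{2}-1} rides along with {x}, while {y} and {z} remain free in {[-1,1]}:

# Made-up coefficients for
#   f(t) = a cos(t ln 2) + b cos(t ln 3) + c cos(t ln 4) + d cos(t ln 5).
a, b, c, d = 1.0, 2.0, 1.5, 0.5

def range_of_g():
    # Range of g(x) = a*x + c*(2*x**2 - 1) over x in [-1, 1]:
    # check the endpoints and the critical point of the quadratic.
    g = lambda x: a * x + c * (2 * x * x - 1)
    candidates = [-1.0, 1.0]
    if c != 0:
        vertex = -a / (4 * c)   # where g'(x) = a + 4*c*x vanishes
        if -1.0 <= vertex <= 1.0:
            candidates.append(vertex)
    values = [g(x) for x in candidates]
    return min(values), max(values)

def achievable(v):
    lo, hi = range_of_g()
    slack = abs(b) + abs(d)     # contribution of b*y + d*z with y, z in [-1, 1]
    return lo - slack <= v <= hi + slack

print(achievable(3.0))          # True for these coefficients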

Extending the Idea, and a Problem

I had hoped that we might be able to handle any cosine sum by the same trick. But alas I am stuck. We can handle cosine sums in which the logarithms are all of primes or powers of primes. This follows since {\cos(nx)} is a polynomial in {\cos(x)}. Such polynomials are treated here.
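These are the Chebyshev polynomials {T_{n}}, defined by {\cos(nx) = T_{n}(\cos(x))}. They are easy to generate from the recurrence {T_{n+1}(x) = 2xT_{n}(x) - T_{n-1}(x)}; here is a small sketch:

def chebyshev(n):
    # Coefficients of T_n, lowest degree first, where cos(n*x) = T_n(cos(x)).
    prev, cur = [1.0], [0.0, 1.0]          # T_0 = 1 and T_1 = x
    if n == 0:
        return prev
    for _ in range(n - 1):
        nxt = [0.0] + [2.0 * coef for coef in cur]   # 2x * T_n
        for i, coef in enumerate(prev):              # minus T_{n-1}
            nxt[i] -= coef
        prev, cur = cur, nxt
    return cur

print(chebyshev(2))   # [-1.0, 0.0, 2.0], i.e. cos(2x) = 2cos^2(x) - 1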

The natural next step is to allow a term of the form {\cos(t \ln 6)}. Unfortunately this appears not to be a polynomial in {\cos(t\ln2)} and {\cos(t\ln3)}. We can use the identity

\displaystyle \cos(x + y) = \cos(x)\cos(y) - \sin(x)\sin(y),

but eliminating the sines via {\sin(x) = \pm\sqrt{1 - \cos^2(x)}} introduces a sign ambiguity as well as a radical.

Open Problems

Can this method be generalized to handle arbitrary cosine sums of this type?

More generally, perhaps there is a completely different approach to the whole question that solves it more directly. Any suggestions?

[clarified proof in “Not the Question” section, fixed some words]

10 Comments
  1. August 31, 2012 4:01 pm

    I don’t think your proof in “Not The Question” is correct. For example, 1 is always greater than zero, but its integral does go below zero, in particular just to the left of the origin. The problem is that the integral means something different when the “lower bound” is greater than the “upper bound” of integration.

    So… is there a nice short proof of the same fact? The most intuitive approach I can see is to consider x taken at random from a large interval. It’s intuitively clear and not hard to formalize that when x is u.a.r from the interval [0, M], the expectation E[sum_k a_k cos(x b_k)] tends to 0. This contradicts that the sum is bounded below by a positive constant. But it doesn’t rule out that the sum is always positive, yet gets arbitrarily close to zero. Could that be possible? Is there an easy proof of why not?

  2. Sam
    August 31, 2012 6:39 pm

    Wow, wow – slower, slower. Several typos, errors, and missing words. Which makes it somewhat difficult to follow. Please proofread carefully once more.

    In the “Not the Question” section, when you integrate, the b_i’s go to the denominator, not the numerator.

    • August 31, 2012 8:53 pm

      Thanks to you and daveagp. I clarified the proof hopefully enough, and fixed up the prose in spots.

      • Sushant
        September 2, 2012 3:11 pm

        Still, the proof holds only for integral b_i (or b_i with the same denominator). For the full proof we probably need Kronecker’s theorem.

  3. August 31, 2012 9:49 pm

    While you’re at it, don’t forget that evaluating integrals of the form

    $\int_0^{2\pi} dx \prod_{i=1}^n \cos (a_i x)$,

    for integers $a_1, \ldots, a_n$, is #P-complete!

  4. Dr. Dolittle
    September 1, 2012 1:12 am

    Your notation as well as the proof in the section “Not the question” is still bad and misses the point: using a value “just below 2*Pi” of course doesn’t suffice, as for any such given value there is a linear combination of cosines which invalidates the claim (when trying to use this particular value as a witness). Also, the frequencies are not taken into account.
    You have to use an approximation property as in the remainder of the article.

    Also, integrating is unneccessary (might as well look at values of the cosine near its zeros).

  5. September 1, 2012 5:34 am

    Why is integral of 0 on right hand side of the cosine equation have to be 0? The right hand side of the sine equation will be some arbitrary constant not necessarily zero. Isn’t it?

  6. John Sidles
    September 1, 2012 9:51 am

    Dick Lipton asks  “We want to determine whether or not [a trigonomic] function can be equal, even approximately, to some given value.”

    In this regard, please let me commend to Gödel’s Lost Letter readers a computation-centric (and very readable) article by Martin Mohlenkamp and Lucas Monzón titled “Trigonometric identities and sums of separable functions” (Mathematical Intelligencer 2005).

    In effect, Mohlenkamp and Monzón extend Dick Lipton’s question by asking: “What functions in high-dimensional spaces (like Hilbert spaces) can be given, or accurately approximated, by low-rank separable functions (like low-rank trigonometric functions)?”

    As with Dick Lipton’s post, Mohlenkamp and Monzón discuss this question very entertainingly … without arriving a definitive answers to (what appears to be) an immensely large class of profoundly deep questions … questions whose answers in individual cases commonly have considerable practical importance.

    So this was another wonderfully interesting, profoundly deep, eminently useful Gödel’s Lost Letter essay/question!🙂

  7. joh
    October 20, 2012 11:50 pm

    professor Lipton can you say more about why you hate high school trig? I also do not like trig identities because it seems like a bunch of formulas which have to be memorised and I end up forgetting them. Can you tell me if there are other elementary ways to look at trig, which links it to other parts of math and gives a deeper meaning to them? I am a student.
