Tricks are used in deep results.

Pierre Deligne is a famous number theorist who has won most of the top honors in mathematics. Among many achievements he developed Alexander Grothendieck’s idea of motives, a way to unify various notions of cohomology.

Today I thought we would look at tricks that are used in proving deep theorems, like Deligne’s result on the finite Riemann hypothesis.

The finite Riemann hypothesis can be framed around polynomials ${f(x,y)}$ of some degree (at most) ${d}$. A natural question is: how many points ${(x,y)}$ with coordinates in the finite field of ${q}$ elements satisfy $\displaystyle f(x,y) = 0?$

Deligne proved, solving a famous open problem, that the number ${N}$ of such points is near ${q}$: The bound is $\displaystyle |N-q| \le (d-1)(d-2){\sqrt q} + O(1).$

Such bounds are immensely important whenever one needs to bound the number of solutions to equations over finite fields.
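Such a bound can be checked by brute force on a small example. The curve below, ${y^2 = x^3 + x + 1}$ over ${\mathbb{F}_{101}}$, is a hypothetical choice for illustration, and the ${O(1)}$ term is ignored:

```python
from math import sqrt

def count_points(q, f):
    """Count affine points (x, y) in F_q x F_q with f(x, y) = 0 mod q."""
    return sum(1 for x in range(q) for y in range(q) if f(x, y) % q == 0)

# Sample curve y^2 = x^3 + x + 1 has degree d = 3, so the bound is
# (d-1)(d-2) * sqrt(q) = 2 * sqrt(q).
q, d = 101, 3
N = count_points(q, lambda x, y: y * y - x * x * x - x - 1)
print(N, abs(N - q), (d - 1) * (d - 2) * sqrt(q))
```

The count indeed lands within ${2\sqrt{q}}$ of ${q}$, as the bound predicts for this curve.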

## Deligne’s Trick

Timothy Gowers outlines the trick in his essay for Deligne’s Abel Prize. One of the objectives is to establish the following expression for the zeta function ${Z(x)}$ of a ${d}$-dimensional variety over ${\mathbb{F}_q}$: $\displaystyle Z(x) = \frac{P_1(x) P_3(x) \cdots P_{2d-1}(x)}{P_0(x) P_2(x) \cdots P_{2d}(x)},$

where each ${P_i}$ is a polynomial with integer coefficients. Then one needs to show that the reciprocals ${\alpha}$ of the roots of each ${P_i}$ are algebraic integers of modulus ${q^{i/2}}$.

A general technique gives an upper bound of ${q^{i/2 + 1/2}}$ on ${|\alpha|}$—that is, a lower bound of ${q^{-(i+1)/2}}$ on the modulus of the roots themselves. The additive ${1/2}$ in the power at first seems a substantial gap. However, the technique is general enough to apply to ${k}$-fold tensor products of the system of equations, in which the corresponding reciprocals are ${\alpha^k}$. This gives $\displaystyle |\alpha^k| \leq q^{ki/2 + 1/2}$

for all even integers ${k}$. The additive ${1/2}$ does not change, so taking ${k}$-th roots of both sides gives ${|\alpha| \leq q^{i/2 + 1/(2k)}}$, and letting ${k}$ grow whittles the extra exponent away. The conclusion is $\displaystyle |\alpha| \leq q^{i/2}$

as desired. A previously-known duality condition—we were just talking about that subject—makes this an equality and the rest of the analysis goes through.
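The whittling step can be seen numerically. The values of ${q}$ and ${i}$ below are sample choices, not tied to any particular variety:

```python
# The k-fold tensor bound |alpha|^k <= q^(k*i/2 + 1/2) gives
# |alpha| <= q^(i/2 + 1/(2k)); the extra exponent 1/(2k) shrinks
# as k grows, leaving |alpha| <= q^(i/2) in the limit.
q, i = 7, 1  # sample values for illustration
target = q ** (i / 2)
bounds = [q ** (i / 2 + 1 / (2 * k)) for k in (2, 4, 16, 256, 65536)]
print(target, bounds)  # bounds decrease toward q^(i/2)
```

Each bound exceeds ${q^{i/2}}$, but the sequence decreases toward it, which is exactly the epsilon trick mentioned below.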

Deligne commented on Grothendieck’s reaction to his proof:

> If I had done it using motives, he would have been very interested, because it would have meant that the theory of motives had been developed. Since the proof used a trick, he did not care.

Indeed, deep results sometimes rely on “tricks”; Deligne’s result is an example of this phenomenon. We do not mean to say his proof is anything but amazing. The point is that while the proof uses advanced machinery, it also needs a trick.

What do we mean by a trick? A trick is a simple, usually obvious, fact that has important consequences. Perhaps the best way to explain what we mean by a trick is to give some more examples.

## Some Favorite Tricks

- **Prime Divisors I:** If ${p}$ does not divide the integer ${A}$, then ${A}$ is not zero.
- **Prime Divisors II:** If too many primes ${p}$ divide the integer ${A}$, then ${A}$ is zero.
- **Even vs Odd:** Many arguments use the trivial fact that ${x \neq y}$ when ${x}$ is even and ${y}$ is odd. The version of this for ${0 \neq 1}$ is the key in many proofs that certain numbers are not rational. Think ${\pi}$ and ${e}$.
- **The Pigeonhole Principle:** Of course the fact that placing ${n+1}$ objects into ${n}$ or fewer bins yields some bin with at least two objects is used often.
- **Polynomial Roots:** This is the often-used fact that a polynomial ${f(x)}$ of degree ${d}$ can have at most ${d}$ roots, unless it is identically zero.
- **Expectation:** Suppose that some random variable ${X}$ has positive expectation. Then there must be some outcome on which ${X}$ is positive. This is the basis of the probabilistic method.
- **Epsilon Trick:** This is that $\displaystyle a \le b + \epsilon$

for all ${\epsilon>0}$ implies that ${a \le b}$. Note that this is used in the final step of Deligne’s trick.
- **A Trigonometry Identity:** An early proof of the Prime Number Theorem uses that $\displaystyle 3 + 4\cos \theta + \cos 2\theta \ge 0.$

This follows since $\displaystyle 3+4\cos \theta + \cos 2\theta = 2(1+\cos \theta)^{2} \ge 0.$

This plays a key role in showing that the Riemann zeta function is not zero at points with real part at least ${1}$.
- **Sums of Squares:** The fact that $\displaystyle \sum_{k} x_{k}^{2} \ge 0,$

for real ${x_{k}}$, is used often.
- **Injective and Surjective:** This is the fact that a map ${f:S \rightarrow S}$ is 1-1 if and only if it is onto, provided ${S}$ is finite. This is critical in many proofs, especially in showing that certain results over finite fields can be moved over to the complex numbers.
- **The Geometric Identity:** Many deep arguments in number theory use the fact that $\displaystyle 1 + x + x^{2} + \dots + x^{m} = \frac{x^{m+1}-1}{x-1},$

provided ${x \neq 1}$. This shows that $\displaystyle 1 + x + x^{2} + \dots + x^{m} = O(1),$

when ${x}$ is a root of unity bounded away from ${1}$. This is used in the analysis of exponential sums, which in turn are used to prove many deep results.
- **Absolute Values:** The following simple idea is used in number theory to prove results about the Riemann zeta function. Suppose $\displaystyle \left| \sum_{k=1}^{m} a_{k} \right| < \sum_{k=1}^{m} |a_{k}|.$

Then it follows that the sequence $\displaystyle a_{1}, \dots, a_{m}$

changes sign somewhere. The point is that showing an upper bound $\displaystyle \left| \sum_{k=1}^{m} a_{k} \right| \le S,$

and a lower bound $\displaystyle S < \sum_{k=1}^{m} |a_{k}|,$

are often easier than trying to show there is some sign change in a sequence.
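The geometric-identity bound can be checked numerically. The modulus ${q = 17}$ and the particular root of unity below are hypothetical choices for illustration:

```python
import cmath

# For x a root of unity bounded away from 1, the geometric identity gives
# |1 + x + ... + x^m| = |x^(m+1) - 1| / |x - 1| <= 2 / |x - 1| = O(1),
# uniformly in m.
q = 17
x = cmath.exp(2j * cmath.pi / q)  # a primitive 17th root of unity
for m in (10, 1000, 100000):
    s = sum(x ** k for k in range(m + 1))
    print(m, abs(s), 2 / abs(x - 1))  # |s| stays below the fixed bound
```

The partial sums stay bounded no matter how large ${m}$ gets, even though the number of terms grows without limit.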

This last trick is one of my recent favorites. I have never used it, but I find it appealing. Here is one place where it is critical. Suppose that we have a real-valued function ${f(t)}$ defined for ${t}$ in the interval ${[0,1]}$. We wish to prove that ${f(a)=0}$ for some point ${a}$ in ${[0,1]}$. By the famous intermediate value theorem we only need to show that ${f(t)}$ changes sign on the interval. This can be hard if the function ${f(t)}$ is complicated, as it is in the application. So suppose instead we can prove that $\displaystyle \left| \int_{0}^{1} f(t)dt \right| < \int_{0}^{1} |f(t)| dt.$

Then the integral version of the above trick shows that ${f}$ changes sign, and so has a zero, as needed. This is used to show that many of the zeroes of the Riemann zeta function lie on the critical line—neat. The key is that while proving the lower and upper bounds is hard, it is possible.
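Here is a minimal numeric sketch of the integral form of the trick, using the made-up example ${f(t) = t - 0.3}$, which changes sign at ${t = 0.3}$:

```python
def integrate(g, a=0.0, b=1.0, n=100000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

def f(t):
    return t - 0.3

total = integrate(f)                        # integral of f, exactly 0.2 here
total_abs = integrate(lambda t: abs(f(t)))  # integral of |f|, 0.29 here
print(total, total_abs)  # |total| < total_abs, so f must change sign
```

Since ${|0.2| < 0.29}$, the trick certifies a sign change, hence a zero, without locating it.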

## Open Problems

I wanted to point out some tricks to make the point that deep results can pivot on knowing certain simple but important facts—tricks. Perhaps the more tricks we know the better our chances at solving an open problem.

What are some of your favorite tricks?

1. February 3, 2020 2:58 pm

Since you anyhow mentioned Gowers: http://www.tricki.org/

2. February 3, 2020 4:47 pm

Telescoping sums and products

Also, Professor Tao has written about a similar trick “tensor power trick” https://terrytao.wordpress.com/2007/09/05/amplification-arbitrage-and-the-tensor-power-trick/

• February 4, 2020 9:12 am

Dear i like tricks:

Yes Tao has written about this. I think he has shown that it can be quite powerful. Thanks for the comment.

Best

Dick

3. February 5, 2020 8:09 am

Between two points there is a line. Is that a trick? It should be, right?

• February 6, 2020 1:13 pm

Yes, the line defined by two points is a kind of trick. Or one could use: if a set of points in the plane cannot be covered by N > 1 lines, then it must have at least 2N-1 points. Proof: suppose it has only 2N-2 points. Then we can cover it by N-1 lines, since each line covers two points.

Best

Dick

• February 7, 2020 3:24 am

When is a trick not an axiom?

• February 7, 2020 8:58 pm

What I gave was one of Euclid’s axioms.

4. February 7, 2020 3:59 pm

One trick that I like is the trick of using generic variables. Suppose you want to prove that every matrix satisfies its characteristic polynomial (the Cayley-Hamilton theorem). The proof is easy if the matrix is diagonal. And it is also easy if the matrix can be diagonalized, e.g. if all eigenvalues are distinct. But then you replace the entries by generic variables. Now all eigenvalues are distinct, so you diagonalize and prove the identity for this symbolic matrix, which implies it for every instance. Now, maybe this involves some cheating, but at the very least it gives very good intuition. And, of course, it is important for randomized algorithms…

5. February 7, 2020 7:34 pm

Dear Gil:

I love this trick. I first heard of it in a paper by Richard Bellman, if I recall correctly. You are right that it is a very powerful trick.

Are there ways to do this in finite fields? I know it works over the reals or the complexes. Is there a way to do something like this in finite structures?

Best

Dick

• February 10, 2020 10:08 am

Dear Dick, well, I think that the idea of replacing constants with generic elements or variables should work also in finite characteristic. So it can give you identities (like the fact that substituting a matrix in its characteristic polynomial gives zero) also over finite fields. Over the real and complex numbers you can use a limiting argument, but probably this is not needed. I vaguely remember that there are some caveats that you need to worry about to get a complete proof even for the Cayley-Hamilton theorem. But I don’t remember what they are.

I used this trick heavily in my Ph.D. thesis for something called “algebraic shifting”. (This can be done in every characteristic.)

best Gil