Why the Riemann hypothesis is hard and some other observations.

 ICM 2018 “Matchmaking” source

Michael Atiyah, as we previously posted, claims to have a proof that the Riemann Hypothesis (RH) is true. In the second half of a 9:00–10:30am session (3:00–4:30am Eastern time), he will unveil his claim.

Today we discuss what the RH is and why it is hard.

Of course we hope that it was hard and now is easy. If Atiyah is correct, and we at GLL hope that he is, then the RH becomes an “easy” problem.

Update 9/24, 8am: The website Aperiodical, whose post on RH is mentioned below, has a Twitter thread on the talk and another with all the slides. Atiyah has released a short paper with the main technical work contained in a second paper, “The Fine Structure Constant.” This Reddit thread has Python code for a short computation relevant to the latter. A typo ${F(b) = 0}$ in the main slide is corrected to ${F(0) = 0}$ in the paper. The analysis rests on a mapping ${T}$ named for John Todd and used by Friedrich Hirzebruch. It is variously represented as a mapping ${T: \mathbb{C} \to \mathbb{C}}$ and as an operator on power series, and in what is evidently the latter form it yields the following simple complex function in composition with zeta:

$\displaystyle F(s) = T\{1 + \zeta(s + b)\} - 1.$

The claim is that ${F(2s) = 2F(s)}$ for ${s}$ in the critical strip, and then the analyticity of ${F(s)}$ at ${0}$ yields the contradiction that ${\zeta}$ vanishes everywhere. We will defer further comment at this time, but there are opinions in the links above.

In its original form, the RH concerns the simple zeta function of one complex variable ${s}$:

$\displaystyle \zeta(s) = \sum_{n=1}^{\infty} n^{-s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \cdots$

Leonhard Euler analyzed this function for ${s}$ a positive integer and proved that ${\zeta(2) = \frac{\pi^2}{6}}$. Further values quickly followed for positive even numbers but the nature of ${\zeta(3)}$ remained open until Roger Apéry proved it irrational in 1978.

As a function the sum converges provided the real part of ${s}$ is greater than ${1}$, but for other complex ${s}$ it is definable by analytic continuation. This yields the surprising—perhaps—values ${\zeta(0) = - \frac{1}{2}}$, ${\zeta(-1) = -\frac{1}{12}}$, and ${\zeta(-2) = \zeta(-4) = \zeta(-6) = \cdots = 0}$. The latter are the trivial zeroes of zeta. In complex analysis analytic continuation is a method that extends a function—when possible—to more values. This extension is one reason we believe that the RH is hard. The RH is the statement:
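As a concrete check, the continued values above can be computed in pure Python, with no special library, via Hasse’s globally convergent double series for zeta, which is valid for every ${s \neq 1}$. This is our own illustrative sketch, not anything from the papers under discussion:

```python
# Evaluate the analytically continued zeta function via Hasse's globally
# convergent double series, valid for all s != 1:
#   zeta(s) = 1/(1 - 2^(1-s)) * sum_{n>=0} 2^(-(n+1)) sum_{k=0}^{n} (-1)^k C(n,k) (k+1)^(-s)
from math import comb, pi

def zeta(s, terms=60):
    total = 0.0
    for n in range(terms):
        # Inner alternating sum: the n-th forward difference of k -> (k+1)^(-s).
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta(0))    # ≈ -1/2
print(zeta(-1))   # ≈ -1/12
print(zeta(-2))   # ≈ 0, a trivial zero
print(zeta(2))    # ≈ pi^2/6, Euler's value
```

The series reproduces ${\zeta(0) = -\frac{1}{2}}$, ${\zeta(-1) = -\frac{1}{12}}$, the trivial zero at ${-2}$, and Euler’s ${\zeta(2) = \frac{\pi^2}{6}}$ to high accuracy, even though the defining sum diverges at the first three points.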

All other zeroes of ${\zeta(s)}$ have real part ${\frac{1}{2}}$, that is, ${s = \frac{1}{2} + i\tau}$ for some real ${\tau}$.

That is to say, there are other zeroes besides the trivial ones. The behavior of ${\zeta(s)}$ near those zeroes is not only complex but universally so. The impact of analyzing that behavior was presaged by Leonhard Euler’s discovery

$\displaystyle \zeta(s) = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}.$

For intuition, consider how ${\frac{1}{1 - x} = 1 + x + x^2 + \cdots}$ converges for ${x \in [0,1)}$ and picture the product of all these series over ${x = \frac{1}{p}}$ for each prime ${p}$. Expanding the product, every finite selection of terms gives the prime factorization of a unique ${n \geq 1}$, and the exponent ${-s}$ merely carries through.
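To see the two sides agree numerically, here is a small sketch of our own, for intuition only: it compares the truncated sum and the truncated product at the real point ${s = 2}$, where both tend to ${\frac{\pi^2}{6}}$.

```python
# Compare the Dirichlet-series and Euler-product forms of zeta at s = 2.
from math import pi

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2.0
N = 100_000
series = sum(n ** -s for n in range(1, N + 1))   # truncated Dirichlet series
product = 1.0
for p in primes_up_to(N):
    product *= 1.0 / (1.0 - p ** -s)             # truncated Euler product

print(series, product, pi ** 2 / 6)  # all three agree to several digits
```

The truncation error of the sum is about ${1/N}$ and that of the product even smaller, so both land on ${1.6449\ldots}$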

The way in which ${\zeta}$ encodes the “gene sequence of the primes” endures when ${s}$ is complex. There are many equivalent forms of RH, many having to do with tightness of upper or lower bounds for things—a staple of analysis but also what we like to talk about in computational complexity theory. One of our favorites is described in this post:

${\text{RH} \iff (\forall \epsilon > 0)(\exists C)(\forall x)\left|\sum_{k = 1}^x \mu(k)\right| \le Cx^{1/2 + \epsilon}.}$

Here ${\mu(k)}$ is the Möbius function, named for the same August Möbius as the strip, and defined by

$\displaystyle \mu(k) = \begin{cases} 1 & \text{if } k \text{ is a product of an even number of distinct primes}\\ -1 & \text{if } k \text{ is a product of an odd number of distinct primes}\\ 0 & \text{otherwise, i.e., if } k \text{ is not square-free.}\\ \end{cases}$

In this and other ways, the zeroes of ${\zeta}$ govern regularities in the distribution of the primes. One rogue zero off the line causes enough ripples to disturb the ${x^{1/2 + o(1)}}$ bound. But the ripples are not tsunamis, and the tight RH bound is demanding a lot: an ${o(x)}$ bound is equivalent both to the Prime Number Theorem (PNT) and to no zero wandering over as far as the line of real part ${1}$.
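This regularity can be watched in action. The following sketch (our own; the sieve is a standard linear one) computes ${\mu(k)}$ up to ${10^4}$ and checks that the partial sums ${M(x) = \sum_{k \le x} \mu(k)}$ stay well inside the ${\sqrt{x}}$ envelope over this small range:

```python
# Moebius function mu(k) for k <= N via a linear sieve, then the
# Mertens sums M(x) = sum_{k <= x} mu(k), which RH predicts grow
# no faster than x^(1/2 + eps).
def mobius_up_to(n):
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1                 # i is prime: one distinct prime factor
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0          # p^2 divides i*p: not square-free
                break
            mu[i * p] = -mu[i]         # one more distinct prime factor
    return mu

N = 10_000
mu = mobius_up_to(N)
M, mertens = 0, []
for k in range(1, N + 1):
    M += mu[k]
    mertens.append(M)                  # mertens[x-1] = M(x)

print(max(abs(m) for m in mertens), "vs sqrt(N) =", N ** 0.5)
```

In this range the maximum of ${|M(x)|}$ stays far under ${\sqrt{x}}$; of course no finite computation says anything about the asymptotics, which is the whole difficulty.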

Even wider ramifications emerge from this essay by Alain Connes. Connes, like Atiyah, is a Fields Medalist (1982), and among his later work is an extension of the famous index theorem proved by Atiyah with Isadore Singer. Connes’s essay even goes into the wild topic of structures that behave like “fields of characteristic ${1}$”—but maybe not so wild, since Boolean algebra, where ${1 \vee 1 = 1}$, is a simple and familiar example.

Why Hard?

We thought we would try to give some intuition for why the RH is hard. Of course, like all open problems, it is hard because we have not yet proved it. Every open problem is hard in that sense. But let’s try to give some intuition beyond that. Enough about “hard.”

Consider any quantity ${A}$ that is defined by a formula of the summation type:

$\displaystyle A = a_{1} + a_{2} + \dots$

Now showing that ${A}$ is not zero is quite difficult in general. Even when the summation is finite, showing that it does not sum to ${0}$ can be a tough problem.

Imagine that ${A}$ is also equal to a product type:

$\displaystyle B = b_{1} \times b_{2} \times \dots$

Now showing that ${B}$ is not zero is not so impossible: if the product is finite, then you need only show that each term ${b_{k}}$ is not zero. If the product is infinite, then it’s harder. But there is at least hope.

As above, the Riemann zeta function is indeed given by a summation as its definition. But it is also given by the product-type formula. This formula is the key to proving bounds weaker than RH on where the Riemann zeroes are located, such as those needed for the PNT.

Back to ${B}$ as a product. Take logarithms naively and see that we can replace studying ${B}$ by

$\displaystyle L = \log(b_{1}) + \log(b_{2}) + \dots$

This can be made to work. But one must be quite careful when handling the logarithm over the complex numbers. Put simply, the logarithm function is multi-valued: it is only defined up to a multiple of ${2\pi i}$.

This is the same reason that

$\displaystyle 1 = e^{2\pi i} = e^{2\cdot 2\pi i} = e^{3 \cdot 2\pi i} = \dots$

is true. Apparently this phenomenon is one reason that many attempts at the RH have failed. At some point the proof computes two quantities, say ${Q}$ and ${R}$, and concludes incorrectly that they are equal. But in reality

$\displaystyle Q = R + 2\pi i,$

for example. Of course there is no way that this is a mistake an Atiyah can make, but many others have run into this issue.
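A two-line experiment shows the trap concretely; this is our own illustration of the branch issue, not a comment on any particular proof:

```python
# The principal-branch complex logarithm is only defined up to multiples
# of 2*pi*i, so log(z1 * z2) need not equal log(z1) + log(z2).
import cmath

z1 = -1 + 0.1j
z2 = -1 + 0.1j
lhs = cmath.log(z1 * z2)             # principal branch: imaginary part in (-pi, pi]
rhs = cmath.log(z1) + cmath.log(z2)  # sum of principal branches
diff = rhs - lhs
print(diff)  # ≈ 2*pi*i: the "lost" winding
```

The two arguments each lie just under ${\pi}$, so their sum overshoots the principal range and the product’s logarithm comes back ${2\pi i}$ short, exactly the discrepancy between ${Q}$ and ${R}$ above.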

It may help also to discuss a product type formula that is classic.

$\displaystyle \sin(\pi z) = \pi z\prod_{n=1}^{\infty} \left( 1 - \frac{z^{2}}{n^{2}} \right).$

Note that this converges for all values of ${z}$. It helps one see that the zeroes of the sine function are exactly as expected: the multiples of ${\pi}$. If there were a product formula like this for the zeta function, the RH would be easy. Of course, the known formulas for zeta converge only for values whose real part is greater than ${1}$. Too bad.
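The sine product can be checked numerically; this sketch of ours multiplies out a large but finite number of factors, so the convergence is slow, with error roughly proportional to ${1/N}$:

```python
# Check Euler's product for sine: sin(pi*z) = pi*z * prod_{n>=1} (1 - z^2/n^2).
import cmath

def sin_product(z, terms=100_000):
    prod = cmath.pi * z
    for n in range(1, terms + 1):
        prod *= 1 - z * z / (n * n)
    return prod

z = 0.3 + 0.2j
approx = sin_product(z)
exact = cmath.sin(cmath.pi * z)
print(approx, exact)        # agree to several digits
print(sin_product(1.0))     # 0: the n = 1 factor vanishes exactly
```

The second print makes the point of the formula: the zeroes are sitting right there in the factors, which is precisely the kind of grip on zeroes that we lack for zeta.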

Some Observations

On the eve of the lecture, we have not found much more information since the news broke early Thursday. There is a more-visual description of RH by Katie Steckles and Paul Taylor, who are attending the Heidelberg Laureate Forum and will be blogging from there. But we have a couple of meta-observations.

One is to compare with how Andrew Wiles’s June 1993 announcement of a proof of Fermat’s Last Theorem (FLT) was handled. Wiles was given time for three long lectures on three days of a meeting at Cambridge University under the generic title, “Elliptic Curves and Galois Representations.” He kept his cards in his hand during the first two lectures as he built the tools for his proof; the rumors of where it was going ramped up mainly before the third.

His proof had already been gone over in depth by colleagues at Princeton, initially Nicholas Katz, with John Conway helping to keep the process leak-free. Nevertheless, the proof contained a subtle error in an estimate that was not found until Katz posed follow-up questions three months later. Repairing the gap took a year with assistance from Richard Taylor and required a change in the strategy of the proof. We draw the following observations and contrasts:

• We do not know if any of Atiyah’s colleagues at Oxford or elsewhere have read his manuscript. This doesn’t mean anything yet—no one knew about Wiles until the day-of.

• Wiles used an avenue of attack on FLT that had been opened by Ken Ribet and others in 1986 and that already had opened a fruitful web of connections to other areas of analysis and number theory. He proved a newer conjecture that implies FLT. Atiyah’s proof evidently draws on mathematical physics but we have not seen any harbinger or informed speculation of what channels it may use.

• Atiyah has just one 45-minute time slot. This may not be enough time. Wiles’s example shows how a small detail can have wide impact.

• Atiyah gave the Abel Lecture at the International Congress of Mathematicians in Rio de Janeiro just last month. Its title—“The future of mathematical physics: new ideas in old bottles”—may be read as hinting about “new ideas.” However, the talk is almost entirely historical and elementary. We leave it to our audience to read any tea leaves here.

Our final observation—have we said it already?—is that RH is hard. How hard was impressed on Ken by a story he recollects as having been told by Bernard Dwork during an undergraduate seminar at Princeton in 1980 on Norman Levinson’s proof that over one-third of the nontrivial zeroes lie on the “critical line” of real part ${\frac{1}{2}}$. Different versions may be found on the Net, but the one Ken recalls went approximately this way:

After giving up on a lifetime of prayers to prove Riemann, an already-famous mathematician turned to the other side for help early on a Monday. The usual price was no object, but the Devil said that because of the unusual subject he could not offer the usual same-day service. The contract was drawn up for delivery by Saturday midnight. Projecting his gratitude, the mathematician arranged a private feast for that day and exchanged his usual rumpled clothes for a tailored suit to match the Devil’s dapper figure. The sun set as he poured his wine and kept a roast pig and fine fare on the burner, but there was no sign of his companion. The clock struck 9, 10, 11, and the minute hand swept round the dial. Suddenly, at the first chime of midnight, a sulfurous blast through a window revealed that the Devil had missed his landing point by a dozen yards. The mathematician opened his door and through it staggered an unshaven frazzled figure, horns askew and parchments akimbo, pleading: “I just need one more lemma…”

The moral of the story is the same as in other versions: there are no brilliant mathematicians Down There.

Open Problems

Well, we will soon see whether the RH is still hard or is now easy—or on the road to easy. It is rare to have such excitement in our mathematical endeavors. We hope that the talk is sufficiently clear that we will be able to applaud the brilliance of Atiyah. We wish him well.

[Added update at top and made separate section “About RH”; 9:50am changed general “a” in “as” to “2” since 2 is special in the paper; fixed Euler product formula for sines; expanded observations about the Todd function in the intro including using braces in the equation defining ${F(s)}$]

September 23, 2018 10:10 pm

The story you relate at the end of the piece is a somewhat modified version of the short story “The Devil and Simon Flagg” by Arthur Porges. I first read it in Clifton Fadiman’s collection “Fantasia Mathematica” when I was in high school. During the ’70s it was made into a short film called “The Mathematician and the Devil” by the Soviet director V. Shuplyakova. The film is available on YouTube.

• September 23, 2018 10:35 pm

Thanks! I would guess that Dwork (or whoever) melded in the story. The ending has a different tone, though—the rush and the pleading “one more lemma!” is what I (Ken) remember most distinctly.

September 24, 2018 2:40 am

In complex analysis analytic continuation is a method that extends a function—when possible—to more values. This extension is one reason we believe that the RH is hard.

The problem that I have with unrestrictedly drawing conclusions about the behaviour of the primes from the behaviour of $\zeta(\sigma + it)$ using analytic continuation is that the relationship of $\zeta(\sigma + it)$ to the primes is well-defined only for $\sigma >1$.

In other words, if some function is defined only for points along the $x$-axis for $x>0$, then how can we deduce any behaviour of the function for $x<0$? For instance, the distance $x$ travelled by a light signal $S$, emitted at time $t=0$, is given for $t>0$ by $x=ct$, where $c$ is the speed of light.

Obviously, the function $x=ct$ is well-defined for values of $t<0$.

However, any such value cannot describe any property of the signal $S$, since $S$ does not exist for $t<0$.

In case this seems far-fetched, Sections 4 to 6 here analyse a Zeno-type example, where even the theoretical limiting behaviour of an elastic string, under a specified iterative transformation, is not given by the putative Cauchy limit (which can only be described as a misleading mathematical myth) of the function that seeks to describe the iteration.

The significance of this is that Hadamard and de la Vallée Poussin’s proof of the Prime Number Theorem draws conclusions about the behaviour of the prime counting function $\pi(n)$, as $n \rightarrow \infty$, by the limiting behaviour of $\zeta(\sigma + it)$ along $\sigma =1$.

September 24, 2018 11:04 am

There is a mistake in the product formula for sin. There shouldn’t be a pi^2 in the denominator on the right hand side.

• September 24, 2018 10:49 pm

Thanks very much!

September 24, 2018 2:13 pm

1) The Riemann zeta function also admits some explicit and globally convergent series on $\mathbb{C}\setminus\{1\}$; see for instance this recent paper of Blagouchine https://arxiv.org/abs/1606.02044 These have perhaps been under-exploited so far and may lead to a proof, but at least it shows that analytic continuation of the usual definition is not the main point.
2) In Bristol last June a conference on RH took place: some slides and videos are available here https://heilbronn.ac.uk/2017/08/08/perspectives-on-the-riemann-hypothesis/

September 24, 2018 8:32 pm

The “one more lemma” line occurs in the short story “The Devil and Simon Flagg,” by Arthur Porges, which centers on FLT (“Whose last what?” asked the Devil weakly), not Riemann. If Atiyah is right, we might have to substitute P=NP in the tale now.

September 25, 2018 10:05 am

Atiyah must have read my paper since Todd function would be T = 1/lnP^2^n , P is prime