# Preview of the Atiyah Talk

*Why the Riemann hypothesis is hard and some other observations.*

ICM 2018 “Matchmaking” source

Michael Atiyah, as we previously posted, claims to have a proof that the Riemann Hypothesis (RH) is true. In the second half of a 9:00–10:30am session (3:00–4:30am Eastern time), he will unveil his claim.

Today we discuss what the RH is and why it is hard.

Of course we hope that it **was** hard and now is easy. If Atiyah is correct, and we at GLL hope that he is, then the RH becomes an “easy” problem.

**Update 9/24, 8am:** The website Aperiodical, whose post on RH is mentioned below, has a Twitter thread on the talk and another with all the slides. Atiyah has released a short paper, with the main technical work contained in a second paper, “The Fine Structure Constant.” This Reddit thread has Python code for a short computation relevant to the latter. A typo in the main slide is corrected in the paper. The analysis rests on a mapping named for John Todd and used by Friedrich Hirzebruch. It is variously represented as a mapping and as an operator on power series, and in what is evidently the latter form it yields a simple complex function in composition with zeta.

The claim is that this function vanishes for arguments in the critical strip, and that its analyticity then yields the contradiction of its vanishing everywhere. We will defer further comment at this time, but there are opinions in the threads linked above.

## About RH

In its original form, the RH concerns the simple zeta function of one complex variable $s$:

$$\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^s}.$$

Leonhard Euler analyzed this function for $s$ a positive integer and proved that $\zeta(2) = \pi^2/6$. Further values quickly followed for positive even numbers, but the nature of $\zeta(3)$ remained open until Roger Apéry proved it irrational in 1978.
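A quick numerical sanity check (illustrative only, in plain Python): the partial sums of the series approach Euler's value for $\zeta(2)$ and Apéry's constant $\zeta(3)$.

```python
# Partial sums of the zeta series at s = 2 and s = 3, compared with
# Euler's closed form zeta(2) = pi^2/6.  Purely a sanity check.
import math

def zeta_partial(s, terms=100000):
    """Truncated Dirichlet series: sum_{n=1}^{terms} 1/n^s."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(zeta_partial(2), math.pi**2 / 6)  # both near 1.64493...
print(zeta_partial(3))                  # Apery's constant, near 1.20205...
```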

As a function of $s$ the sum converges provided the real part of $s$ is greater than $1$, but for other complex $s$ it is definable by analytic continuation. This yields the surprising—perhaps—values $\zeta(0) = -\tfrac{1}{2}$, $\zeta(-1) = -\tfrac{1}{12}$, and $\zeta(-2) = \zeta(-4) = \cdots = 0$. The latter are the *trivial zeroes* of zeta. In complex analysis, analytic continuation is a method that extends a function—when possible—to more values. This extension is one reason we believe that the RH is hard. The RH is the statement:
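One standard way to see continuation beyond the region of convergence (a textbook device, not the technique of the paper): the alternating eta series converges for real part greater than $0$, and $\zeta(s) = \eta(s)/(1 - 2^{1-s})$. A minimal sketch:

```python
# Continuation via the alternating (eta) series: it converges for
# Re(s) > 0, and zeta(s) = eta(s) / (1 - 2^(1-s)) extends zeta into
# 0 < Re(s) < 1, where the plain sum of 1/n^s diverges.
def zeta_via_eta(s, terms=500000):
    eta = sum((-1.0) ** (n - 1) / n**s for n in range(1, terms + 1))
    return eta / (1.0 - 2.0 ** (1.0 - s))

print(zeta_via_eta(2.0))  # near 1.64493 = pi^2/6
print(zeta_via_eta(0.5))  # near -1.46035, beyond the sum's range
```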

All other zeroes of $\zeta(s)$ have real part $\tfrac{1}{2}$, that is, $s = \tfrac{1}{2} + it$ for some real $t$.

That is to say, there *are* other zeroes besides the trivial ones. The behavior of $\zeta$ near those zeroes is not only complex but universally so. The impact of analyzing that behavior was presaged by Leonhard Euler’s discovery

$$\zeta(s) \;=\; \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}.$$

For intuition, consider how the geometric series $\sum_{k \ge 0} p^{-ks} = \frac{1}{1 - p^{-s}}$ converges for $s > 1$, and picture the product of all these series, one for each prime $p$. Every finite term in the product gives the prime factorization of a unique $n$, and the exponent $-s$ merely carries through.
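This intuition can be checked numerically. The sketch below (illustrative; the sieve and cutoffs are ad hoc choices) compares a truncated Euler product over primes with a truncated sum at $s = 2$:

```python
# Truncated Euler product over primes vs. the truncated Dirichlet sum:
# both approach zeta(2) = pi^2/6, illustrating how the product over
# primes reproduces every n through unique factorization.
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def euler_product(s, prime_bound=10000):
    prod = 1.0
    for p in primes_up_to(prime_bound):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

def zeta_sum(s, terms=10000):
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(euler_product(2), zeta_sum(2))  # both near 1.64493...
```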

The way in which $\zeta$ encodes the “gene sequence of the primes” endures when $s$ is complex. There are many equivalent forms of RH, many having to do with the tightness of upper or lower bounds for things—a staple of analysis but also what we like to talk about in computational complexity theory. One of our favorites is described in this post:

$$M(x) \;=\; \sum_{n \le x} \mu(n) \;=\; O\!\left(x^{1/2+\epsilon}\right) \quad \text{for every } \epsilon > 0.$$

Here $\mu$ is the *Möbius function*, named for the same August Möbius as the strip, and defined by

$$\mu(n) \;=\; \begin{cases} 1 & n \text{ squarefree with an even number of prime factors,} \\ -1 & n \text{ squarefree with an odd number of prime factors,} \\ 0 & n \text{ not squarefree.} \end{cases}$$

In this and other ways, the zeroes of $\zeta$ govern regularities of the distribution of the primes. One rogue zero off the line causes enough ripples to disturb the bound. But the ripples are not tsunamis, and the tight RH bound is demanding a lot: an $M(x) = o(x)$ bound is equivalent both to the Prime Number Theorem (PNT) and to no zero wandering over as far as the line of real part $1$.
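To get a feel for the scale of the bound, here is a small illustrative computation (it proves nothing, of course) of the Möbius function by a sieve and its running Mertens-style sum:

```python
# Sieve for the Mobius function and the Mertens sum M(x) = sum mu(n).
# RH is equivalent to M(x) = O(x^(1/2+eps)); here we just observe
# that M(N) is tiny next to sqrt(N) for one modest N.
def mobius_sieve(n):
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1            # one factor of p
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0              # p^2 divides m: not squarefree
    return mu

N = 100000
mu = mobius_sieve(N)
mertens = sum(mu[1 : N + 1])
print(mertens, N**0.5)  # |M(N)| is far below sqrt(N) = 316.2...
```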

Even wider ramifications emerge from this essay by Alain Connes. Connes, like Atiyah, is a Fields Medalist (1982), and among his later work is an extension of the famous index theorem proved by Atiyah with Isadore Singer. Connes’s essay even goes into the wild topic of structures that behave like “fields of characteristic $1$”—but maybe not so wild, since Boolean algebra, where $1 + 1 = 1$, is a simple and familiar example.

## Why Hard?

We thought we would try to give some intuition for why the RH is hard. Of course, like all open problems, it is hard because we have not yet proved it. Every open problem is hard. But let's try to give some intuition for why it is hard. Enough about “hard.”

Consider any quantity $S$ that is defined by a formula of the summation type:

$$S \;=\; \sum_{n} a_n.$$

Now showing that $S$ is not zero is quite difficult in general. Even if the summation is finite, showing that it does not sum to $0$ is in general a tough problem.

Imagine that $S$ is also equal to a product type:

$$S \;=\; \prod_{n} b_n.$$

Now showing that $S$ is not zero is not so impossible: if the product is a finite one, then you need only show that each factor is not zero. If the product is an infinite one, then it's harder. But there is at least hope.
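A toy example of why the infinite case is subtler: in the product below every factor is nonzero, yet the partial products telescope to $1/N$ and so tend to zero.

```python
# prod_{n=2}^{N} (1 - 1/n) telescopes: (1/2)(2/3)(3/4)...((N-1)/N) = 1/N.
# Every factor is nonzero, yet the infinite product is 0 -- so proving
# nonvanishing needs convergence control, not just nonzero factors.
def partial_product(N):
    prod = 1.0
    for n in range(2, N + 1):
        prod *= 1.0 - 1.0 / n
    return prod

for N in (10, 100, 1000):
    print(N, partial_product(N))  # equals 1/N up to rounding
```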

As above, the Riemann $\zeta$ function is indeed given by a summation as its definition. But it is also given by the product-type Euler formula. This formula is the key to proving even weaker-than-RH bounds on where the Riemann zeroes are located, such as for the PNT.

Back to $\zeta$ as a product. Take logarithms naively and see that we can replace studying $\zeta(s)$ by

$$\log \zeta(s) \;=\; \sum_{p} \log \frac{1}{1 - p^{-s}} \;=\; \sum_{p} \sum_{k=1}^{\infty} \frac{1}{k\, p^{ks}}.$$

This can be made to work. But one must be quite careful when handling the logarithm over the complex numbers. Put simply, the logarithm function is multi-valued: it is only defined up to a multiple of $2\pi i$ and so must be handled very carefully.

This is the same reason that

$$\log(z_1 z_2) \;\ne\; \log z_1 + \log z_2 \quad \text{in general (for the principal branch)}$$

is true. Apparently this phenomenon is one reason that many attempts at the RH have failed. At some point the proof computes two quantities, say $A$ and $B$, and concludes incorrectly that they are equal. But in reality

$$A \;=\; B + 2\pi i,$$

for example. Of course there is no way that this is a mistake that Atiyah would make, but many others have run into this issue.
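A minimal illustration of the trap, using the principal branch supplied by Python's standard cmath module: the two sides differ by exactly $2\pi i$ when the arguments' angles wrap past the branch cut.

```python
# The principal complex logarithm is defined only up to 2*pi*i:
# log(z1*z2) versus log(z1) + log(z2) can differ by exactly 2*pi*i
# when the sum of the arguments' angles leaves (-pi, pi].
import cmath

z1 = complex(-1.0, 0.1)
z2 = complex(-1.0, 0.1)
a = cmath.log(z1 * z2)               # principal log of the product
b = cmath.log(z1) + cmath.log(z2)    # sum of principal logs
print(a - b)                         # imaginary part is -2*pi
```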

It may also help to discuss a product-type formula that is classic:

$$\sin(\pi z) \;=\; \pi z \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right).$$

Note that this converges for **all** values of $z$. This helps one see that the zeroes of the sine function are exactly as you expected: at the integer multiples of $\pi$, that is, at integer $z$ in this scaling. If there were a product formula like this for the zeta function, the RH would be easy. Of course the known product formula for zeta converges only for values of $s$ with real part greater than $1$. Too bad.
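The truncated product is easy to check numerically (a simple illustration; the cutoff $N$ is an arbitrary choice):

```python
# Check the product formula sin(pi*z) = pi*z * prod_{n>=1} (1 - z^2/n^2)
# by truncating the product at n = N and comparing with math.sin.
import math

def sin_product(z, N=100000):
    prod = math.pi * z
    for n in range(1, N + 1):
        prod *= 1.0 - (z * z) / (n * n)
    return prod

z = 0.3
print(sin_product(z), math.sin(math.pi * z))  # both near 0.80902
```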

## Some Observations

On the eve of the lecture, we have not found much more information since the news broke early Thursday. There is a more-visual description of RH by Katie Steckles and Paul Taylor, who are attending the Heidelberg Laureate Forum and will be blogging from there. But we have a couple of meta-observations.

One is to compare with how Andrew Wiles’s June 1993 announcement of a proof of Fermat’s Last Theorem (FLT) was handled. Wiles was given time for three long lectures on three days of a meeting at Cambridge University under the generic title, “Elliptic Curves and Galois Representations.” He kept his cards in his hand during the first two lectures as he built the tools for his proof; the rumors of where it was going ramped up mainly before the third.

His proof had already been gone over in depth by colleagues at Princeton, initially Nicholas Katz, with John Conway helping to keep the process leak-free. Nevertheless, the proof contained a subtle error in an estimate that was not found until Katz posed follow-up questions three months later. Repairing the gap took a year with assistance from Richard Taylor and required a change in the strategy of the proof. We draw the following observations and contrasts:

- We do not know if any of Atiyah’s colleagues at Oxford or elsewhere have read his manuscript. This doesn’t mean anything yet—no one knew about Wiles until the day-of.
- Wiles used an avenue of attack on FLT that had been opened by Ken Ribet and others in 1986 and that already had opened a fruitful web of connections to other areas of analysis and number theory. He proved a newer conjecture that implies FLT. Atiyah’s proof evidently draws on mathematical physics but we have not seen any harbinger or informed speculation of what channels it may use.
- Atiyah has just one 45-minute time slot. This may not be enough time. Wiles’s example shows how a small detail can have wide impact.
- Atiyah gave the Abel Lecture at the International Congress of Mathematicians in Rio de Janeiro just last month. Its title—“The future of mathematical physics: new ideas in old bottles”—may be read as hinting about “new ideas.” However, the talk is almost entirely historical and elementary. We leave it to our audience to read any tea leaves here.

Our final observation—have we said it already?—is that RH is *hard*. How hard was impressed on Ken by a story he recollects as having been told by Bernard Dwork during an undergraduate seminar at Princeton in 1980 on Norman Levinson’s proof that over one-third of the nontrivial zeroes lie on the “critical line” of real part $\tfrac{1}{2}$. Different versions may be found on the Net but the one Ken recalls went approximately this way:

After giving up on a lifetime of prayers to prove Riemann, an already-famous mathematician turned to the other side for help early on a Monday. The usual price was no object, but the Devil said that because of the unusual subject he could not offer the usual same-day service. The contract was drawn up for delivery by Saturday midnight. Projecting his gratitude, the mathematician arranged a private feast for that day and exchanged his usual rumpled clothes for a tailored suit to match the Devil’s dapper figure. The sun set as he poured his wine and kept a roast pig and fine fare on the burner, but there was no sign of his companion. The clock struck 9, 10, 11, and the minute hand swept round the dial. Suddenly, at the first chime of midnight, a sulfurous blast through a window revealed that the Devil had missed his landing point by a dozen yards. The mathematician opened his door and through it staggered an unshaven frazzled figure, horns askew and parchments akimbo, pleading: “I just need one more lemma…”

The moral of the story is the same as in other versions: there are no brilliant mathematicians Down There.

## Open Problems

Well, we will soon see if the RH is still hard or if it is now easy—or on the road to easy. It is rare to have such excitement in our mathematical endeavors. We hope that the talk is sufficiently clear that we will be able to applaud the brilliance of Atiyah. We wish him well.

[Added update at top and made separate section “About RH”; 9:50am changed general “a” in “as” to “2” since 2 is special in the paper; fixed Euler product formula for sines; expanded observations about the Todd function in the intro including using braces in the equation defining ]

The story you relate at the end of the piece is a somewhat modified version of the short story “The Devil and Simon Flagg” by Arthur Porges. I first read it in Clifton Fadiman’s collection “Fantasia Mathematica” when I was in high school. During the ’70s it was made into a short film called “The Mathematician and the Devil” by the Soviet director V. Shuplyakova. The film is available on YouTube.

Thanks! I would guess it likely that Dwork (or whoever) melded in the story. The ending has a different tone, though—the rush and the pleading “one more lemma!” is what I (Ken) remember most distinctly.

> In complex analysis analytic continuation is a method that extends a function—when possible—to more values. This extension is one reason we believe that the RH is hard.

The problem that I have with unrestrictedly drawing conclusions about the behaviour of the primes from the behaviour of zeta using analytic continuation is that the relationship of zeta to the primes is well-defined only for . In other words, if some function is defined only for points along the -axis for , then how can we deduce any behaviour of the function for , where is the speed of light? Obviously the function is well-defined for such values of .

However, any such value cannot describe any property of the signal , since does not exist for .

In case this seems far-fetched, Sections 4 to 6 here analyse a Zeno-type example, where even the theoretical limiting behaviour of an elastic string, under a specified iterative transformation, is not given by the putative Cauchy limit (which can only be described as a misleading mathematical myth) of the function that seeks to describe the iteration.

The significance of this is that Hadamard and de la Vallée Poussin’s proof of the Prime Number Theorem draws conclusions about the behaviour of the prime counting function $\pi(x)$, as $x \to \infty$, from the limiting behaviour of $\zeta(s)$ along the line $\mathrm{Re}(s) = 1$.


There is a mistake in the product formula for sin. There shouldn’t be a pi^2 in the denominator on the right hand side.

Thanks very much!

Some comments:

1) The Riemann zeta function also admits some explicit and globally convergent series representations; see for instance this recent paper of Blagouchine https://arxiv.org/abs/1606.02044 These have perhaps been under-exploited so far and may lead to a proof, but at least it shows that analytic continuation of the usual definition is not the main point.

2) In Bristol last June a conference on RH took place: some slides and videos are available here https://heilbronn.ac.uk/2017/08/08/perspectives-on-the-riemann-hypothesis/

The “one more lemma” line occurs in the short story “The Devil and Simon Flagg,” by Arthur Porges, which centers on FLT (“Whose last what?” asked the Devil weakly), not Riemann. If Atiyah is right, we might have to substitute P=NP in the tale now.

Atiyah must have read my paper since Todd function would be T = 1/lnP^2^n , P is prime

1 = W/s , s is roots of zeta

😦 after dedicating a lot of time to this, quite a bit on reddit/stackexchange etc., I realize the consensus among math experts (though many/most refuse to state this on the record) is that Atiyah's recent writing is outside his speciality and not fully coherent. yes, even for a math wizard, there are limitations. ageing can have a brutal and tragic side; it spares no man. it is like entropy/2nd-law-of-thermodynamics decay in biological/human form. his physics work lately seems to be veering into quixotic territory to say the least, favorite areas of science-breakthrough dreamers (“deriving” physics constants like the fine structure constant etc.). I feel that some ethical questions come into play, and rather than confronting them head on, the community chooses instead to insert its head in the sand, so to speak. am planning to blog on this but it will be with a somewhat heavy heart. one might say Atiyah is going down with his ship, valiantly fighting until the end. there is some bittersweet honor in that…

… postscript, aka seems his own “crew” has now jumped ship/ deserted the captain… 😥