When has a strikingly simple proof come first?

 Cropped from London Times 2017 source

Michael Atiyah is giving a lecture next Monday morning at the Heidelberg Laureate Forum (HLF). It is titled, simply, “The Riemann Hypothesis.” An unsourced image of his abstract says what his title does not: that he is claiming not only a proof of Riemann but a “simple proof using a radically new approach.”

Today we discuss cases where theorems had radically simpler proofs than were first contemplated.

Sir Michael’s talk is at 9:45am Central European time, which is 3:45am Eastern. It will be live-streamed from the HLF site and will appear later on HLF’s YouTube channel. We have not found any other hard information. The titles of all talks are clickable on the program, but abstracts are not yet posted there.

Preceding him from 9:00 to 9:45am, the opening talk of the week’s proceedings, is John Hopcroft whom we know so well. If you’ve heard of the expression “a hard act to follow,” John has the opposite challenge. He is speaking on “An Introduction to AI and Deep Learning.”

Although all speakers are past winners of major prizes—Abel, Fields, Nevanlinna, Turing, and the ACM Prize in Computing—the forum is oriented forward to inspire young researchers. Our very own graduate Michael Wehar received support to attend last year, and one of this year’s young attendees is John Urschel, whom we profiled here.

## Simpler Proofs

Naturally, in a blog named for Kurt Gödel we should lead with him as an example. There is an often-overlooked component to the title of his famous paper on incompleteness theorems in logic:

Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I.

The component does not need to be translated. It is the “I” at the end. It does not mean “System 1”—it means that this is Part One of a longer intended paper. Gödel wrote the following at the very end on page 26—OK, now we will translate:

In this work we’ve essentially limited attention to System ${P}$ and have only hinted at how it applies to other systems. The results in full generality will be enunciated and proved in the next installment. That work will also give a fully detailed presentation of Theorem 11, whose proof was only sketched here.

Gödel’s “Part II” was slated to stretch over 100 pages. It never appeared because the essence of Gödel’s argument was perceived and accepted quickly, details notwithstanding. Once Alan Turing’s theory of computation emerged six years later, it became possible to convey the essence in just one page:

1. There are explicitly-defined sets ${D}$ that are not computably enumerable.

2. For any effective formal system, the set ${E}$ of cases ${x}$ such that it proves “${x \in D}$” is computably enumerable.

3. Hence ${E}$ cannot equal ${D}$.

4. If ${E}$ contains a case ${y}$ that is not in ${D}$, then the system proves “${y \in D}$” but that is false.

5. Thus if the system is sound, ${E}$ must be a proper subset of ${D}$.

6. Then any ${z \in D \setminus E}$ yields a statement “${z \in D}$” that is true but not provable in the system.

The condition that the system is sound is technically stronger than needed, but Gödel didn’t optimize his conditions either—Barkley Rosser found a simple trick by which it suffices that the system be consistent. The Turing-based rollout can be optimized similarly. For one, we can define ${D}$ in such a way that the falseness of “${y \in D}$” has a finite demonstration, and can specify “effective” so that exceptional cases ${z}$ are computed. For a fully sharp treatment see pages 34 and 35 of these notes by Arun Debray—scribing Ryan Williams—which I am using this term.
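The diagonalization behind step 1 can be sketched concretely. The following toy Python script is our own illustration (the finite enumeration and function names are invented for the example): given any listing of sets ${W_0, W_1, \ldots}$, the diagonal set ${D = \{i : i \notin W_i\}}$ disagrees with each ${W_i}$ at index ${i}$, so ${D}$ cannot appear in the listing. With an effective enumeration of all computably enumerable sets in place of the toy one, this is exactly why ${D}$ is not computably enumerable.

```python
# Toy diagonalization: D = { i : i not in W_i } differs from every W_i.
# The enumeration below is a hypothetical finite stand-in for an
# effective enumeration of computably enumerable sets.

def make_toy_enumeration():
    return [
        {0, 2, 4},    # W_0
        {1, 3},       # W_1
        set(),        # W_2
        {0, 1, 2, 3}  # W_3
    ]

def diagonal(W):
    # D contains index i exactly when i is absent from W_i.
    return {i for i in range(len(W)) if i not in W[i]}

W = make_toy_enumeration()
D = diagonal(W)

# D disagrees with each W_i at position i, so D is none of the W_i.
for i, Wi in enumerate(W):
    assert (i in D) != (i in Wi)
print(D)  # -> {2}
```

The same one-line disagreement check is the whole content of step 3 above: since ${E}$ is one of the ${W_i}$ and ${D}$ is not, ${E \neq D}$.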

The relevant point of comparison is that Gödel’s essence lay unperceived for decades while the leading lights believed in the full mechanizability of mathematics. Had it slept five years more, the lectures by Max Newman that Turing attended in the spring of 1935 might have ended not with Gödel but with more on the effort to build computing machines—which Newman later joined. See how David Hilbert, more than Gödel, animates Andrew Hodges’s description of Turing’s machines on pages 96–107 of his famous biography, and see what Hodges writes in his second paragraph here. The lack of Gödel might have held Alonzo Church back more.

Then Turing would have burst out with a “simple” refutation of Hilbert’s programme via “a radically new approach.” Equating “simple” with “one page” as above is unfair—Turing computability needs development that includes Gödel’s pioneering idea of encoding programs and logical formulas. But the analogy with Atiyah still works insofar as his use of “simple” is evidently based on the following works:

• A 1954 paper on algebraic varieties by Friedrich Hirzebruch;

• Paul Dirac’s seminal 1928 paper, “The Quantum Theory of the Electron”; and

• A 1936 paper by John von Neumann, “On an algebraic generalization of the quantum mechanical formalism (Part I).” (This “Part I” was actually a “Part II” to an earlier paper by von Neumann with two others.)

We have on several occasions noted how methods originating in quantum computing have gained wide application in areas of complexity theory that seem wholly apart from quantum. The quantum flow may run even deeper than anyone has thought.

## Other Examples

${\bullet}$ Abel-Ruffini Theorem. Whether fifth-degree polynomial equations have a closed-form solution had been open far longer than Riemann has been. Paolo Ruffini in 1799 circulated a proof manuscript that ran over 500 pages and yet was incomplete. Niels Abel, whom we just mentioned, in 1824 gave a full sketch in only six pages, which were later expanded to twenty. This was still six years before the nature of the impossibility was enduringly expounded by Évariste Galois.

${\bullet}$ Lasker-Noether Theorem. Emanuel Lasker was not only the world chess champion but also a PhD graduate in 1900 of Max Noether, who was Emmy Noether’s father. In 1905, Lasker published a 97-page paper proving the primary decomposition theorem for polynomial ideals. Sixteen years later, Emmy Noether proved a more powerful theorem in a paper of 43 pages. But as Wikipedia’s list of long proofs states,

Lasker’s [proof] has since been simplified: modern proofs are less than a page long.
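For concreteness, here is a standard textbook instance of primary decomposition—our illustration, drawn from neither Lasker’s nor Noether’s paper:

```latex
% In the polynomial ring k[x,y], the ideal (x^2, xy) is not primary,
% but it decomposes as an intersection of primary ideals:
(x^2,\, xy) \;=\; (x) \,\cap\, (x, y)^2
% Here (x) is prime (hence primary), and (x,y)^2 is primary
% to the maximal ideal (x,y).
```

The Lasker–Noether theorem guarantees such a decomposition for every ideal in a Noetherian ring.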

We invite readers to suggest further favorite examples. This MathOverflow thread lists many cases of finding shorter proofs, but we want to distinguish cases where the later simpler proof ushered in a powerful new theory.

The flip side is when the simpler proof comes first. These are harder to judge especially in regard to whether longer “elementary” methods could have succeeded. The Prime Number Theorem could be regarded as an example in that the original analytic proofs are elegant and came with the great 19th-century tide of using analysis on problems in number theory.

## Open Problems

Of course the main open problem will be whether Sir Michael’s claims are correct. Even if he walks them back to saying just that he has a new approach, its viability will merit further investigation.

Coincidentally, we note today’s—now yesterday’s—Quanta Magazine article on mounting doubt about the correctness of Shinichi Mochizuki’s claimed 500-page proof of the ABC Conjecture. The challenge comes from Peter Scholze of Bonn and Jacob Stix of Frankfurt. Luboš Motl today has coverage of both stories. Scholze, a newly minted Fields Medalist, will also attend in Heidelberg next week.

Updates 9/21: Stories by IFLScience! (which I neglected to link above) and by the New Scientist. The latter has new quotes by Atiyah. John Cook has more details about both Riemann and ABC. A MathOverflow thread has been started.

September 21, 2018 7:24 am

Later proofs of Jordan Curve Theorem supposedly were simpler than the original one.

September 21, 2018 11:50 pm

Huge mistake, zeros all lie on the zero line!!

September 22, 2018 1:31 pm

The condition that the system is sound is technically stronger than needed, but Gödel didn’t optimize his conditions either—Barkley Rosser found a simple trick by which it suffices that the system be consistent.

Perhaps it’s worth reiterating that Barkley Rosser’s ‘simple trick’ is actually a very subtle illusion!

Reason: Rosser’s proof appeals to his Rule C, which is equivalent to Gödel’s $\omega$-consistency.

See Section 15.6 on p.112 of Chapter 15 of this thesis.

Moreover, by Corollary 8.2 on p.12 of this paper in the December 2016 issue of Cognitive Systems Research, both are extraneous assumptions which imply that any formal first-order arithmetic that admits them is not sound.

September 22, 2018 3:12 pm

As to Turing vis à vis Gödel and Hilbert; Turing implicitly proved, but sadly did not realise and claim, the argument that:

1. There are explicitly-defined number-theoretic Boolean (Halting) functions $F(x)$ that are not computably enumerable as true; if we define a Boolean number-theoretical function as enumerably true if, and only if, it is computably enumerable as, say, 1 for all $n$.

2. There is an explicitly-defined number-theoretic Boolean (Gödelian) function $R(x)$ such that $R(x)$ is not enumerably true; but $R(n)$ is enumerably true for any specified $n$. This was the first—thitherto unsuspected, and really significant—part of Theorem VI in Gödel’s seminal 1931 paper on ‘formally undecidable’ arithmetical propositions.

3. Expressed otherwise, there is an explicitly-defined number-theoretic Boolean function $R(x)$ such that $R(x)$ is verifiably true but not enumerably true; if we define a Boolean number-theoretical function $F(x)$ as verifiably true if, and only if, $F(n)$ is enumerably true for any specified $n$.

4. Thus, Turing’s reasoning—in his seminal 1936 paper on computable numbers—implicitly provided evidence-based definitions of arithmetical truth/falsity algorithmically in two—essentially different, and thitherto unsuspected—ways (‘evidence-based definitions’ in the sense that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic).

5. Had he not attempted to view his work as essentially reflecting, and affirming, the conclusions drawn by Gödel in his seminal 1931 paper, Turing probably would have realised that the axioms and rules of inference of the first-order Peano Arithmetic PA are verifiably true, and preserve such truth, under the (weak) standard interpretation of PA; whilst the axioms and rules of inference of PA are enumerably true, and preserve such truth, under a (strong) finitary interpretation of PA as had been sought by Hilbert in the second of his 1900 problems (ergo, both interpretations establish that PA is sound).

September 22, 2018 3:24 pm

(Some typos corrected)

As to Turing vis à vis Gödel and Hilbert; Turing implicitly proved, but sadly did not realise and claim, the argument that:

1. There are explicitly-defined number-theoretic Boolean (Halting) functions $F(x)$ that are not enumerable as true; if we define a Boolean number-theoretical function $F(x)$ as enumerably true if, and only if, it is computably enumerable as, say, 1 for all $n$.

2. There is an explicitly-defined number-theoretic Boolean (Gödelian) function $R(x)$ such that $R(x)$ is not enumerably true; but $R(n)$ is enumerably true for any specified $n$. This was the first—thitherto unsuspected, and really significant—part of Theorem VI in Gödel’s seminal 1931 paper on ‘formally undecidable’ arithmetical propositions.

3. Expressed otherwise, there is an explicitly-defined number-theoretic Boolean function $R(x)$ such that $R(x)$ is verifiably true but not enumerably true; if we define a Boolean number-theoretical function $F(x)$ as verifiably true if, and only if, $F(n)$ is enumerably true for any specified $n$.

4. Thus, Turing’s reasoning—in his seminal 1936 paper on computable numbers—implicitly provided evidence-based definitions of arithmetical truth/falsity algorithmically in two—essentially different, and thitherto unsuspected—ways (‘evidence-based definitions’ in the sense that one can view the values of a simple functional language as specifying evidence for propositions in a constructive logic).

5. Had he not attempted to view his work as essentially reflecting, and affirming, the conclusions drawn by Gödel in his seminal 1931 paper, Turing probably would have realised that the axioms and rules of inference of the first-order Peano Arithmetic PA are verifiably true, and preserve such truth, under the (weak) standard interpretation of PA; whilst the axioms and rules of inference of PA are enumerably true, and preserve such truth, under a (strong) finitary interpretation of PA as had been sought by Hilbert in the second of his 1900 problems (ergo, both interpretations establish that PA is sound).

September 23, 2018 6:48 am

The Turing-based rollout can be optimized similarly. For one, we can define $D$ in such a way that the falseness of “$y \in D$” has a finite demonstration, and can specify “effective” so that exceptional cases $z$ are computed.

Undoubtedly. Moreover, had Turing realised that his reasoning implicitly defined, and differentiated between, verifiable arithmetical truth and enumerable arithmetical truth, he would probably have defined ‘effective computability’ in terms of verifiable arithmetical truth; which immediately falsifies the (CT) thesis that he actually proposed in terms of enumerable arithmetical truth (as detailed in Chapter 12 of this thesis).

October 17, 2018 10:05 pm

I just saw this now! I actually attended HLF in 2017. It was a really neat experience and I highly recommend interested graduate students and post-docs to apply. 🙂