Triple Century Post
Does continuous mathematics impact discrete complexity?
Godfrey Harold Hardy was an avid fan of cricket, besides being one of the 20th Century’s most influential mathematicians. During cricket season at Oxford and Cambridge, he would work on mathematics in the morning, and then spend a long afternoon watching a match at the university cricket ground.
Today is this blog’s 300th post. In cricket terms this means we have scored a triple century and are still not out. Until recently a score in the 300s was the highest by any individual batsman in a Test Match. Brian Lara, however, scored 400 not out for the West Indies against England in 2004. This gives us our next objective.
Cricket was invented in England but has gone global. For instance, cricket is the true national sport of India’s 1.2 billion inhabitants, and of some 355 million more in neighboring Pakistan, Bangladesh, and Sri Lanka. This puts baseball, the subject of our last post, in some perspective.
Test Matches run for five days, with three two-hour sessions per day. Whereas baseball games have nine discrete innings of three outs apiece, and last only a few hours, Test Matches have two innings of ten outs apiece, and thus feel continuous. Drawing on some of Hardy’s work, we discuss whether continuous mathematics may have any vital impact on $\mathsf{P} = \mathsf{NP}$, even though that question belongs to discrete mathematics.
Hardy and Cricket
Hardy himself was a good cricket batsman in his youth, and played various sports actively until a heart problem in 1939 at age 62. He also needed to be outdoors for sunshine, especially in winter. In his late forties he ranked the accomplishments he most desired to achieve in this order:
- Prove the Riemann Hypothesis.
- Make a match-winning score of 211 not-out in the last innings at The (Kennington) Oval, which traditionally hosts the last Test of a major series in England.
- Find an argument for the non-existence of God that would convince the general public.
- Be the first to climb Mount Everest.
- Lead a communist unified Britain and Germany.
- Assassinate Benito Mussolini.
Note that he did not ask for a triple century. At the time no one had scored one. The record then was 287 at the Sydney Cricket Ground by England’s Tip Foster in December 1903, while the previous record had been 211 by Australia’s Billy Murdoch at The Oval in 1884. The number 211 is the first prime after 200, and being not-out (written 211*) would make it strictly better than Murdoch’s score.
Later the man billed as the “Babe Ruth of cricket,” Sir Donald Bradman, would hold the individual Test batting record by scoring 334 for Australia against England at Leeds in 1930. Hardy’s own reaction to Bradman’s feats, compared to England’s “Master” Sir John Hobbs, was:
Bradman is a whole class above any batsman who has ever lived: if Archimedes, Newton and Gauss remain in the Hobbs class, I have to admit the possibility of a class above them, which I find difficult to imagine. They had better be moved from now on into the Bradman class.
Bradman remained active in cricket administration and promotion and commentating well into the 1990s. I (Ken) remember this version of a story that has become subject to Internet “phone tag”: In 1992 or 1993 as a guest on a Test telecast he was asked how much he thought he would score on a wicket that looked benign but had batsmen getting themselves out cheaply. He replied, “about 50.” Commentator: “Only 50?” Bradman: “Well, I am eighty-four years old!”
Hardy-Littlewood and Notation
The computer science Donald in the Bradman class is of course Donald Knuth. Knuth credited Hardy and his longtime collaborator John E. Littlewood with introducing the “Big-$\Omega$” asymptotic notation, though with the different meaning $f(x) = \Omega(g(x)) \iff \limsup_{x \to \infty} |f(x)|/g(x) > 0$. The difference from the currently accepted meaning, that $f(x) \geq C g(x)$ for some constant $C > 0$ and all sufficiently large $x$, is that $f$ could oscillate. For instance, if $f(x) = x \sin(x)$, then $f$ would meet Hardy-Littlewood’s definition of being $\Omega(x)$ but not Knuth’s now-standard one.
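To make the two definitions concrete: Hardy-Littlewood’s $\Omega(x)$ asks only that $|f(x)|/x$ exceed some constant infinitely often, while Knuth’s asks for a bound $f(x) \geq Cx$ at all large $x$. Here is a minimal numeric sketch using $f(x) = x \sin(x)$ (our sampling points, chosen for illustration):

```python
import math

def f(x):
    return x * math.sin(x)

# Near x = pi/2 + 2*pi*k, sin(x) is about 1, so f(x) >= x/2 there.
# These points witness limsup f(x)/x > 0: Hardy-Littlewood's Omega(x).
peaks = [math.pi / 2 + 2 * math.pi * k for k in range(1, 1000)]
assert all(f(x) >= 0.5 * x for x in peaks)

# Near x = pi*k, sin(x) is about 0 (and negative just beyond), so no
# bound f(x) >= C*x can hold for all large x: Knuth's Omega(x) fails.
troughs = [math.pi * k for k in range(1, 1000)]
assert all(f(x) <= 0.001 * x for x in troughs)

print("f is Omega(x) for Hardy-Littlewood but not for Knuth")
```

The same oscillation trick, applied to both functions in a comparison, is what breaks trichotomy in the next paragraph.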
One can similarly use trigonometric functions to define $f$ and $g$ for which none of $f = o(g)$, $f = \Theta(g)$, and $g = o(f)$ hold, indeed neither $f = O(g)$ nor $g = O(f)$. This is summarized by saying that “trichotomy fails” for asymptotic comparisons. However, Hardy and Littlewood proved that if one limits attention to real functions defined by arithmetic, exponentiation, and logarithms, then trichotomy does hold. Here is a “modern computer science-y” way of stating their result:
Let $f$ and $g$ be positive functions of $x$ defined recursively from constants $c$ and the variable $x$ in the grammar:

$$t ::= c \mid x \mid t_1 + t_2 \mid t_1 - t_2 \mid t_1 \cdot t_2 \mid t_1 / t_2 \mid e^{t_1} \mid \ln t_1.$$

Then either $f = o(g)$, $f = \Theta(g)$, or $g = o(f)$.
This says that almost all of the functions we use to define complexity classes fall into a nice linear ordering. Thus for these so-called “log-exp functions” or “Hardy-Littlewood” functions, there is no difference between Hardy-Littlewood’s definition of $\Omega$ and Knuth’s.
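The trichotomy can even be checked mechanically for particular log-exp functions: the ratio $f/g$ always has a limit (possibly infinite), so a computer algebra system classifies any pair. A sketch, assuming SymPy is available (the helper `compare` is ours):

```python
from sympy import symbols, log, exp, limit, oo

x = symbols('x', positive=True)

def compare(f, g):
    """Classify f vs g by the limit of f/g as x -> oo.
    For log-exp functions this limit always exists (possibly
    infinite), which is exactly why trichotomy holds for them."""
    L = limit(f / g, x, oo)
    if L == 0:
        return "f = o(g)"
    if L == oo:
        return "g = o(f)"
    return "f = Theta(g)"

assert compare(x * log(x), x**2) == "f = o(g)"
assert compare(3 * x + log(x), x) == "f = Theta(g)"
assert compare(exp(x), x**100) == "g = o(f)"
```

Note that `compare` would loop or misclassify on oscillating functions like $x\sin x$, where the limit does not exist; restricting the grammar is what rescues it.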
We would like to find a nice, accessible proof of this theorem. As an aside, Dick once used this theorem with his co-author Rich DeMillo in a paper on independence issues surrounding the $\mathsf{P} = \mathsf{NP}$ question.
From Analysis to Primes
The Riemann Hypothesis itself is stated for continuous functions but has major implications for prime numbers, which are discrete. The Prime Number Theorem (PNT) was first proved via continuous analysis, before an “elementary” proof was found. The PNT states that the number $\pi(x)$ of prime natural numbers below the real number $x$ satisfies $\pi(x) \sim x/\ln x$.
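A simple sieve makes the PNT tangible; even at $x = 10^5$ the ratio $\pi(x)/(x/\ln x)$ is still about 10 percent above 1, since the convergence is famously slow:

```python
import math

def primes_up_to(x):
    """Sieve of Eratosthenes: return the list of primes <= x."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [n for n in range(x + 1) if sieve[n]]

x = 100_000
pi_x = len(primes_up_to(x))
print(pi_x, x / math.log(x))   # pi(10^5) = 9592 vs estimate ~8686
assert pi_x == 9592
assert 1.0 < pi_x / (x / math.log(x)) < 1.2
```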
Hardy and Littlewood themselves made two notable conjectures about the distribution of prime numbers. For the first, call a set $H = \{a_1, \dots, a_k\}$ of distinct natural numbers “nice” if for every prime number $p$, the number of distinct values of $a_i$ mod $p$ for $1 \leq i \leq k$ is at most $p - 1$. Define

$$\pi_H(x) = \left|\{\, n \leq x : n + a \text{ is prime for every } a \in H \,\}\right|.$$

Then when $H = \{0\}$, $\pi_H(x)$ is just $\pi(x)$. Thus the following conjecture generalizes the PNT:

For every nice $H$ of size $k$, $\pi_H(x) \sim C_H \, x/(\ln x)^k$, where $C_H > 0$ is a constant depending only on $H$.
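Niceness is easy to test: a prime $p$ greater than $|H|$ can never cover all $p$ residue classes, so only finitely many primes need checking. A minimal sketch (the helper names `is_nice` and `pi_H` are ours):

```python
def is_nice(H):
    """H is nice if for every prime p, the residues {a mod p : a in H}
    miss at least one class mod p.  Only primes p <= len(H) can
    possibly hit all p classes, so a finite check suffices."""
    for p in range(2, len(H) + 1):
        if all(p % q for q in range(2, p)):          # p is prime
            if len({a % p for a in H}) == p:         # every class hit
                return False
    return True

assert is_nice({0, 2})         # the twin-primes pattern
assert is_nice({0, 2, 6})      # a prime-triplet pattern
assert not is_nice({0, 2, 4})  # mod 3 the residues are 0, 2, 1: all

def pi_H(H, x):
    """Count n <= x with n + a prime for every a in H."""
    def is_prime(n):
        return n > 1 and all(n % q for q in range(2, int(n**0.5) + 1))
    return sum(1 for n in range(1, x + 1)
               if all(is_prime(n + a) for a in H))

assert pi_H({0}, 100) == 25    # just pi(100)
```

The failure of $\{0,2,4\}$ shows why the residue condition is needed: one of $n$, $n+2$, $n+4$ is always divisible by 3, so only finitely many such triples can be all prime.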
The second conjecture is simpler: it is just that for any real numbers $x$ and $y$, there are always no more primes from $x + 1$ to $x + y$ than from $1$ to $y$. In symbols, this says

For all natural numbers $x, y \geq 2$, $\pi(x + y) \leq \pi(x) + \pi(y)$.
This seems obvious, since the sequence of prime numbers thins out. However, it contradicts certain cases of the first conjecture: there are nice sets $H \subseteq \{1, \dots, y\}$ whose size exceeds $\pi(y)$, so the first conjecture would give intervals $(x, x + y]$ containing more than $\pi(y)$ primes. It seems odd that Hardy-Littlewood would have advanced two mutually contradictory conjectures, but perhaps one came from Hardy while the other came from Littlewood.
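The second conjecture does hold as far as anyone has computed; any counterexample is expected to be astronomically large. A brute-force check of $\pi(x+y) \leq \pi(x) + \pi(y)$ over a small range (a sketch, not evidence of much):

```python
def pi(x):
    """Count primes <= x by trial division (fine for small x)."""
    def is_prime(n):
        return n > 1 and all(n % q for q in range(2, int(n**0.5) + 1))
    return sum(1 for n in range(2, x + 1) if is_prime(n))

N = 300
pis = [pi(x) for x in range(2 * N + 1)]  # precompute pi(0)..pi(600)

# Search for any violation of pi(x+y) <= pi(x) + pi(y).
violations = [(x, y) for x in range(2, N + 1) for y in range(2, N + 1)
              if pis[x + y] > pis[x] + pis[y]]
assert violations == []
```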
From Primes to Complexity
Further versions of the Riemann Hypothesis have even more direct implications in computational complexity. The generalized Riemann hypothesis (GRH) states that for any group character $\chi$ on $\mathbb{Z}_k^*$, the corresponding complex $L$-function

$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}$$

has all of its nontrivial zeroes on the line $\mathrm{Re}(s) = \frac{1}{2}$. Here $s$ is a complex variable, $\chi$ is extended to all natural numbers by setting $\chi(n) = 0$ when $\gcd(n, k) > 1$, and the series converges for $\mathrm{Re}(s) > 1$. (We have talked about group characters on this blog before. GRH is different from the extended Riemann hypothesis (ERH), which involves zeta functions over number fields.)
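For a concrete instance, the nontrivial character mod 4 takes values $0, 1, 0, -1$ on $n \equiv 0, 1, 2, 3$, and its $L$-function at $s = 1$ is Leibniz’s series $1 - \tfrac{1}{3} + \tfrac{1}{5} - \cdots = \pi/4$:

```python
import math

def chi_mod4(n):
    """The nontrivial Dirichlet character mod 4."""
    return (0, 1, 0, -1)[n % 4]

# Partial sum of L(1, chi) = sum chi(n)/n.  The series is alternating,
# so the error after terms up to N is at most about 1/N.
N = 1_000_000
L1 = sum(chi_mod4(n) / n for n in range(1, N + 1))
print(L1, math.pi / 4)
assert abs(L1 - math.pi / 4) < 1e-5
```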
Pascal Koiran proved (note erratum) that GRH places the problem of whether a set of polynomial equations has a solution over the complex numbers into the second level of the polynomial hierarchy, indeed into the Arthur-Merlin class $\mathsf{AM}$.
Koiran’s main GRH-dependent technique intuitively transfers solvability over $\mathbb{C}$ into statements about solvability modulo many primes $p$. Let $L$ bound the binary length of coefficients in the finite set of $m$-many polynomial equations in $n$-many variables, and let $d$ bound their degree. Set $x = 2^{\mathrm{poly}(L, n, m, \log d)}$, a singly-exponential bound on the primes considered.
Theorem 2 For any finite set of polynomial equations as above there exist bounds $A$ and $B$, with $B$ superpolynomially larger than $A$, such that:

- if the equations are not solvable over $\mathbb{C}$, then the number of primes $p$ below the bound just set such that they are solvable over $\mathbb{Z}_p$ is at most $A$;
- if they are solvable over $\mathbb{C}$, then the number of primes $p$ below the bound just set such that they are solvable over $\mathbb{Z}_p$ is at least $B$.
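A toy instance shows the density-gap phenomenon. The equation $x^2 + 1 = 0$ is solvable over $\mathbb{C}$, and is solvable mod $p$ for about half of all primes (those with $p \equiv 1 \bmod 4$, plus $p = 2$); the contradictory system $\{x = 0,\ x - 1 = 0\}$ is solvable over no $\mathbb{Z}_p$ at all. Sampling a random prime and testing solvability mod $p$ thus separates the two cases, which is the engine of the $\mathsf{AM}$ protocol (our illustration, not Koiran’s actual parameters):

```python
def primes_up_to(x):
    return [n for n in range(2, x + 1)
            if all(n % q for q in range(2, int(n**0.5) + 1))]

def solvable_mod_p(poly, p):
    """Brute force: does poly(a) = 0 (mod p) for some a?"""
    return any(poly(a) % p == 0 for a in range(p))

ps = primes_up_to(2000)

# Solvable over C: x^2 + 1 = 0 works mod roughly half of all primes.
good = sum(1 for p in ps if solvable_mod_p(lambda a: a * a + 1, p))
assert 0.4 < good / len(ps) < 0.6

# Unsolvable over C: x = 0 and x - 1 = 0 hold jointly mod no prime.
bad = sum(1 for p in ps
          if any(a % p == 0 and (a - 1) % p == 0 for a in range(p)))
assert bad == 0
```

The real theorem needs GRH precisely to guarantee that “many primes” means a large enough fraction in an efficiently sampleable range.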
From Continuous to Discrete
We view Koiran’s theorem as a way of translating properties of continuous mathematics into discrete domains. The theory of poles and winding numbers that is basic to complex analysis is one of the ways to connect the discrete and the continuous, while Knuth himself co-authored the book Concrete Mathematics which marries them. Three other avenues by which people are applying tools from continuous mathematics to do research on computational complexity are:
- Ketan Mulmuley’s program of Geometric Complexity Theory.
- The theory of computing over the reals, jump-started by Lenore Blum, Michael Shub, and Stephen Smale. This survey by Mark Braverman and Stephen Cook gives a brief introduction and outlines a competing (older) view.
- The theory of information-based complexity originated by Joseph Traub and Henryk Woźniakowski.
We would be interested to know of other such avenues and opinions about them. There is also some relation to the subject of “Algebrization”.
Is it cricket to try to get discrete-complexity consequences out of continuous mathematics?
Are there other striking applications of Koiran’s density-gap phenomenon?
What would Hardy have accomplished in computational complexity?
[fixed typo in L-function, clarified Braverman-Cook reference, and added BSS paper]