
Goldbach: A Curious Conjecture

November 11, 2019


Models of the primes

[ Montreal ]

Andrew Granville writes brilliant papers that explain hard results in number theory. He also proves hard results in number theory.

Today, Ken and I use the famous Goldbach conjecture to discuss a third rail: how to identify which results “should be” true even though they have been too hard to prove.

Granville has just recently published a graphic novel, Prime Suspects: The Anatomy of Integers and Permutations, with his sister Jennifer Granville and illustrator Robert Lewis. It features constables named Green and Tao, a detective Jack von Neumann, and students named Emmy Germain and Sergei Langer among other allusions to real (and complex) mathematicians. It grew out of a 2009 musical play that premiered at IAS.

The driver of the plot is a deep connection between the frequency of primes below a number {x} and that of permutations in {S_N} that are “prime” in the sense of having only one cycle. Substituting {\log(x)} for {N} and vice-versa tends to create correspondences between known results in number theory and their counterparts in permutation group theory. See this MAA review. Going beyond the known theorems described in the novel, we wonder how far such heuristic sleuthing methods can go on long-unsolved cases.

The Goldbach Reminder

The statement we know as the (Strong) Goldbach Conjecture is that every even number {n \ge 4} can be written as the sum of two prime numbers. It was made in 1742 by Christian Goldbach. He wrote to Leonhard Euler:

“Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as desired, until all terms are {1}.”

Well, that is not what we call Goldbach’s conjecture. He and many others at the time considered {1} to be a prime number. What he’s getting at can be seen from this snippet of his letter:

Above his sums, Goldbach put an asterisk pointing to a note in the margin, in which he asserts his conjecture:

“Every integer greater than 2 can be written as the sum of three primes.”

Wait—that is not the Goldbach conjecture either. It is the “Weak” one and was apparently proved in 2013 by Harald Helfgott. We discussed this in a 2014 post whose larger theme we are continuing here. It also implies the first conjecture, but not the strong conjecture.

But what Goldbach seems to be driving at with his drawing of sums is having one of the “primes” be {1}. Then the strong conjecture is needed. Euler pointed this out in his reply to Goldbach’s letter. But Euler, who was a saint in many ways, charitably reminded Goldbach of a communication earlier that year when Goldbach had observed that his first conjecture followed from the strong statement. Euler went on to say:

“That every even number should be a sum of two primes I hold to be a completely certain theorem, irrespective of my not being able to prove it.”

Ken has translated this a little differently from Wikipedia’s article and its source, reading into Euler’s words the stance of truth shining apart from proof. How one can justify this stance is what we want to discuss.

A Curious Conjecture

The conjecture is curious on several fronts. For one, it is usually said to be “obviously correct.” It has been checked by computation up to about {10^{18}}. There are many open conjectures in number theory that are likely to be true, but few are claimed to be “true” with such a strong bias; none is held as likely as the Goldbach.

In 1975, Hugh Montgomery and Robert Vaughan proved that the Goldbach conjecture is true for most even numbers. That is, the number of even numbers less than {N} that are not sums of two primes grows like {o(N)}. Thus if one picks a random even number {x}, it is likely to be the sum of two primes. Here the “likely” is a mathematical certainty.
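
As a toy illustration of checking Goldbach by computation (our own minimal sketch in Python; the real verifications near {10^{18}} use far more sophisticated methods), one can sieve primes and confirm that every even number in a small range has a representation:

```python
# A toy check of Goldbach in a small range -- not remotely how the
# ~10^18 computations were organized, just an illustration.
N = 10_000
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        for j in range(i * i, N + 1, i):
            sieve[j] = False

for n in range(4, N + 1, 2):
    # r(n) = number of ways n = p + q with primes p <= q
    r = sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])
    assert r > 0, f"Goldbach would fail at {n}"
print(f"every even number in [4, {N}] is a sum of two primes")
```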

How do we “know” that it is likely to be true? One source is the method of prime models. Primes are quite mysterious and hard to understand. So there are heuristic models that suggest we think of the primes as “random”. Of course this is silly, since the primes are a deterministic fixed sequence of numbers. But the hope is that the following is true.

If {\Gamma} is a statement about the primes that holds with high probability in the random model, then it is true.

Of course this is nonsense.

But it is interesting nonsense. Harald Cramér has a model that is simple. Granville added some refinements to this model here and here. More recently, William Banks, Kevin Ford, and Terence Tao have given a new model for the primes here.

These models are useful for making and thinking about number-theory conjectures. Perhaps one day they will really be usable to determine truth. They are certainly good heuristics to have when studying the prime numbers. We are jealous. In complexity theory it would be wonderful to have anything like these models. Perhaps {\dots}

A Model Example

Cramér’s model is simple to state. Imagine that the primes {\cal P} are replaced by a random set {\cal R}, formed by placing each integer {n \ge 3} in {\cal R} independently with probability {1/\ln n}. The Fermat numbers are those

\displaystyle  2^{2^{n}} + 1.

The first of these

\displaystyle  3, 5, 17, 257, 65537

are prime. Fermat thought this continued, but it is not true. Euler showed that the next one, {F_5}, is not prime:

\displaystyle  641 \times 6,700,417.

An interesting problem: are there any more prime Fermat numbers? Many believe that there are no more, or at most finitely many in total. Let’s use the model to understand the Fermat numbers:

\displaystyle\sum_{n=0}^{\infty} \frac{1}{\ln F_{n}} = \frac{1}{\ln 2} \sum_{n=0}^{\infty} \frac{1}{\log_{2}\left(2^{2^{n}}+1 \right)} < \frac{1}{\ln 2} \sum_{n=0}^{\infty} 2^{-n}  = \frac{2}{\ln 2}.

Therefore, the expected total number of Fermat primes is finite: at most {2/\ln 2 \approx 2.885}. Of course, this assumes the model is predictive.
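
Here is a minimal sketch in Python of this heuristic computation; the partial sums settle quickly, to about {2.245}, comfortably under the bound:

```python
import math

# Minimal sketch: partial sums of sum_{n >= 0} 1/ln(F_n), F_n = 2^(2^n) + 1.
# Since log2(F_n) > 2^n, the series is bounded by 2/ln 2 ~ 2.885;
# numerically it converges to about 2.245.
total = 0.0
for n in range(8):        # F_7 = 2^128 + 1 still fits in a float's range
    F = 2 ** (2 ** n) + 1
    total += 1 / math.log(F)
    print(f"n = {n}: partial sum = {total:.4f}")
print(f"upper bound 2/ln 2 = {2 / math.log(2):.4f}")
```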

Proved?

Our friends at Wikipedia say:

As with many famous conjectures in mathematics, there are a number of purported proofs of the Goldbach conjecture, none of which are accepted by the mathematical community.

Try a Google search yourself for “Goldbach conjecture proved”. The top hits include several “proofs” that the conjecture is true. The proofs are all short and simple, and all are believed to be wrong. I find it interesting that many of them use a random-like argument. The trouble is that the above models are only heuristics, so the proofs are incomplete.

Open Problems

Can we imagine getting heuristic models for complexity theory? For quantum algorithms perhaps. What would such heuristic models even look like? We wonder.

Hobgoblins in Our Equations

October 31, 2019


With consequences for physics

Britannica source

Paul Painlevé was a French mathematician who specialized in classical mechanics. He is known for a conjecture in 1895 about anomalies in Isaac Newton’s equations that was proved in 1988.

Tonight, Ken and I celebrate Halloween with some mathematical horrors that may have real physical consequences.

Painlevé is also known for having been Prime Minister of France for two short stints in 1917 and 1925. He was also Minister of War under President Raymond Poincaré, a cousin of Henri Poincaré. He was the first Frenchman in powered flight—alongside Wilbur Wright in 1908—and at the end of his life headed the French Air Force.

A basic fact about Newtonian gravity is that the gravitational potential energy of two point masses {m} and {M} is proportional to

\displaystyle  -\frac{mM}{R},

where {R} is the distance between them. We usually think of how gravity becomes weaker as {R} grows but when {R} is tiny it becomes quite strong. Since the potential is negative, it is possible for an individual particle in a finite {n}-body system to accelerate to arbitrary speed without violating the conservation of energy. But can it happen in a finite amount of time—and without {R} actually becoming zero in a collision?
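
To spell out the bookkeeping (a quick sketch, writing {G} for the gravitational constant): conservation of energy for a single particle of mass {m} at distance {R} from a mass {M} reads

\displaystyle  \frac{1}{2}mv^{2} - \frac{GmM}{R} = E \quad\Longrightarrow\quad v = \sqrt{\frac{2}{m}\left(E + \frac{GmM}{R}\right)},

so at fixed total energy {E} the speed {v} grows without bound as {R \rightarrow 0^{+}}.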

Painlevé proved a ‘no’ answer for {n = 3} but suspected ‘yes’ for {n \geq 4}. Zhihong Xia proved in 1988 it can happen for {n = 5}, extending earlier advances by his thesis advisor, Donald Saari. The case {n = 4} is still open; in all cases, the initial conditions for it to happen form a measure-zero subset of the configuration space.

The difference from {n = 3} is shown by this diagram from their 1995 AMS Notices paper, which gives a highly readable telling of the whole story. We, however, envision a fantasy story about what could have happened much earlier.

A Fantasy

We want to imagine that Xia’s result was proved not near the end of the 20th century but near the start—in particular, before Albert Einstein’s creation of General Relativity, but after Special Relativity. Say the result was obtained in 1908 by Painlevé after his flight with Wright. That same year, Hugo von Zeipel proved a startling consequence of Painlevé’s conjecture, which we now know to be a theorem:

Theorem 1 Under Newtonian gravity, an {n}-body system of point masses can eject a particle to infinity in finite time.

That is, without collisions, a Newtonian {n}-body problem of point masses can create separations {R'} between particles that grow to infinity within a finite time. Saari and Xia say that the effect can be partially seen if you do the following:

Place a tennis ball on top of a basketball and drop the pair from chest height. The tennis ball rebound is pretty dramatic—enough so that it should be done outside.

Ken tried it and it works. It does not work so well if the tennis ball is replaced by a piece of Halloween candy. If the basketball is replaced by a pumpkin, it definitely will not work.

We know already, however, from Einstein’s theory of special relativity that this cannot work. There would be an instant before the singular time at which the point mass had just passed the speed of light. At that moment it would have acquired infinite energy according to special relativity, yet it could not have withdrawn that much energy from the potential by then.

In 1908 this would have been an internal indication that Newton’s theory of gravity must break down. There were of course external indicators, such as the theory’s incorrect prediction of the orbit of Mercury. Maybe internal ones were known, but this seems most glaring in retrospect. Are we right in this interpretation? The result by Xia is shocking enough as it stands, and makes us wonder what other surprises lurk in equations.

It Could End With a Bang

Other possibilities of singularities happening within finite time have not been ruled out by any physical theories. The New York Times Magazine in July 2015 ran a profile titled “The Singular Mind of Terry Tao” with a great opening sentence:

This April, as undergraduates strolled along the street outside his modest office on the campus of the University of California, Los Angeles, the mathematician Terence Tao mused about the possibility that water could spontaneously explode.

Indeed, Tao proved it can happen under a plausible modification of the Navier-Stokes equations of fluid dynamics. He writes there:

Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

As with Painlevé’s conjecture—Xia’s theorem—the point would be not that the initial conditions could happen with any perceptible probability, but that our world is capable of their happening at all. At least, that is, with the equations by which we describe our world. Our own mention at the start of 2015 alluded to the possibility of Tao’s blowup applying to fluid-like fields in cosmological theories.

Just last week, a column on the Starts With a Bang blog alerted us to an issue with equations that could be skewing cosmological theories today. The blog is written by Ethan Siegel for Forbes and its items are linked regularly on RealClear Science. Siegel draws an analogy to a phenomenon with Fourier series that feeds into things Dick and I (Ken writing this part) have already been thinking about. Things used all the time in theory…

Hobgoblins You Can Hear

The Fourier phenomenon is named for the American physicist Josiah Willard Gibbs, but Gibbs was not the original discoverer. We could add this to our old post on Stigler’s law, named for Stephen Stigler (who did not discover it), which states that no scientific law is named for its original discoverer. The first discoverer of the Gibbs Phenomenon was—evidently—Henry Wilbraham in 1848. Through the magic of the Internet we can convey it by lifting Wilbraham’s original figure straight from his paper:

What this shows is the convergence of a sum of sine waves to a square wave. The convergence is pointwise except at the jump discontinuities {x = t\pi}, but it is not uniform. The {y}-values do not converge uniformly on any interval crossing {t\pi}; instead they rise about {18\%} above the square-wave value {f(x)} in the vicinity of {t\pi}, no matter how many terms are summed in the approximations {f_{n}}. Wilbraham’s middle drawing depicts this in finer detail than any other rendering I have found. The persistent overshoot is physically real—it is the cause of ringing artifacts in signal processing.
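
Here is a minimal sketch in Python (our own toy code) that reproduces the overshoot numerically: the peak of the partial sums stays near {1.179} no matter how many terms are taken:

```python
import numpy as np

# Partial Fourier sums of the +-1 square wave:
#   f_n(x) = (4/pi) * sum of sin(k x)/k over odd k <= n.
# The max over (0, pi) tends to ~1.1789, an overshoot of about 18%.
x = np.linspace(1e-4, np.pi - 1e-4, 100_000)
for n in (9, 99, 999):
    fn = np.zeros_like(x)
    for k in range(1, n + 1, 2):      # odd harmonics only
        fn += np.sin(k * x) / k
    fn *= 4 / np.pi
    print(f"n = {n:3d}: max f_n = {fn.max():.4f}  (f itself is 1 here)")
```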

Now the convergence does satisfy a criterion of {\epsilon}-approximation of a function {f} that we use all the time in theory: for any {\epsilon > 0} and large enough {n}, {|f_n(x) - f(x)| < \epsilon} except for an {\epsilon} fraction of {x} in the domain. This kind of convergence is used internally in the proofs of quantum algorithms for linear algebra which we recently discussed. If the value {f(x)} is explicitly what you’re after, this is fine. But if you use the value only implicitly while composing the approximation with some other function {g}, you must beware that the compositions {g_n = g\circ f_n} are not thrown off in a constant way by the overshoots.
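
To make the caution concrete, take {g} to be a simple threshold, say {g(v) = 1} if {v > 1.1} and {g(v) = 0} otherwise (our illustrative choice). Then {g \circ f} is identically {0} on {(0,\pi)}, while {g \circ f_{n}} equals {1} near every jump for all {n}. Continuing the sketch above:

```python
import numpy as np

# Continuing the Gibbs sketch: compose f_n with the threshold g(v) = [v > 1.1].
# g(f(x)) = 0 on all of (0, pi), but the ~18% overshoot forces g(f_n(x)) = 1
# near the jumps for every n.
x = np.linspace(1e-4, np.pi - 1e-4, 100_000)
for n in (99, 999):
    fn = np.zeros_like(x)
    for k in range(1, n + 1, 2):
        fn += np.sin(k * x) / k
    fn *= 4 / np.pi
    bad = np.mean(fn > 1.1)    # fraction where g(f_n) differs from g(f)
    print(f"n = {n:3d}: compositions disagree on a {bad:.3%} fraction")
# The disagreeing fraction shrinks like 1/n -- fine for epsilon-approximation --
# but it is never zero: the overshoot throws off the composition at every n.
```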

Dark Matter in Shadows?

Siegel draws attention to an allegation that something similar is actually happening in current physical theories that use a well-known class of algorithmic simulations. The details are in a new paper by Anton Baushev and Sergey Pilipenko, titled “The central cusps in dark matter halos: fact or fiction?” To show the relation between this and the opening example in this post, we need only quote the paper (page 2)—

However, the present state-of-art of {N}-body simulation tests … can hardly be named adequate. The commonly-used criterion of the convergence of {N}-body simulations in the halo center is solely the density profile stability [per which] the central cusp (close to {\rho \propto r^{-1}}) is formed quite rapidly ({t < \tau_r}).

—and then quote Siegel’s own description of the core-cusp problem and the allegation:

In theory, matter should fall into a gravitationally bound structure and undergo what’s known as violent relaxation, where a large number of interactions cause the heaviest-mass objects to fall towards the center (becoming more tightly bound) while the lower-mass ones get exiled to the outskirts (becoming more loosely bound) and can even get ejected entirely.

Since similar phenomena to the expectations of violent relaxation were seen in the simulations, and all the different simulations had these features, we assumed that they were representative of real physics. However, it’s also possible that they don’t represent real physics, but rather represent a numerical artifact inherent to the simulation itself.

Open Problems

Is all this real physics? Or is it artifacts showing that current theories and/or algorithms are flawed? Whichever is the truth, our equations have tricks that may not lead to treats.

For a relevant postscript, Painlevé did not abandon physics when he rose in politics. In 1921, he effectively removed an apparent singularity at the event horizons of black holes in general relativity. He and colleagues discussed this with Einstein the following year, but apparently not the {n}-body conjecture. How far are we right in our interpretation that the (proved) conjecture plus special relativity suffices to disprove Newtonian gravity by itself?

Quantum Supremacy At Last?

October 27, 2019


What it takes to understand and verify the claim

Cropped from 2014 Wired source

John Martinis of U.C. Santa Barbara and Google is the last author of a paper published Wednesday in Nature that claims to have demonstrated a task executed with minimum effort by a quantum computer that no classical computer can emulate without expending Herculean—or Sisyphean—effort.

Today we present a lay understanding of the claim and discuss degrees of establishing it.
Read more…

A Polemical Overreach?

October 21, 2019


Our 1977 paper on the role of formal methods

[ Harvard ]

Harry Lewis is known for his research in mathematical logic, and for his wonderful contributions to teaching. He had two students that you may have heard of before: a Bill Gates and a Mark Zuckerberg.

Today I wish to talk about a recent request from Harry about a book that he is editing.
Read more…

A Diophantine Obvious Problem

October 11, 2019


With a short solution that was hard for me to find.

[ Royal Society ]

Sir Timothy Gowers is a Fields medalist and fellow blogger. Sometimes he (too) writes about simple topics.

Today I would like to talk about a simple problem that came up recently.

The problem is a simple-to-state “obvious fact”. The reason I thought you might be interested is that I had a tough time finding the solution. I hope you find the explanation below interesting.

The general issue of proving obviously true statements is discussed here for example. Here is an example from Gowers:

Let {I_{1},I_{2},\dots} be intervals of real numbers whose lengths sum to less than {1}. Then their union cannot be all of {[0,1]}.

He says:

It is quite common for people to think this statement is more obvious than it actually is. (The “proof” is this: just translate the intervals so that the end point of {I_{1}} is the beginning point of {I_{2}}, and so on, and that will clearly maximize the length of interval you can cover. The problem is that this argument works just as well in the rationals, where the conclusion is false.)

The Problem’s Statement

The following simple problem came up the other day. Suppose that {t} is an odd number. Show there is some {x} so that

\displaystyle  (x,t) = 1 \text{ and } (4x+1,t) = 1.

Here {(a, b)} is the gcd, the greatest common divisor of {a} and {b}. For example,

\displaystyle  (21, 14) = 7.

This result seems totally obvious and surely must be true, but I had trouble finding a proof.

The Problem’s Cousin

There is an unproved conjecture in number theory that says: there are infinitely many {x} so that both {x} and {4x+1} are prime. This immediately gives an {x} for our problem: just take such an {x} with both primes larger than {t}, since a prime larger than {t} is relatively prime to {t}.
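
As a quick illustration (a minimal sketch with our own helper), the small values of {x} for which both are prime are easy to list:

```python
# Minimal sketch: small x with x and 4x+1 both prime.
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

hits = [x for x in range(2, 100) if is_prime(x) and is_prime(4 * x + 1)]
print(hits)  # [3, 7, 13, 37, 43, 67, 73, 79, 97]
```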

I like conjectures like this since they give you an immediate insight that a statement is likely to be true. But we would like a proof that does not use any unproved conjectures. Our problem can be viewed as a poor version of some of these conjectures. Suppose that you have a conjecture that there are an infinite number of {x} so that

\displaystyle  f_{1}(x),f_{2}(x),\dots,f_{m}(x),

are all prime for some given functions {f_{k}(x)}. Then the poor version is to prove that there are {x} so that these numbers are all relatively prime to some given {t}. There are some partial results on the prime version by Ben Green and Terence Tao.

The Problem’s Failed Proofs

My initial idea was to try to set {x} to something like {ty + 1}. The point is that this always satisfies the first constraint: that is {(ty+1, t)=1} for any {y}. Then I planned to try and show there must be some {y} that satisfies the second constraint. Thus the goal is to prove there is some {y} so that

\displaystyle  (4(ty+1)+1, t) = 1.

But this can fail. Note that if {5} divides {t} then

\displaystyle  4(ty+1)+1 = 4ty + 5

and so {4x+1} is always divisible by {5}. Oops.

My next idea was to set {x} to a more “clever” value. I tried

\displaystyle  x = ty + d.

Here I thought that I could make {d} special and control the situation. Now

\displaystyle  4(ty+d)+1 = 4ty + 4d+1.

This looked promising. I then said to myself: why not make {4d+1} a large prime {p}? Then clearly

\displaystyle  4x + 1 = (4t)y + p.

Since {4t} and {p} are relatively prime, by Dirichlet’s famous theorem on arithmetic progressions we could select {y} to make {4x+1} a prime too. This would satisfy the second constraint, and we would be done.

Not quite. The trouble is that we need to have also that

\displaystyle  (x,t) = 1.

Now this is

\displaystyle  (ty + d, t) = 1.

The trouble is that {d} might not be relatively prime to {t}. So we could just {\dots} This seemed like a recursion, and I realized that it might not work.

The Problem’s Solution

I finally found a solution thanks to Delta Airlines. My dear wife, Kathryn Farley, and I were stuck in DC for several hours waiting for our delayed flight home. The delay gave me the time I needed to find a solution.

The key for me was to think about the value {t}. It is usually a good idea to look at the simplest case first. So suppose that {t=1}; then clearly the constraints

\displaystyle  (x,t) = 1 \text{ and } (4x+1,t) = 1,

are now trivial. The next simplest case seems to be when {t} is a prime. Let’s try {t=3}. Now {x=1} works. Let’s generalize this to any prime {t>3}. The trick is to set {x} so that

\displaystyle  x \equiv 2 \bmod t.

Then {4x+1} is congruent to {9} modulo {t}, which is not divisible by {t} since {t>3}; and clearly {(x,t) = (2,t) = 1}. This shows that when {t} is an odd prime there is always some {x}.

Okay, how do we get the full result? What if {t} is the product of several primes? The Chinese remainder theorem to the rescue. Suppose that {t} is divisible by the distinct odd primes {p_{1},p_{2}, \dots, p_{m}}. We can easily see that we do not care if there are repeated factors, since they cannot change the relatively-prime constraints.

Then we constrain {x} by:

\displaystyle  x \equiv 1 \bmod 3

and

\displaystyle  x \equiv 2 \bmod p_{k}

for all primes {p_{k}>3}. Then the Chinese remainder theorem proves there is some {x}. Done.
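
As a sanity check, here is a minimal sketch in Python (function names are ours) that builds {x} exactly as above via the Chinese remainder theorem and verifies both gcd conditions for all odd {t} below {10{,}000}:

```python
from math import gcd

def distinct_prime_factors(t: int) -> list:
    ps, d = [], 2
    while d * d <= t:
        if t % d == 0:
            ps.append(d)
            while t % d == 0:
                t //= d
        d += 1
    if t > 1:
        ps.append(t)
    return ps

def witness(t: int) -> int:
    """Build x = 1 (mod 3) and x = 2 (mod p) for the other prime factors
    p of t, merging congruences one at a time (CRT). Assumes t is odd."""
    x, mod = 0, 1
    for p in distinct_prime_factors(t):
        r = 1 if p == 3 else 2
        k = (r - x) * pow(mod, -1, p) % p   # solve x + mod*k = r (mod p)
        x, mod = x + mod * k, mod * p
    return x if x > 0 else 1                # t = 1 needs no constraint

for t in range(1, 10_000, 2):
    x = witness(t)
    assert gcd(x, t) == 1 and gcd(4 * x + 1, t) == 1
print("verified for all odd t below 10,000")
```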

Open Problems

Is there some one-line proof of the problem? Do you know any references? There are several obvious generalizations of this simple problem; perhaps someone might look into them.

Writing 33 as a Sum of Cubes

September 30, 2019


Cracking a Diophantine problem for 42 too

Andrew Booker is a mathematician at the University of Bristol who works in analytic number theory. For example, he has a paper extending a result of Alan Turing on the Riemann zeta function. Yes, our Turing.

Today Ken and I will talk about his successful search for a solution to a 64-year-old problem.
Read more…

Quantum Switch-Em

September 26, 2019


A recipe for changing the objectives of problems

Composite crop of src1, src2, src3

Aram Harrow, Avinatan Hassidim, and Seth Lloyd are quantum stars who have done many other things as well. They are jointly famous for their 2009 paper, “Quantum algorithm for linear systems of equations.”

Today Ken and I discuss an aspect of their paper that speaks to a wider context.
Read more…