# Twin Primes Are Useful

*Why the recent breakthrough is important*

Photo sources: UNH and Simons Foundation article

Yitang Zhang, of the University of New Hampshire, has apparently proved a finite approximation to the famous Twin Prime Conjecture. This is a result of the first order. After ten days of progressively more detailed news, including today’s article in the New York Times, Zhang’s 56-page preprint has just been released on the *Annals of Mathematics* journal website. This is the final accepted version of the original submission, which was said in a story in *Nature* last week to have needed only “a few minor tweaks.”

Today Ken and I want to explain important aspects of the Twin Prime Conjecture.

Recall that the Twin Prime Conjecture states that there are an infinite number of primes $p$ and $q$ that are as close as possible: $q = p + 2$. Well, 3 and 2 are closer, but that can only happen once, so the best one can hope for is primes that are within two of each other.

Zhang’s beautiful result is that there are an infinite number of primes $p$ and $q$ so that $p < q$ and $q - p$ is bounded by an absolute constant. The constant is large, in the tens of millions, but it is a constant. Perhaps we should call these “Cousin Primes,” since they are not within two; I will leave the naming of them to the number theorists. Whatever you call them, his result is a huge improvement on what was previously proved, and is a singular advance.

The proof is long, which is not unexpected. Ken saw the news of the paper’s release earlier today on Terry Tao’s Google+ page about the work, which gives some idea of how the proof goes. There are many links and comments in a post by Peter Woit that also mentions a recently announced proof by Harald Helfgott of the “ternary Goldbach conjecture” that every odd number above 5 is the sum of three primes.

So what can we possibly add to the discussion about Zhang’s breakthrough? Nothing on the proof right now. Something, however, on why the Twin Prime Conjecture can be really useful. This is all from a personal point of view, but one that I hope you will enjoy. Let’s first take a quick look at what was known before his work, and then discuss what it may be useful for.

## Before Zhang

One measure of the density of the primes is that the summation

$$\sum_{p \text{ prime}} \frac{1}{p}$$

does not converge. That is, the sum

$$\sum_{\substack{p \le x \\ p \text{ prime}}} \frac{1}{p}$$

tends to infinity as $x$ tends to infinity. The growth is slow, but the sum does diverge. In 1915, Viggo Brun used sieve methods to prove that twin primes were rarer in a precise sense: the summation over *twin primes*

$$\sum_{p,\ p+2 \text{ prime}} \left( \frac{1}{p} + \frac{1}{p+2} \right)$$

converges. Indeed his result can be improved to show that the number of twin primes less than $x$ is bounded above by

$$O\!\left( \frac{x (\log\log x)^2}{(\log x)^2} \right).$$
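Brun’s contrast can be seen numerically. The following is a small sketch of my own, not from the post; the cutoff $10^6$ is an arbitrary choice. The full sum over all primes grows (very slowly) without bound, while the twin-prime sum creeps toward Brun’s constant, roughly $1.902$.

```python
# Illustration: the sum of 1/p over all primes diverges (slowly), while the
# corresponding sum over twin primes converges (Brun's theorem).
# The cutoff N = 10**6 is an arbitrary choice for the demo.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10 ** 6
ps = primes_up_to(N)
pset = set(ps)

# Sum of 1/p over all primes up to N: grows like log log N.
all_sum = sum(1.0 / p for p in ps)

# Sum of 1/p + 1/(p+2) over twin-prime pairs up to N: a partial sum
# of Brun's constant (~1.902, approached very slowly).
twin_sum = sum(1.0 / p + 1.0 / (p + 2) for p in ps if p + 2 in pset)

print(f"sum 1/p over all primes  <= {N}: {all_sum:.4f}")
print(f"sum over twin-prime pairs <= {N}: {twin_sum:.4f}")
```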

Using heuristic arguments, Godfrey Hardy and John Littlewood guessed that not only are there an infinite number of twin primes, but that their density is close to what a “random” model would predict. Let $\pi_2(x)$ be the number of twin primes less than $x$ (recall $\pi(x)$ is used to denote the number of primes less than $x$); then the Hardy-Littlewood Conjecture is that

$$\pi_2(x) \sim 2C_2 \, \frac{x}{(\log x)^2}$$

for an explicit constant $C_2 \approx 0.6601\ldots$, the twin-prime constant.
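The prediction can be sanity-checked by computer. This sketch is my own illustration, not from the post; the cutoff is arbitrary, and the decimal value of $C_2$ is the standard one. The simple $x/(\log x)^2$ form undershoots the true count somewhat (the logarithmic-integral version of the conjecture is more accurate), but it is the right order of magnitude.

```python
# Illustration: compare the twin-prime count pi_2(x) with the Hardy-Littlewood
# prediction 2 * C2 * x / (log x)^2, where C2 is the twin-prime constant.
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

C2 = 0.6601618158            # twin-prime constant, prod over p>2 of 1 - 1/(p-1)^2
x = 10 ** 6                  # arbitrary cutoff for the demo
ps = primes_up_to(x)
pset = set(ps)

pi2 = sum(1 for p in ps if p + 2 in pset)        # twin-prime pairs up to x
predicted = 2 * C2 * x / math.log(x) ** 2

print(f"pi_2({x}) = {pi2}, simple Hardy-Littlewood estimate ~ {predicted:.0f}")
```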

Tao is on record as saying that certain approaches based on sieve theory cannot resolve the Twin Prime Conjecture; see this for a short discussion. Mark Lewko, in a comment to a MathOverflow thread on Zhang’s paper, indicates that its mechanism alone cannot reduce the gap under $16$, and it does not circumvent a more general obstacle to gaps below $6$. However, even if Zhang’s new techniques do not overcome such general barriers, at least they push against them with a lot more *oomph*.

## Another Problem

Years ago I worked on a problem that had nothing directly to do with the Twin Prime Conjecture. The question is a fundamental one about the complexity of Boolean functions. It is not classic, has not been open for a very long time, and could be trivial. But like many problems about Boolean functions it turned out to fight back hard, and the best we could do was to make a small dent in the problem.

The work was joint with Mihail Kolountzakis, Evangelos Markakis, Aranyak Mehta, and Nisheeth Vishnoi. See the paper for details. Indeed while I helped start the research with Markakis, Mehta, and Vishnoi, without Kolountzakis the work would have never been completed. Our result was then greatly improved by Amir Shpilka and Avishay Tal, as we covered here.

The problem is concretely about Boolean functions of $n$ variables, and seems not to involve prime numbers at all. For any subset $S$ of the coordinates, the corresponding Fourier coefficient of a function $f$ is given by

$$\hat{f}(S) = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} f(x)\, \chi_S(x),$$

where $\chi_S(x)$ is $-1$ if $\sum_{i \in S} x_i$ is odd, and $+1$ otherwise. The problem is:

What is the smallest value $t = t(n)$ such that for every symmetric 0-1 function $f$ on $\{0,1\}^n$ that is not affine linear (by symmetry, this means neither constant nor a parity function) some $\hat{f}(S)$ with $0 < |S| \le t$ does not vanish?
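To make the definition concrete, here is a small brute-force sketch of my own, not from the paper. Majority on 5 bits is a sample symmetric function that is neither constant nor a parity, and it already has a nonzero coefficient at the first level.

```python
# Illustration: brute-force the Fourier coefficients of a symmetric Boolean
# function and find the smallest level 0 < |S| with a nonzero coefficient.
# Majority on n = 5 bits is a sample choice of symmetric, non-affine function.
from itertools import product

n = 5

def f(x):
    """Majority function: 1 iff at least 3 of the 5 bits are set."""
    return int(sum(x) >= 3)

def fourier_coeff(S):
    """hat{f}(S) = 2^{-n} * sum_x f(x) * (-1)^{sum of x_i for i in S}."""
    total = 0
    for x in product((0, 1), repeat=n):
        sign = -1 if sum(x[i] for i in S) % 2 == 1 else 1
        total += f(x) * sign
    return total / 2 ** n

# By symmetry hat{f}(S) depends only on |S|, so one set per level suffices.
min_level = next(k for k in range(1, n + 1)
                 if fourier_coeff(tuple(range(k))) != 0)
print("smallest nonzero Fourier level:", min_level)
```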

## The Prime Gap Trick

A heuristic that I have used before is this: When trying to prove some theorem in number theory, assume any reasonable property that you need about primes. Either you will at least get a conditional theorem, or you might later be able to weaken the assumption you made. The latter is what happened to us. Or you might be luckier and get your theorem to follow both from the assumption and its negation, so that you don’t need it at all. I noted once how Littlewood did this with the Riemann Hypothesis—incidentally that post was about a theorem of Ravi Kannan, and I am attending a birthday workshop in his honor later this week.

In this case we reduced the problem to showing that a certain integer-valued polynomial $p(x)$ is constant over the set $\{0, 1, \dots, n\}$. Then we expressed the connection in the paper in these terms:

First, we show that $p(x)$ is constant over the union of two small intervals. This is obtained by looking at $p(x)$ modulo carefully chosen prime numbers. One way to prove this (at least infinitely often) would be to assume the twin primes conjecture… We manage to replace [this] by choosing four different primes in a more involved manner…

Thus we actually did something else besides use twin primes, but this is how we got the idea. Moreover, Shpilka and Tal used gaps between consecutive primes in a different way, obtaining bounds in terms of the *largest* such gap among the primes up to $n$, which is known to be $O(n^{0.525})$. If we can now get our hands on enough cases where the gaps are small, maybe we can improve the estimates further. Why is it useful to have primes with small gaps?
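For a feel of the numbers, a quick sketch (my own illustration; the cutoff is arbitrary) computes the largest gap between consecutive primes below $10^6$ and compares it with the $n^{0.525}$ scale; the worst actual gap sits far below the proven bound.

```python
# Illustration: largest gap between consecutive primes up to n, compared
# against the n^0.525 scale from the Baker-Harman-Pintz bound.
# The cutoff n = 10**6 is an arbitrary choice for the demo.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

n = 10 ** 6
ps = primes_up_to(n)

# Scan consecutive primes for the largest gap.
max_gap = max(b - a for a, b in zip(ps, ps[1:]))

print(f"largest prime gap below {n}: {max_gap}  (n^0.525 ~ {n ** 0.525:.0f})")
```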

We have covered the use of the Chinese Remainder Theorem for analytical results before. Usually for complexity-theoretic applications such as this one, we want the primes themselves to be small, and don’t mind having a bunch of primes. For logspace we can work with polynomially-many primes in a sequential manner, so long as each needs only $O(\log n)$ bits to store.

When we don’t need size to be so constrained, however, it can be more useful to have the *gap* between primes in the representation be small. Say the primes are $p$ and $q = p + g$ with $g$ small. Then if we know in advance that values $v$ are below roughly $pq/g$, we know that the residues $v \bmod p$ and $v \bmod q$ have to be close in an absolute sense, since their difference is a small multiple of $g$. In particular, they cannot be closer than $g$ to each other unless $v < p$, making them equal, and in our case we would get them all to be zero. For higher ranges of $v$ one retains a lot of this control. The larger $p$ and the smaller $g$, the bigger the effective range.
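As a sanity check on this residue argument, here is a minimal numeric sketch; the prime pair 101, 103 and the sample values are illustrative choices of mine, not from the paper. While the quotients $\lfloor v/p \rfloor$ and $\lfloor v/q \rfloor$ agree, the two residues differ by exactly that quotient times $g$.

```python
# Illustration: for primes p < q with gap g = q - p, as long as
# floor(v/p) == floor(v/q), the residues satisfy
#   (v mod p) - (v mod q) = floor(v/p) * g,
# i.e. a multiple of g: zero exactly when v < p, and at least g otherwise.

p, q = 101, 103          # a genuine twin-prime pair, gap g = 2
g = q - p

results = []
for v in [50, 150, 1000, 2000]:      # sample values inside the controlled range
    rp, rq = v % p, v % q
    assert v // p == v // q           # same quotient for both moduli
    assert rp - rq == (v // p) * g    # difference is an exact multiple of g
    results.append(rp - rq)

print(results)   # differences step up by multiples of g; 0 only when v < p
```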

We will hence be further interested in how dense the pairs of “cousin” primes must be, and how efficiently they can be generated. Anytime there is a breakthrough, it is time to revisit old ideas and see whether they too can profit.

## Open Problems

How far can the gap between consecutive primes, for infinitely many such pairs, be reduced? Do this and Helfgott’s result on Goldbach herald a more general breakthrough in number theory?

[updated link and info about paper in opening paragraph; sourced photo and linked to Simons Foundation article]

This is the 50th “Pip” post—the reasons for this to be our joint-author name are here. Actually Dick wrote almost all of this except parts of the intro noting today’s release of the paper and the last few paragraphs—the main reason I’m posting is that he and Judith are enjoying their anniversary this evening, while Ken is out fighting Androids.

Consider a quantum dynamical system of $n$ qudits, each of dimension $d$, comprising a state-space of dimension $d^n$. Fix $d$ to be a small number (e.g., $d = 2$), and consider the limit $n \to \infty$ (physically, the limit of a many-qudit computer). The answer to the following question is sought (as a warm-up to a question for TCS StackExchange).

Remark: numerical experiments readily establish that the naive guess is a gross underestimate. The question asked is: what is provably known regarding the large-$n$ asymptotics?

The question asked connects closely to the celebrated (among number theorists!) Cunningham Project, which is humanity’s longest-running computational research program, and it relates specifically to “classical” (long-known) results regarding the algebraic factors, Aurifeuillian factors, and “other” factors of Cunningham numbers. What is the connexion to quantum computing? That is the focus of a planned TCS StackExchange question (which is why these references are sought).

The Moral of the Story: Yes indeed, twin primes (and many other factoring theorems) are exceedingly useful!

Typo: one expression in the above question should (obviously) have been another. E.g., for $d = 2$, that is, a dynamical system of qubits, it turns out that the true count is larger than the naive “factors are randomly distributed” estimate.

Nice result. Unfortunately the paper is behind a pay-wall. Please supply a free version. Nobody should pay to read advances in pure science. Ever.

Agreed—your request is hereby redirected to the “parties of the first and second part…”

What paywall? I thought the PDFs on the *Annals of Mathematics* are open access.

Well, for me it says the following:

“PDF Access Denied

We have not been able to recognize your IP address as one of the subscribers to the Annals of Mathematics. Please note that online access to PDF copies of articles is by subscription only.

If you would like to purchase a subscription please refer to our subscription webpage.”

_Annals_ was once an arXiv overlay journal, but this is no longer the case. See Scott Morrison’s blog post, http://sbseminar.wordpress.com/2012/02/01/journals-and-the-arxiv/, for more information.

Apparently journals need revenue to operate. In the coming brave new world where no one will have to pay to read about scientific advances, authors are going to have to pay to write about them.

Well, I disagree with you Xbad. Almost every respectable scientist, in TCS at least, puts his or her publications online for free these days. So in the very near future no one will pay to read pure science. See, e.g., many new open-access journals in TCS. They don’t have revenue and they do operate.

I put my own publications (math and physics) online for free as well, and I hope that Zhang will decide to do the same. But there are very troubling signs that science publishing as a whole is rushing headlong towards the author-pays model. This includes some ventures promoted by very prominent mathematicians.

A bit angry: “Nobody should pay to read advances in pure science. Ever.”

Why should people pay tens of thousands (or even hundreds of thousands) of dollars for the training that makes them capable of reading pure science, and then expect reading the fruits of pure science to be free? (That would mean the marginal expenses of administering the journals are paid by the public or by the scientists themselves.)

Zhang’s breakthrough proves that there is at least one $k$ such that $p_{n+1} - p_n = 2k$ holds for an infinite number of $n$.

As a layperson to this area I am curious if there is a general feeling in the number theory community about which of the following possibilities is likely to be true?

1. The above statement holds for every $k$.

2. The above statement holds for an infinite number of $k$.

3. The above statement only holds for a finite number of $k$, so Zhang’s result is close to being the best possible outcome.

4. The above statement only holds for a finite number of $k$, one of which is $1$: the twin-prime conjecture.

Results along these lines could help further unravel the tapestry of the distribution of prime numbers.

It is usually suspected that the statement holds for every k. This is called Polignac’s conjecture.

Thanks Anon! Your response is much appreciated. The conjecture you cite, if proven, would be an astonishing property.

a stellar result. way cool! again shows some of the deep connections between TCS & number theory which Ive always seen as fundamental & imho is sure to continue to strengthen in strong & surprising ways. and the deep/mysterious links between primes & TCS are probably still largely not mapped out.

How about ‘Affine Primes’, the affinity being increased as the upper-bound decreases.

Guy

… an affine prime was had by all …

tribute to zhang quoting lots of recent cyber refs/links. also, the isolated outsider/”academic grind” angle. on how zhang worked in isolation almost entirely outside academia.