# The Amazing Zeta Code

*But can it encode itself?*


Sergei Voronin was an expert in analytic number theory, especially the Riemann zeta function. He proved a sense in which the Riemann Hypothesis “ain’t necessarily so.” That is, he found complex analytic functions $f$ that obey functional equations similar to the one for $\zeta(s)$, but where $f$ has greater deviations in the frequency of zeroes on the critical line than $\zeta$ can have if the hypothesis is true.

Today Ken and I wish to discuss an older result of his—proved in 1975—about the Riemann zeta function that seems to be amazing.

Voronin proved that the zeta function is *universal* in the precise sense that it encodes all possible other functions, to any desired degree of precision. This seems to me more like a complexity-theory result than a number-theory result, and it also seems like it should have complexity consequences. We share this opinion with Matthew Watkins, who maintains a page on number theory and physics:

“This extraordinary 1975 result receives surprisingly little coverage…”

He goes on to say, “…it was difficult to find a clear and accurate statement anywhere on the WWW in March 2004.” Let’s turn now to discuss it in detail.

## The Zeta Function

The zeta function is of course defined for complex numbers $s$ with real part greater than $1$ by the famous equation

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}.$$

It then can be extended to all complex numbers, except that at $s = 1$ the function has a simple pole. This means just that the extension near $s = 1$ behaves like $\frac{1}{s-1}$.
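To make this concrete, here is a small dependency-free numerical sketch (our code, not from the post). It uses the standard fact that the Dirichlet eta series $\eta(s) = \sum_{n \geq 1} (-1)^{n+1}/n^s$ converges for $\Re(s) > 0$ and satisfies $\zeta(s) = \eta(s)/(1 - 2^{1-s})$, which extends $\zeta$ past the original half-plane and lets us watch the pole at $s = 1$:

```python
# Dependency-free numerical sketch of zeta via the Dirichlet eta series:
# eta(s) = sum_{n>=1} (-1)^(n+1) / n^s converges for Re(s) > 0, and
# zeta(s) = eta(s) / (1 - 2^(1-s)) extends zeta past Re(s) > 1.

def zeta(s, terms=200_000):
    """Approximate zeta(s) for Re(s) > 0, s != 1 (slow but simple)."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

print(zeta(2))                 # Euler's value pi^2/6 = 1.6449...
for eps in (0.1, 0.01):        # near the pole, (s - 1) * zeta(s) -> 1
    s = 1 + eps
    print(eps, (s - 1) * zeta(s))
```

The printed products $(s-1)\zeta(s)$ approach $1$, exhibiting the simple pole numerically.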

It has been known for several centuries that the zeta function holds the key to our understanding of the distribution of the primes. The proof that it has no zeroes at $s = 1 + it$ where $t$ is real is equivalent to the Prime Number Theorem (PNT). This is the statement that the number of primes less than $x$, denoted by $\pi(x)$, is approximately

$$\pi(x) \approx \frac{x}{\ln x}.$$
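As a quick sanity check of the PNT statement (our sketch, not from the post), one can sieve the primes up to $x$ and compare $\pi(x)$ with $x/\ln x$:

```python
# Compare the prime-counting function pi(x), computed by a sieve,
# with the Prime Number Theorem approximation x / ln(x).
import math

def prime_count(x):
    """Count primes <= x with the Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

for x in (10**3, 10**4, 10**5):
    print(x, prime_count(x), round(x / math.log(x), 1))
```

The ratio of the two columns drifts toward $1$ as $x$ grows, though quite slowly.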

The zeta function can be used, and was used, by Leonhard Euler to give a proof that the number of primes must be infinite. This is weaker than the PNT, but a very simple argument. It was later modified and used by Gustav Dirichlet to prove that there are an infinite number of primes in any arithmetic progression

$$a,\; a + d,\; a + 2d,\; a + 3d,\; \ldots$$

provided $a$ and $d$ have no common factor.

## The Amazing Property

Let $K$ be any compact set of complex numbers whose real parts satisfy $\frac{1}{2} < \Re(s) < 1$. Because $K$ is closed and bounded, there exists a fixed $\delta > 0$ such that $\Re(s) \geq \frac{1}{2} + \delta$ for all $s \in K$. The set $K$ can be as close as desired to the critical line $\Re(s) = \frac{1}{2}$, but it must stay some fixed distance away.

We also need $K$ to have no “holes,” i.e., to be homeomorphic to a closed disk of radius $r$. Often $K$ is simply taken to be the closed disk of radius $r < \frac{1}{4}$ centered on the $x$-axis at $\frac{3}{4}$, but we’ll also think of a square grid. Then Voronin proved:

**Theorem 1** Given any analytic function $f$ that is non-vanishing on $K$, and any $\epsilon > 0$, we can find some real value $t$ such that for all $s \in K$,

$$|\zeta(s + it) - f(s)| < \epsilon.$$

That is, by translating $K$ upward by a distance of $t$, we find a region where $\zeta$ produces the same values as $f$ does on the original $K$, within a tolerance of $\epsilon$. Moreover, as $\epsilon \to 0$ we can find more and more $t$’s for which the simulation is better and better. Watkins quotes this from Martin Gutzwiller’s book *Chaos in Classical and Quantum Mechanics* (Springer-Verlag, 1990):
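To emphasize the algorithmic flavor of the theorem, here is a toy brute-force scan (entirely our sketch, not Voronin’s proof): measure how well $\zeta(s+it)$ matches a target $f$ on a few strip points as $t$ varies. The grid points, the constant target, and the crude eta-series evaluator are all our choices; the $t$ achieving genuinely small tolerances are astronomically large, so this only illustrates the search procedure.

```python
# Toy search for a translate t making zeta(s + it) close to a target f
# on a few points in the strip 1/2 < Re(s) < 1. Illustrative only.

def zeta(s, terms=20_000):
    """Crude zeta via the eta series eta(s) / (1 - 2^(1-s)); Re(s) > 0."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

def best_translate(f, grid, t_values):
    """Return (t, err) minimizing max over the grid of |zeta(s+it) - f(s)|."""
    best_t, best_err = None, float("inf")
    for t in t_values:
        err = max(abs(zeta(s + 1j * t) - f(s)) for s in grid)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

grid = [0.70 + 0.00j, 0.75 + 0.05j, 0.80 + 0.00j]  # sample points in the strip
t, err = best_translate(lambda s: 1.0, grid, [5.0 * k for k in range(1, 13)])
print("best t:", t, "max error:", round(err, 3))
```

Even this tiny scan shows the error fluctuating with $t$, which is the behavior the theorem exploits.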

“Although the Riemann zeta-function is an analytic function with [a] deceptively simple definition, it keeps bouncing around almost randomly without settling down to some regular asymptotic pattern. The Riemann zeta-function displays the essence of chaos in quantum mechanics, analytically smooth, and yet seemingly unpredictable…

In a more intuitive language, the Riemann zeta-function is capable of fitting any arbitrary smooth function over a finite disk with arbitrary accuracy, and it does so with comparative ease, since it repeats the performance like a good actor infinitely many times on a designated set of stages.” [emphasis in original]

That is to say, even if we have a function $g$ that is analytic and nonzero on some compact region $R$ well away from the critical strip, or even overlapping the critical line, we can map $R$ conformally to some $K$ within the strip, and then $\zeta$ can approximate the resulting mapped function $f$.

## A “Concrete” View of the Proof

The reason we think this is relevant to complexity theory comes from a discrete explanation of how the theorem works. Given a $\delta > 0$ and an $n$, consider $K$ to be the $n \times n$ square grid centered on $\frac{3}{4}$ on the $x$-axis, with total side length $\frac{1}{2} - 2\delta$. We will use the $n^2$-many centers of the grid squares. Note that in the leftmost grid boxes, the real part of $s$ is displaced from $\frac{1}{2}$ by about $\frac{1}{4n}$ in addition to the fixed distance $\delta$, which is independent of $n$. If we really fix $n$, then we can allow the grid to extend all the way to the critical line $\Re(s) = \frac{1}{2}$.

We also make a discrete set $V$ of allowed complex values $v$ such that $\Re(v)$ and $\Im(v)$ are integral multiples of the target approximation goal $\epsilon$. More precisely, we set $\epsilon' = \epsilon/2$. Then $\Re(v)$ and $\Im(v)$ may have values $j\epsilon'$ for integers $j$ such that $1 \leq |j| \leq M$, for some bound $M$ on the magnitudes we need. Note that these values stay away from zero, this time by an amount that depends on $\epsilon$. To every grid center $s_{k\ell}$, assign a value $v_{k\ell}$ from $V$.

Now we could use polynomial interpolation over these $n^2$-many values to define an analytic function $f$ that has those values at those points of $K$, and apply Voronin’s theorem to $f$. The way it works, however, is really the reverse: given any $f$ that is analytic and non-zero on $K$, we observe two consequences of $K$ being compact:

1. $f$ is uniformly continuous on $K$;
2. $|f|$ attains a minimum value $m > 0$ and a maximum value $M$ on $K$.

Thus for sufficiently small $\epsilon$ we can choose $n$ to make the grid sufficiently fine to give an $\epsilon$-approximation of $f$ everywhere on it. Namely, for each box center $s_{k\ell}$ take a value $v_{k\ell}$ in $V$ that is nearest to $f(s_{k\ell})$. There is always a suitable $v_{k\ell}$ whose real and imaginary parts have magnitude no more than $M + \epsilon$.
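The rounding-to-$V$ step has a simple computational rendering; the helper name `quantize` and the particular rule for avoiding zero are our own illustrative choices:

```python
# Round a complex value to the lattice of nonzero integer multiples of eps
# in each coordinate -- a stand-in for the allowed-value set V above.

def quantize(z, eps):
    """Nearest lattice point to z whose coordinates are nonzero multiples of eps."""
    def snap(x):
        j = round(x / eps)
        if j == 0:                # forbid zero so values stay away from 0
            j = 1 if x >= 0 else -1
        return j * eps
    return complex(snap(z.real), snap(z.imag))

print(quantize(0.37 - 0.02j, 0.1))   # rounds to the lattice point 0.4 - 0.1j
```

The coordinate that would round to zero gets pushed to the nearest nonzero multiple, mirroring how the discrete values stay bounded away from zero.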

The key idea is that Euler’s product formula for $\zeta$ enables one to do “Zeta Interpolation” on the resulting finite grid of values $v_{k\ell}$. Well, this *could* be the idea. The proof sketch given by Wikipedia seems to do the approximation more directly on $f$, using selections of primes and complex phase angles. It might be interesting to work it out as a discrete interpolation.

A little intuition is conveyed by imagining the above colored grid against some extension of Wikipedia’s colored figure on zeta-function universality. Overall this shows how discrete structures emerge out of conditions in continuous mathematics such as compactness—a hallmark of what Ron Graham, Don Knuth, and Oren Patashnik promote as “Concrete Mathematics.”

## The Zeta Code

Take any string $x$ over some finite alphabet $\Sigma$. We can choose $\epsilon$ in some minimal manner so that the corresponding $V$ satisfies $|V| \geq |\Sigma|$, with values spaced far enough to allow unique decoding from any $\epsilon$-approximation of those values. If $x$ is a binary string, we can take $\Sigma = \{0,1\}^b$ for some block length $b$ that optimizes the relation between $\epsilon$ and $n$. Then we can plug in the values encoding $x$ as the values $v_{k\ell}$ for our grid—actually, for just one row of grid points $s_{1\ell}$. It follows that we can find $t$ such that the values $\zeta(s_{1\ell} + it)$ encode the string $x$. That is, rounding those values to the closest elements of $V$ decodes to yield $x$.
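Here is a round-trip sketch of the bookkeeping involved (the function names and the particular bit-to-value map are ours, not from the post): encode bits as lattice values, perturb each by less than the decoding margin to stand in for the zeta-approximation error, and recover the bits by rounding.

```python
# Encode a bit string as lattice values, add a small perturbation that
# models the |zeta - f| < eps error, and decode by rounding back.

def encode_bits(bits, eps):
    """Bit b -> nonzero lattice value: +2*eps for 1, -2*eps for 0."""
    return [(2 * eps if b else -2 * eps) + 0j for b in bits]

def decode_values(values):
    """Decode by the sign of the real part; tolerates errors below 2*eps."""
    return [1 if v.real > 0 else 0 for v in values]

bits = [1, 0, 1, 1, 0]
noisy = [v + 0.04 for v in encode_bits(bits, 0.1)]   # perturbation < eps
print(decode_values(noisy) == bits)                   # prints True
```

The only job the zeta function has in the real scheme is to supply the noisy values; the combinatorial encoding and decoding are this simple.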

It is amazing enough that we can do this for just one row, one-dimensionally. That we can do this in two dimensions—say encoding a larger $x$ as a matrix and mapping blocks inside $x$ to single values $v_{k\ell}$—is even more amazing. All of this gets encoded by a single numerical value $t$. Let’s call this the **Zeta Code**.

Is the Zeta Code useful? We cannot expect $t$ to be a simple value—it must have Kolmogorov complexity at least as great as that of the $x$ it encodes. But this already bespeaks a connection to complexity theory. The code’s performance is so good that the working $t$’s have positive limit density. That is, for any $f$ and $\epsilon > 0$:

$$\liminf_{T \to \infty} \frac{1}{T}\,\mu\left(\left\{ t \in [0, T] : \max_{s \in K} |\zeta(s + it) - f(s)| < \epsilon \right\}\right) > 0,$$

where $\mu$ is Lebesgue measure. We can define $d(x)$ in terms of $x$ alone by maximizing over the $\epsilon$, $n$, and $b$ satisfying the above constraints. Then $d(x)$ can be called the “zeta-density” of the string $x$. What might be its significance?
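The positive-density claim suggests a crude experiment (entirely our sketch; the point $s_0$, target value $v$, tolerance, range, and sample count are arbitrary choices): sample random $t \in [0, T]$ and count how often $\zeta(s_0 + it)$ lands within $\epsilon$ of $v$.

```python
# Monte-Carlo estimate of the fraction of t in [0, T] for which
# zeta(s0 + it) lies within eps of a target value v.
import random

def zeta(s, terms=20_000):
    """Crude zeta via the eta series; adequate for an illustration."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

def density_estimate(s0, v, eps, T, samples=60, seed=1):
    """Estimated fraction of 'good' t values in [0, T]."""
    rng = random.Random(seed)
    hits = sum(abs(zeta(s0 + 1j * rng.uniform(0.0, T)) - v) < eps
               for _ in range(samples))
    return hits / samples

print("estimated density:", density_estimate(0.75, 1.0 + 0j, eps=0.5, T=200.0))
```

With loose tolerances the estimate is visibly positive; shrinking $\epsilon$ drives it down, though the theorem guarantees it never reaches zero in the limit.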

## Can Zeta Simulate Itself?

Another way complex analysis could be like computational complexity is self-reference. The function $\zeta(s)$ is analytic away from the unique pole at $s = 1$. It is non-zero away from the trivial zeroes on the negative $x$-axis, and away from the non-trivial zeroes, which the Riemann Hypothesis asserts all have real part $\frac{1}{2}$.

Thus Voronin’s theorem says that the $\zeta$-function on the critical strip can approximate the $\zeta$-function on disk-like regions sufficiently far away from the critical line, to any degree of approximation. In that sense it is “fractal.” But can it simulate itself via the theorem within the strip, getting arbitrarily close to the critical line?

The question is equivalent to the Riemann Hypothesis itself. For suppose the hypothesis were false. Then there would be a zero $\rho = \sigma + i\tau$ for some fixed $\sigma > \frac{1}{2}$. Now fix $\delta$ such that $\frac{1}{2} + \delta < \sigma$, and take $K$ of width and height sufficient to include $\rho$ while staying within the strip. This $K$ includes the zero, so the condition of Voronin’s theorem does not apply. Thus if the condition applies for $K$ arbitrarily close to the critical line, then Riemann must be true. The converse also holds: if Riemann is true then $\zeta$ is analytic and non-vanishing arbitrarily close to the critical line, and we can apply the theorem to produce infinitely many regions where $\zeta$ replicates itself within any desired precision.

This picture has been sharpened just this past summer in a paper by Johan Andersson, titled “Non universality on the critical line.” He shows that $\zeta$ definitely does *not* have analogous universal behavior on the line itself. Thus ideas of universal simulation, which we think of as Alan Turing’s province, matter directly to the Riemann Hypothesis, which Turing himself worked on.

## Open Problems

Can more be derived from this nexus of complexity and complex analysis? Can it help attack the Riemann Hypothesis itself?

I (Dick) find Voronin’s universality theorem quite surprising. One of the reasons I find it cool is that it seems to be linked—in some way—to the complexity of computing the zeta function. I cannot prove this yet, but the fact that “all” functions are encoded should in principle yield a lower bound on the cost of evaluating the zeta function. Does anyone have an idea of how to make this precise?

What does this do for physicists – can their sums over all surfaces be reduced to integrations over functionals of the Riemann zeta-function?

Amazing theorem.

Tiny errata: Your equation for the Zeta function uses “k” for the summation, but “n” in what follows.

fnord,

Thanks for catch. We try to avoid errors, misteaks do occur—i mean mistakes do occur.

dick

Thanks. Some errores are invisible :-).

One interpretation of the theorem is that there simply aren’t that many analytic functions; otherwise a single real parameter (i.e. $t \in \mathbb{R}$) wouldn’t be enough to parametrize arbitrarily good approximations to all of them.

It’s been a while since I spent any time doing complex analysis, but I seem to remember encountering other theorems which had the same flavor, i.e. analytic functions are “rare”.

Very interesting, thanks for the post!

Here’s an initial reaction for why this feels tricky. zeta can approximate arbitrarily well the function g(x) = Chaitin’s constant (for instance). Since we know zeta is computable, it must be the case that zeta does so only on an (uncomputable) sequence of (uncomputable?) reals.

Similarly, zeta can solve any NP-Complete problem by mapping instances to the constant that encodes its certificates; but if the $t$ required to do so grows exponentially with the size of the input, then we haven’t learned much about the internals of the zeta function — it’s just a passive pump.

So it seems hard to understand zeta’s computational power or limits without understanding its language — the sizes of its inputs and outputs. I think it’d be very cool to follow up on zeta density and think about how much “space” zeta has to encode all of these functions. I wonder what happens if we try to make a counting argument with the number of computable analytic functions to show that they just can’t all fit — some functions will need to be encoded exponentially (that is, the associated sequence of $t$s will need to grow exponentially).

But I think such a result would make finding complexity limits on zeta tougher, because it would show that the zeta function “offloads” difficult computation to its input (rather than showing that zeta itself solves hard problems).

I am far from an expert here, but I remember hearing a talk by Gauthier on universality results for various kinds of entire functions. See some seemingly related details here:

http://www.mfo.de/document/0807a/OWR_2008_06.pdf

which suggests that universality should somehow be generic. (But perhaps only in the sense that a “generic” continuous function should be nowhere differentiable.)