
Hunting Complexity in Zeta

December 20, 2016

A second look at Voronin’s amazing universality theorem


Anatoly Karatsuba and Sergei Voronin wrote a book on Bernhard Riemann’s zeta function, which was translated into English by Neal Koblitz in 1992. Among its special content is an expanded treatment of an amazing universality theorem about the classic zeta function proved by Voronin in 1975. We covered it four years ago.

Today Ken and I take a second look and explore a possible connection to complexity theory.

Although the theorem has been known for over forty years there is more to say about it. Voronin extended it to other Dirichlet {L}-functions in ways improved by Bhaskar Bagchi, who also proved universality for the derivative of zeta. Within the past fifteen years, Ramūnas Garunkštis wrote two papers improving the effectiveness of the bounds in the theorem, and Jörn Steuding wrote two papers bounding the density of real values {\tau} such that {\zeta(s + 0.75 + i\tau)} approximates a given complex function {f(s)}. Both were part of a 2010 followup paper. Steuding also wrote a wonderful set of notes on the theorem. A 2011 paper by Chris King with beautiful graphic images expands a short 1994 preprint by Max See Chin Woon on how {\zeta} encodes universal information, an aspect highlighted in our earlier post and on Matthew Watkins’s nice page on Voronin’s theorem.

I have known about the theorem for much of this time, but just recently thought there might be some natural way to exploit it to get computational complexity information about the zeta function. Is it surprising to think of such a link? Karatsuba himself emblemizes it, because he devised the first algorithm for multiplying two {n}-digit integers that beats the grade-school {O(n^2)} method. The study of {\zeta} is so rich that a wider translation of complexity aspects could have profound impact.

The Zeta Function

Recall the famous zeta function is defined for complex {s} with real part {\Re(s)>1} by the summation:

\displaystyle  \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}.

The zeta function is then extended to all complex numbers, except for a pole at {s=1}, by analytic continuation via the Riemann functional equation.
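For real {s>1} the series converges absolutely and is easy to check numerically. Here is a minimal Python sketch (ours, not from the post; the function name is illustrative) that compares a truncation of the series with the known value {\zeta(2)=\pi^2/6}:

```python
import math

def zeta_partial(s, terms=100000):
    """Partial sum of the Dirichlet series for zeta(s); valid for Re(s) > 1.
    For s = 2 the truncation error is on the order of 1/terms."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

# Sanity check against the known closed form zeta(2) = pi^2/6.
approx = zeta_partial(2.0)
exact = math.pi**2 / 6
```

Of course this says nothing about the critical strip, where the series diverges and the analytic continuation takes over.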

The zeta function holds the secrets to the global behavior of the primes. Even the seemingly weak statement that it does not vanish on complex {s = \sigma + i\tau} with {\sigma \ge 1} is enough to prove the Prime Number Theorem. This theorem says that the number of primes less than {x} is approximately equal to {x/\ln x}.

The critical strip of the zeta function consists of the {s} with real part {\sigma} strictly between {0} and {1}. The critical line is the middle line where {\sigma=1/2}.


The famous Riemann Hypothesis states that all complex zeroes of the zeta function lie on the critical line. This problem remains wide open, although proofs have been claimed many times. Even proving that no zero {s} lies in the region with real part in {[.99,1)} seems hopeless. But we can all hope for surprises. Perhaps 2017 will be the magical year for a breakthrough, since it is prime and the only prime between 2011 and 2027.

The Zeta Function Is Universal

Let’s turn to a formal statement of Voronin’s Theorem (VT):

Theorem 1 Let {0<r<1/4} and let {g(s)} be continuous and non-vanishing on the disc {|s| \le r} and analytic in its interior. Then for every {\epsilon>0} there exists {\tau=\tau(\epsilon)} such that

\displaystyle  \max_{|s| \le r} \left|g(s) - \zeta(s+3/4+i\tau) \right| < \epsilon.

Voronin’s book with Karatsuba starts by proving this with the natural logarithm {\ln \zeta(s)} in place of {\zeta(s)}, taking the branch of {\ln \zeta(s)} that is real for real {s}. Then {g} is allowed to have zeroes on the disc. Antanas Laurinčikas has extended universality to some other functions {F(\zeta)} besides {\ln \zeta}.

Really Voronin proved even more: for every {\epsilon>0}:

\displaystyle  \liminf_{T \rightarrow \infty} \frac{1}{T}\, \mu \left\{ \tau \in [0,T]: \max_{|s| \le r} \left|g(s) - \zeta(s+3/4+i\tau)\right| < \epsilon \right\} > 0,

where {\mu} is Lebesgue measure, with a similar statement for {\ln \zeta}.


If the above statements seem a bit technical, let’s try a more informal notion of what VT says. Let {U} be some infinite binary string:

\displaystyle  U_{1}U_{2}U_{3}\cdots

We might agree to call it “universal” provided for any finite string {a_{1}a_{2}\cdots a_{m}} there is a {t} so that

\displaystyle  a_{1}a_{2} \cdots a_{m} = U_{t+1}U_{t+2} \cdots U_{t+m}.

Clearly such universal strings exist. Even stronger, we could insist that there is not just one {t} that makes the above true but infinitely many {t}’s. Stronger still, we could insist that the set

\displaystyle  \{ t \mid a_{1}a_{2} \cdots a_{m} = U_{t+1}U_{t+2} \cdots U_{t+m} \}

has positive lower density. That is, there is some {c > 0} such that for all sufficiently large {T}, the number of {t} in the set that occur in the interval {[0,T]} is at least {cT}.
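The existence claim for strings is easy to realize concretely: concatenating all binary strings in order of length gives a universal string, since each pattern appears verbatim as one of the blocks. A minimal Python sketch (the construction and names are ours, purely illustrative):

```python
from itertools import product

def universal_prefix(max_len):
    """Concatenate every binary string of length 1, 2, ..., max_len in order.
    Any pattern of length <= max_len occurs in the result, since it appears
    verbatim as one of the concatenated blocks."""
    return "".join("".join(bits)
                   for length in range(1, max_len + 1)
                   for bits in product("01", repeat=length))

U = universal_prefix(8)  # a prefix of the infinite universal string
```

In the full infinite construction each fixed pattern in fact occurs with positive lower density, much as in a normal number, which is the string analogue of the lim inf statement above.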

What VT says is essentially a modification of this simple fact about strings. The infinite string {U} becomes the values of the zeta function at arguments {s+3/4+i\tau} as {\tau} runs over the real numbers. The finite string {a} is replaced by an analytic function {g} defined on a fixed-size disc. And exact equality becomes approximation to within some {\epsilon>0}.

What is exciting is not that this is possible but that it happens for a natural and very important particular function: the zeta function. Although universality was long known as an existential phenomenon of power series in analysis, Steuding’s notes remark (emphasis ours):

The Riemann zeta-function and its relatives are so far the only known explicit examples of universal objects.

An Effective Book of Zeta?

One easy application of universality is that on a positive-density set of discs in the critical strip one can compute {\zeta(s)} to within any {\epsilon>0} in time {O(1)}. This is simply because one can take {g} to be the constant function {g(s) = 1} on the disc. Or one can take {g(s) = 2} or {g(s) = b} for any fixed constant {b}.

Well, just not {b = 0}: if {\zeta} could approximate {0} arbitrarily closely in a disk centered on the real part being {3/4} then by analyticity the Riemann Hypothesis would be false. One can, however, use {b = \epsilon/2} to stand for {0} when {\epsilon} is given, or handle {b = 0} directly using {\ln \zeta} in the strip. Perhaps what follows is better understood using {\ln \zeta} after all, which is how Karatsuba and Voronin develop the proofs in their book.

For different constants {b} we get different sets {Z_{b,\epsilon}} of {t} such that {\zeta(s + 3/4 + it)} approximates {b} within {\epsilon}. And we get different sets {L_{b,\epsilon}} such that {\ln\zeta(s + 3/4 + it)} approximates {b} within {\epsilon}. Our first question is:

How do the sets {Z_{b,\epsilon}} relate to each other? Likewise the sets {L_{b,\epsilon}}?

Most particularly, can we “add” these sets? What we mean is that there should be an effective procedure {\mathcal{A}} that, given specifiers {\lambda_1} for {L_{b_1,\epsilon}} and {\lambda_2} for {L_{b_2,\epsilon}}, outputs a specifier {\lambda_3} for {L_{b_1+b_2,\epsilon}}. The specifiers could just be members {t_1,t_2,t_3} of the sets. Allowing {b_2 = -b_1} gives subtraction as well, though what follows can work with positive integers only.

Next, can we “multiply”? That is, can we find an effective procedure {\mathcal{M}} that given {\lambda_1} and {\lambda_2} outputs a specifier {\lambda_3} for {L_{b_1\cdot b_2,\epsilon}}?

Now we can tell where this is going. We will build up arithmetical formulas and circuits for functions {f} and {g} using constants and {+,*} gates. Given specifiers {\lambda_f} and {\lambda_g}, we want:

\displaystyle  \mathcal{A}(\lambda_f,\lambda_g) = \lambda_{f+g},\qquad \mathcal{M}(\lambda_f,\lambda_g) = \lambda_{f\cdot g}.

We will need to start with some basic functions beginning with the identity function {f(x) = x}. Our functions are formally defined on the complex disc {|s| < r} but it is easy to represent functions on {\mathbb{N}} over this domain. The main limitation noted by Garunkštis is that his effectivizations for {\ln\zeta} apply only to discs of small radius {r}, but any radius seems good for encoding discrete functions.

We certainly expect {\mathcal{A}} to be available. The proofs of universality all work by representing {g(s)} as an infinite series. One can add the terms for {f(s)} and {g(s)} to get a series for {f + g}. The multiplication case {\mathcal{M}} would then involve products of infinite series but perhaps they are well-behaved. The hardest challenge may be to make this work neatly when the tokens {\lambda_f,\lambda_g} give only partial information about {f} and {g}. We may also wish to implement some recursion operations besides {+} and {*}. But once we have—if we have—a rich enough set of effective operations, we can program the whole panoply of computable functions into {\zeta}. Note that these representations are already known to exist for any {f}; we are just talking about how easy it is to program them.
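To fix intuition for the kind of buildup intended, here is a toy evaluator for arithmetical circuits over constants and {+,*} gates, the structure that the hypothetical procedures {\mathcal{A}} and {\mathcal{M}} would have to mirror on zeta-specifiers. The gate encoding is ours and purely illustrative:

```python
def evaluate(circuit, x):
    """Evaluate an arithmetical circuit given as a list of gates.
    "add" and "mul" gates refer to earlier gates by index;
    "input" is the identity function f(x) = x."""
    vals = []
    for gate in circuit:
        if gate[0] == "const":
            vals.append(gate[1])
        elif gate[0] == "input":
            vals.append(x)
        elif gate[0] == "add":
            vals.append(vals[gate[1]] + vals[gate[2]])
        elif gate[0] == "mul":
            vals.append(vals[gate[1]] * vals[gate[2]])
    return vals[-1]  # output of the last gate

# A circuit for f(x) = x*x + 3, built from the identity and a constant.
circuit = [("input",), ("mul", 0, 0), ("const", 3), ("add", 1, 2)]
```

The speculative program above would carry out this same evaluation, but with each gate value {b} represented only indirectly by a specifier {t} with {\ln\zeta(s + 3/4 + it) \approx b}.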

The potential payoff is that questions about whether certain complex arithmetic functions have small circuits or formulas can perhaps be attacked via the resulting constraints on the behavior of {\zeta(s)}. Ultimately what is going on is a kind of discrete programming using the prime numbers that go into product formulas for {\zeta(s)}. For {\ln\zeta} and arguments {s = \sigma + i\tau} with {\sigma > 1} there is the particularly nice summation formula

\displaystyle  \ln\zeta(s) = \sum_p \sum_{k=1}^{\infty} \frac{1}{kp^{ks}},

where {p} ranges over all primes, which is then continued inside the strip. The point is that {\zeta} channels a “natural” way to do the programming. Lots of tools of analysis might be available for complexity questions framed this way.
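For real {s>1} the double sum over primes can be checked directly against {\ln\zeta(s)}. A minimal Python sketch (ours; the truncation bounds are illustrative):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def log_zeta(s, prime_bound=1000, kmax=40):
    """Truncation of ln zeta(s) = sum over primes p, k >= 1 of 1/(k p^{ks}),
    valid for Re(s) > 1; the tail over omitted primes is roughly
    1/(prime_bound * ln(prime_bound)) for s = 2."""
    return sum(1.0 / (k * p**(k * s))
               for p in primes_up_to(prime_bound)
               for k in range(1, kmax + 1))

# Cross-check at s = 2 against ln(zeta(2)) = ln(pi^2/6).
```

Inside the strip one no longer has this absolutely convergent sum, which is exactly where the analytic continuation, and the difficulty, lives.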

Well, this might be a pipe dream—especially since we can’t yet even tell whether {\ln\zeta} blows up in the kind of discs we are considering—but this is the week when Santa smokes his pipe preparing for a long journey and lots of children have dreams.

Open Problems

Does our “book of zeta” idea seem feasible to compile?

2 Comments
  1. o0o permalink
    December 23, 2016 1:31 pm

    I didn’t really get the “point” of zeta’s universality until this post… a bit mindblowing!

  2. December 24, 2016 12:42 pm

    have long wished there was more study of riemann hypothesis by CS experts & would like to attack it someday. universality sounds a lot like fractal self-similarity to me. reminds me of a general framework for studying such problems related to undecidability, refs d10 d11 “evaluating the complexity of math problems” by Calude in this writeup see also wigderson computational complexity and pseudorandomness
