A missed? or forgotten? connection from 1976

Richard Brent is an illustrious Australian mathematician and computer scientist. He is known for Brent’s Theorem, which shows that a parallel algorithm can always be adapted to run on fewer processors with only the obvious time penalty—a beautiful example of an “obvious” but non-trivial theorem. He also discovered how to modify arithmetical or Boolean formulas to have depth logarithmic in their size. Both of these theorems were proved in 1974.

Today we connect another paper by Brent to the famous conjecture by Juris Hartmanis and Richard Stearns which we recently covered here.

Brent was one of the first to think concretely about the complexity of computing elementary numerical functions. In the spirit of Alan Turing he worked on computing fundamental constants to any desired precision. With Eugene Salamin he re-discovered a fast algorithm for computing the digits of ${\pi}$, and their tweaks may even make this an exception to the rule that everything is named for Gauss. He found a new method for computing the Euler-Mascheroni gamma constant. Also following Turing, he verified that the first 75 million nontrivial zeroes of the Riemann zeta function lie on the critical line. He has recently authored a textbook on computer arithmetic with Paul Zimmermann.

How do problems of computable numbers impact complexity classes and longstanding open questions? That is what we are asking here.

## Hartmanis-Stearns Conjecture Revisited

The Hartmanis-Stearns conjecture, which has been open since 1965, is:

Suppose that a real-time Turing Machine computes the real number ${r}$ in base ten, or in any natural-number base. Then ${r}$ is either a rational number or a transcendental number.

Thus, a real-time computable number cannot be a non-rational algebraic number such as ${\sqrt 2}$. The motivation for this conjecture seems to be the following two observations:

• Clearly a rational number can always be computed by a real-time Turing Machine: this follows since the expansion of a rational number is eventually periodic in any base.

• Also many interesting transcendental numbers can be computed by a real-time Turing Machine. For example the famous number

$\displaystyle \sum_{j=0}^{\infty} 10^{-j!},$

which is called a Liouville number, can easily be computed in real time.
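For concreteness, here is a small Python sketch—our own illustration, not the real-time machine itself—of how easy the digits of this Liouville number are to produce: a digit is nonzero exactly when its position is a factorial.

```python
def liouville_prefix(n):
    # First n decimal digits (after the point) of sum_{j=0}^oo 10^(-j!).
    # Note 0! = 1! = 1, so the terms for j = 0 and j = 1 both land on
    # position 1, giving a leading digit 2.  Terms with j! > n sum to
    # less than 10^(-n) and the digits just past position n are zeros,
    # so no carry can disturb the first n digits.
    total = 0            # the prefix as an integer numerator over 10**n
    f, j = 1, 0          # f = j!
    while f <= n:
        total += 10 ** (n - f)
        j += 1
        f *= j
    return str(total).rjust(n, "0")

print(liouville_prefix(10))
```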

We cannot resolve this great conjecture, but we can show that it is not just a curiosity. No. It is connected directly to two central questions of complexity theory. These connections show that any positive resolution of the conjecture would be quite difficult and surprising.

## The Results

In the following we are interested in Turing Machines (TMs) that act as generators: they have no input, but from time to time output a symbol. We say that a TM generates a sequence ${s_{1},s_{2},\dots}$ provided it outputs ${s_{1}}$, then ${s_{2}}$, and so on. Such a TM has delay ${c}$ if the largest gap between the output of symbols is ${c}$. If ${c=1}$, such a machine is called a real-time TM. The following is our main observation, which is originally due to Patrick Fischer, Albert Meyer, and Arnold Rosenberg in this paper in 1970:

Theorem FMR. Let ${r = r_{1},r_{2},r_{3},\dots}$ be a sequence over a fixed alphabet. Then, the following are equivalent:

1. There is a Turing Machine that generates the sequence ${r}$ with delay ${1}$.

2. There is a Turing Machine that generates the sequence ${r}$ with delay ${c}$, for some constant ${c}$.

3. There is a Turing Machine that given input ${1^{n}}$ outputs

$\displaystyle r_{1},r_{2},\dots,r_{n}$

in time bounded by ${O(n)}$.

By the integer multiplication problem we mean: given ${x}$ and ${y}$, determine ${z = x \times y}$. The time to do this when ${x}$ and ${y}$ are at most ${n}$ bits long is denoted as usual by ${M(n)}$.

Theorem B. The Hartmanis-Stearns conjecture implies that ${M(n)}$ is super-linear.

Theorem C. The Hartmanis-Stearns conjecture implies that

$\displaystyle \mathsf{DTIME}(n) \neq \mathsf{NTIME}(n).$

The last two theorems show why the conjecture is so deep. A proof of it would in one step prove a non-linear lower bound on integer multiplication and also separate deterministic and nondeterministic time. The former is wide open, and seems beyond reach of modern methods. The latter is proved, but its proof is quite deep, and the mechanics of the latter theorem might give a wider time separation than what is currently known.

## The Hartmanis-Stearns Conjecture Restated

The power of the simple Theorem FMR is that linear time can be converted to real time, for the generation of sequences. The main “trick” is to compute the next block of symbols of the sequence while the current block is being output. The doubling trick, an old standby of the theory, lets each stage’s computation start from scratch, yet not take so long that the whole machine fails to stay real-time.

Thus, the Hartmanis-Stearns conjecture can be re-stated as follows:

Suppose that a linear time Turing Machine computes the first ${n}$ digits of the real number ${r}$ in base ten. Then, the number is either a rational number or a transcendental number.

There is no need to mention real-time at all. Ken in fact recalls that this is how the conjecture was stated to him at a Schloss Dagstuhl meeting in the 1990s. But we have had difficulty finding this in the literature—all sources we’ve found state fixed-delay or real-time, including this year’s survey paper by Rusins Freivalds. Until, that is, we were contacted in comments about FMR—though we still give the proof to keep this post self-contained.

One source of confusion we have seen is this: sometimes the Hartmanis-Stearns conjecture is stated as requiring that the ${n^{th}}$ digit be generated within time linear in ${n}$. This is equivalent to real time, so that may be what Ken recalls. Or perhaps not.

## Linear Time to Real Time

Here are the proofs.

Proof of Theorem FMR: It has long been known that (1) and (2) are equivalent. This follows by the method of constant speed-up—one simply uses a larger alphabet during the computation. Also (2) certainly implies (3): just run the generator until ${n}$ symbols have been output and then stop. So the only direction we need to prove is that (3) implies (2).

So assume that there is a TM ${M}$ that given input ${1^{n}}$ outputs ${r_{1},r_{2},\dots,r_{n}}$ in time bounded by ${O(n)}$. We will show that there is another TM that generates ${r}$ with constant delay ${c}$—the exact value of ${c}$ will be clear after we describe the machine.

The new TM ${G}$ will run two separate computations at the same time. We will think of the two computations as being run by our usual players Alice and Bob. Each player has their own tapes: each has two special tapes, which we will call the counter tape and the output tape respectively. Strictly we should say “the counter tape of Alice (Bob),” but whose tape is meant will be clear from the context.

The computation of ${G}$ is divided into a series of stages. The key is that at stage ${k}$ one of Alice and Bob will be the leader and the other the follower. At any stage ${k}$ let ${n=2^{k}}$. They operate as follows:

${\bullet}$ The leader has inductively ${1^{n}}$ written on its counter tape, and has on its output tape the next ${n}$ symbols of ${r}$ with its head at the leftmost symbol. All its other tapes are blank. During this stage the leader outputs each of the symbols from the output tape, taking ${c}$ steps per symbol; here ${c}$ is an absolute constant that will be determined in a moment. As it outputs the symbols it erases them, and at the same time it changes its counter tape to contain ${1^{2n}}$ and places its head at the leftmost symbol.

${\bullet }$ The follower also has ${1^{n}}$ on its counter tape, with the head at the leftmost symbol. All of its other tapes are blank. The follower then simulates ${M}$ on input ${1^{4n}}$, writing the ${4n}$ output symbols onto its output tape. Then it erases all its other work tapes and updates its counter tape to ${1^{2n}}$. Finally, it erases the first half of the output tape and leaves the tape head at the leftmost remaining symbol of that tape.

At the end of stage ${k}$, Alice and Bob switch roles: the leader becomes the follower and the follower the leader. They continue as above. They do this forever. This ends the description of the TM ${G}$.

We now make two claims:

1. The machine ${G}$ does indeed correctly generate ${r}$.

2. The machine ${G}$ has delay ${c}$ for some constant ${c}$.

These claims clearly will prove the theorem.

Claim (1) is proved inductively: at stage ${k}$ the leader is outputting the next ${2^{k}}$ symbols of ${r}$. The key insight is that the follower computes all the first ${2^{k+2}}$ symbols, but only keeps the last ${2^{k+1}}$. Thus, when it becomes the leader at the next stage, it will output correctly the next ${2^{k+1}}$ symbols.

Claim (2) follows easily from the fact that the TM ${M}$ runs in linear time. Thus all the work that the follower needs to do takes ${O(2^{k})}$ steps, which fits within the time the leader uses to output all its symbols.

$\Box$
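The stage scheme above can be mocked up in a few lines of Python. This is our own sketch of the schedule only: `prefix(n)` stands in for the linear-time machine ${M}$, Python evaluates the follower’s work up front whereas a multitape TM interleaves it with the output steps to get true constant delay, and we have adjusted the block indices so that each follower block is exactly the next run of symbols.

```python
import itertools

def fixed_delay_generator(prefix):
    # Ping-pong schedule from the proof: `prefix(n)` stands in for the
    # linear-time machine M and returns the first n symbols of r.  While
    # the leader's block is being output, the follower prepares a block
    # twice as long; then the two swap roles.
    pos, n = 0, 1                      # symbols emitted so far, block length
    leader = prefix(1)                 # stage 0: the leader holds r_1
    while True:
        follower = prefix(pos + 3 * n)[pos + n:]   # the next 2n symbols
        for s in leader:               # leader outputs, constant delay
            yield s
        pos += n
        leader, n = follower, 2 * n    # swap roles; blocks double

# usage: stream the Thue-Morse sequence, whose prefixes are computable in
# linear time, through the fixed-delay schedule.
thue_morse = lambda n: "".join(str(bin(i).count("1") & 1) for i in range(n))
print("".join(itertools.islice(fixed_delay_generator(thue_morse), 16)))
```

Note that the follower’s work at a stage is linear in the stage length, matching Claim (2) in the proof.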

## The Other Proofs

Proof of Theorem B: Richard Brent’s 1976 paper shows that the first ${n}$ digits of ${\sqrt{2}}$, in any base ${b}$, can be approximated to within additive error ${1/b^n}$ in ${O(M(n))}$ time, on a multitape Turing machine. It makes the assumption that for all ${\alpha < 1}$ there exists ${\beta < 1}$ such that ${M(\alpha n) < \beta M(n)}$ for all large enough ${n}$, but this is certainly satisfied if ${M(n)}$ is linear.

Now approximation to within error ${1/10^n}$, say, is not the same as finding the first ${n}$ digits, because the value may have long runs of ${000000\dots}$ or ${99999\dots}$. The value could even alternate such runs in any integral base. However, staying with base ${10}$, there must exist a fixed ${\gamma < 1}$ such that for all large enough ${n}$ the run occupies no more than the last ${\gamma n}$ digits. Otherwise ${\sqrt{2}}$ would have rational approximations ${p/q}$ with ${q = 10^{(1-\gamma)n}}$ giving

$\displaystyle \left|\frac{p}{q} - \sqrt{2}\right| < \frac{1}{q^r}$

with ${r = \gamma/(1 - \gamma)}$ being unbounded. This would make ${\sqrt{2}}$ a Liouville number and hence transcendental, a contradiction. Hence in ${O(M(n))}$ time, Brent’s method guarantees the first ${(1 - \gamma)n}$ correct digits of ${\sqrt{2}}$.

Thus we have that if ${M(n) = O(n)}$, the first ${(1 - \gamma)n}$ digits of ${\sqrt{2}}$ can be output in linear time. By Theorem FMR, this would contradict the Hartmanis-Stearns conjecture. $\Box$
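To see why the ${O(M(n))}$ claim is plausible, here is a hedged Python sketch of a Newton iteration for ${\sqrt{2}}$ in fixed-point integer arithmetic. This is our illustration only, not Brent’s actual routine, which iterates on ${1/\sqrt{2}}$ to avoid divisions and grows the working precision with the iterate so that the cost telescopes to ${O(M(n))}$.

```python
def sqrt2_scaled(n_digits):
    # Newton's iteration x <- (x + 2/x) / 2 in fixed point: returns an
    # integer within a unit or two of sqrt(2) * 10**n_digits.  The number
    # of correct digits roughly doubles each step.  (This simple sketch
    # works at full precision throughout; a production version raises the
    # precision step by step so the last multiplication dominates.)
    scale = 10 ** n_digits
    two = 2 * scale * scale            # the constant 2 at doubled scale
    x = scale                          # initial guess: 1.0
    while True:
        nxt = (x + two // x) // 2
        if abs(nxt - x) <= 1:          # converged to within one unit
            return nxt
        x = nxt

print(str(sqrt2_scaled(50))[:20])
```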

To prove Theorem C, we note the following “folklore” result.

Folk Lemma.
Integer multiplication can be done in linear time on an alternating TM with a fixed number of alternations.

Proof: Given ${n}$-bit integers ${x}$ and ${y}$, it suffices to guess ${z}$ and verify ${x\cdot y = z}$. Further, we guess ${2n/\ell}$-many ${\ell}$-bit numbers ${m_i}$ and remainders ${x_i,y_i,z_i}$ of ${x,y,z}$ respectively modulo ${m_i}$. Although not all of the ${m_i}$ need be primes, there are enough primes in the range whose product is greater than ${z}$, so equality follows if ${x_i \cdot y_i \equiv z_i \pmod{m_i}}$ and the remainders are correct for each ${i}$. Accordingly we use a universal quantification over ${i}$. It remains to verify that ${x}$ modulo ${m_i}$ equals ${x_i}$ for a particular ${i}$. For our use below we could afford to expend a second pair of alternations, to guess remainders at certain points in the long division, but in fact this is doable in deterministic linear time. $\Box$
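Here is a deterministic Python stand-in for the lemma’s guess-and-verify step, our illustration only: the alternating machine guesses the moduli and residues, while here we simply compute small primes and check agreement.

```python
def crt_check(x, y, z):
    # x*y == z iff they agree modulo pairwise-coprime moduli whose
    # product exceeds both sides (Chinese Remainder Theorem), provided z
    # is small enough to possibly equal x*y.
    bits = max(x.bit_length() + y.bit_length(), 1)
    if z < 0 or z.bit_length() > bits:
        return False                    # z too large to equal x*y
    primes, prod, m = [], 1, 2
    while prod.bit_length() <= bits:    # ensure prod > x*y and prod > z
        if all(m % p for p in primes):  # m has no smaller prime factor
            primes.append(m)
            prod *= m
        m += 1
    return all((x % p) * (y % p) % p == z % p for p in primes)

print(crt_check(1234567, 7654321, 1234567 * 7654321))
print(crt_check(1234567, 7654321, 1234567 * 7654321 + 1))
```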

Proof of Theorem C: Although this kind of collapse is not known for all time bounds ${t(n)}$, the following was proved as a key lemma in the famous paper that separated deterministic and nondeterministic linear time for multitape Turing machines:

$\displaystyle \mathsf{NTIME}[O(n)] = \mathsf{DTIME}[O(n)] \implies \mathsf{\Sigma_4}\text{-}\mathsf{TIME}[O(n)] = \mathsf{DTIME}[O(n)].$

Since integer multiplication is in ${\mathsf{\Sigma_4}\text{-}\mathsf{TIME}[O(n)]}$, its non-membership in deterministic linear time implies ${\mathsf{NTIME}[O(n)] \neq \mathsf{DTIME}[O(n)]}$. Thus the Hartmanis-Stearns conjecture implies this separation. $\Box$

Of course ${\mathsf{NTIME}[O(n)] \neq \mathsf{DTIME}[O(n)]}$ is known, but the significant feature is that a proof via the Hartmanis-Stearns conjecture would evidently avoid the pebbling and graph-segregator details in the original proof. It might also extend the separation to other models such as Turing machines with planar tapes, for which it is currently open.

Finally, a time lower bound for multiplication would translate into one for nondeterministic (linear) time, perhaps greater than the current best known separation for time ${t(n) = n\sqrt{\log^* n}}$ by Rahul Santhanam.

## Open Problems

The main open question is simple: We have shown that the real-time condition of Hartmanis-Stearns is not needed. Was this known before? We find the proof of the main theorem pretty simple, but ${\dots}$ We have not (yet) found it in the literature, and neither of us recalls it from discussions. Perhaps it was known years ago, but somehow forgotten.

Also has the connection between the conjecture and integer multiplication been observed before? Finally, can we refute the Hartmanis-Stearns conjecture by computing some algebraic irrational number in linear time? Or, can we find a class of algebraic numbers whose computation is linear-time equivalent to integer multiplication being in linear time?

We also thank Jin-Yi Cai for comments on the ideas here.

Update: Albert Meyer kindly contributed a reference for the linear-to-real-time result in a joint paper of his, and we have updated the post to reflect this.

18 Comments
1. June 15, 2012 9:23 am

Here is a special case of the process in the proof of Theorem A. Let ${f}$ be any linear-time computable function that is length preserving: ${|f(x)| = |x|}$. We can insert rest steps if necessary to make the computation finish in exactly ${c|x|}$ steps. For any seed string ${x}$, we can define a real number ${r_x}$ as follows: Start with ${0.x}$. At each stage apply ${f}$ to the digits ${y}$ computed thus far, and append the result. Thus the expansion looks like

${0.x~f(x)~f(xf(x))~f(xf(x)f(xf(x))) \dots}$

Note that ${f}$ could output its result at the very end of its computation, so that the obvious way to output this sequence has irregular delay. To make it fixed-delay, we ping-pong the computation ${f(y)}$ at each stage between tapes owned by Alice and Bob. The invariant is that at each stage, one of them (say Alice) has two copies of ${y}$, and the generator ${G}$ has just finished outputting the digits before ${y}$. To maintain this invariant, all we need to say is that ${G}$ outputs ${y}$ in exactly the time that two copies of ${f(y)}$ are being computed onto Bob’s tapes. The delay is always the same constant ${c}$.
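The construction in this comment can be sketched in a few lines of Python. Our illustration only: `rot` is just a toy choice of a length-preserving, linear-time ${f}$.

```python
def iterated_expansion(f, x, total):
    # The commenter's r_x: start with seed x, repeatedly apply the
    # length-preserving map f to everything produced so far, and append.
    # Returns the first `total` symbols of the expansion
    # 0.x f(x) f(xf(x)) f(xf(x)f(xf(x))) ...
    digits = x
    while len(digits) < total:
        digits += f(digits)            # |f(y)| == |y|, so length doubles
    return digits[:total]

# a toy length-preserving, linear-time f: rotate the string left by one.
rot = lambda y: y[1:] + y[:1]
print(iterated_expansion(rot, "271", 12))  # -> 271712717122
```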

2. Nicholas Tran
June 15, 2012 1:21 pm

Interesting results! I take it that Turing machines in Theorem A are multi-tape ?

• June 15, 2012 5:58 pm

Yes—importantly, so are the ones in Brent’s reduction from ${\sqrt{2}}$ to integer multiplication, which seems to generalize via similar Newton methods to other algebraic numbers.

3. June 15, 2012 6:19 pm

Moshe Vardi mentioned this post at the ACM Turing 100 panel on the Turing model this afternoon, of which Hartmanis and Stearns are members. Hartmanis and Stearns had a discussion on stage about who should mention it – Hartmanis said to Stearns, “Why don’t you state the conjecture? You’re as guilty as I am.”

• June 16, 2012 11:49 am

Thanks for noting this. A couple more references and details: Brent typed up a bunch of his old papers in LaTeX—some with emendations—and posted them two years ago on the arXiv—including this companion to the paper we cover. There are also many new ones there. The same has been done at CMU for H. T. Kung’s paper “The computational complexity of algebraic numbers.”

A key detail perhaps needing further checking at the TM level is that the number of known bits grows by a constant factor at each stage, so the time for the k = O(log n) previous stages folds in to be linear in the time for the last stage. We note a possibly related qualifier “Provided that F(n) grows superlinearly” in this Citizendium article, which we found linked by this StackOverflow thread on Newton’s method, where F(n) is the cost of evaluating f-over-f-prime, but we do not see its need. We speculate there should be similar Newton methods for every algebraic number, or at least others.

4. June 16, 2012 2:24 pm

The equivalence of real time and linear time Turing Machine sequence generation was observed in
JOURNAL OF COMPUTER AND SYSTEM SCIENCES: 4, 50-73 (1970)
Time-Restricted Sequence Generation,
PATRICK C. FISCHER, ALBERT R. MEYER & ARNOLD L. ROSENBERG
which contrasted the result to the situation for counter-machines where it does not hold.

5. June 16, 2012 5:47 pm

Thank you! That also corroborates my memory of how the conjecture was stated to me.

6. June 16, 2012 5:53 pm

Three questions: First, in the proof of Theorem B, if you use the full power of Roth’s theorem, does it give a better lower bound on M(n) than just linear? Second, what is known regarding multiplication for bounded-depth computation? Third, is there a formulation with circuit complexity of something like the H-S conjecture, and again is something known for bounded depth?

7. December 15, 2012 3:51 pm

I do not understand several things.
could you explain?
Some silly questions?
1) in the proof. q=10^(1-gamma)n with gamma = 1 gives q=1 and r = infinity. But |p/q-sqrt(2)| < 1/q^r has a valid answer with p =1.
2) Gauss-legendre trick for pi guarantees calculation of pi to n digits in O(log(n)) steps. The seed step involves sqrt(2). If we cannot calculate sqrt(2) to n digits in O(log(n)) steps, there is no guarantee pi will be that accurate in O(log(n)) steps. Did mathematicians miss something?

Some really stupid questions?
1) How does having runs of 0000 or 9999 affect digits extracted versus precision? The eventual digit following 0000 or 9999 will not affect the previous digits in an infinite extraction? Say the next digits are 5555 following 99999, will this affect the previous digits until all the infinite digits have been extracted? This seems to imply that there is no reasonable time limit within which we can even guarantee n digits since to confirm after 999999 we would have to wait till 'all' the infinite digits are extracted?
2) Can one show that no strange series of 00000 or 99999 occur in sqrt(2)?