Why The Hartmanis-Stearns Conjecture Is Still Open
A missed? or forgotten? connection from 1976
Richard Brent is an illustrious Australian mathematician and computer scientist. He is known for Brent's Theorem, which shows that a parallel algorithm can always be adapted to run on fewer processors with only the obvious time penalty—a beautiful example of an "obvious" but nontrivial theorem. He also discovered how to modify arithmetical or Boolean formulas to have depth logarithmic in their size. Both of these theorems were proved in 1974.
Today we connect another paper by Brent to the famous conjecture by Juris Hartmanis and Richard Stearns which we recently covered here.
Brent was one of the first to think concretely about the complexity of computing elementary numerical functions. In the spirit of Alan Turing he worked on computing fundamental constants to any desired precision. He and Eugene Salamin independently rediscovered a fast algorithm for computing the digits of $\pi$, and their tweaks may even make this an exception to the rule that everything is named for Gauss. He found a new method for computing the Euler-Mascheroni gamma constant. Also following Turing, he verified that the first 75 million nontrivial zeroes of the Riemann zeta function lie on the critical line. He has recently authored a textbook on computer arithmetic with Paul Zimmermann.
How do problems of computable numbers impact complexity classes and longstanding open questions? That is what we are asking here.
Hartmanis-Stearns Conjecture Revisited
The Hartmanis-Stearns conjecture, which has been open since 1965, is:
Suppose that a real-time Turing Machine computes the real number $\alpha$ in base ten, or in any natural-number base. Then $\alpha$ is either a rational number or a transcendental number.
Thus, a real-time computable number cannot be a nonrational algebraic number such as $\sqrt{2}$. The motivation for this conjecture seems to be the following two observations:
Clearly a rational number can always be computed by a real-time Turing Machine: this follows since the expansion of a rational number is eventually periodic in any base.

Also many interesting transcendental numbers can be computed by a real-time Turing Machine. For example the famous number
$$\alpha = \sum_{k=1}^{\infty} 10^{-k!} = 0.110001\ldots,$$
which is called a Liouville number, can easily be computed in real time.
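To make "easily computed in real time" concrete, here is a small Python sketch (ours, not from the post) of the digit stream of the Liouville constant: digit $i$ is $1$ exactly when $i$ is a factorial, a condition a machine can track with constant work per digit.

```python
from math import factorial

def liouville_digits(n):
    """First n decimal digits (after the point) of sum_{k>=1} 10^(-k!).
    Digit i is 1 iff i = k! for some k >= 1, else 0."""
    facts = set()
    k = 1
    while factorial(k) <= n:     # collect all factorials up to n
        facts.add(factorial(k))
        k += 1
    return [1 if i in facts else 0 for i in range(1, n + 1)]
```

The positions of the 1s grow factorially, so the stream is almost all 0s; that extreme sparsity is both what makes the number Liouville and what makes generating it so easy.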
We cannot resolve this great conjecture, but we can show that it is not just a curiosity. No. It is connected directly to two central questions of complexity theory. These connections show that any positive resolution of the conjecture would be quite difficult and surprising.
The Results
In the following we are interested in Turing Machines (TMs) that act as generators: they have no input, but from time to time output a symbol. We say that a TM generates a sequence $a_1 a_2 a_3 \cdots$ provided it outputs $a_1$, then $a_2$, and so on. Such a TM has delay $d$ if the largest gap between the output of successive symbols is at most $d$ steps. If $d = 1$, such a machine is called a real-time TM. The following is our main observation, which is originally due to Patrick Fischer, Albert Meyer, and Arnold Rosenberg in this paper from 1970:
Theorem A [FMR]. Let $a_1 a_2 a_3 \cdots$ be a sequence over a fixed alphabet. Then, the following are equivalent:
1. There is a Turing Machine that generates the sequence with delay $1$.
2. There is a Turing Machine that generates the sequence with delay $c$, for some constant $c$.
3. There is a Turing Machine that given input $n$ outputs $a_1 a_2 \cdots a_n$ in time bounded by $O(n)$.
By the integer multiplication problem we mean: given $x$ and $y$, determine $x \cdot y$. The time to do this when $x$ and $y$ are at most $n$ bits is denoted as usual by $M(n)$.
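As a reminder of how much room there is between the schoolbook $n^2$ bound and linear time, here is the standard Karatsuba recursion in Python (a textbook sketch of ours, not tied to any claim in the post), which gives $M(n) = O(n^{\log_2 3})$ by trading four half-size products for three:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers via Karatsuba's three-product trick:
    xy = a*2^(2n) + c*2^n + b with a = xh*yh, b = xl*yl,
    c = (xh+xl)(yh+yl) - a - b."""
    if x < 256 or y < 256:           # small base case: use built-in multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b
    return (a << (2 * n)) + (c << n) + b
```

Whether the recursion (or anything else) can be pushed all the way down to $O(n)$ is exactly the question Theorem B touches.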
Theorem B. The Hartmanis-Stearns conjecture implies that $M(n)$ is superlinear.
Theorem C. The Hartmanis-Stearns conjecture implies that $\mathsf{DTIME}(O(n)) \neq \mathsf{NTIME}(O(n))$.
The last two theorems show why the conjecture is so deep. A proof of it would in one step prove a nonlinear lower bound on integer multiplication and also separate deterministic and nondeterministic linear time. The former is wide open, and seems beyond the reach of modern methods. The latter is proved, but its proof is quite deep, and the mechanics of the latter theorem might give a wider time separation than what is currently known.
The Hartmanis-Stearns Conjecture Restated
The power of the simple Theorem A is that linear time can be converted to real time, for the generation of sequences. The main "trick" is to output the current block of symbols of the sequence while the next block is being computed. The doubling trick, an ancient tool of the theory, allows the computation to start from scratch each time, yet not take so long that the whole machine fails to stay real-time.
Thus, the Hartmanis-Stearns conjecture can be restated as follows:
Suppose that a linear-time Turing Machine computes the first $n$ digits of the real number $\alpha$ in base ten. Then, the number $\alpha$ is either a rational number or a transcendental number.
There is no need to mention real time at all. Ken in fact recalls that this is how the conjecture was stated to him at a Schloss Dagstuhl meeting in the 1990's. But we have had difficulty finding this in the literature—all sources we've found state fixed-delay or real-time, including this year's survey paper by Rusins Freivalds. Until, that is, we were contacted in comments about FMR—though we still give the proof to keep this post self-contained.
One source of confusion we have seen is this: sometimes Hartmanis-Stearns is stated so that the running time to generate the first $n$ digits is linear in $n$. This is the same as real time by Theorem A, so that may be what Ken recalls. Or perhaps not.
Linear Time to Real Time
Here are the proofs.
Proof of Theorem A [FMR]: It is long known that (1) and (2) are equivalent. This follows by the method of constant speedup—one simply uses a larger alphabet during the computation. Also (2) certainly implies (3): just run the generator until $n$ symbols have been output and then stop. So the only result we need to prove is that (3) implies (2).
So assume that there is a TM $M$ that given input $n$ outputs $a_1 \cdots a_n$ in time bounded by $O(n)$. We will show that there is another TM $M'$ that generates $a_1 a_2 \cdots$ with constant delay $c$—the exact value of $c$ will be clear after we describe the machine.
The new TM $M'$ will run two separate computations at the same time. We will think of the two computations as being run by our usual players Alice and Bob. Each player has their own tapes: each has two special tapes which we will call respectively the counter tape and the output tape. Strictly we should say "the stage counter of Alice (Bob)," but which tape is meant will be clear from the context.
The computation of $M'$ is divided into a series of stages $k = 1, 2, 3, \ldots$ The key is that at stage $k$ one of Alice and Bob will be the leader and the other the follower. At any stage let $m = 2^k$. (The first few symbols can be output directly from the finite control to get the induction started.) They operate as follows:
The leader has inductively written $m$ on its counter tape, and has on its output tape the next $m$ symbols $a_{m+1} \cdots a_{2m}$ of the sequence, and it has its head at the leftmost symbol. All its other tapes are blank. During this stage the leader outputs each of the $m$ symbols from the output tape, taking $c$ steps per symbol. Here $c$ is an absolute constant that will be determined in a moment. As it outputs the symbols it erases them, and also at the same time it changes its stage counter to contain $2m$ and places its head at the leftmost symbol.
The follower has its stage counter also containing $m$, with the head at the leftmost symbol. All of its other tapes are blank. The follower then simulates $M$ with $4m$ as input. It outputs the symbols $a_1 \cdots a_{4m}$ onto its output tape. Then it erases all its other work tapes, and updates its counter tape to $2m$. Finally, it erases the first half $a_1 \cdots a_{2m}$ of the output tape and leaves the tape head at the leftmost remaining symbol of that tape.
At the end of stage $k$, Alice and Bob switch roles: the leader becomes the follower and the follower the leader. They continue as above. They do this forever. This ends the description of the TM $M'$.
We now make two claims:
(1) The machine $M'$ does indeed correctly generate $a_1 a_2 \cdots$.
(2) The machine $M'$ has delay $c$ for some constant $c$.
These claims clearly will prove the theorem.
Claim (1) is proved inductively: at stage $k$ the leader is outputting the next $m = 2^k$ symbols of the sequence. The key insight is that the follower computes all the first $4m$ symbols, but only keeps the last $2m$. Thus, when it becomes the leader at the next stage, it will output correctly the next $2m$ symbols.
Claim (2) follows easily from the fact that TM $M$ is linear time. Thus all the work that the follower needs to do can easily be done in time $O(4m) = O(m)$, which for a suitably large constant $c$ is at most $cm$, the time the leader uses to output all its symbols.
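The doubling scheme above can be mimicked in a few lines of Python (a sketch of ours: `prefix` plays the role of the linear-time machine $M$, and the leader's and follower's parallel work is collapsed into one sequential loop):

```python
def constant_delay_generator(prefix):
    """Given prefix(n) returning the first n symbols of a sequence
    (computed in linear time), generate the whole sequence by
    recomputing from scratch at doubling stages, as in Theorem A."""
    m = 1
    for s in prefix(2 * m):              # seed block: a_1 .. a_2
        yield s
    while True:
        block = prefix(4 * m)[2 * m:]    # follower: compute first 4m, keep last 2m
        for s in block:                  # leader: output the kept block
            yield s
        m *= 2
```

On a real TM the follower's recomputation overlaps the leader's output, which is what flattens the delay to a constant; Python hides that overlap but the block structure is the same.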
The Other Proofs
Proof of Theorem B: Richard Brent's 1976 paper shows that the first $n$ digits of $\sqrt{2}$, in any base $b$, can be approximated to within additive error $b^{-n}$ in time $O(M(n))$, on a multitape Turing machine. It makes the assumption that for all $\alpha \in (0,1)$ there exists $\beta < 1$ such that $M(\alpha n) \le \beta M(n)$ for all large enough $n$, but this is certainly satisfied if $M$ is linear.
Now approximation to within error $b^{-n}$, say, is not the same as finding the first $n$ digits, because the value may have long runs of $0$ or $b-1$. The value could even alternate such runs in any integral base. However, staying with base $b$, there must exist a fixed $\gamma < 1$ such that for all large enough $n$ the run occupies no more than the last $\gamma n$ digits. Otherwise $\sqrt{2}$ would have rational approximations $p/q$ with $q = b^{(1-\gamma)n}$ giving
$$\left| \sqrt{2} - \frac{p}{q} \right| < \frac{1}{q^r}$$
with $r = 1/(1-\gamma)$ being unbounded as $\gamma \to 1$. This would make $\sqrt{2}$ a Liouville number and hence transcendental, a contradiction. Hence in $O(M(n))$ time, Brent's method guarantees the first $n$ correct digits of $\sqrt{2}$.
Thus we have that if $M(n) = O(n)$, the first $n$ digits of $\sqrt{2}$ can be output in linear time. By Theorem A, this would contradict the Hartmanis-Stearns conjecture.
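For concreteness, here is a minimal Python sketch (ours, not Brent's actual routine) of the integer Newton iteration that drives such a computation: it finds $\lfloor \sqrt{2} \cdot 10^n \rfloor$, whose decimal digits are the leading $1$ followed by the first $n$ digits of $\sqrt{2}$. Brent obtains $O(M(n))$ by doubling the working precision at each step; for clarity this sketch runs every step at full precision.

```python
def sqrt2_digits(n):
    """Return floor(sqrt(2) * 10**n) via the integer Newton iteration
    x <- (x + target // x) // 2 for target = 2 * 10**(2n), starting
    from an overestimate of the integer square root."""
    target = 2 * 10 ** (2 * n)
    x = 1 << (target.bit_length() // 2 + 1)   # guaranteed >= isqrt(target)
    while True:
        y = (x + target // x) // 2            # one Newton step
        if y >= x:                            # converged to isqrt(target)
            return x
        x = y
```

Each multiplication/division dominates the cost, which is the whole point of the reduction: digits of $\sqrt{2}$ in linear time would force $M(n)$ down to linear as well.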
To prove Theorem C, we note the following “folklore” result.
Folk Lemma.
Integer multiplication can be done in linear time on an alternating TM with a fixed number of alternations.
Proof: Given $n$-bit integers $x$ and $y$, it suffices to guess $z$ and verify $z = xy$. Further we guess $O(n/\log n)$ many $O(\log n)$-bit numbers $m_1, \ldots, m_k$ and remainders for $x, y, z$ respectively modulo each $m_i$. Although not all of the $m_i$ are primes, there are enough primes in the range $[2, cn]$ for a suitable constant $c$ whose product is greater than $2^{2n}$, so equality follows if the product of the $m_i$ exceeds $2^{2n}$ and the remainders are correct and consistent for each $i$. Accordingly we use a universal quantification over $i$. It remains to verify that $x$ modulo $m_i$ equals the guessed remainder for a particular $i$. For our use below we could afford to expend a second pair of alternations, to guess remainders at certain points in the long division, but in fact this is doable in deterministic linear time.
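The residue-checking idea can be sketched in Python (our toy verifier, not the alternating machine itself; in the lemma a size bound of $2^{2n}$ on the primes' product replaces the explicit comparison against $x \cdot y$):

```python
def small_primes(bound):
    """All primes up to bound, by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (bound + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, bound + 1, p)))
    return [p for p in range(2, bound + 1) if sieve[p]]

def check_product(x, y, z, bound):
    """Accept z as x*y iff z = x*y holds modulo every prime up to bound
    and the primes' product is large enough for CRT to force equality."""
    prod = 1
    for p in small_primes(bound):
        if (x % p) * (y % p) % p != z % p:   # one universal branch per prime
            return False
        prod *= p
    return prod > z and prod > x * y          # toy stand-in for the 2^(2n) bound
```

Each modular check touches only $O(\log n)$-bit numbers, which is why a single universally quantified branch fits in linear time.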
Proof of Theorem C: Although this kind of collapse is not known for all time bounds $t(n)$, the following was proved as a key lemma in the famous paper that separated deterministic and nondeterministic linear time for multitape Turing machines: if $\mathsf{DTIME}(O(n)) = \mathsf{NTIME}(O(n))$, then alternating linear time with any fixed number of alternations collapses to deterministic linear time.
Since integer multiplication is in alternating linear time with a fixed number of alternations by the Folk Lemma, its non-membership in deterministic linear time implies $\mathsf{DTIME}(O(n)) \neq \mathsf{NTIME}(O(n))$. Thus the Hartmanis-Stearns conjecture implies this separation.
Of course $\mathsf{DTIME}(O(n)) \neq \mathsf{NTIME}(O(n))$ is known, but the significant feature is that a proof via the Hartmanis-Stearns conjecture would evidently avoid the pebbling and graph-segregator details in the original proof. It might also extend the separation to other models such as Turing machines with planar tapes, for which it is currently open.
Finally, a strong time lower bound for multiplication would translate into one for nondeterministic versus deterministic (linear) time, perhaps greater than the current best known separation by Rahul Santhanam.
Open Problems
The main open question is simple: We have shown that the real-time part of Hartmanis-Stearns is not needed. Was this known before? We find the proof of the main theorem pretty simple, but we have not (yet) found it in the literature, and neither of us recalls it from discussions. Perhaps it was known at the time, decades ago, but somehow forgotten.
Also, has the connection between the conjecture and integer multiplication been observed before? Finally, can we refute the Hartmanis-Stearns conjecture by computing some algebraic irrational number in linear time? Or, can we find a class of algebraic numbers whose computation is linear-time equivalent to integer multiplication being in linear time?
We also thank Jin-Yi Cai for comments on the ideas here.
Update: Albert Meyer kindly contributed a reference for the lineartorealtime result in a joint paper of his, and we have updated the post to reflect this.
Here is a special case of the process in the proof of Theorem A. Let $f$ be any linear-time computable function that is length preserving: $|f(x)| = |x|$. We can insert rest steps if necessary to make the computation finish in exactly $c|x|$ steps. For any seed string $x$, we can define a real number $\alpha_x$ as follows: Start with $x$. At each stage apply $f$ to the digits computed thus far, and append the result. Thus the expansion looks like
$$0.\,x\,f(x)\,f(xf(x))\,f(xf(x)f(xf(x)))\cdots$$
Note that $f$ could output its result at the very end of its computation, so that the obvious way to output this sequence has irregular delay. To make it fixed-delay, we ping-pong the computation at each stage between tapes owned by Alice and Bob. The invariant is that at each stage, one of them (say Alice) has two copies of the current string $w$, and the generator has just finished outputting the digits before $w$. To maintain this invariant, all we need to say is that Alice outputs $w$ in exactly the time that two copies of $wf(w)$ are being computed onto Bob's tapes. The delay is always the same constant $c$.
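A toy Python rendering of this construction (ours; `f` is any length-preserving function on digit lists, and the doubling of the known prefix at each stage mirrors the doubling stages in the proof):

```python
def expansion(seed, f, total):
    """First `total` digits of alpha_x: start with the seed, then
    repeatedly append f(all digits so far).  Since |f(w)| == |w|,
    each round doubles the number of known digits."""
    digits = list(seed)
    while len(digits) < total:
        digits.extend(f(digits))   # length-preserving stage
    return digits[:total]
```

For example, with $f$ adding 1 to each digit mod 10, the seed `[1]` grows as `[1]`, `[1,2]`, `[1,2,2,3]`, and so on.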
Interesting results! I take it that the Turing machines in Theorem A are multitape?
Yes—importantly, so are the ones in Brent's reduction from $\sqrt{2}$ to integer multiplication, which seems to generalize via similar Newton methods to other algebraic numbers.
Moshe Vardi mentioned this post at the ACM Turing 100 panel on the Turing model this afternoon, of which Hartmanis and Stearns are members. Hartmanis and Stearns had a discussion on stage about who should mention it – Hartmanis said to Stearns, “Why don’t you state the conjecture? You’re as guilty as I am.”
Thanks for noting this. A couple more references and details: Brent typed up a bunch of his old papers in LaTeX—some with emendations—and posted them two years ago on the arXiv—including this companion to the paper we cover. There are also many new ones there. The same has been done at CMU for H. T. Kung's paper "The computational complexity of algebraic numbers."
A key detail perhaps needing further checking at the TM level is that the number of known bits grows by a constant factor at each stage, so the time for the $k = O(\log n)$ previous stages folds in to be linear in the time for the last stage. We note a possibly related qualifier "Provided that F(n) grows superlinearly" in this Citizendium article, which we found linked by this StackOverflow thread on Newton's method, where $F(n)$ is the cost of evaluating $f/f'$, but we do not see its need. We speculate there should be similar Newton methods for every algebraic number, or at least others.
The equivalence of real time and linear time Turing Machine sequence generation was observed in
JOURNAL OF COMPUTER AND SYSTEM SCIENCES 4, 50-73 (1970)
TimeRestricted Sequence Generation,
PATRICK C. FISCHER, ALBERT R. MEYER & ARNOLD L. ROSENBERG
which contrasted the result to the situation for countermachines where it does not hold.
Thank you! That also corroborates my memory of how the conjecture was stated to me.
Three questions: First, in the proof of Theorem B, if you use the full power of Roth's theorem does it give a better lower bound on M(n) than just linear? Second, what is known regarding multiplication for bounded-depth computation? Third, is there a formulation with circuit complexity of something like the HS conjecture, and again is something known for bounded depth?
I do not understand several things.
could you explain?
Some silly questions?
1) In the proof, $q = 10^{(1-\gamma)n}$ with $\gamma = 1$ gives $q = 1$ and $r = \infty$. But $|p/q - \sqrt{2}| < 1/q^r$ has a valid answer with $p = 1$.
2) The Gauss-Legendre trick for $\pi$ guarantees calculation of $\pi$ to $n$ digits in $O(\log n)$ steps. The seed step involves $\sqrt{2}$. If we cannot calculate $\sqrt{2}$ to $n$ digits in $O(\log n)$ steps, there is no guarantee $\pi$ will be that accurate in $O(\log n)$ steps. Did mathematicians miss something?
Some really stupid questions?
1) How does having runs of 0000 or 9999 affect digits extracted versus precision? The eventual digit following 0000 or 9999 will not affect the previous digits in an infinite extraction? Say the next digits are 5555 following 99999, will this affect the previous digits until all the infinite digits have been extracted? This seems to imply that there is no reasonable time limit within which we can even guarantee n digits since to confirm after 999999 we would have to wait till 'all' the infinite digits are extracted?
2) Can one show that no strange series of 00000 or 99999 occurs in $\sqrt{2}$?
On the computational complexity of algebraic numbers: the Hartmanis–Stearns problem revisited
by Boris Adamczewski, Julien Cassaigne, Marion Le Gonidec
https://arxiv.org/abs/1601.02771
Theorem 2.3 claims:
"An algebraic irrational real number cannot be generated by a one-stack machine, or, equivalently, by a deterministic pushdown automaton."
So a linear-complexity TM cannot generate an algebraic irrational real number; what it can compute in real time is either a rational or a transcendental number, thus closing the Hartmanis-Stearns conjecture. What do you think about the article by Boris Adamczewski, Julien Cassaigne, Marion Le Gonidec?