Why The Hartmanis-Stearns Conjecture Is Still Open
A missed? or forgotten? connection from 1976
Richard Brent is an illustrious Australian mathematician and computer scientist. He is known for Brent's Theorem, which shows that a parallel algorithm can always be adapted to run on fewer processors with only the obvious time penalty, a beautiful example of an "obvious" but non-trivial theorem. He also discovered how to modify arithmetical or Boolean formulas to have depth logarithmic in their size. Both of these theorems were proved in 1974.
Brent was one of the first to think concretely about the complexity of computing elementary numerical functions. In the spirit of Alan Turing he worked on computing fundamental constants to any desired precision. With Eugene Salamin he re-discovered a fast algorithm for computing the digits of $\pi$, and their tweaks may even make this an exception to the rule that everything is named for Gauss. He found a new method for computing the Euler-Mascheroni constant $\gamma$. Also following Turing, he verified that the first 75 million nontrivial zeroes of the Riemann zeta function lie on the critical line. He has recently authored a textbook on computer arithmetic with Paul Zimmermann.
How do problems of computable numbers impact complexity classes and longstanding open questions? That is what we are asking here.
Hartmanis-Stearns Conjecture Revisited
The Hartmanis-Stearns conjecture, which has been open since 1965, is:
Suppose that a real-time Turing Machine computes the real number $\xi$ in base ten, or in any natural-number base. Then $\xi$ is either a rational number or a transcendental number.
Thus, a real-time computable number cannot be a non-rational algebraic number such as $\sqrt{2}$. The motivation for this conjecture seems to be the following two observations:
- Clearly a rational number can always be computed by a real-time Turing Machine: this follows since the digits of a rational number are eventually periodic in any base.
- Also many interesting transcendental numbers can be computed by a real-time Turing Machine. For example the famous number
$$\sum_{k=1}^{\infty} 10^{-k!} = 0.110001000000000000000001\ldots,$$
which is called a Liouville number, can easily be computed in real time.
We cannot resolve this great conjecture, but can show that it is not just a curiosity. No. It is connected directly to two central questions of complexity theory. These connections show that any positive resolution of the conjecture would be quite difficult and surprising.
In the following we are interested in Turing Machines (TMs) that act as generators: they have no input, but from time to time output a symbol. We say that a TM generates a sequence $a = a_1 a_2 a_3 \cdots$ provided it outputs $a_1$, then $a_2$, and so on. Such a TM has delay $d$ if the largest gap between the outputs of successive symbols is $d$ steps. If $d = 1$, such a machine is called a real-time TM. The following is our main observation, which is originally due to Patrick Fischer, Albert Meyer, and Arnold Rosenberg in this paper from 1970:
Theorem FMR. Let $a = a_1 a_2 a_3 \cdots$ be a sequence over a fixed alphabet. Then, the following are equivalent:
- There is a Turing Machine that generates the sequence with delay $1$.
- There is a Turing Machine that generates the sequence with delay $d$, for some constant $d$.
- There is a Turing Machine that given input $n$ outputs
$$a_1 a_2 \cdots a_n$$
in time bounded by $O(n)$.
By the integer multiplication problem we mean: given $x$ and $y$, determine $x \cdot y$. The time to do this when $x$ and $y$ are at most $n$ bits is denoted as usual by $M(n)$.
Theorem B. The Hartmanis-Stearns conjecture implies that $M(n)$ is super-linear.
Theorem C. The Hartmanis-Stearns conjecture implies that $\mathsf{DTIME}(n) \neq \mathsf{NTIME}(n)$.
The last two theorems show why the conjecture is so deep. A proof of it would in one step prove a non-linear lower bound on integer multiplication and also separate deterministic from nondeterministic linear time. The former is wide open, and seems beyond the reach of modern methods. The latter is proved, but its proof is quite deep, and the mechanics of the latter theorem might give a wider time separation than what is currently known.
The Hartmanis-Stearns Conjecture Restated
The power of the simple Theorem FMR is that linear time can be converted to real time for the generation of sequences. The main "trick" is to compute the next block of symbols of the sequence while the current block is being output. The doubling trick, an old staple of theory, allows the computation to start from scratch each time, yet not take so long that the machine fails to stay real-time.
Thus, the Hartmanis-Stearns conjecture can be re-stated as follows:
Suppose that a linear-time Turing Machine computes the first $n$ digits of the real number $\xi$ in base ten. Then, $\xi$ is either a rational number or a transcendental number.
There is no need to mention real-time at all. Ken in fact recalls that this is how the conjecture was stated to him at a Schloss Dagstuhl meeting in the 1990s. But we have had difficulty finding this in the literature: all sources we've found state fixed delay or real-time, including this year's survey paper by Rusins Freivalds. Until, that is, we were contacted in comments about FMR; we still give the proof to keep this post self-contained.
One source of confusion we have seen is this: sometimes Hartmanis-Stearns is stated as saying that the first $n$ digits are generated in linear time overall. This is the same as real-time, so that may be what Ken recalls. Or perhaps not.
Linear Time to Real Time
Here are the proofs.
Proof of Theorem FMR: It is long known that (1) and (2) are equivalent. This follows by the method of constant speed-up: one simply uses a larger alphabet during the computation. Also (2) certainly implies (3): just run the generator until $n$ symbols have been output and then stop. So the only implication we need to prove is that (3) implies (2).
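The speed-up direction can be pictured in a couple of lines of Python: grouping $d$ symbols into one tuple is exactly the larger-alphabet trick. The helper name pack is ours.

```python
from itertools import islice

def pack(gen, d):
    """View a delay-d symbol generator as a delay-1 generator over the
    enlarged alphabet Sigma^d: each step emits one block of d symbols."""
    while True:
        block = tuple(islice(gen, d))
        if not block:
            return
        yield block
```

Each emitted tuple is a single "super-symbol" of the enlarged alphabet.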
So assume that there is a TM $M'$ that, given input $n$, outputs $a_1 \cdots a_n$ in time bounded by $cn$ for some constant $c$. We will show that there is another TM $M$ that generates $a$ with constant delay $d$; the exact value of $d$ will be clear after we describe the machine.
The new TM $M$ will run two separate computations at the same time. We will think of the two computations as being run by our usual players Alice and Bob. Each player has their own tapes: each has two special tapes which we will call respectively the counter tape and the output tape. Strictly we should say "the counter tape of Alice (Bob)," but whose tape is meant will be clear from the context.
The computation of $M$ is divided into a series of stages $s = 1, 2, 3, \ldots$; a short initial stage, hard-wired into the finite control, outputs $a_1 a_2$ and initializes both players' tapes. The key is that at each stage one of Alice and Bob will be the leader and the other the follower. At any stage $s$ let $n = 2^s$. They operate as follows:
The leader has inductively written $n$ on its counter tape, and has on its output tape the next $n$ symbols of $a$, namely $a_{n+1} \cdots a_{2n}$, with its head at the leftmost symbol. All its other tapes are blank. During this stage the leader outputs each of the $n$ symbols from the output tape, taking $d$ steps per symbol. Here $d$ is an absolute constant that will be determined in a moment. As it outputs the symbols it erases them, and also at the same time it changes its counter tape to contain $2n$ and places its head at the leftmost symbol.
The follower has its counter tape also containing $n$, with the head at the leftmost symbol. All of its other tapes are blank. The follower then simulates $M'$ with $4n$ as input. It writes the symbols $a_1 \cdots a_{4n}$ onto its output tape. Then it erases all its other work tapes and updates its counter tape to $2n$. Finally, it erases the first half of the output tape, leaving $a_{2n+1} \cdots a_{4n}$, and leaves the tape head at the leftmost symbol of that tape.
At the end of stage , Alice and Bob switch roles: the leader becomes the follower and the follower the leader. They continue as above. They do this forever. This ends the description of the TM .
We now make two claims:
- The machine $M$ does indeed correctly generate the sequence $a$.
- The machine $M$ has delay $d$, for some constant $d$.
These claims clearly will prove the theorem.
Claim (1) is proved inductively: at stage $s$ the leader outputs the next $n = 2^s$ symbols of $a$. The key insight is that the follower computes all of the first $4n$ symbols, but only keeps the last $2n$. Thus, when it becomes the leader at the next stage, it will correctly output the next $2n$ symbols.
Claim (2) follows easily from the fact that the TM $M'$ runs in linear time. Thus all the work that the follower needs to do takes $O(n)$ steps, which fits within the $dn$ steps the leader uses to output all its symbols, provided the constant $d$ is chosen large enough.
The Other Proofs
Proof of Theorem B: Richard Brent's 1976 paper shows that the first $n$ digits of $\sqrt{2}$, in any base $b$, can be approximated to within additive error $b^{-n}$ in $O(M(n))$ time, on a multitape Turing machine. It makes the assumption that for every constant $c \geq 1$ there exists a constant $c'$ such that for all large enough $n$ we have $M(cn) \leq c' M(n)$, but this is certainly satisfied if $M(n)$ is linear.
Now approximation to within error $10^{-n}$, say, is not the same as finding the first $n$ digits, because the value may have long runs of $0$s or $9$s. The value could even alternate such runs in any integral base. However, staying with base $10$, there must exist a fixed constant $c < 1$ such that for all large enough $n$, any such run among the first $n$ digits occupies no more than the last $cn$ of them. Otherwise $\sqrt{2}$ would have rational approximations $p/q$ with $q = 10^m$ giving
$$\left| \sqrt{2} - \frac{p}{q} \right| < \frac{1}{q^r}$$
with $r$ being unbounded. This would make $\sqrt{2}$ a Liouville number and hence transcendental, a contradiction. Hence in $O(M(n))$ time, Brent's method guarantees the first $n$ correct digits of $\sqrt{2}$.
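As a concrete aside, Python's big-integer square root (a Newton-style iteration whose cost is dominated by multiplications) computes these digits exactly; the function name is ours.

```python
from math import isqrt

def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2) after the leading 1.

    isqrt computes floor(sqrt(2) * 10**n) exactly, so every returned
    digit is correct; an *approximate* method must additionally rule out
    long runs of 0s or 9s, which is the Liouville-style argument above."""
    return str(isqrt(2 * 10 ** (2 * n)))[1:]
```

For example, sqrt2_digits(5) returns "41421".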
Thus we have that if $M(n) = O(n)$, the first $n$ digits of $\sqrt{2}$ can be output in linear time. By Theorem FMR, this would contradict the Hartmanis-Stearns conjecture.
To prove Theorem C, we note the following “folklore” result.
Integer multiplication can be done in linear time on an alternating TM with a fixed number of alternations.
Proof: Given $n$-bit integers $x$ and $y$, it suffices to guess $z$ and verify $z = xy$. Further we guess $O(n/\log n)$-many $O(\log n)$-bit numbers $m_1, m_2, \ldots, m_k$ and remainders $x_i, y_i, z_i$ for $x, y, z$ respectively modulo $m_i$. Although not all of the $m_i$ need be primes, there are enough primes in the range $[2, n^2]$ whose product is greater than $2^{2n}$, so the equality $z = xy$ follows if $z_i \equiv x_i y_i \pmod{m_i}$ and the remainders are correct for each $i$. Accordingly we use a universal quantification over $i$. It remains to verify that $x_i y_i$ modulo $m_i$ equals $z_i$ for a particular $i$. For our use below we could afford to expend a second pair of alternations, to guess remainders at certain points in the long division, but in fact this is doable in deterministic linear time.
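Here is a deterministic Python sketch of the underlying Chinese-remainder check, with genuine small primes in place of the guessed moduli; crt_check is our own illustrative name.

```python
def crt_check(x, y, z):
    """Check z == x * y by comparing residues modulo small primes whose
    product exceeds 2^(2n) >= x * y; since |z - x*y| is below that
    product, any discrepancy is caught by some prime."""
    n = max(x.bit_length(), y.bit_length(), 1)
    primes, prod, cand = [], 1, 2
    while prod <= (1 << (2 * n)):
        # every prime below cand is already in `primes`, so trial
        # division by them is a valid primality test
        if all(cand % p for p in primes):
            primes.append(cand)
            prod *= cand
        cand += 1
    return all((x % p) * (y % p) % p == z % p for p in primes)
```

In the alternating algorithm the universal branch checks a single modulus $m_i$; this sketch simply loops over all of them.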
Proof of Theorem C: Although this kind of collapse is not known for all time bounds, the following was proved as a key lemma in the famous paper that separated deterministic and nondeterministic linear time for multitape Turing machines:

If $\mathsf{DTIME}(n) = \mathsf{NTIME}(n)$, then $\Sigma_k\mathsf{TIME}(n) = \mathsf{DTIME}(n)$ for every fixed $k$.

Since integer multiplication is in $\Sigma_k\mathsf{TIME}(n)$ for some fixed $k$, its non-membership in deterministic linear time implies $\mathsf{DTIME}(n) \neq \mathsf{NTIME}(n)$. Thus the Hartmanis-Stearns conjecture implies this separation.
Of course $\mathsf{DTIME}(n) \neq \mathsf{NTIME}(n)$ is known, but the significant feature is that a proof via the Hartmanis-Stearns conjecture would evidently avoid the pebbling and graph-segregator details in the original proof. It might also extend the separation to other models, such as Turing machines with planar tapes, for which it is currently open.
Finally, a super-linear time lower bound for multiplication would translate into one for nondeterministic (linear) time, perhaps greater than the currently best known time separation, due to Rahul Santhanam.
The main open question is simple: we have shown that the real-time part of Hartmanis-Stearns is not needed. Was this known before? We find the proof of the main theorem pretty simple, but we have not (yet) found it in the literature, and neither of us recalls it from discussions. Perhaps it was known at the time, years ago, but somehow forgotten.
Also, has the connection between the conjecture and integer multiplication been observed before? Finally, can we refute the Hartmanis-Stearns conjecture by computing some algebraic irrational number in linear time? Or can we find a class of algebraic numbers whose digits can be computed in linear time if and only if integer multiplication can be done in linear time?
We also thank Jin-Yi Cai for comments on the ideas here.
Update: Albert Meyer kindly contributed a reference for the linear-to-real-time result in a joint paper of his, and we have updated the post to reflect this.