A Retirement Party for Joel Seiferas

August 29, 2017

A gathering this Labor Day in Rochester

Announcement source

Joel Seiferas retired on December 31, 2016, and is now a professor emeritus in the University of Rochester Computer Science Department.

Today Ken and I wish to talk about his party happening this Labor Day—September 4th.

Joel retired on a holiday—New Year’s Eve—and is having his retirement celebration on another holiday, Labor Day. The former marks the end of each year, and the latter the cultural end of each summer. Labor Day in the US is the first Monday in September. As shown by this chart in Wikipedia’s article, there is some complexity in its otherwise periodic structure—can you name the pope responsible for it?

I’ve never liked Labor Day owing to summer ending, school starting, and another reason—a fact of calendrical life that I share with Ken. Did Jack Benny’s feelings about Valentine’s Day change after he turned 39?

The Retirement Party

Joel asked his department that, in lieu of a gold watch or a series of talks praising his decades of research, they arrange a series of talks that would be accessible and enjoyable to everyone. A somewhat novel idea, since many talks are not accessible to all.

I would argue that both could be achieved—Joel’s work was often technical, but could definitely be explained to a general audience. For example, at least in my opinion, his paper “Two Heads are Better than Two Tapes” could be fun to hear about. Okay, it may not be as exciting as hearing about self-driving cars, AI programs that can outplay humans at Go, or a proof that P=NP. But there is something—I believe—beautiful about results of Joel’s that explain the power of various basic computational devices.

But no one asked me. So Joel got his wish and is receiving four talks by leaders in our field that should be enjoyable for everyone:

  • Zvi Galil, “Online Revolutions: From Stringology with Joel to Georgia Tech’s Highly Affordable Master Degree.”

  • Jon Kleinberg, “Social Dynamics and Mathematical Inevitability.”

  • Shafi Goldwasser, “Crypto Computing.”

  • Muthuramakrishnan Venkitasubramaniam, “The Status of Zero-Knowledge Proofs.”

The Department’s Retirement Announcement

They have posted a wonderful statement about Joel here. We especially enjoy how it ends:

The fact that Joel is completely ego-free and did all that work (and the single-handed development of widely used bibliographic bridging resources) purely for the advancement of the science—while also mentoring (especially in his ambitious and challenging courses) most of the theory students the department has educated, shaping the department’s faculty recruiting in theoretical computer science, serving as the department’s chair, and wholeheartedly serving the University in his many years on the Academic Honesty Board—makes him all the more of an inspiration to those who know him.

By coincidence, the great economist and public intellectual, Thomas Sowell, retired from his decades of column writing at the same time that Joel retired. At the end of a 2004 interview, Sowell was asked how he would like to be remembered, and he replied: “Oh, heavens, I’m not sure I want to be particularly remembered. I would like the ideas that I’ve put out there to be remembered.” Although our dear colleague Joel is self-effacing and modest, there is no doubt that the deep understanding of computer science that he has contributed will be remembered beyond Joel’s life and ours. We are deeply grateful to him for those ideas, and for his warm, wise friendship.

The earlier parts of the statement include some of Joel’s work and its impact. We’ll explain two of his results that are referenced.


Tight Time Hierarchies

One basic fact about general-purpose computing is the ability of one program {P} to run any given program {Q}, even {Q = P} itself. The program {Q} is encoded in some fashion and may be much larger than {P}. In practice we’re not aware of a time penalty for running {Q} this way rather than “natively” because serving programs is what a general-purpose computer {P} does. But in the underlying model of computation there is a hit.

In some models the hit is only a constant factor depending on the size of {Q}—and importantly, not on the size of the input {x} being run. One of the quirks of the standard multi-tape deterministic Turing machine (DTM) model is that it multiplies the constant factor by an extra {\log|x|} overhead—indeed a factor {\log t} for {t}-step computations, where we assume {t \geq |x|} so that all of {x} is read. This is not just a feature of {P} running {Q} but also governs the best-known simulation of time-{t} deterministic computations by families of Boolean circuits, which have size {O(t\log t)}. It also affects how tightly we can separate time classes. By diagonalization we can prove:

Theorem 1 Let {t_1(n)\log t_1(n) = o(t_2(n))} with {t_2(n)} being “time constructible” in the sense that some DTM given {n} in unary halts in exactly {t_2(n)} steps. Then we can find a language that is accepted by a DTM in {t_2(n)} time but not accepted by any DTM in {t_1(n)} time. In symbols, {\mathsf{DTIME}[t_1(n)] \subset \mathsf{DTIME}[t_2(n)]}, with {\subset} meaning proper containment.
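The diagonalization behind such a separation can be caricatured in a few lines of Python. The “machines” below are a purely illustrative stand-in for an enumeration of clocked {t_1(n)}-time DTMs; the point is only the flipping step, not the model.

```python
# A toy of the diagonalization in Theorem 1: build a language that differs
# from machine i on the input 1^i, so no machine in the (finite, toy)
# enumeration decides it.
machines = [
    lambda w: True,              # accepts everything
    lambda w: False,             # rejects everything
    lambda w: len(w) % 2 == 0,   # accepts even-length inputs
]

def diagonal(i):
    """Answer the opposite of machine i on the input 1^i."""
    return not machines[i]("1" * i)

# No machine in the enumeration agrees with `diagonal` everywhere:
for i, m in enumerate(machines):
    assert diagonal(i) != m("1" * i)
```

The real difficulty in the theorem is making the diagonal language itself decidable in time {t_2(n)}, which is exactly where the simulation overhead—and hence the {\log} factor—enters.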

For separating we can improve the factor via certain “padding and translation” techniques. We can relax the first condition to read {t_1(n)(\log t_1(n))^{\epsilon} = o(t_2(n))} for some fixed {\epsilon > 0}. Can we make it go away completely? For DTMs there are specific problems that arise when we try to push the factor any further.

So what about nondeterministic Turing machines, that is, NTMs? It would seem to be harder to get a tight diagonalization to work because we cannot simply interchange ‘yes’ and ‘no’ answers to negate. However, Joel, as part of his thesis work under Albert Meyer, with Patrick Fischer making a trio, proved that one can almost eliminate the factor altogether:

Theorem 2 Let {t_1(n+1) = o(t_2(n))} with {t_2(n)} again being the running time of some DTM. Then {\mathsf{NTIME}[t_1(n)] \subset \mathsf{NTIME}[t_2(n)]}.

The only difference from a constant-factor overhead is having {t_1(n+1)} rather than {t_1(n)} on the left-hand side. The ‘+1’ has always struck me (Ken, writing this section) as curious. At polynomial time levels we can ignore it, and even when {t_1(n) = 2^n} we can drop it: {2^{n+1} = 2\cdot 2^n = O(2^n)}. But with {t_1(n) = 2^{n^2}} we have {t_1(n+1) = 2^{n^2}\cdot 2^{2n+1} \neq O(t_1(n))}. At double-exponential time levels the ‘+1’ makes an even greater relative difference. Its employment in the proof seems innocuous but remains indelible.
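These growth claims are easy to verify with exact integer arithmetic; the sketch below just checks the three cases discussed above.

```python
# Checking the effect of the '+1' for three growth rates of t_1.

t_poly = lambda n: n ** 3          # polynomial: the ratio stays bounded
t_exp  = lambda n: 2 ** n          # 2^(n+1) = 2 * 2^n, still O(2^n)
t_sq   = lambda n: 2 ** (n * n)    # 2^((n+1)^2) = 2^(n^2) * 2^(2n+1)

for n in range(1, 20):
    # Exponential case: the ratio is exactly the constant 2.
    assert t_exp(n + 1) == 2 * t_exp(n)
    # 2^(n^2) case: the ratio is 2^(2n+1), which is unbounded in n.
    assert t_sq(n + 1) // t_sq(n) == 2 ** (2 * n + 1)
    # Polynomial case: (n+1)^3 <= (2n)^3 = 8 n^3 for n >= 1.
    assert t_poly(n + 1) <= 8 * t_poly(n)
```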

As the Rochester page mentions, the tightness of the theorem for {t_1(n) = 2^n} was employed by Ryan Williams to prove his breakthrough lower bounds against {\mathsf{ACC}}. Our initial post on Ryan’s results brought out the connection to Joel.

Two Heads are Better than Two Tapes

This is one of Joel’s later results. It is joint with Tao Jiang and Paul Vitányi. The result is:

Theorem 3 The language {L} of strings of the form {x\#y} where {y} is a prefix of the binary string {x}, which can be recognized in real time by a DTM with two heads on one worktape, cannot be recognized in real time by a DTM with two worktapes having one head each.

It is important to specify that the input {z} appears on a read-only tape whose head cannot move left. The “one tape” and “two tapes” are initially blank.

The first statement is easy. As the machine reads the input, it writes it down on its tape with one head while cleverly leaving the second head at the beginning. Then when the input head reaches the symbol {\#}, the second head starts checking the remaining input {y} against the written tape that stores {x}. The two heads compare characters in lockstep and reach a verdict by the time the input has ended. The definition of real time often states that the input tape head advances at each step, but it can be relaxed to allow pausing it, provided there is a fixed finite number {m}, independent of the input {z}, such that {M} always reads a fresh character within {m} steps.
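Here is a sketch of that two-head strategy in Python (the head bookkeeping is simplified, and all names are mine, not from the paper). It spends one step per input symbol, as real time requires, and agrees with the direct definition of {L}.

```python
def in_L(z):
    """Reference check: z = x#y where y is a prefix of the binary string x."""
    if z.count("#") != 1:
        return False
    x, y = z.split("#")
    return x.startswith(y)

def two_heads_one_tape(z):
    """Simulate the two-head, one-tape real-time recognizer sketched above.

    Head 1 copies the input onto the worktape; head 2 waits at cell 0.
    After '#', head 2 compares each input symbol against the stored x.
    """
    tape = []           # the single worktape
    h2 = 0              # position of the second head
    seen_hash = False
    ok = True
    for c in z:
        if c == "#":
            if seen_hash:
                return False          # more than one separator
            seen_hash = True
        elif not seen_hash:
            tape.append(c)            # head 1 writes x, one cell per step
        else:
            # head 2 checks y against the stored copy of x, in lockstep
            if h2 >= len(tape) or tape[h2] != c:
                ok = False
            h2 += 1
    return seen_hash and ok
```

The two functions agree on all inputs; for instance `two_heads_one_tape("101#10")` accepts while `two_heads_one_tape("101#11")` rejects.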

What happens when the tape heads are on separate tapes? {M} can copy the {x} part to one tape, rewind its head to the left edge, and do the same lockstep comparison with the input tape head reading {y} and the second tape reading the copy of {x}. The rub is that the pause for rewinding does not have a fixed finite bound. After copying {x} the second head is just in the wrong position to begin comparing {y}.

The second statement may now look obvious to you, but how about if {M} has three tapes? Does it look equally obvious? In fact, Fischer and Meyer, working with Arnold Rosenberg in 1967, showed that a clever folding and mirroring scheme among three single-head worktapes enables {M} to keep tabs on both ends of the {x} part at all times while copying it. This enables {L} to be recognized in real time. Moreover Joel, with Benton Leong, showed in 1977 that any computation with multi-head worktapes can be simulated in lockstep by enough single-head tapes—even when the ‘tapes’ are multi-dimensional.

That left {k = 2} single-head worktapes as the next case to try for recognizing {L}. As our readers know, proving something to be impossible in computational theory is really hard. So is the proof in Joel’s joint paper, across eight pages of strategy and “crunch.” The question had after all been open for several decades. If you wonder whether other simple-looking problems are still open, see these two posts, which also reference related work by the FMR trio.

Open Problems

We wish Joel a great retirement and hope that the talks are as wonderful as we expect them to be. Ken is also looking forward to seeing old friends again there. Our thoughts are also with colleagues and friends in Houston and adjoining Gulf Coast areas.
