
Guessing Simulations

February 26, 2009


Efficiently simulating nondeterministic Turing machines

[Photo: Sheila Greibach]

Ron Book was a language theorist who unfortunately died in 1997. I used the image of his co-author Sheila Greibach, since I could not find one of him; I plan a later post on her own important work. Ron worked in a part of theory that is considered unimportant today, incorrectly in my view. In the 1970's language theory was "the king of the hill": the core of theory then was all about the properties of languages. Languages were classified according to their complexity: there were regular languages, context free languages, context sensitive languages, and others. Language theorists studied closure properties: is the union of two languages of type X still of type X? They also studied decidability questions: is it decidable whether or not a language of type X has some property? And so on. The theory was (is?) pretty, some of the questions were very hard, and parts of the theory had important practical applications. For example, Don Knuth, who is famous for many things, did his early work on the structure of "deterministic context free languages". All modern parsers are based on this and related work.

While most consider this area less important today, it helped create many of the key ideas that we still find central. For example, the LBA question, which today we would phrase as "are nondeterministic space-bounded machines closed under complement?", came from the language theorists. See my post on how we "guessed wrong" for a discussion of the wonderful result that showed that the answer is yes.

Book and I overlapped at Yale in the mid 1970's. He was the senior theorist in the Computer Science Department at Yale at the time. Ron thought for a while that he was close to a proof that P was not equal to NP. Very close. His approach was based on trying to show that certain language-theoretic properties of P and NP were different. One day I noticed that there was a fundamental obstacle to his approach; the details today are not really important. I do remember that he was "really upset" with me for showing him that his approach could not work. It was a bit unreal, as if it were my fault that the mathematics worked out the way it did. I only wish I could control such things. Oh well, on to a beautiful result of his.

Simulation of Turing Machines

Book and Greibach have an extremely beautiful result that I want to discuss today. Many of you may not know this result. What is neat about their result is that it is a critical part of a terrific result of Albert Meyer and Charlie Rackoff that I will discuss in a later post. Also, while their result is not hard to prove, it shows a distinction between nondeterministic time and deterministic time. This difference could play a role, in the future, in the P=NP question. So I thought I might explain the result.

One of the basic questions then, and still today, is how efficiently one model of computation can simulate another model of computation. Besides being a natural question, there are many applications of efficient simulations. One of the main applications is in proving hierarchy theorems, which show that more of a resource yields more power.

They looked at the following basic question: how efficiently can a fixed nondeterministic Turing Machine simulate any other nondeterministic Turing Machine? This may sound trivial: just simulate the machine, after all they have the same abilities. But there is a catch. Suppose that our fixed machine {M} has two working tapes, while a machine {S} has {100} working tapes. If the machine {S} runs in time {T(n)}, then how can the machine {M} simulate {S} in about the same time? The difficulty is simple: {M} must use its two tapes to somehow simulate efficiently the {100} tapes of {S}. This can be done, but the obvious method takes time {O(T(n)^{2})}, since each simulated step can force {M} to sweep across a region of tape up to {O(T(n))} squares long. A much harder simulation due to Mike Fischer and Nick Pippenger (I will talk about it in another post) shows that such a simulation can be done in time {O(T(n) \log T(n))} for any type of Turing machine.

Their result was that for nondeterministic Turing machines the simulation could be done in {O(T(n))}. This is clearly optimal: essentially they showed that in this case there is no additional cost in having only two tapes. So if you were buying a nondeterministic Turing Machine at Best Buy you should not be talked into buying the {100} tape model; two tapes are plenty. Save your money.

The Method

So how did Book and Greibach show that two tapes can simulate {100} tapes? If you have not seen the idea, take a second and ponder how you might do it. The method is, in my opinion, quite clever.

Suppose that we want to simulate the Turing Machine {S} on a given input. The idea is to first guess a complete trace of what the machine {S} does at each step and write it onto the first tape. The trace includes, for each step, the following information: the state of the finite control of {S}, the symbol currently scanned on each work tape, and the input tape head position. This trace information takes at most {O(T(n))} space, and therefore {O(T(n))} time to write down.
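
To make the bookkeeping concrete, here is a minimal sketch, in Python, of what one entry of such a guessed trace might record. The names and the exact fields are my own choices for illustration, not from the Book and Greibach paper; in particular, I record the head moves and written symbols explicitly, since the checks sketched below need them.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class TraceStep:
        # Everything S's finite control "sees" and "does" in one step.
        state: str                 # state of S's finite control before the step
        scanned: Tuple[str, ...]   # claimed symbol under each work-tape head
        input_symbol: str          # symbol under the input head
        written: Tuple[str, ...]   # symbol written on each work tape this step
        moves: Tuple[int, ...]     # head move (-1, 0, +1) for each work tape
        input_move: int            # input head move (-1, 0, +1)

Each step is a constant number of symbols, so a {T(n)}-step computation yields a trace of length {O(T(n))}, matching the space bound above.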

We next have to check that the trace is a correct description of an accepting computation of {S}. It is easy to check that the finite state control has been updated correctly and that the last state is an accepting one. The hard part is to check whether or not each work tape was used correctly. The problem is this: suppose that at time {i} the trace says a certain work tape square had a {0} on it. Then, the next time that square is visited, the trace must claim the symbol that was actually left there.
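
The easy part, checking the finite control, is a single pass over the trace. Here is a sketch under the same assumptions as above; `delta`, my chosen flat encoding of {S}'s transition relation, is hypothetical:

    def control_consistent(trace, delta, accepting):
        """One pass over the trace: every step must match some transition
        of S, and the last state must be accepting. Since delta is a fixed
        finite set, this is constant work per step, O(T(n)) in total."""
        for i, step in enumerate(trace):
            next_state = trace[i + 1].state if i + 1 < len(trace) else None
            ok = any(
                q == step.state and sc == step.scanned and a == step.input_symbol
                and w == step.written and m == step.moves and d == step.input_move
                and (next_state is None or q2 == next_state)
                for (q, sc, a, q2, w, m, d) in delta
            )
            if not ok:
                return False
        return trace[-1].state in accepting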

We check this by using our second tape. For each work tape of {S}, we sweep over the trace and replay just that tape on the second tape. If the trace is correct, then there is no problem; if the trace is wrong, the replay will detect an inconsistency. Doing this costs {O(T(n))} per work tape, and since {S} has a fixed number of tapes the total is still {O(T(n))}. This is the way that Book and Greibach show that we can simulate {S} in time {O(T(n))}.
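
Here is the same consistency check written as ordinary code. A real two-tape machine would maintain tape {k}'s contents on its second tape while sweeping the trace on the first, but the logical test is the one below; the blank symbol '_' is my own convention.

    def tape_consistent(trace, k, blank='_'):
        """Replay work tape k of the trace: one pass, so O(T(n)) per tape.
        The symbol the trace claims is scanned must equal whatever was
        actually left on that square earlier (blank if never visited)."""
        tape = {}   # square index -> symbol currently written there
        head = 0
        for step in trace:
            if step.scanned[k] != tape.get(head, blank):
                return False          # the guessed trace was inconsistent
            tape[head] = step.written[k]
            head += step.moves[k]
        return True

In this sketch the simulator accepts only if control_consistent holds and tape_consistent holds for every {k}; some nondeterministic guess produces a trace passing all the checks exactly when {S} accepts, so the simulation is faithful.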

Open Problems

There are at least two obvious open problems. First, can this type of result be proved for deterministic Turing machines? The method makes essential use of the ability of a nondeterministic machine to guess. Still, there may be a cool way to improve the simulation of deterministic machines. Second, could one exploit this fundamental property of nondeterminism to get a hold of the P=NP problem? I do not see how it could be exploited, but it is an intriguing thought.

