
The Power Of Guessing

November 8, 2012


Some results and ideas on guessing and nondeterminism

David Doty and Michael Wehar are theorists who work in different areas of computational complexity, and are at different stages of their careers. Doty is at the California Institute of Technology as a fellow in the DNA and Natural Algorithms Group, while Wehar just finished his BS and MS from Carnegie Mellon University at age nineteen. Doty is one of the experts on self-assembly systems, while Wehar is focused—at least for now—on complexity theory: his honors thesis is titled “Intersection Emptiness for Finite Automata.”

Today Ken and I wish to discuss the power of guessing, the power of nondeterminism, and what we might learn about the question {\mathsf{P} =? \mathsf{NP}} from simulations and from contexts in which {\mathsf{NP}} can be replaced by {\mathsf{P}} or at least sub-exponential time.

While DNA self-assembly systems and complexity theory seem far apart, they are tied by a common thread: what makes a system universal? This thread began for Doty in structural complexity theory and Jack Lutz’s program of resource-bounded theories of measure and dimensionality, and as with Lutz himself it moved into biological computing models. This brought him into questions dear to me—self-assembly is one area covered by a conference that I helped start, with Eric Baum, almost twenty years ago. The 19th DNA conference is scheduled for next fall—see here for details. It is exciting to see that while the conference has “mutated” some, it is still going strong.

Besides proving nice results we will mention shortly and working with top people at CMU including Klaus Sutner, Manuel Blum, and Richard Statman, Wehar is noted for work on Rubik’s Cube. We quote him:

Back in 2009 when I was 16 years old, I solved the 20x20x20 cube in 20 hours 37 minutes and 7.42 seconds. It was quite an accomplishment even though it was a rather slow solve. I think I could solve it in somewhere between 8 and 14 hours if I were to give a second stab at it. I just wanted to share this with all of you.

The “7.42 seconds” part is what I like best—as complexity theorists we might have been satisfied to say “order-of 20 hours” or even “polynomial in 20 hours.”

Efficiency and Guessing

This degree of time-precision brings to mind the main point of a paper at the just-held FOCS conference co-authored by Doty with Lutz and Matthew Patitz, Robert Schweller, Scott Summers, and Damien Woods. This is on a tight notion of universality for self-assembling tile systems, analogous to a kind of linear-time universality in cellular automata. The tile assembly process itself is nondeterministic and asynchronous.

When a cellular automaton model {{\cal C}} is said to be universal, for itself or for some other model {{\cal M}} such as Turing machines, it can mean several things. It can mean there is an easily computed mapping {f} from computations in {{\cal M}} to computations in {{\cal C}} such that the results by a machine {M \in {\cal M}} on an input {x} can be quickly inferred from notionally-equivalent results of the automaton {C = f(M)} running on the same (or lightly encoded) {x}. Or, stronger, it can mean {f} is a mapping from configurations {I,J} of an {M} to corresponding configurations {I',J'} of a {C}, such that {I} goes to {J} in one step if and only if {I'} goes to {J'} in some number of steps. The latter number of steps and sizes of {I',J'} could still be huge. This was so with Matthew Cook’s seminal exponential-time simulation of arbitrary Turing machines by the cellular automaton Rule 110. Woods and his student Turlough Neary got this down to a polynomial overhead in their 2006 ICALP paper.
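
To make the setting concrete, here is a minimal Python sketch—not part of any of the cited constructions—of one synchronous update of Rule 110 on a finite ring of cells. Cook’s result, and the Neary-Woods speedup, concern how iterating such updates from a suitably encoded starting row can simulate Turing machines.

```python
# A toy sketch of the elementary cellular automaton Rule 110, the rule Cook proved
# universal. Cells live on a finite ring here for simplicity; the actual universality
# constructions use particular infinite periodic backgrounds.

RULE = 110  # the 8-bit truth table 01101110, indexed by the 3-cell neighborhood

def step(cells):
    """Apply one synchronous update of Rule 110 to a list of 0/1 cells (wrapping)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        out.append((RULE >> neighborhood) & 1)               # look up that bit of the rule
    return out

# Example: a single live cell spreads into Rule 110's familiar triangular pattern.
config = [0] * 30 + [1] + [0] * 30
for _ in range(5):
    config = step(config)
```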


The new result is that a standard tile-assembly model is universal for itself, under an even more special mapping {f}. Here, an individual tile {T} of a system {M} being simulated is represented as a “super-tile” {T'} of the fixed universal system {U}. In fact, owing to the nondeterministic and asynchronous nature of the simulator, there may be many super-tiles that encode an individual tile {T}, so {f} is a many-one function from super-tiles of {U} to tiles of {M}. The super-tile is a legal combination of {m^2} tiles from the finite set that constitutes the simulator {U}, where the number {m} depends only on the simulated system {M}. Assemblies of tiles {T} belonging to {M} are directly mimicked by assemblies of the super-tiles {T'}, as are the dynamics of the assembly process.

Note that there is an {m^2} blowup from {T} to {T'}, but this re-scaling depends only on the fixed {M}. So this is a linear-scaled simulation with a known, small constant scale factor, and gives rise to a notion called intrinsic universality (IU). One point of IU is that the requirements on {f} and efficiency are so tight that communication complexity and Kolmogorov complexity techniques can be used to prove that certain other kinds of cellular automata are not IU. In other words, such weak cellular automata are provably not universal under this strict notion of linear-time simulation. As usual see the full paper for details.
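
To visualize the block map, here is a toy Python sketch under simplifying assumptions: it performs only the geometric rescaling, ignoring the glue strengths and asynchronous attachment dynamics that the real construction must handle, and the `encode_tile` table is a hypothetical stand-in for the actual super-tile encodings.

```python
# Toy illustration of the m x m "super-tile" rescaling: every tile type T of the
# simulated system M is drawn as an m x m block of tiles of the fixed universal
# system U, where m depends only on M. In the real construction many distinct
# super-tiles may represent the same T, which is why the map from super-tiles
# back to tiles is many-to-one.

def scale_assembly(assembly, encode_tile, m):
    """Replace each simulated tile at position (x, y) by an m x m block of U tiles.

    `assembly` maps grid positions (x, y) to tile-type names of M;
    `encode_tile[T]` is an m x m nested list of U tile-type names representing T.
    """
    scaled = {}
    for (x, y), T in assembly.items():
        block = encode_tile[T]
        for dy in range(m):
            for dx in range(m):
                scaled[(m * x + dx, m * y + dy)] = block[dy][dx]
    return scaled
```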

§

On the power of guessing in general, I wish there were some great breakthrough to report, but there are a couple of results and an idea that I would like to share with you. All are related in one way or another to the power of guessing. Perhaps the key question in all of complexity theory is, what is the power of guessing, of nondeterminism? Of course that is the core of the {\mathsf{P} = \mathsf{NP}} question, but it extends to many other parts of theory.

The three questions we will discuss are:

  • Can we show that nondeterminism can be simulated better than by brute-force search?

  • Can we simulate the product of finite automata better than the “obvious” method?

  • Can we trade off guessing for correctness?

Let’s look at each in turn.

An Oracle Result

I believe that while {\mathsf{P}\neq\mathsf{NP}} may be true, nondeterministic machines can still be simulated better than by brute-force. Color me a strong disbeliever in the Exponential Time Hypothesis. Ken believes ETH is off only by a log factor in the exponent, in the sense of a {2^{O(n/\log n)}} upper bound. We will see.

In a paper a while ago with Subrahmanyam Kalyanasundaram (Subruk), Ken, and Farbod Shokrieh, we worked on a modest improvement to the best known deterministic simulation of a Nondeterministic Turing Machine (NTM). Here is the abstract from our paper—I quote it to save me some writing.

The standard simulation of a nondeterministic Turing machine (NTM) by a deterministic one essentially searches a large bounded-degree graph whose size is exponential in the running time of the NTM. The graph is the natural one defined by the configurations of the NTM. All methods in the literature have required time linear in the size {S} of this graph. This paper presents a new simulation method that runs in time {O(\sqrt S)}. The search savings exploit the one-dimensional nature of Turing machine tapes. In addition, we remove most of the time-dependence on nondeterministic choice of states and tape head movements.
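
For orientation, here is a minimal sketch of the standard simulation the abstract refers to: a deterministic breadth-first search of the NTM’s configuration graph, whose size {S} is exponential in the running time. The `initial_config`, `successors`, and `is_accepting` hooks are hypothetical stand-ins for a concrete machine encoding; the paper’s {O(\sqrt S)} method is a different and cleverer search, not shown here.

```python
# Sketch of the brute-force O(S) simulation: search all configurations reachable
# within the time bound and accept iff an accepting configuration is found.
# Configurations must be hashable (e.g., encoded as tuples or strings).

from collections import deque

def brute_force_accepts(initial_config, successors, is_accepting, max_steps):
    seen = {initial_config}
    frontier = deque([(initial_config, 0)])
    while frontier:
        config, t = frontier.popleft()
        if is_accepting(config):
            return True
        if t >= max_steps:
            continue
        for nxt in successors(config):   # all nondeterministic next configurations
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, t + 1))
    return False
```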

Doty proved a pretty oracle result that shows that any great improvement to what we did would have to be non-relativizing. Again I will quote from his paper:

Hartmanis used Kolmogorov complexity to provide an alternate proof of the classical result of Baker, Gill, and Solovay that there is an oracle relative to which P is not NP. We refine the technique to strengthen the result, constructing an oracle relative to which a conjecture of Lipton is false.

Is it cool to be mentioned in an abstract even if it shows that you were wrong? I guess it’s okay, since the proof is only for oracles. I still stand by the conjecture, which was that {\mathsf{NTIME}(n)} is contained in {\mathsf{DTIME}(c^{n})} for some constant {c<2}. But the result of Doty does show that any proof will have to be non-relativizing.

An Automata Result

One of the great mysteries to me is this: how can the “obvious” algorithm for the intersection of finite automata be the best possible? The usual intersection algorithm forms the direct product of the machines and then runs the emptiness test on the product machine. For {k} machines of {n} states each, this yields an algorithm of order {n^{k}}. This beautiful construction—due to Michael Rabin and Dana Scott—cannot be the best. Or can it? We have found better methods for so many other problems that I find it hard to believe that by using randomness, or some new data structure, or some other trick, we cannot reduce the running time below {n^{k}}. But it seems that doing much better is unlikely, because a strong improvement would lead to very surprising results in complexity theory.
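
For reference, here is a minimal Python sketch of that “obvious” algorithm: build the product machine only implicitly and search it for a reachable all-accepting state. The tuple encoding of a DFA used below is just an illustrative choice; with {k} machines of {n} states each, the search visits up to {n^{k}} product states.

```python
# Rabin-Scott product construction, searched on the fly: the intersection of the
# k DFA languages is nonempty iff some product state whose components are all
# accepting is reachable from the tuple of start states.

from collections import deque

def intersection_nonempty(dfas, alphabet):
    """Each DFA is a triple (start, accepting_set, delta) with delta[state][symbol] -> state."""
    start = tuple(q0 for (q0, _, _) in dfas)
    seen = {start}
    frontier = deque([start])
    while frontier:
        states = frontier.popleft()
        if all(s in acc for s, (_, acc, _) in zip(states, dfas)):
            return True                  # some string is accepted by every machine
        for a in alphabet:
            nxt = tuple(delta[s][a] for s, (_, _, delta) in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```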

Years ago working with George Karakostas and Anastasios Viglas, we wrote a paper “On the complexity of intersecting finite state automata” where we proved many consequences of the assumption that the product emptiness problem could be solved in {n^{o(k)}} time for fixed {k}. As we discussed in greater detail here, one was that

\displaystyle \mathsf{NL} \neq \mathsf{NP}.

Wehar in his thesis has proved much more. In particular, he shows that if the intersection problem is in {\mathsf{DTIME}(n^{o(k)})}, then {\mathsf{NL} \neq \mathsf{P}}. This of course greatly improves our result. See also his overview.

A New Idea?

Define {\mathsf{NTIME}(T(n),G(n))} to be the class of languages accepted by an NTM that runs in time {T(n)} and uses at most {G(n)} guesses. Note that if {G(n) = T(n)}, this is just the usual definition of {\mathsf{NTIME}(T(n))}.
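
To see why the guess bound matters, here is a minimal sketch, assuming a hypothetical deterministic `verify` routine that runs the machine with its guess bits fixed in advance: trying every guess string decides the language deterministically in roughly {T(n)\cdot 2^{G(n)}} time, so a small {G(n)} already buys a better-than-brute-force simulation.

```python
# Deterministic simulation by exhausting the guesses: accept x iff some guess
# string of length G(n) leads the (now deterministic) verification to accept.
# `verify(x, guess)` is a hypothetical T(n)-time deterministic procedure.

from itertools import product

def decide_by_enumeration(x, verify, guess_bits):
    for guess in product((0, 1), repeat=guess_bits):   # 2^{G(n)} candidate guess strings
        if verify(x, guess):
            return True
    return False
```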

Here is the idea that I have been playing with recently. Suppose that we consider a set {S} in {\mathsf{NTIME}({\text{poly}(n)},\delta n)} where {\delta < 1}. Then there is a polynomial-size circuit that solves membership for {S} correctly on at least {2^{(1-\delta)n}} inputs of length {n}. The proof is simple: each member of length {n} is certified by some guess string of length {\delta n}, and there are only {2^{\delta n}} such strings, so by the pigeonhole principle some single string certifies at least a {2^{-\delta n}} fraction of the members. Hard-wiring that string into the polynomial-time verification gives a circuit that never wrongly accepts, so it is correct on every non-member as well, and a quick count shows it is right on at least {2^{(1-\delta)n}} inputs overall.
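
Here is a small brute-force illustration of the counting step, reusing the hypothetical `verify` hook from the sketch above: tallying, over the members of length {n}, which single guess string certifies the most of them exhibits the string whose hard-wiring gives the claimed circuit.

```python
# Pigeonhole count: each member is certified by at least one of the 2^{delta*n}
# guess strings, so the most popular string certifies at least a 2^{-delta*n}
# fraction of the members. Hard-wiring that string never causes a wrong "yes".

from itertools import product
from collections import Counter

def best_single_guess(members, verify, guess_bits):
    coverage = Counter()
    for x in members:                                  # inputs of length n that are in S
        for guess in product((0, 1), repeat=guess_bits):
            if verify(x, guess):
                coverage[guess] += 1                   # this guess certifies x
    return max(coverage, key=coverage.get) if coverage else None
```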

I find this simple observation to be quite intriguing. For example, suppose that {\mathsf{SAT}} could be solved by an NTM that runs in polynomial time and makes only {\delta n} guesses. Then, as above, exponentially many {\mathsf{SAT}} instances of each length would be easy for a single polynomial-size circuit.

Two remarks about this idea: Since {\mathsf{SAT}} solutions can be checked, we should be able to arrange that when the algorithm says “yes” it is always correct. Also, we can avoid a trivialization of this observation that relates to the encoding of the problems: in the “usual” encoding many strings will not correspond to a {\mathsf{SAT}} instance at all, but we can fix this by using a more clever encoding.

I will stop now and come back to this idea later with more details.

Open Problems

The gap between what we believe about the power of guessing and what we can actually prove remains huge. As we have discussed many times before, it remains open even to prove that {\mathsf{NTIME}(n)} is not contained in {\mathsf{DTIME}(n\log n)}.

Also, if you are interested in the last idea, we can try to work out the details together.

Update 5/13, 5/14: Michael Wehar’s papers have moved to his University at Buffalo webpage.

Comments
  1. November 9, 2012 4:52 am

    Another way of “improving” the $n^k$ bound for the intersection of finite automata would be an FPT algorithm running in time $f(k) n^c$ for some function $f$ and constant $c$. Do you know of any result in this direction? Could this have strong consequences like P ≠ NL?

    • Magnus permalink
      November 10, 2012 12:12 pm

      Yes, such an FPT algorithm would imply the result. One can use the self-reducibility of the question to reduce a question about k machines with at most n states each to one about, say, k/log k machines of at most n^(log k) states each, and then apply the FPT algorithm.

  2. November 10, 2012 3:21 pm

    Yes! If I understand you correctly, I proved a more general statement. http://www.andrew.cmu.edu/user/mwehar/papers.html

    If you want, you can read “Hardness Results for Intersection Emptiness”. Please give me feedback.

    • November 10, 2012 3:56 pm

      Thanks! We have added links to the papers in the post itself.

      As an aside, the Neary-Woods result for Rule 110 is the most economical demonstration I (Ken) know of universal complexity emerging feasibly from a simple rule. A similar P-completeness result for John Horton Conway’s “Game of Life” underscores the last chapter of Hawking and Mlodinow’s popular book The Grand Design. The P-completeness text by Greenlaw, Hoover, and Ruzzo is a standard reference for the latter result; on p211 (p225 of the PDF file) they gave as open whether a one-dimensional cellular automaton can simulate TM’s, which Matthew Cook showed for exponential time and Neary-Woods answered fully.

  3. Russell Impagliazzo permalink
    November 11, 2012 1:09 pm

    I want to make just a few clarifying comments. First, the Exponential Time Hypothesis is a question, not a conjecture. We were not asking anyone to believe it, just pointing out that it would be interesting to know it is true, and interesting to know that it is false. However, I think strong belief or disbelief is premature at this point.

    Second, the statements you suggest as being contrary to ETH are not necessarily in contradiction to it. You need to be very careful about what n is in your simulation. In some contexts, it’s the input size, but it could also mean the solution size or the total time of the non-deterministic algorithm. In ETH, n refers to the solution size, the number of guessed bits, and ETH refers to a specific problem, 3-SAT. It is shown to be equivalent to sub-exponential time algorithms in the solution size for a wide variety of NP-complete problems.
    But the instance size for even a sparse formula on n Boolean variables is O(n \log n) bits long. So if n is the instance size or time of a non-deterministic algorithm, a 2^{O(n/\log n)} algorithm does not contradict ETH. In the reverse direction, the best reductions from TM’s to SAT blow up the instance and solution by a logarithmic factor, so the negation of ETH would not imply a sub-exponential simulation of arbitrary TMs. Similarly, I don’t see that your conjecture that there is a c^n simulation of linear time NTMs with c < 2 contradicts even strong ETH, the conjecture that the constants for k-SAT go to 2 as k increases.

    • November 12, 2012 9:24 pm

      Thanks, Russ. I think we’ve studiously avoided calling ETH a “conjecture”. You are right that there are two parallel statements, one about SAT and one about NTM’s, with no proven relation. I’ve wondered whether Etienne Grandjean’s 1988 SICOMP paper and this followup could be tweaked to fuse them, but it’s never made my front burner. Dick led an effort to try to show time c^n for all c > 1 (I prefer to think 2^{\epsilon n} for all \epsilon > 0) two years ago, but it didn’t hit any interesting obstacles.

      • November 25, 2012 10:14 pm

        Ah no, it was called a conjecture here, wherein I also missed your similar comment then (Sep. 2010). I found it because I’ve been riffling through the comments in 2010 for the Sunday 11/25 post…

  4. abhinav permalink
    November 12, 2012 9:26 am

    Yes, it’s a nice concept; can you give more detail?

