The Power Of Guessing
Some results and ideas on guessing and nondeterminism
David Doty and Michael Wehar are theorists who work in different areas of computational complexity, and are at different stages of their careers. Doty is at the California Institute of Technology as a fellow in the DNA and Natural Algorithms Group, while Wehar just finished his BS and MS from Carnegie Mellon University at age nineteen. Doty is one of the experts on self-assembly systems, while Wehar is focused—at least for now—on complexity theory: his honors thesis is titled “Intersection Emptiness for Finite Automata.”
Today Ken and I wish to discuss the power of guessing, the power of nondeterminism, and what we might learn about the P = NP question from simulations and from contexts in which exponential-time search can be replaced by polynomial or at least sub-exponential time.
While DNA self-assembly systems and complexity theory seem far apart, they are tied by a common thread: what makes a system universal? This thread began for Doty in structural complexity theory and Jack Lutz’s program of resource-bounded measure and dimension, and as with Lutz himself it moved into biological computing models. This brought him into questions dear to me: self-assembly is one area covered by a conference that I helped start, with Eric Baum, almost twenty years ago. The 19th DNA conference is scheduled for next fall; see here for details. It is exciting to see that while the conference has “mutated” some, it is still going strong.
Besides proving nice results we will mention shortly and working with top people at CMU including Klaus Sutner, Manuel Blum, and Richard Statman, Wehar is noted for work on Rubik’s Cube. We quote him:
Back in 2009 when I was 16 years old, I solved the 20x20x20 cube in 20 hours 37 minutes and 7.42 seconds. It was quite an accomplishment even though it was a rather slow solve. I think I could solve it in somewhere between 8 and 14 hours if I were to give a second stab at it. I just wanted to share this with all of you.
The “7.42 seconds” part is what I like best—as complexity theorists we might have been satisfied to say “order-of 20 hours” or even “polynomial in 20 hours.”
Efficiency and Guessing
This degree of time-precision brings to mind the main point of a paper at the just-held FOCS conference co-authored by Doty with Lutz and Matthew Patitz, Robert Schweller, Scott Summers, and Damien Woods. This is on a tight notion of universality for self-assembling tile systems, analogous to a kind of linear-time universality in cellular automata. The tile assembly process itself is nondeterministic and asynchronous.
When a cellular automaton model M is said to be universal, for itself or for some other model such as Turing machines, it can mean several things. It can mean there is an easily computed mapping from computations in the other model to computations in M, such that the results of a machine N on an input x can be quickly inferred from notionally-equivalent results of the automaton M running on the same (or lightly encoded) input. Or, stronger, it can mean there is a mapping f from configurations C of an N to corresponding configurations f(C) of M, such that C goes to C' in one step if and only if f(C) goes to f(C') in some number t of steps. The latter number t of steps and the sizes of the configurations f(C) could still be huge. This was so with Matthew Cook’s seminal exponential-time simulation of arbitrary Turing machines by the cellular automaton Rule 110. Woods and his student Turlough Neary got this down to a polynomial overhead in their 2006 ICALP paper.
The new result is that a standard tile-assembly model U is universal for itself, under an even more special mapping. Here, an individual tile T of a system S being simulated is represented as a “super-tile” of the fixed universal system U. In fact, owing to the nondeterministic and asynchronous nature of the simulator, there may be many super-tiles that encode an individual tile T, so the mapping is a many-one function from super-tiles of U to tiles of S. A super-tile is a legal m × m combination of tiles from the finite set that constitutes the simulator U, where the number m depends only on the simulated system S. Assemblies of tiles belonging to S are directly mimicked by assemblies of the corresponding super-tiles, as are the dynamics of the assembly process.
Note that there is an m × m blowup in scale from S to U, but this re-scaling depends only on the fixed system S. So this is a linear-scaled simulation with a known, small constant scale factor, and gives rise to a notion called intrinsic universality (IU). One point of IU is that the requirements on the mapping and on efficiency are so tight that communication complexity and Kolmogorov complexity techniques can be used to prove that certain other kinds of cellular automata are not IU. In other words, such weak cellular automata are provably not universal under this strict notion of linear-time simulation. As usual see the full paper for details.
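To make the nondeterministic, asynchronous dynamics concrete, here is a minimal sketch, entirely our own toy code and not anything from the paper, of growth in the abstract Tile Assembly Model: tiles carry glues on their four sides, and a tile may attach at an empty site whenever the matching glue strengths sum to at least the temperature. All names (`attachable`, `grow`, and so on) are invented for illustration.

```python
# Toy sketch of abstract Tile Assembly Model growth (our own illustration).
# A tile type is a dict mapping each side "N","E","S","W" to (glue label,
# glue strength). Attachment at temperature tau requires total matching
# glue strength >= tau.
import random

def attachable(assembly, pos, tile, temperature=2):
    """Total strength of this tile's glues that match placed neighbors."""
    x, y = pos
    nbrs = {"N": (x, y + 1), "E": (x + 1, y), "S": (x, y - 1), "W": (x - 1, y)}
    opposite = {"N": "S", "E": "W", "S": "N", "W": "E"}
    strength = 0
    for side, npos in nbrs.items():
        if npos in assembly:
            nlabel, nstr = assembly[npos][opposite[side]]
            label, s = tile[side]
            if label == nlabel and s == nstr:
                strength += s
    return strength >= temperature

def empty_neighbors(assembly):
    """All empty sites adjacent to the current assembly."""
    out = set()
    for (x, y) in assembly:
        for p in ((x, y + 1), (x + 1, y), (x, y - 1), (x - 1, y)):
            if p not in assembly:
                out.add(p)
    return out

def grow(seed_tile, tile_set, steps, temperature=2):
    """One random assembly sequence: the model is nondeterministic and
    asynchronous, so each run may pick a different attachment order."""
    assembly = {(0, 0): seed_tile}
    for _ in range(steps):
        choices = [(p, t) for p in empty_neighbors(assembly)
                   for t in tile_set if attachable(assembly, p, t, temperature)]
        if not choices:
            break                       # assembly is terminal
        pos, tile = random.choice(choices)   # the nondeterministic choice
        assembly[pos] = tile
    return assembly
```

Running `grow` repeatedly can give different attachment orders and, with richer tile sets, different terminal assemblies; that is exactly the branching behavior an intrinsically universal simulator has to mimic tile-for-super-tile.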
On the power of guessing in general, I wish there were some great breakthrough to report, but there are a couple of results and an idea that I would like to share with you. All are related in one way or another to the power of guessing. Perhaps the key question in all of complexity theory is: what is the power of guessing, of nondeterminism? Of course that is the core of the P versus NP question, but it extends to many other parts of theory.
The three questions we will discuss are:
- Can we show that nondeterminism can be simulated better than by brute-force search?
- Can we simulate the product of finite automata better than the “obvious” method?
- Can we trade off guessing for correctness?
Let’s look at each in turn.
An Oracle Result
I believe that while P ≠ NP may be true, nondeterministic machines can still be simulated better than by brute force. Color me a strong disbeliever in the Exponential Time Hypothesis. Ken believes ETH is off only by a log factor in the exponent, in the sense of a 2^{O(n/log n)} upper bound. We will see.
In a paper a while ago with Subrahmanyam Kalyanasundaram (Subruk), Ken, and Farbod Shokrieh, we worked on a modest improvement to the best known deterministic simulation of a Nondeterministic Turing Machine (NTM). Here is the abstract from our paper—I quote it to save me some writing.
The standard simulation of a nondeterministic Turing machine (NTM) by a deterministic one essentially searches a large bounded-degree graph whose size is exponential in the running time of the NTM. The graph is the natural one defined by the configurations of the NTM. All methods in the literature have required time linear in the size of this graph. This paper presents a new simulation method that runs in time roughly the square root of the size of this graph. The search savings exploit the one-dimensional nature of Turing machine tapes. In addition, we remove most of the time-dependence on nondeterministic choice of states and tape head movements.
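For readers who want to see the baseline being improved upon, here is a sketch, our own toy code and not the paper's method, of the standard simulation: a deterministic breadth-first search of the NTM's configuration graph. Each configuration is (state, tape, head position); with t steps the graph can have exponentially many nodes, and this search pays for essentially all of them.

```python
# Deterministic simulation of an NTM by brute-force search of its
# configuration graph (the baseline the quoted paper improves on).
from collections import deque

def ntm_accepts(delta, start, accept, tape, max_steps):
    """delta: (state, symbol) -> list of (new_state, write, move in {-1,+1}).
    Breadth-first search over configurations (state, tape, head)."""
    init = (start, tuple(tape), 0)
    seen = {init}
    frontier = deque([init])
    for _ in range(max_steps):
        nxt = deque()
        while frontier:
            state, tp, head = frontier.popleft()
            if state == accept:
                return True
            for ns, w, mv in delta.get((state, tp[head]), []):
                t2 = list(tp)
                t2[head] = w
                h2 = head + mv
                if h2 < 0:
                    continue            # fell off the left end: dead branch
                if h2 == len(t2):
                    t2.append("_")      # extend tape with a blank
                conf = (ns, tuple(t2), h2)
                if conf not in seen:    # the graph search never revisits
                    seen.add(conf)
                    nxt.append(conf)
        frontier = nxt
    return any(c[0] == accept for c in frontier)
```

The `seen` set is the whole point: the simulation's cost is the number of distinct configurations touched, which is what the quoted abstract measures as |G|.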
Doty proved a pretty oracle result that shows that any great improvement to what we did would have to be non-relativizing. Again I will quote from his paper:
Hartmanis used Kolmogorov complexity to provide an alternate proof of the classical result of Baker, Gill, and Solovay that there is an oracle relative to which P is not NP. We refine the technique to strengthen the result, constructing an oracle relative to which a conjecture of Lipton is false.
Is it cool to be mentioned in an abstract even if it shows that you were wrong? I guess it’s okay, since the proof is only for oracles. I still stand by the conjecture, which was that NTIME(n) is contained in DTIME(c^n) for some constant c < 2. But the result of Doty does show that any proof will have to be non-relativizing.
An Automata Result
One of the great mysteries to me is how the “obvious” algorithm for the intersection of finite automata can be the best possible. The usual intersection algorithm forms the direct product of the machines and then uses the emptiness test on the product machine. For two machines of n states each, this leads to an order n^2 algorithm, and for k machines an order n^k algorithm. This beautiful result, due to Michael Rabin and Dana Scott, cannot be the best. Or can it? We have found better methods for so many other problems that I find it hard to believe that by using randomness, or some new data structure, or some other trick, we cannot reduce the running time below n^k. But it seems that doing much better is unlikely, because a strong improvement would lead to very surprising results in complexity theory.
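Here is the obvious algorithm in miniature: a sketch of the Rabin–Scott product construction for k complete DFAs over a shared alphabet, with emptiness tested by searching the product's reachable states. The state space has size n^k for machines of n states each, which is exactly the bound in question. The representation of a DFA as a (transition table, start state, accepting set) triple is our own convention for this sketch.

```python
# The "obvious" method: breadth-first search of the product automaton.
# A DFA is (delta, start, accepting) with delta[(state, symbol)] = state.
from collections import deque

def intersection_nonempty(dfas, alphabet):
    """True iff some string is accepted by every DFA in the list."""
    start = tuple(s for (_, s, _) in dfas)
    seen = {start}
    frontier = deque([start])
    while frontier:
        qs = frontier.popleft()
        # A product state is accepting iff every coordinate is accepting.
        if all(q in acc for q, (_, _, acc) in zip(qs, dfas)):
            return True
        for a in alphabet:
            nxt = tuple(d[(q, a)] for q, (d, _, _) in zip(qs, dfas))
            if nxt not in seen:         # at most n^k product states
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

The search is linear in the number of reachable product states, so the running time is O(n^k) times small factors; the question in this section is whether anything can beat that exponent.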
Years ago, working with George Karakostas and Anastasios Viglas, we wrote a paper “On the complexity of intersecting finite state automata” where we proved many consequences of the assumption that the product emptiness problem for k machines of n states each could be solved in time n^{o(k)}. As we discussed in greater detail here, one was that NL ≠ NP.
Wehar in his thesis has proved much more. In particular, he shows that if the intersection problem for k machines is solvable in time n^{o(k)}, then NL ≠ P. This of course greatly improves our result. See also his overview.
A New Idea?
Define NTIME[t(n), g(n)] to be those languages accepted by a nondeterministic Turing Machine (NTM) that runs in time t(n) and uses at most g(n) guesses. Note, if g(n) = t(n), then this is just the usual definition of NTIME[t(n)].
Here is the idea that I have been playing with recently. Suppose that we consider a set S in NTIME[poly, g(n)] where g(n) = o(n). Then, there is a polynomial size circuit that solves membership for S correctly for at least 2^{n-g(n)} inputs of length n. The proof is trivial: each accepted input of length n has some guessing string of length g(n) that works for it, so by the pigeonhole principle there is a single guess that works for at least a 2^{-g(n)} fraction of the accepted inputs. Hard-wiring that guess gives a circuit that is automatically correct on every rejected input as well, since no guess can make the machine accept such an input.
I find this simple observation to be quite intriguing. For example, suppose that SAT could be solved by an NTM that runs in polynomial time and makes only o(n) guesses. Then again there would be an exponentially large number of instances that would be easy for some fixed polynomial size circuit.
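A toy check of the pigeonhole observation, on our own invented example rather than SAT: take the language {x : x contains a 1} on n-bit strings, decided by guessing g = ceil(log2 n) bits naming an index i and verifying x[i] == '1'. Hard-wiring the most popular guess gives a deterministic test correct on at least 2^n / 2^g inputs.

```python
# Exhaustive check of the fixed-guess circuit idea on a toy language
# (our own example): x is in L iff x contains a 1; a guess is an index.
from itertools import product
from math import ceil, log2

n = 4
inputs = ["".join(b) for b in product("01", repeat=n)]
g = ceil(log2(n))                       # number of guessed bits

def fixed_guess_decider(i):
    """The machine with guess i hard-wired: now fully deterministic."""
    return lambda x: x[i] == "1"        # verify the guessed position

truth = lambda x: "1" in x              # actual membership in L

# Count, for the best single guess, how many inputs it decides correctly.
best = max(sum(fixed_guess_decider(i)(x) == truth(x) for x in inputs)
           for i in range(n))
```

Here `best` comes out well above the pigeonhole guarantee of 2^{n-g} = 4 of the 16 inputs, and the fixed-guess test never wrongly answers yes: if it says the guessed position holds a 1, the input really is in the language.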
Two remarks about this idea: Since a claimed satisfying assignment can be checked, we should be able to arrange that when the algorithm says “yes” it is always correct. Also we can avoid a trivialization of this observation that relates to the encoding of the problems. In the “usual” encoding many strings will not correspond to a problem. We can fix this by using a more clever encoding.
I will stop now and come back to this idea later with more details.
Open Problems
The gap between what we believe about the power of guessing and what we can actually prove remains huge. As we have discussed many times before, it remains open even to prove that NTIME(n) is not contained in DTIME(n^{1.1}).
Also if you are interested in the last idea, we can try and work out the details together.
Update 5/13, 5/14: Michael Wehar’s papers have moved to his University at Buffalo webpage.
Another way of “improving” the $n^k$ bound for the intersection of finite automata would be an FPT algorithm running in time $f(k) n^c$ for some function $f$ and constant $c$. Do you know of any result in this direction? Could this have strong consequences like P ≠ NL?
Yes, such an FPT algorithm would imply the result. One can use the self-reducibility of the question to reduce a question of k machines with at most n states each to a question with, say, k/log k machines of at most n^{log k} states each, and then apply the FPT algorithm.
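The grouping step in this self-reduction can be sketched as follows, assuming complete DFAs given as (transition table, start state, accepting set) triples; `product_dfa` and `group` are our own names, and the product construction itself is the standard one.

```python
# Replace each block of b machines by their single product DFA, turning
# k machines of n states into ceil(k/b) machines of at most n^b states
# (with b = log k this is the grouping mentioned in the comment above).
def product_dfa(dfas, alphabet):
    """Standard product of several complete DFAs, restricted to
    reachable product states (enough for emptiness testing)."""
    delta = {}
    start = tuple(s for (_, s, _) in dfas)
    states = {start}
    stack = [start]
    while stack:
        qs = stack.pop()
        for a in alphabet:
            nxt = tuple(d[(q, a)] for q, (d, _, _) in zip(qs, dfas))
            delta[(qs, a)] = nxt
            if nxt not in states:
                states.add(nxt)
                stack.append(nxt)
    accepting = {qs for qs in states
                 if all(q in acc for q, (_, _, acc) in zip(qs, dfas))}
    return delta, start, accepting

def group(dfas, b, alphabet):
    """ceil(k/b) machines, each the product of b of the originals."""
    return [product_dfa(dfas[i:i + b], alphabet)
            for i in range(0, len(dfas), b)]
```

Each output triple has the same shape as the inputs, so the grouped instance can be fed straight back into any intersection-emptiness routine, which is what makes the self-reduction go through.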
Yes! If I understand you correctly, I proved a more general statement. http://www.andrew.cmu.edu/user/mwehar/papers.html
If you want, you can read “Hardness Results for Intersection Emptiness”. Please give me feedback.
Thanks! We have added links to the papers in the post itself.
As an aside, the Neary-Woods result for Rule 110 is the most economical demonstration I (Ken) know of universal complexity emerging feasibly from a simple rule. A similar P-completeness result for John Horton Conway’s “Game of Life” underscores the last chapter of Hawking and Mlodinow’s popular book The Grand Design. The P-completeness text by Greenlaw, Hoover, and Ruzzo is a standard reference for the latter result; on p211 (p225 of the PDF file) they gave as open whether a one-dimensional cellular automaton can simulate TM’s, which Matthew Cook showed for exponential time and Neary-Woods answered fully.
Thanks!
I want to make just a few clarifying comments. First, the Exponential Time Hypothesis is a question, not a conjecture. We were not asking anyone to believe it, just pointing out that it would be interesting to know it is true, and interesting to know that it is false. However, I think strong belief or disbelief is premature at this point.
Second, the statements you suggest as being contrary to ETH are not necessarily in contradiction to it. You need to be very careful about what n is in your simulation. In some contexts, it's the input size, but it could also mean the solution size or the total time of the non-deterministic algorithm. In ETH, n refers to the solution size, the number of guessed bits, and ETH refers to a specific problem, 3-SAT. It is shown to be equivalent to sub-exponential time algorithms in the solution size for a wide variety of NP-complete problems.
But the instance size for even a sparse formula on n Boolean variables is O(n \log n) bits long. So if n is the instance size or time of a non-deterministic algorithm, a 2^{O(n/\log n)} algorithm does not contradict ETH. In the reverse direction, the best reductions from TM’s to SAT blow up the instance and solution by a logarithmic factor, so the negation of ETH would not imply a sub-exponential simulation of arbitrary TMs. Similarly, I don’t see that your conjecture that there is a c^n simulation of linear time NTMs with c < 2 contradicts even strong ETH, the conjecture that the constants for k-SAT go to 2 as k increases.
Thanks, Russ. I think we’ve studiously avoided calling ETH a “conjecture”. You are right that there are two parallel statements, one about SAT and one about NTM’s, with no proven relation. I’ve wondered whether Etienne Grandjean’s 1988 SICOMP paper and this followup could be tweaked to fuse them, but it’s never made my front burner. Dick led an effort to try to show time c^n for all c > 1 (I prefer to think 2^{\epsilon n} for all \epsilon > 0) two years ago, but it didn’t hit any interesting obstacles.
Ah no, it was called a conjecture here, wherein I also missed your similar comment then (Sep. 2010). I found it because I’ve been riffling through the comments in 2010 for the Sunday 11/25 post…
Nice concept; can you explain it in greater detail?