Level By Level
Impetus to study a new reducibility relation
See Mike’s other projects too
Michael Wehar has just earned his PhD degree in near-record time in my department. He has posted the final version of his dissertation, titled On the Complexity of Intersection Non-Emptiness Problems, which he defended last month. The dissertation expands on his paper at ICALP 2014, his joint paper at ICALP 2015 with Joseph Swernofsky, and his joint paper at FoSSaCS 2016.
Today, Dick and I congratulate Mike on his accomplishment and wish him all the best in his upcoming plans, which center on his new job with CapSen Robotics near Pitt and CMU in Pittsburgh.
Mike’s thesis features a definition that arguably has been thought about by researchers in parameterized complexity but passed over as too particular. If I were classifying his definition like salsa or cheddar cheese,
- Mild;
- Medium;
- Sharp;
then Mike’s definition would rate as
- Extra Sharp.
Too sharp, that is, for $O$-notation, thus putting it beyond our usual complexity theory diet. But the problem it addresses has also seemed beyond the reach of complexity theory while we can’t even get a clear handle on $\mathsf{P}$ versus $\mathsf{NP}$:
What are the relationships between the levels $\mathsf{DTIME}[n^k]$ versus $\mathsf{NTIME}[n^k]$ of deterministic versus nondeterministic time, and/or between the levels $\mathsf{DSPACE}[k\log n]$ versus $\mathsf{NSPACE}[k\log n]$ of fairly-counted deterministic and nondeterministic space?
“Fairly counted” space means that we cast away the “linear space-compression” theorem by restricting Turing machines to be binary, that is, to use work alphabet $\{0,1\}$. The blank character can be read but not written. This doesn’t mean swearing off $O$-notation for space, but it does mean employing it more carefully.
Questions about particular polynomial-time exponents and space-usage factors animate fine-grained complexity, which has seen a surge of interest recently. Consider this problem which is central in Mike’s thesis:
- Intersection (Non-)Emptiness: Given DFAs $D_1,\dots,D_k$, each with at most $n$ states, is $L(D_1) \cap \cdots \cap L(D_k) \neq \emptyset$?
The “naive” Cartesian product algorithm works in time basically $O(n^k)$. The fine-grained question is, can one do better than naive? As we covered before, Mike’s early work showed that getting time $n^{o(k)}$ would have the sharp consequence $\mathsf{NL} \neq \mathsf{P}$, which improved the conclusion in a 2003 paper by Dick with George Karakostas and Anastasios Viglas.
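To make the naive bound concrete, here is a sketch in Python of the Cartesian-product algorithm. The encoding of a DFA as a (start state, accepting set, transition dictionary) triple is our own illustrative convention, not anything from the papers. The sketch searches the product automaton breadth-first, touching at most $n^k$ product states:

```python
from collections import deque

def intersection_nonempty(dfas):
    """BFS over the Cartesian product of k DFAs' state sets.

    Each DFA is a triple (start, accepting_set, delta), where delta
    maps (state, symbol) -> state over a shared alphabet.  The search
    visits at most n^k product states -- the 'naive' bound.
    """
    alphabet = {sym for _, _, delta in dfas for (_, sym) in delta}
    start = tuple(d[0] for d in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        tup = queue.popleft()
        if all(q in dfas[i][1] for i, q in enumerate(tup)):
            return True          # some string is accepted by all k DFAs
        for sym in alphabet:
            try:
                nxt = tuple(dfas[i][2][(q, sym)] for i, q in enumerate(tup))
            except KeyError:
                continue         # partial transition table: this symbol dies
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

The `seen` set of $k$-tuples is exactly where the $n^k$ cost lives; the fine-grained question is whether that set can somehow be avoided.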
Mike’s thesis expands the mechanism that produces this and related results, with the aim of a two-pronged attack on complexity bounds. He has framed it in terms of parameterized complexity, which we’ll discuss before returning to the thesis.
Some Fixed-Parameter Basics
Consider three classic $\mathsf{NP}$-complete problems, each given an undirected graph $G = (V,E)$ and a number $k$:
- Vertex Cover: Is there $S \subseteq V$ of size at most $k$ such that every edge is incident to a node in $S$?
- Clique: Is there $S \subseteq V$ of size at least $k$ such that all nodes in $S$ are connected by edges?
- Dominating Set: Is there $S \subseteq V$ of size at most $k$ such that every node not in $S$ has an edge to a node in $S$?
Parameterized complexity started with the question:
What happens when some value of the parameter $k$ is fixed?
For each problem $A$ and each $k$, we can define the “slice” $A_k$ to be the language of strings—here denoting graphs $G$—such that the answer for $(G,k)$ is “yes.” Each individual slice belongs to $\mathsf{P}$: just try all $\binom{n}{k}$ choices of $S$, and when $k$ is fixed this gives polynomial time. The fine-grained question is:
Can we solve each slice faster? In particular, can we solve $A_k$ in time $O(n^c)$ where the exponent $c$ is fixed independent of $k$?
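As a concrete instance of the “try all choices” algorithm, here is an illustrative Python sketch of the slice algorithm for Clique (our own code, not from any of the papers). The loop over all size-$k$ subsets is what makes the exponent grow with $k$:

```python
from itertools import combinations

def clique_slice(graph, k):
    """Decide the k-th slice of Clique by brute force.

    graph: dict mapping each node to the set of its neighbors.
    Trying all C(n, k) subsets costs roughly n^k time for fixed k --
    polynomial per slice, but with an exponent that grows with k.
    """
    nodes = list(graph)
    for cand in combinations(nodes, k):
        # check that every pair of chosen nodes is joined by an edge
        if all(v in graph[u] for u, v in combinations(cand, 2)):
            return True
    return False
```

The same skeleton works for Vertex Cover and Dominating Set by swapping the inner check; the question above asks whether the $n^k$ outer loop can ever be replaced by $n^c$ with $c$ fixed.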
The start of much combinatorial insight and fun is that the three problems appear to give three different answers:
- Vertex Cover: Build $S$ by streaming edges $(u,v)$ in a fixed order. If neither $u$ nor $v$ already belongs to $S$, make a binary choice of which to add, until $|S| = k$, when we accept if $S$ is a cover. If not, start over and try the next string in $\{0,1\}^k$ to guide the choices. This gives time $O(2^k n^c)$ for some fixed $c$.
- Clique: We can get time $n^{ck}$ where $c < 1$ (see this), but the exponent is still linear in $k$.
- Dominating Set: No algorithm with time $O(n^{k-\epsilon})$ for any fixed $\epsilon > 0$ is known or expected.
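The Vertex Cover bullet can be rendered in recursive form, which makes the same binary choices as the streaming description (illustrative code, not taken from the thesis): some endpoint of each uncovered edge must enter the cover, so we branch two ways per choice, for at most $2^k$ leaves.

```python
def vertex_cover_slice(edges, k):
    """Bounded search tree for Vertex Cover.

    edges: list of (u, v) pairs.  Each recursive step picks an
    uncovered edge and branches on which endpoint joins the cover,
    so the search tree has at most 2^k leaves: O(2^k * m) time.
    """
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # edges remain but no budget left
    u, v = edges[0]            # one of u, v must be in the cover
    return (vertex_cover_slice([e for e in edges if u not in e], k - 1) or
            vertex_cover_slice([e for e in edges if v not in e], k - 1))
```

The budget $k$ shrinks by one per branch, which is the source of the $2^k$ factor noted above.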
The exact $c$ for Vertex Cover may be model-dependent; for RAMs we get $c = 1$ and indeed $O(1.2738^k + kn)$ time is known. But the fact of its being fixed holds in any reasonable model and classifies Vertex Cover as belonging to the class $\mathsf{FPT}$, for fixed-parameter tractable. To address whether Clique and Dominating Set belong to $\mathsf{FPT}$, a novel reducibility and completeness theory was built.
Parameterized Reductions and Logspace
To exemplify an FPT-reduction, consider the problem
- Short NTM Acceptance: Does a given nondeterministic Turing machine $N$ accept the empty string within $k$ steps?
When $k$ is variable this is a generic $\mathsf{NP}$-complete problem, so of course we can reduce Clique to it, but consider the reduction done this way: Given $G$ and $k$, make an alphabet with a character for each node $v$. Code $G$ with a table for $N$ to write down $\binom{k}{2}$ edges nondeterministically on its tape, then deterministically check whether only $k$ different nodes appear in these edges, accepting if so. The computation takes a number of steps $k'$ that is $O(k^2)$ and can be defined solely in terms of $k$. The latter fact makes it an FPT-reduction.
FPT-reductions work similarly to ordinary polynomial-time reductions. If our NTM problem belongs to $\mathsf{FPT}$ then so does Clique. There is a trickier FPT-reduction the other way, so the problems are FPT-equivalent. Both are complete for the first level, called $\mathsf{W}[1]$, of the $\mathsf{W}$-hierarchy. Dominating Set is hard for the second level, $\mathsf{W}[2]$; Clique FPT-reduces to it, but not vice-versa unless $\mathsf{W}[1] = \mathsf{W}[2]$.
Just as all major $\mathsf{NP}$-complete problems are related by logspace, not just poly-time, computable reductions, parameterized complexity has a logspace-founded version. The class $\mathsf{XL}$ consists of parameterized problems whose slices individually belong to $\mathsf{L}$—that is, to deterministic logspace—and $\mathsf{XNL}$ and $\mathsf{XP}$ are defined similarly via the conditions $A_k \in \mathsf{NL}$ and $A_k \in \mathsf{P}$. Interestingly, the separation $\mathsf{FPT} \neq \mathsf{XP}$ is known by standard diagonalization means, but $\mathsf{XNL}$ is not known to be contained in $\mathsf{FPT}$, for reasons similar to the $\mathsf{NL}$ vs. $\mathsf{P}$ problem. A recent Master’s thesis by Jouke Witteveen has details on these classes.
All four of our problems belong to $\mathsf{XL}$. Furthermore, the slices belong to “almost exactly $k\log_2 n$” space using binary TMs. The space is needed mainly to write down each candidate $S$ and can be quantified as $k\log_2 n$ plus lower-order terms. We could jump from this to define finer variations on $\mathsf{XL}$ and $\mathsf{XNL}$, but Mike, joined by Swernofsky, chose to refine the reductions instead. Now we are primed to see how their ideas might impact separations.
LBL Reductions
Our reduction from Clique to Short NTM Acceptance bumped $k$ up to $k' = O(k^2)$. Blowups in $k$ are important to a central concept called kernelization, which deserves more space—see this survey. They can be exponential—the $O(2^k n^c)$ algorithm for Vertex Cover hints at how—or even worse. Hence people have refined parameterized reductions according to whether they blow up $k$
- polynomially, or
- linearly.
But before now, we haven’t noted in this line a strong motivation to limit the blowup even further, such as to $k' = ck$ or $k' = k + c$ for some fixed constant $c$ (however, see this).
Our reduction also bumped up the alphabet size. This appears necessary because for binary NTMs the problem is in $\mathsf{FPT}$: we can traverse the $2^k$ possible guesses made on the tape, and for each one, solve an instance of graph-reachability.
So what can we do if we insist on binary TMs? Mike and Joseph fastened on to the idea of making the $k$ in $k\log n$ space become the parameter. We can define “Log NTM Acceptance” to be like “Short NTM Acceptance” except that the NTM $N$ is allowed $k\log_2 n$ space (where $n$ is the size of $N$ in states). We get a problem in $\mathsf{XNL}$ that is in fact complete for $\mathsf{XNL}$ under parameterized logspace reductions. Likewise “Log DTM Acceptance” is the case where $N$ is deterministic, which is complete for $\mathsf{XL}$. Then we define “Medium DTM Acceptance,” where the question asks about acceptance within $n^k$ time steps regardless of space. Mike’s thesis also covers “Long DTM Acceptance,” where the time allowance is exponential rather than polynomial.
The $k\log_2 n$ space allows a binary TM to write down the labels of $k$ nodes in a size-$n$ structure at a time. In that way it subsumes the use of $k$ in the above three graph problems but is more fluid—the space could be used for anything. Keying on $k\log n$ drives the motivation for Mike’s definition of LBL (for “level-by-level”) reductions, whose uniform logspace version is as follows:
Definition 1 A parameterized problem $A$ LBL-reduces to a parameterized problem $B$ if there is a function $f$, computable in $O(\log n)$ space, such that for all $x$ and $k$, $x \in A_k \iff f(x) \in B_{ck}$, where $c$ is fixed independent of $k$.
That is, in the general FPT-reduction form $k' = g(k)$, we insist on $k' = ck$ exactly. This reduction notion turns out to be the neatest way to express how Mike’s thesis refines and extends previous work, even his own.
With our Chair, Chunming Qiao, and his UB CSE Graduate Leadership Award.
Some Results
The first reason to care about the sharper reduction is the following theorem, which really summarizes known diagonalization facts. The subscript $2$ reminds us that the TMs have binary work alphabet; it is not necessary on $\mathsf{DTIME}$ since the alphabet does not affect the exponent.
Theorem 2
- If a parameterized problem $A$ is LBL-hard for Log DTM Acceptance, then there is a constant $c$ such that for all $k$, $A_k \notin \mathsf{DSPACE}_2[(k/c)\log n]$.
- If a parameterized problem $A$ is LBL-hard for Log NTM Acceptance, then there is a constant $c$ such that for all $k$, $A_k \notin \mathsf{NSPACE}_2[(k/c)\log n]$.
- If $A$ is LBL-hard for Medium DTM Acceptance, then there is a constant $c$ such that for all $k$, $A_k \notin \mathsf{DTIME}[n^{k/c}]$.
Moreover, if there is a constant $c'$ such that each $A_k$ belongs to $\mathsf{DSPACE}_2[c'k\log n]$, $\mathsf{NSPACE}_2[c'k\log n]$, or $\mathsf{DTIME}[n^{c'k}]$, respectively, then these become LBL-equivalences.
The power, however, comes from populating these hardnesses and equivalences with natural problems. This is where the problem $\mathrm{IE}_k$—as a parameterized family of problems asking about the intersection of the languages of $k$-many $n$-state DFAs—comes in. We can augment them by adding one pushdown store to just one of the automata, calling them $D_1,\dots,D_{k-1},P$ where $P$ is the PDA. Then call the problem of whether
$L(P) \cap L(D_1) \cap \cdots \cap L(D_{k-1}) \neq \emptyset$ by the name $\mathrm{IE}^{P}_k$. It curiously turns out not to matter whether $P$ or the $D_i$ are nondeterministic for the following results:
Theorem 3
- $\mathrm{IE}_k$ is LBL-equivalent to Log NTM Acceptance. Hence there is a $c$ such that for each $k$, $\mathrm{IE}_k \notin \mathsf{NSPACE}_2[(k/c)\log n]$.
- $\mathrm{IE}^{P}_k$ is LBL-equivalent to Medium DTM Acceptance. Hence there is a $c$ such that for each $k$, $\mathrm{IE}^{P}_k \notin \mathsf{DTIME}[n^{k/c}]$.
Between the ICALP 2015 paper and writing the thesis, Mike found that the consequence of (a) had been obtained in a 1985 paper by Takumi Kasai and Shigeki Iwata. Mike’s thesis has further results establishing a whole spectrum of variants that correspond to other major complexity classes and provide similar lower bounds on individual levels within them. The question becomes how to leverage the greater level of detail to attack lower-bound problems.
Onto Bigger Questions
The way the attack on complexity questions is two-pronged is shown by considering the following pair of results.
Theorem 4 $\mathsf{NL} = \mathsf{P}$ if and only if $\mathrm{IE}_k$ and $\mathrm{IE}^{P}_k$ are LBL-equivalent.
Theorem 5 If for every $k$ there is a $c = o(k)$ such that $\mathrm{IE}_k \in \mathsf{DTIME}[n^{c}]$, then $\mathsf{NL} \neq \mathsf{P}$.
It has been known going back to Steve Cook in 1971 that logspace machines with one auxiliary pushdown, whether deterministic or nondeterministic, capture exactly $\mathsf{P}$. Since pushdowns are restricted and concrete, this raised early hope of separating $\mathsf{NL}$ from $\mathsf{P}$—intuitively by “demoting” some aspect of a problem down to $\mathsf{DCFL}$ or $\mathsf{CFL}$, which are known to be different from $\mathsf{REG}$. There was also hope of separating $\mathsf{L}$ and $\mathsf{NL}$, or at least finding more-critical conditions that pertain to their being equal.
What Mike’s theorem does is shift the combinatorial issue to what happens when the pushdown is added to a collection of DFAs. Unlike the case of an auxiliary pushdown attached to a space-bounded TM, there is a more-concrete sense in which the relative influence of the pushdown might “attenuate” as $k$ increases. Can this be leveraged for a finer analysis that unlocks some secrets of lower bounds?
At the very least, Mike’s LBL notion provides a succinct way to frame finer questions. For example, how does $\mathsf{P}$ versus $\mathsf{PSPACE}$ hang on the relation between individual levels of $\mathsf{DTIME}[n^k]$ and levels of $\mathsf{DSPACE}[n^k]$? The simplest way to express this, in terms of Medium DTM Acceptance and the analogous parameterized problem for space-bounded acceptance, seems to be:
Theorem 6 $\mathsf{P} = \mathsf{PSPACE}$ if and only if Medium DTM Acceptance and its space-bounded analogue are LBL-equivalent.
Mike’s thesis itself is succinct, at 73 pages, yet contains a wealth of other variants for tree-shaped automata and other restricted models and connections to forms of the Exponential Time Hypothesis.
Open Problems
How can one appraise the benefit of expressing “polynomial” and “logspace” complexity questions consciously in terms of individual $\mathsf{DTIME}[n^k]$ and $\mathsf{DSPACE}[k\log n]$ levels? Does the LBL reduction notion make this more attractive?
It has been a pleasure supervising Mike and seeing him blaze his way in the larger community, and Dick and I wish him all the best in his upcoming endeavors.
[fixed first part of Theorem 2, added link to Cygan et al. 2011 paper, added award photo]
Snow And Theory
The 25th Anniversary of the ACO Program
Cropped from src1 & src2 in gardens for karma
Prasad Tetali and Robin Thomas are mathematicians at Georgia Tech who are organizing the Conference Celebrating the 25th Anniversary of the ACO Program. ACO stands for our multidisciplinary program in Algorithms, Combinatorics and Optimization. The conference is planned to be held this Monday through Wednesday, January 9–11, 2017.
Today I say “planned” because there is some chance that Mother Nature could mess up our plans.
Read more…
Babai’s Result: Still a Breakthrough
Even after today’s retraction of quasi-polynomial time for graph isomorphism
Cropped from source
László Babai is famous for many things, and has made many seminal contributions to complexity theory. Last year he claimed that Graph Isomorphism (GI) is in quasi-polynomial time.
Today Laci posted a retraction of this claim, conceding that the proof has a flaw in the timing analysis, and Ken and I want to make a comment on what is up. Update 1/10: He has posted a 1/9 update reinstating the claim of quasi-polynomial time with a revised algorithm. As we’ve noted, he is currently speaking at Georgia Tech, and we hope to have more information soon.
Read more…
The Gift of Community
Shared experience may matter as much as scientific cooperation
AIP source—see also interview
Robert Marshak was on hand for Trinity, which was the first detonation of a nuclear weapon, ever. The test occurred at 5:29 am on July 16, 1945, as part of the Manhattan Project. Marshak was the son of parents who fled pogroms in Byelorussia. Witnessing the test, hearing of the destruction of Hiroshima and Nagasaki, and knowing his family history led him to become active in advancing peace. He soon co-founded and chaired the Federation of Atomic Scientists and was active in several other organizations promoting scientific co-operation as a vehicle of world peace. In 1992 he won the inaugural award of the American Association for the Advancement of Science for Science Diplomacy.
Today, the fifth day of both Chanukah and Christmas, we reflect on the gift of international scientific community.
Read more…
Hunting Complexity in Zeta
A second look at Voronin’s amazing universality theorem
Anatoly Karatsuba and Sergei Voronin wrote a book on Bernhard Riemann’s zeta function. The book was translated into English by Neal Koblitz in 1992. Among its special content is expanded treatment of an amazing universality theorem about the classic zeta function proved by Voronin in 1975. We covered it four years ago.
Today Ken and I take a second look and explore a possible connection to complexity theory.
Read more…
Bletchley Park
Lessons from the Park that still apply today
Iain Standen is the CEO of the Bletchley Park Trust, which is responsible for the restoration of the Park. After the war, the Park was almost completely destroyed and forgotten—partially at least for security reasons. Luckily it was just barely saved and is now a wonderful place to visit and see how such a small place helped change history.
Today I would like to report on a recent trip to Bletchley Park.
Read more…
Magnus and the Turkey Grinder
With part II of our “When Data Serves Turkey” post
Baku Olympiad source—note similarity to this
Magnus Carlsen last week retained his title of World Chess Champion. His match against challenger Sergey Karjakin had finished 6–6 after twelve games at “Standard” time controls, but he prevailed 3–1 in a four-game tiebreak series at “Rapid” time controls. Each game took an hour or hour-plus under a budget of 25 minutes plus 10 extra seconds for each move played.
Today we congratulate Carlsen and give the second half of our post on large data being anomalous. Read more…