
Adding Dollar Signs

February 18, 2013

A few remarks on papers to appear at the 2013 Computational Complexity Conference


Chris Umans is the program chair of the upcoming Computational Complexity Conference, and has put together a terrific-looking program. He was aided by the rest of his program committee: Iftach Haitner, Troy Lee, Dana Moshkovitz, Jakob Nordström, Ryan O’Donnell, Ben Reichardt, Madhu Sudan, Amnon Ta-Shma, Jacobo Torán, and Emanuele Viola. We owe them for their hard work. Thanks.

Today I want to do something that we rarely do at GLL: report instantly on papers that have just been accepted at a conference.

I looked just now at the 29 papers accepted to CCC 2013. They all look intriguing, but I selected just a few for quick comments. Mostly I added the $’s that were missing so the abstracts would compile in LaTeX. I apologize to all whose abstracts are not included—see the full list of all papers here for the abstracts. Or see this Wordle for a “summary” of all the papers:


I also want to say that I surmise, with no inside knowledge, that lots of really cool papers were not accepted. I have been there myself, many times, so I know that some papers cannot be accepted even though they have interesting results. My advice is to hang in there and try again. I have noted before that Kary Mullis's Nobel Prize-winning paper on the polymerase chain reaction (PCR) technique for amplifying DNA was rejected by the journals Science and Nature before being published.

The Papers

Here are the papers, with additional comments on some. We will no doubt discuss some of these in the near future in further detail.

The Correct Exponent for the Gotsman-Linial Conjecture
Daniel Kane

On Rigid Matrices and U-Polynomials
Noga Alon, Gil Cohen

I have thought too many hours about rigid matrices. This is another case of “almost all of them have the property, but good luck proving an explicit one does.” Very important problem.

Formulas are exponentially stronger than monotone circuits in non-commutative setting
Pavel Hrubes, Amir Yehudayoff

Just a Pebble Game
Siu Man Chan

Quantum XOR Games
Oded Regev, Thomas Vidick

Strong LTCs with inverse polylogarithmic rate and soundness
Michael Viderman

Abstract: An error-correcting code {C \subseteq F^n} is called a {(q,\epsilon)}-strong locally testable code (LTC) if there exists a randomized algorithm (tester) that makes at most {q} queries to the input word, accepts all codewords with probability 1, and rejects every non-codeword {x\notin C} with probability at least {\epsilon \cdot \delta(x,C)}, where {\delta(x,C)} denotes the relative Hamming distance between the word {x} and the code {C}. The parameter {q} is called the query complexity and the parameter {\epsilon} is called the soundness. A well-known open question in the area of LTCs asks whether there exist strong LTCs with constant query complexity, constant soundness, and inverse polylogarithmic rate. In this paper, we construct strong LTCs with query complexity 3, inverse polylogarithmic soundness, and inverse polylogarithmic rate.
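To make the definition concrete, here is a small sketch of my own (not from the paper): the classic Blum-Luby-Rubinfeld 3-query linearity test, viewed as a strong local tester for the Hadamard code. For tiny {n} we can exhaustively check that codewords are accepted with probability 1 and that every non-codeword is rejected with positive probability. All function names below are my own illustration.

```python
# The BLR 3-query test as a strong local tester for the Hadamard code
# (my own toy sketch, not code from the paper).
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))  # the domain {0,1}^n

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def linear_fn(s):
    # the linear map a -> <s, a> mod 2; these are the Hadamard codewords
    return {a: sum(si * ai for si, ai in zip(s, a)) % 2 for a in points}

codewords = [linear_fn(s) for s in points]

def rejection_prob(f):
    # probability that the 3-query check f(a) + f(b) = f(a+b) fails
    bad = sum(1 for a in points for b in points if f[a] ^ f[b] != f[xor(a, b)])
    return bad / len(points) ** 2

def rel_distance(f):
    # relative Hamming distance from the word f to the code
    return min(sum(f[a] != c[a] for a in points) for c in codewords) / len(points)

# Exhaustive check over all 2^8 words: the tester accepts with probability 1
# exactly on the codewords, i.e., it rejects every non-codeword.
for bits in product([0, 1], repeat=len(points)):
    f = dict(zip(points, bits))
    assert (rejection_prob(f) == 0) == (rel_distance(f) == 0)
```

The assertion holds because a word passing the test for every pair {(a,b)} is exactly a linear function; the quantitative soundness bound (rejection probability versus distance) is what the LTC literature makes precise.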

Two-message quantum interactive proofs and the quantum separability problem
Patrick Hayden, Kevin Milner, Mark Wilde

How Low Can Approximate Degree and Quantum Query Complexity be for Total Boolean Functions?
Andris Ambainis, Ronald de Wolf

Approaching the chasm at depth four
Ankit Gupta, Pritish Kamath, Neeraj Kayal, Ramprasad Saptharishi

Shared Randomness and Quantum Communication in the Multi-Party Model
Dmitry Gavinsky, Tsuyoshi Ito, Guoming Wang

On the Power of Non-Adaptive Learning Graphs
Aleksandrs Belovs, Ansis Rosmanis

Collapse of the Hierarchy of Constant-Depth Exact Quantum Circuits
Yasuhiro Takahashi, Seiichiro Tani

We study the quantum complexity class {\mathsf{QNC^0_f}} of quantum operations implementable exactly by constant-depth polynomial-size quantum circuits with unbounded fan-out gates. Our main result is that the quantum OR operation is in {\mathsf{QNC^0_f}}, which is an affirmative answer to the question of Høyer and Špalek… Lastly, we show that, if the quantum Fourier transform modulo a prime is in {\mathsf{QNC^0_f}}, there exists a polynomial-time exact classical algorithm for a discrete logarithm problem using a {\mathsf{QNC^0_f}} oracle. This implies that, under a plausible assumption, there exists a classically hard problem that is solvable exactly by a {\mathsf{QNC^0_f}} circuit with gates for the quantum Fourier transform.

A Derandomized Switching Lemma and an Improved Derandomization of {\mathsf{AC^0}}
Luca Trevisan, TongKe Xue

Random Arithmetic Formulas can be Reconstructed Efficiently
Ankit Gupta, Neeraj Kayal, Youming Qiao

Optimal Inapproximability for Scheduling Problems via Structural Hardness for Hypergraph Vertex Cover
Sushant Sachdeva, Rishi Saket

Lower Bounds for DNF-Refutations of a Relativized Weak Pigeonhole Principle
Albert Atserias, Moritz Müller, Sergi Oliva

Superlinear lower bounds for multipass graph processing
Venkatesan Guruswami, Krzysztof Onak

On the Parameterized and Approximation Hardness of Metric Dimension
Sepp Hartung, André Nichterlein

Covering CSP’s
Irit Dinur, Gillat Kol

The distinguishability of product distributions by read-once branching programs
John Steinberger

An {O({n^{{1\over 2}+\epsilon}})} Space Algorithm for Directed Planar Reachability with Polynomial Running Time
Tatsuya Imai, Kotaro Nakagawa, Aduri Pavan, N. V. Vinodchandran, Osamu Watanabe

Short lists with short programs in short time
Bruno Bauwens, Anton Makhlin, Nikolay Vereshchagin, Marius Zimand

What a great title: the word “short” is used three times.

Abstract: Given a machine {U}, a {c}-short program for {x} is a string {p} such that {U(p)=x} and the length of {p} is bounded by {c + }(the length of a shortest program for {x}). We show that for any universal machine, it is possible to compute in polynomial time on input {x} a list of polynomial size guaranteed to contain an {O(\log |x|)}-short program for {x}. We also show that there exist computable functions that map every {x} to a list of size {O(|x|^2)} containing an {O(1)}-short program for {x}, and this is essentially optimal because we prove that such a list must have size {\Omega(|x|^2)}. Finally, we show that for some machines, computable lists containing a shortest program must have length {\Omega(2^{|x|})}.

Constructing hard functions from learning algorithms
Adam Klivans, Pravesh Kothari, Igor Oliveira

I have thought about this type of work over the years and look forward to reading the full paper.

Abstract: Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class of Boolean circuits {C} contained in {\mathsf{P/poly}} is exactly learnable with membership and equivalence queries in polynomial time, then {\mathsf{EXP^{NP}}} is not contained in {C} (the class {\mathsf{EXP^{NP}}} was subsequently improved to {\mathsf{EXP}} by Hitchcock and Harkins). In this paper, we improve on these results. Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to “diagonalize” over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.

Approximating Boolean functions with depth-2 circuits
Eric Blais, Li-Yang Tan

This approximation result seems surprising to me.

Abstract: We study the complexity of approximating Boolean functions with DNFs and other depth-2 circuits, exploring two main directions: universal bounds on the approximability of all Boolean functions, and the approximability of the parity function. In the first direction, our main positive results are the first non-trivial universal upper bounds on approximability by DNFs:

{\bullet} Every Boolean function can be approximated by a DNF of size {O(2^n/\log n)}.

{\bullet} Every Boolean function can be approximated by a DNF of width {cn}, where {c < 1}.

Towards a Reverse Newman’s Theorem in Interactive Information Complexity
Joshua Brody, Harry Buhrman, Michal Koucky, Bruno Loff, Florian Speelman, Nikolay Vereshchagin

LS+ Lower Bounds from Pairwise Independence
Madhur Tulsiani, Pratik Worah

On Medium-Uniformity and Circuit Lower Bounds
Rahul Santhanam, Ryan Williams

Abstract: We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Informally, a circuit class is “medium uniform” if it can be generated by an algorithmic process that is somewhat complex (stronger than {\mathsf{LOGTIME}}) but not infeasible. We prove several new unconditional lower bounds against medium uniform circuit classes, including:

{\bullet} … For all {k} there is a language {L_k \in P} that does not have {O(n^k)}-size circuits constructible in polynomial time…

Composition limits and separating examples for some Boolean function complexity measures
Justin Gilmer, Michael Saks, Srikanth Srinivasan

On the Lattice Smoothing Parameter Problem
Kai-Min Chung, Daniel Dadush, Feng-Hao Liu, Chris Peikert

Matrix Constructions

Ken adds a few remarks about de-randomizing matrix constructions. Say a matrix {A} is a “reduct” of a matrix {B} if whenever {A(i, j) \neq 0}, {A(i, j) = B(i, j)}. That is, {A} is obtained from {B} by zeroing out some entries. Now define a matrix {B} with positive entries to be “normal” if all of its reducts {A} have the property

\displaystyle \det(A) = 0 \iff \mathrm{perm}(A) = 0.

Since {A} has nonnegative entries, of course a zero permanent implies a zero determinant, but the question is the converse. If we have just one {n \times n} matrix {B} that is normal, then we have an especially simple way of telling whether any {n}-by-{n} bipartite graph {G} has a perfect matching: multiply the adjacency matrix of {G} entry-wise by {B} to get a reduct, and take its determinant. That a random matrix {B} is normal is the basis for a famous randomized-{\mathsf{NC}} algorithm for perfect matching. But we don’t know how to efficiently construct even one normal matrix for each {n}—if we did, then perfect matching would be in {\mathsf{NC}}, which is still open.
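The randomized idea can be sketched concretely. Here is a minimal sequential illustration of my own (not NC code, and not from any of the papers): fill a matrix with random values on the graph's edges and zeros elsewhere, then test whether its determinant vanishes. By the Schwartz-Zippel lemma, a nonzero determinant certifies a perfect matching, and a zero determinant over several random trials means no matching with high probability. The function names are hypothetical.

```python
# Randomized perfect-matching test via determinants of random reducts
# (a sequential sketch of the randomized-NC idea; my own illustration).
import random
from fractions import Fraction

def determinant(M):
    # exact Gaussian elimination over the rationals
    M = [row[:] for row in M]
    n = len(M)
    det = Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= factor * M[c][k]
    return det

def has_perfect_matching_probably(adj, trials=5):
    """adj: n x n 0/1 bipartite adjacency matrix.
    True means a perfect matching certainly exists (nonzero determinant
    of some random reduct); False is correct with high probability."""
    n = len(adj)
    for _ in range(trials):
        # random reduct: random entries on edges, zeros elsewhere
        B = [[Fraction(random.randint(1, 10 ** 6)) * adj[i][j]
              for j in range(n)] for i in range(n)]
        if determinant(B) != 0:
            return True
    return False

# A complete 2x2 bipartite graph has a perfect matching; a graph whose
# second left vertex has no edges cannot have one.
assert has_perfect_matching_probably([[1, 1], [1, 1]])
assert not has_perfect_matching_probably([[1, 1], [0, 0]])
```

A normal matrix {B}, if we could construct one, would make the random choices unnecessary: one fixed reduct and one determinant would suffice.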

Ken really wishes to draw attention to the following meta-question:

Is there a relationship between the construction problem for rigid matrices and the problem for normal matrices? Does one “reduce” to the other? Have reduction relationships been mapped out between these and other construction problems involving expanders or designs or codes or similar combinatorial objects?

Open Problems

Take a look at the accepted papers. Which are your favorites? What is known about Ken’s question?

7 Comments
  1. February 18, 2013 12:47 pm

    Landsberg/Taylor/Vishnoi The Geometry of Matrix Rigidity (GIT Report CC-03-54, 2003) is a readable survey that concludes with the acknowledgment:

    We would like to thank Prof. Richard J. Lipton for suggesting to study the problem of matrix rigidity.


  2. February 18, 2013 5:42 pm

    re kary mullis & dna computing. theres a cool (older) paper on dna computing to solve the post correspondence problem with real test tube experiments, which doesnt seem to have gotten much attn in the TCS community. & seems like it has a future esp since post correspondence problem is turing complete….

  3. February 18, 2013 5:45 pm

    oh and fyi, for anyone who hasnt seen it, stasys jukna has a lot of excellent coverage/survey of matrix rigidity in his new book on boolean function complexity. theres even a thm of razborov that ties it to circuit lower bounds (& thereby complexity class separations) etc…. so yeah matrix rigidity is one of the hardest problems at the frontiers, but it also seems to be fundamental in many ways, maybe some kind of not-yet-but-future rosetta stone….

  4. February 18, 2013 6:37 pm

    ok, one other comment [oops, hard to think of em all at once!]. it seems a little strange how little empirical experiments are used in TCS despite that they could provide some significant insight. somewhat recently looked into matrix rigidity with some simple ideas (plz upvote that!). there is a ton of empirical research into SAT but it looks like virtually none on matrix rigidity despite the latters huge significance to complexity theory. maybe an opening for an enterprising & innovative researcher. any takers? plz feel free to reply on my blog for further ideas…

  5. TCNE permalink
    February 23, 2013 2:16 pm

    NP = coNP

    Let M0 be a Turing machine computing SAT (i.e. determining SAT membership) with complexity C, such that there does not exist another Turing machine that can compute SAT with complexity better than C. (i.e. C, whatever it is, is the lowest possible complexity for SAT)

    Let f be the function for the running time of M0 in the worst-case scenario. There must exist a function g of complexity C such that g > f for all natural numbers, and there must exist a Turing machine M1 computing g with complexity no more than C.

    Let k be g(|x|) for input x with length |x|.

    Build Turing machine M which consists of M1, M2 and M3 with the following configuration and behavior.

    M1 on completion of calculating g, writes 2k on the tape for M2 (all this will complete within complexity C) and transits to the starting states of M2 and M3 at the same time, i.e. it starts M2 and M3 at the same time. (M2 and M3 can, in their starting states, loop checking a register Q which M1 flips on completion for the start-of-computation signal).

    M2, a down counter (on 2k), will write 1 to a register/tape R which is initialized to 0 and halt when the counter of 2k hits 0 (seeing #, e.g., after 1^2k).

    M3 which is M1 extended with a check of R after each original step, will transit to a reject state and halt whenever seeing R flipped from 0 to 1, while retaining all original behavior of M1.

    Both M2 and M3 will halt and halt within complexity C. M accepts (rejects) iff M3 accepts (rejects).

    M is a Turing machine that accepts AND rejects for SAT. In other words, it computes both membership and non-membership of SAT, or rather, both membership of SAT and membership of UNSAT, the coNP-complete complement of SAT.

    Since UNSAT membership can be determined (by the constructed M) in complexity C, the same as that for SAT, NP = coNP. Q.E.D.

    For a different pdf version (solution through 6 questions), go to:



    Am greatly encouraged by your advice: “My advice is hang in there and try again.”

    I tried many, many times. I just tried again to send my questions to you. Please let me know if you got my emails.

    Did you ever get the questions from me?

    BTW, jumping to P-NP is perhaps too fast. Let us try NP=coNP first.


    • rjlipton permalink*
      February 23, 2013 3:17 pm

      Never got questions


    • TCNE permalink
      February 23, 2013 3:44 pm


      So sorry. It must be MIMA. 🙂

      Seriously. Thank you ever so much for letting me know this. And I apologize for thinking that you almost purposely refused to acknowledge the receipt. I really did send and re-send so many times the email with the questions which lead, imho, to NP = coNP.

      Anyway, I am not a conspiracy theorist. But this does seem weird, very weird. It seems my emails concerning P, NP, coNP almost always get lost and do not get responses. That does not happen to my other emails. The same thing seems to happen with communication with another mathematician as we speak.

      If you can see my message here, the email path can be non-existent for a while. 🙂 If you still would like to see the questions, they are at the URL given in my last message. Please kindly let me know if you have a problem accessing the URL (

      BTW, if you happen to have an interest in reading the stuff concerning P != NP and see anything doubtful, please feel free to let me know. Only that I would like to deal with the non-classical computational model aspect separately, even though I believe I have taken care of it but just maybe too briefly.

      Once again, my heart-felt thanks!

