Some papers from the accepted list of this year’s Computational Complexity Conference

 [ UB CSE ]

Alan Selman is a long-time friend of Ken and me, and a long-time researcher in complexity theory. Alan was the first president of the organizing body for the Computational Complexity Conference (CCC).

Today we salute the ${0b100010}$th edition of the conference and discuss some of the accepted papers.

The conference, and the governing body, have changed names over the years; by any name it remains an important conference. Alan chaired the first program committee with Steve Mahaney and edited the first proceedings, in 1986.

Ken saw Alan two weeks ago at the banquet for the symposium honoring Steve Cook at the University of Toronto. We will cover the event once exams are done. Ken saw another of the CCC past presidents there—if you wish to guess who, a hint is that it was one of Cook’s past students.

• Dieter van Melkebeek, 2012-2018

• Peter Bro Miltersen, 2009-2012

• Pierre McKenzie, 2006-2009

• Lance Fortnow, 2000-2006

• Eric Allender, 1997-2000

• Steven Homer, 1994-1997

• Timothy Long, 1992-1994

• Stephen Mahaney, 1988-1992

• Alan Selman, 1985-1988

Although we do not usually do announcements, we note from the conference website:

Details of the local arrangements for CCC 2019 and the preceding events, including the DIMACS Day of Tutorials, are available. Early registration runs till June 26.

## Six Papers with Some Comments

Here are some papers that I, Dick, found interesting from the list of accepted papers. All accepted papers are interesting, of course. I selected six whose topics are directly connected with my interests.

Criticality of Regular Formulas (paper)
Benjamin Rossman
I thought this was about regular expressions. Shows something about me. Here “regular” means that the in-degree of the gates is the same at each level of the formula. This condition seems likely to be removable, as Rossman conjectures, but I doubt it will be easy. “Criticality” is a parameter that measures how much a random restriction reduces the size of a formula. Think switching lemma.
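For readers who have not seen random restrictions, here is a toy sketch of my own—not anything from Rossman’s paper—of applying a p-random restriction to a DNF formula and simplifying. Each variable stays free with probability p and is otherwise fixed to a random bit; falsified literals kill their clause, satisfied literals drop out.

```python
import random

def restrict_dnf(clauses, p, rng):
    """Apply a p-random restriction to a DNF given as a list of clauses,
    each clause a set of literals (+v for x_v, -v for its negation).
    Returns the simplified DNF, or True if some clause becomes all-true."""
    variables = {abs(lit) for clause in clauses for lit in clause}
    rho = {}  # the restriction: variable -> fixed bit (absent means free)
    for v in variables:
        if rng.random() >= p:          # fixed with probability 1 - p
            rho[v] = rng.randint(0, 1)

    simplified = []
    for clause in clauses:
        kept, dead = set(), False
        for lit in clause:
            v = abs(lit)
            if v in rho:
                value = rho[v] if lit > 0 else 1 - rho[v]
                if value == 0:
                    dead = True        # falsified literal kills the clause
                    break
                # satisfied literal: simply drop it from the clause
            else:
                kept.add(lit)          # variable left free by rho
        if dead:
            continue
        if not kept:
            return True                # a clause became constant 1
        simplified.append(kept)
    return simplified                  # [] means the formula became constant 0

rng = random.Random(0)
dnf = [{1, 2}, {-1, 3}, {2, -3}]
print(restrict_dnf(dnf, 0.5, rng))
```

Criticality, roughly, quantifies how fast such restrictions shrink a formula as p decreases; this sketch only shows the mechanics of a single restriction.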

Typically-Correct Derandomization for Small Time and Space (paper)
William Hoza
I like the notion of typically-correct. The algorithms work by treating the input itself as a source of randomness. This idea was pioneered by Oded Goldreich and Avi Wigderson; the title of their 2002 article, “Derandomization that is rarely wrong from short advice that is typically good,” gives away how one can prove such results.

Optimal Short-Circuit Resilient Formulas (paper)
Mark Braverman, Klim Efremenko, Ran Gelles, and Michael Yitayew
This is on a kind of fault tolerance. They consider fault-tolerant Boolean formulas in which the output of a faulty gate is stuck at one of the gate’s inputs. This is an interesting model of errors, and they show, roughly, that any formula can be converted into a formula that is not too much bigger and computes correctly even if about ${1/5}$ of the gates are faulty. A surprise is that they use a method related to blockchains. Hmmmmm. Interesting.

Fourier and Circulant Matrices are Not Rigid (paper)
Allen Liu and Zeev Dvir
A matrix is rigid if its rank cannot be reduced significantly by changing a small number of entries. As you probably know there are plenty of rigid matrices—take random ones—but no provable examples of explicit ones. Their beautiful results prove that specific families of matrices are not rigid. These families include ones that were long thought to be rigid. The highlight of this work could be that it suggests new families that may be rigid.
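To make the definition concrete, here is a small pure-Python illustration of mine, not from the paper: the $n \times n$ identity matrix has full rank $n$, yet changing only its $n$ diagonal entries drops the rank all the way to $0$—so a rigid matrix must resist rank collapse under such sparse changes.

```python
from fractions import Fraction

def rank(matrix):
    """Rank over the rationals via straightforward Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Identity has full rank n, but editing just its n diagonal entries
# (one change per row) yields the zero matrix of rank 0.
n = 4
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]
print(rank(I))  # 4
print(rank(Z))  # 0
```

Rigidity asks for much more: that *every* way of changing few entries leaves the rank high, which is why explicit rigid matrices are so elusive.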

Average-Case Quantum Advantage with Shallow Circuits (paper)
François Le Gall
A quest, the quest that tops all others, is the search for evidence that quantum computers are better than classical ones. Of course, proving this outright is nearly impossible, since even P versus PSPACE is an open problem. So one looks at special classes of computations. See here for how the quest for “quantum advantage” meets up with computational complexity.

Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems (paper)
Lijie Chen, Dylan McKay, Cody Murray, and Ryan Williams
Of course I think this is an interesting paper. There is the famous H-index. Perhaps there could be a T-score: the number of times your name appears in the title of a published paper. Thus Ron Rivest, for example, has a huge T-score.
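The proposed T-score is simple enough to sketch; everything here is hypothetical, including the naive substring-matching rule (which credits “Lipton” for “Karp-Lipton”):

```python
def t_score(name, titles):
    """Hypothetical 'T-score': how many paper titles mention the given name.
    Uses naive case-insensitive substring matching as the counting rule."""
    return sum(1 for title in titles if name.lower() in title.lower())

titles = [
    "Relations and Equivalences Between Circuit Lower Bounds"
    " and Karp-Lipton Theorems",
    "A Method for Obtaining Digital Signatures"
    " and Public-Key Cryptosystems",
]
print(t_score("Lipton", titles))  # 1
print(t_score("Selman", titles))  # 0
```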

## Open Problems

May 25, 2019 1:34 pm

Our paper is now available at
https://eccc.weizmann.ac.il/report/2019/075

So now you can truly determine how interesting it is! Briefly, we show how the problem of proving circuit lower bounds for classes like P^(NP) is equivalent to proving a Karp-Lipton collapse of PH to P^(NP) sufficient for the lower bounds. There are quite a few other fun observations too.

May 26, 2019 1:45 pm

See this paper on the Equal Subset Sum problem.


May 26, 2019 10:15 pm

Dear Craig Alan:

Thank you for this comment. The paper seems wonderful. One of my favorite tricks is the meet-in-the-middle attack. I tried—for too long—to improve it for related problems. I really like that they can improve on this classic result.
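For readers unfamiliar with the classic trick, here is a minimal sketch of textbook meet-in-the-middle for subset sum—this is the baseline the paper beats, not the paper’s improvement. Split the list in half, enumerate all $2^{n/2}$ subset sums of each half, and check whether a left sum and a right sum combine to the target:

```python
def subset_sum_mitm(nums, target):
    """Classic meet-in-the-middle for subset sum: enumerate all subset
    sums of each half (2^(n/2) per half), then test whether some
    left-half sum plus some right-half sum equals the target."""
    half = len(nums) // 2

    def all_sums(arr):
        sums = {0}  # the empty subset contributes sum 0
        for x in arr:
            sums |= {s + x for s in sums}
        return sums

    right_sums = all_sums(nums[half:])
    # A hash-set lookup plays the role of the sorted-array merge in
    # the usual presentation of the trick.
    return any(target - s in right_sums for s in all_sums(nums[:half]))

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
print(subset_sum_mitm([1, 2, 5], 4))             # False
```

This runs in roughly $O(2^{n/2})$ time and space, versus $O(2^n)$ for brute force; the surprise of the Mucha et al. paper is beating this bound for Equal Subset Sum.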

I definitely am taking a serious look at this paper: Equal-Subset-Sum Faster Than the Meet-in-the-Middle

Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, Karol Wegrzycki

Best

May 27, 2019 8:41 am

Can I ask the enlightened computer scientists here one weird question? What if one were to “find out” that CLIQUE has no algorithm, bounded by any quadratic polynomial, that can decide it for all inputs? In other words, there is no TM with a finite set of instructions whose usage is restricted to O(n^2) steps per input. How would this be achieved? One introduces a “non-algebraic” optimality that lives outside of ZFC and has no algebraic form of description, and reduces CLIQUE to that form. After that, one shows that no TM can navigate that “World of Clique” using a finite number of instructions (with any number and any form of transitions between them), because any TM will get caught in a circle (never halt) or get stuck at a wall of that world and never be able to leave it or turn around.

How does the TM choose a “direction”? It looks at a map with all “deterministic” transitions on it for all quadratically bounded polynomials, tries to apply any finite set of instructions (in any order), and realizes that it cannot “reach” the goal without an exponential “number of corrections”. It will make an exponential number of wrong steps for almost all inputs, no matter how the instructions are applied and no matter what is inside the instructions. There will always be an infinite set of inputs for which an exponentially large number of corrections is needed. If you “magically map” the transitions onto a correct path, you get infinitely many independent circles (the TM never halts in any of them and cannot jump from one circle to another). You may ask me about NP versus P/poly, but that does not help either. We can “extract” the spectrum of an NP-complete problem and then “construct” an infinitely large circuit based on it, but I have no idea what to do next with P/poly, so, in general, I am unsure about it at all. The one thing I do know is that you cannot solve NP problems “faster”; only some of their inputs will be faster and some slower.

So here is the question. Would this imply P != NP, or is it not enough? I don’t want to prove it for all possible algorithms.

Also, I am not a professional mathematician and have no formal education. So my proof is probably gibberish and nonsense. I don’t want to embarrass anyone with another “proof from an amateur”. But I do want to know the truth, at least for myself.

May 27, 2019 9:48 am

Dear Ivan Gusev:

The question is not weird at all. If I understand correctly, you are trying to think about a quadratic lower bound on a TM. I assume that you mean multi-tape machines; for one tape, things are usually quite different. I am not sure what you mean by infinite circuits.

Perhaps let me know more details.

Best

May 27, 2019 10:26 am

I’m trying to make a map of all possible transitions with all possible actions bound to them: make one step right → read → write (constant alphabet) → one step left, etc. I can make any TM just by picking from that map some strings of such actions, knowing that they will always be bounded by some O(n^2). Then I claim that the map does not have a finite number of combinations of such strings that can decide CLIQUE for all its inputs.

I do this through a new type of TM that allows me to extract substrings (which can be extended to infinity while preserving the conditions of growth and decidability) from any other TM. Then I make a RAM-like (but different) machine with an “address-free” tape that decides CLIQUE in the most optimal way, and I try to extract such sub-algorithms (they do belong to the map above) from that machine. At the end I see that no finite extraction allows me to compile my own algorithm, and my “address-free” TM cannot be transformed into any multi-tape TM if it can solve CLIQUE faster than brute force. Also, my RAM-like machine with an “address-free” tape is perfectly deterministic and analytic, so I can try to remap any map onto any other map (like manifolds) and see what comes out at the end. I’m just tired and don’t want to make a “more advanced” map for all possible TMs.