
Selected Papers at CCC 2019

May 25, 2019


Some papers from the accepted list of this year’s Computational Complexity Conference

[ UB CSE ]

Alan Selman is a long-time friend of Ken and me, and a long-time researcher in complexity theory. Alan was the first president of the organizing body for the Computational Complexity Conference (CCC).

Today we salute the {0b100010}th edition of the conference and discuss some of the accepted papers.

The conference, and the governing body, have changed names over the years; by any name it remains an important conference. Alan chaired the first program committee with Steve Mahaney and edited the first proceedings, in 1986.

Ken saw Alan two weeks ago at the banquet for the symposium honoring Steve Cook at the University of Toronto. We will cover that event once exams are done. Ken saw another of the CCC past presidents there—if you wish to guess who, a hint is that it was one of Cook’s past students.

  • Dieter van Melkebeek, 2012-2018

  • Peter Bro Miltersen, 2009-2012

  • Pierre McKenzie, 2006-2009

  • Lance Fortnow, 2000-2006

  • Eric Allender, 1997-2000

  • Steven Homer, 1994-1997

  • Timothy Long, 1992-1994

  • Stephen Mahaney, 1988-1992

  • Alan Selman, 1985-1988

Although we do not usually do announcements, we note from the conference website:

Details of the local arrangements for CCC 2019 and the preceding events, including the DIMACS Day of Tutorials, are available. Early registration runs till June 26.

Six Papers with Some Comments

Here are some papers that I, Dick, found interesting from the list of accepted papers. All accepted papers are interesting, of course. I selected six whose topics are directly connected with my interests.

Criticality of Regular Formulas (paper)
Benjamin Rossman
I thought this was about regular expressions. Shows something about me. Here “regular” means that every gate at the same level of the formula has the same in-degree. Rossman conjectures this condition can be removed, and that seems likely, but I doubt it will be easy. “Criticality” is a parameter that measures how much a random restriction simplifies a formula. Think switching lemma.
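The switching-lemma flavor of a random restriction can be illustrated with a toy experiment (my own sketch, not Rossman’s construction): fix most variables of a small DNF at random, leave a few free, and watch how the formula collapses.

```python
import random

# Toy sketch of a random restriction: each variable is left free with
# probability p, otherwise fixed to a random bit. The switching lemma
# says small-width DNFs then simplify drastically with high probability.

def random_restriction(variables, p, rng):
    """Map each variable to 0, 1, or '*' (free)."""
    return {v: '*' if rng.random() < p else rng.randint(0, 1)
            for v in variables}

def restrict_dnf(terms, rho):
    """Apply rho to a DNF given as a list of terms, each a dict
    {var: required_bit}. Returns True if some term is satisfied
    outright, else the list of surviving, simplified terms."""
    survivors = []
    for term in terms:
        reduced, killed = {}, False
        for var, want in term.items():
            val = rho[var]
            if val == '*':
                reduced[var] = want
            elif val != want:
                killed = True   # a literal is falsified: the term dies
                break
        if killed:
            continue
        if not reduced:
            return True         # all literals satisfied: DNF is constant 1
        survivors.append(reduced)
    return survivors

rng = random.Random(0)
variables = range(20)
# a random width-3 DNF with 15 terms
terms = [{v: rng.randint(0, 1) for v in rng.sample(variables, 3)}
         for _ in range(15)]
rho = random_restriction(variables, 0.2, rng)
out = restrict_dnf(terms, rho)
print("constant 1" if out is True else f"{len(out)} of 15 terms survive")
```

Criticality, very roughly, quantifies how reliably this kind of collapse happens as the free-variable probability p shrinks.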

Typically-Correct Derandomization for Small Time and Space (paper)
William Hoza
I like the notion of typically-correct. Hoza’s algorithms work by treating the input itself as a source of randomness. This idea was pioneered by Oded Goldreich and Avi Wigderson; the title of their 2002 article, “Derandomization that is rarely wrong from short advice that is typically good,” gives away how one can prove such results.

Optimal Short-Circuit Resilient Formulas (paper)
Mark Braverman, Klim Efremenko, Ran Gelles, and Michael Yitayew
This is on a kind of fault-tolerance. They consider fault-tolerant boolean formulas in which the output of a faulty gate is stuck at one of the gate’s inputs. This is an interesting model of errors, and they show roughly: any formula can be converted into a formula that is not too much bigger and survives even if about {1/5} of the gates are faulty. A surprise is that they use a method related to blockchains. Hmmmmm. Interesting.
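The error model can be pictured with a tiny sketch (my own reading of a “short-circuit” fault, not the paper’s resilient construction): a faulty gate ignores its operation and passes one of its inputs straight through.

```python
# Toy model of a short-circuited gate: instead of computing AND/OR,
# the gate's output is stuck at one of its two inputs.

class Var:
    def __init__(self, i):
        self.i = i
    def eval(self, x):
        return x[self.i]

class Gate:
    def __init__(self, op, left, right, faulty_to=None):
        self.op, self.left, self.right = op, left, right
        self.faulty_to = faulty_to   # None (healthy), 'left', or 'right'

    def eval(self, x):
        l = self.left.eval(x)
        r = self.right.eval(x)
        if self.faulty_to == 'left':
            return l                 # short circuit: output = left input
        if self.faulty_to == 'right':
            return r                 # short circuit: output = right input
        return (l and r) if self.op == 'AND' else (l or r)

# (x0 AND x1) OR x2, with the AND gate short-circuited to its right input
good = Gate('OR', Gate('AND', Var(0), Var(1)), Var(2))
bad  = Gate('OR', Gate('AND', Var(0), Var(1), faulty_to='right'), Var(2))
x = [False, True, False]
print(good.eval(x), bad.eval(x))   # healthy: False, faulty: True
```

The theorem says any formula can be padded, with only modest blow-up, so that its output survives even when roughly a {1/5} fraction of gates misbehave this way.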

Fourier and Circulant Matrices are Not Rigid (paper)
Allen Liu and Zeev Dvir
A matrix is rigid if its rank cannot be reduced significantly by changing a small number of entries. As you probably know there are plenty of rigid matrices—take random ones—but no provable examples of explicit ones. Their beautiful results prove that specific families of matrices are not rigid. These families include ones that were long thought to be rigid. The highlight of this work could be that it suggests new families that may be rigid.
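The definition can be made concrete at toy scale (an exhaustive-search sketch of the definition over GF(2), nothing like the paper’s techniques): the rigidity of a matrix for target rank r is the fewest entry flips that bring its rank down to r or below.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as ints."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot             # lowest set bit of the pivot
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def rigidity(rows, n, r):
    """Fewest entry flips making rank <= r (brute force over all flips)."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    for k in range(n * n + 1):
        for flips in combinations(cells, k):
            changed = list(rows)
            for i, j in flips:
                changed[i] ^= 1 << j         # flip entry (i, j)
            if gf2_rank(changed) <= r:
                return k
    return n * n

identity4 = [1, 2, 4, 8]                     # the 4x4 identity matrix
print(gf2_rank(identity4))                   # rank 4
print(rigidity(identity4, 4, 2))             # 2: zero out two diagonal 1s
```

A matrix family is rigid when this count grows quickly with n even for ranks well below full; the paper shows that for Fourier and circulant families it does not.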

Average-Case Quantum Advantage with Shallow Circuits (paper)
François Le Gall
A quest—the quest that tops all others—is the search for evidence that quantum computers are better than classical ones. Of course, proving this unconditionally is nearly impossible, since even P=PSPACE is an open problem. So one looks at special classes of computations. See here for how the quest for “quantum advantage” meets up with computational complexity.

Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems (paper)
Lijie Chen, Dylan McKay, Cody Murray, and Ryan Williams
Of course I think this is an interesting paper. There is the famous H-score. Perhaps there could be a T-score. This would be the number of times your name is in the title of a published paper. Thus Ron Rivest, for example, has a huge T-score.

Open Problems

What are your selected papers?

[Added link to Relations and Equivalences… paper]

8 Comments
  1. rrwilliams permalink
    May 25, 2019 1:34 pm

    Our paper is now available at
    https://eccc.weizmann.ac.il/report/2019/075

    So now you can truly determine how interesting it is! Briefly, we show how the problem of proving circuit lower bounds for classes like P^(NP) is equivalent to proving a Karp-Lipton collapse of PH to P^(NP) sufficient for the lower bounds. There are quite a few other fun observations too.

  2. Craig Alan permalink
    May 26, 2019 1:45 pm

    See this paper on the Equal Subset Sum problem.

    https://arxiv.org/abs/1905.02424


    • rjlipton permalink*
      May 26, 2019 10:15 pm

      Dear Craig Alan:

      Thank you for this comment. The paper seems wonderful. One of my favorite tricks is the meet-in-the-middle attack. I tried—for too long—to improve it for related problems. I really like that they can improve on this classic result.

      I definitely am taking a serious look at this paper: Equal-Subset-Sum Faster Than the Meet-in-the-Middle

      Marcin Mucha, Jesper Nederlof, Jakub Pawlewicz, Karol Wegrzycki

      Best
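      The meet-in-the-middle trick mentioned in the reply above, for readers who have not seen it, looks roughly like this (a generic textbook sketch for Subset Sum, not the paper’s faster algorithm): split the items into two halves, enumerate the 2^(n/2) subset sums of each half, sort one side, and binary-search for matches.

```python
from bisect import bisect_left

def subset_sum(items, target):
    """Decide Subset Sum in roughly 2^(n/2) time via meet-in-the-middle."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def sums(part):
        # all 2^|part| subset sums of one half
        out = [0]
        for x in part:
            out += [s + x for s in out]
        return out

    right_sums = sorted(sums(right))
    for s in sums(left):
        need = target - s
        i = bisect_left(right_sums, need)    # binary search for a match
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True: 4 + 5
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False
```

      The Mucha–Nederlof–Pawlewicz–Wegrzycki paper beats this classic 2^(n/2) barrier for the Equal Subset Sum variant.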

  3. Ivan Gusev permalink
    May 27, 2019 8:41 am

    Can I ask some enlightened CSs here one weird question? What if one were to “find out” that CLIQUE has no algorithm bounded by any quadratic polynomial that can decide it for all inputs? In other words, there is no TM with a finite set of instructions whose usage is restricted to O(n^2) steps per input. How was it achieved? One introduces a “non-algebraic” optimality that lives outside of ZFC, has no algebraic form of description, and reduces CLIQUE to that form. After that, one shows that no TM will be able to navigate the “World of Clique” using a finite number of instructions (in any number and any form of transitions between them), because any TM will get into a circle (never halt) or stick in a wall of that world and never be able to leave it or turn around. How does a TM choose a “direction”? It looks at the map with all “deterministic” transitions on it for all quadratically bounded polynomials, tries to apply any finite set of instructions (in any order), and realizes that it cannot “reach” the goal without an exponential “number of corrections”. It will make an exponential number of wrong steps for almost all inputs, no matter how the instructions are applied and no matter what is inside them. There will always be an infinite set of inputs for which an exponentially large number of corrections is needed. If you “magically map” the transitions onto a correct path, you get infinitely many independent circles (the TM never halts in any of them and cannot jump from one circle to another). You may ask me about NP vs P/poly, but it does not help either. We can “extract” the spectrum of the NP-complete problem and then “construct” an infinitely large circuit based on it, but I have no idea what to do next with P/poly, so in general I’m unsure about it at all. The only thing I do know is that you cannot solve NP problems “faster”; only some of their inputs will be faster and some slower.
    So here is the question: will it imply P != NP, or is it not enough? I don’t want to prove it for all possible algorithms.
    Also, I’m not a professional mathematician and I have no formal education. So my proof is probably gibberish and nonsense. I don’t want to embarrass anyone with another “proof from the amateur”. But I do want to know the truth, at least for myself.

    • rjlipton permalink*
      May 27, 2019 9:48 am

      Dear Ivan Gusev:

      The question is not weird at all. If I understand correctly, you are trying to think about a quadratic lower bound on a TM. I assume you mean multi-tape machines; for one tape, things are usually quite different. I am not sure what you mean by infinite circuits.

      Perhaps let me know more details.

      Best

      • Ivan Gusev permalink
        May 27, 2019 10:26 am

        I’m trying to make a map of all possible transitions with all possible actions bound to them: make one step right → read → write (constant alphabet) → one step left, etc. I can make any TM just by picking from that map some strings of such actions, knowing that they will always be bounded by some O(n^2). Then I claim that the map does not have a finite number of combinations of such strings that can decide CLIQUE for all its inputs. I do this through a new type of TM that allows me to extract substrings (which can be extended to infinity while preserving the conditions of growth and decidability) from any other TM. Then I make a RAM-like (but different) machine with an “address-free” tape that decides CLIQUE in the most optimal way and try to extract such sub-algorithms (they do belong to the map above) from that machine. At the end I see that no finite extraction allows me to compile my own algorithm, and my “address-free” TM cannot be transformed into any multi-tape TM if it solves CLIQUE faster than brute force. Also, my RAM-like machine with an “address-free” tape is perfectly deterministic and analytic, so I can try to remap any map to any other map (like manifolds) and see what happens at the end. I’m just tired and don’t want to make a “more advanced” map for all possible TMs.

      • Ivan Gusev permalink
        May 27, 2019 11:01 am

        Maybe this is not a perfect example, but you can think about this in this way.

        Imagine a “genius writer” who can write any great novel, but one day he goes “high” and starts to write his newest, most genius novel on random places of his furniture and walls. After he finishes, assuming he wrote everything perfectly, his secretary comes, starts to look around, and compiles the novel from the room. We can add one more condition: the secretary DOES know everything, not only all the geometry of the room; he can also read any text, and he knows everything that was in the mind of the author. Assuming both writer and secretary are in P, can they obtain the full and correct text of the novel in polynomial time or not? If the writer is in NP, is his genius obtainable by the secretary? The NP-nature is the genius of the NP writer. Let’s assume such a secretary exists and is in P; then we can derive a contradiction. Obviously, one couple is not enough; we need all of them. I took all possible O(n^2)-bounded secretaries and their combinations for CLIQUE.

  4. Ivan Gusev permalink
    May 28, 2019 9:46 am

    Well, I don’t want to get banned here or just be annoying, so this will be my final post. When I started trying to construct the “map” for n^3, I witnessed a “spooky appearance” of the RZF. I make the “mother map” first, then fill all the holes with correct transitions by symmetry for any polynomially bounded machine of the required power. And I saw that the “mother map” has all its points and transitions lying in some neighborhood of the RZF (I don’t know the exact place in C, but every point in the “mother map” has its own address given by a unique RZF value and its iterations along the way), and my complete map just extends it all to infinity. The map also automatically deletes all TMs that can get stuck on the map (trying to break to the next level, for example n^3 → n^4) at ANY possible input, and all TMs that go into a circle while being stuck inside forever. After my map is complete, I take the NP-map for CLIQUE (I made it at the moment of being “transcended” and have a crazy story of how it appeared to me, but I checked it and it works) and try to check, one by one, every TM from that map (n^2 or other power) for all inputs → outputs without worrying that something will go wrong. And I see: no, there is no TM that can do this. In other words, if P=NP then that algorithm has a “nonconstructive tape”, and I’m trying to show that such a “tape model” (being “classic”) does not exist at all. The key here is to show it for all inputs → outputs: first make the calculations, second make the tape where they can possibly be committed. No oracles are needed at all. I tried to quickly make a picture of my assumption of why the RZF arose in my map at all, and I immediately realized how silly it looks. The silly and messy picture is below. https://sun9-27.userapi.com/c849032/v849032480/19791c/dv5XM4Io6Sg.jpg
    Also, I spent my childhood around crazy cultists and was locked in Siberia with them. So I tried to learn a “scientific mindset” and math not to fall into being one of them, but to prevail and leave that world of ignorance and insanity, even if no one would help. I don’t want to deceive anyone; I’m just trying to dig for the truth. No offense to science is intended at all.
