
Logic Meets Complexity Theory

March 21, 2010


A summary of the 2010 ASL conference

Bob Coecke is a lecturer from Oxford, who gave a series of three invited talks on quantum information theory at the 2010 ASL meeting. He is a terrific speaker, and has a way of using pictures to give the best explanations I have ever seen—or heard—of complex quantum concepts such as teleportation.

Today I will talk about his presentations, Alexander Razborov’s award talk, and our complexity session given in Razborov’s honor at the ASL conference. ASL is the Association for Symbolic Logic.

The ASL meeting was quite enjoyable—being in Washington D.C. during several days of beautiful weather did not hurt either. ASL is a relatively small meeting, but has a strong energy and excitement about it. I would definitely plan on going again, and anyone interested in hearing very strong talks on foundational issues should go to one of their meetings.

The ASL group does some things a bit differently from STOC, FOCS, and other theory meetings. Their meeting has several invited and special one-hour talks, as well as shorter 30-minute contributed talks. The latter are arranged in parallel sessions, but these sessions are very inclusive and essentially everyone who submits gets a chance to speak. ASL also does an interesting "trick" with its tutorials: Coecke's tutorial was broken into three one-hour talks given over three consecutive days. I liked this approach very much.

A Picture is Worth a Thousand Qubits

Coecke uses pictures of a special kind to explain quantum information processing. This, in his hands, is in my opinion extremely insightful, and helped me really "get it." I thought I would try to give an overview of what he does with his wonderful pictures. If you want to see the real thing, go here for the full story. He uses pictures like these to explain quantum teleportation and other operations:

The intuition is that each diagram represents a physical experiment. He has a small set of simple rules: diagrams can be changed into other ones by applying a rule. In this way he can show how a complex-looking quantum setup is really equivalent to something often much simpler. This gives insight into why things work the way they do. The figure above starts out with a complex entanglement, and ends with a simple transfer of bits. Very neat.
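Roughly, the teleportation picture boils down to one "yanking" rule: a wire bent through a cup and a cap can be pulled straight. Here is a tiny numerical check of that identity; this is my own sketch in Python, not Coecke's notation, and it ignores the normalization and the Pauli corrections needed for the other measurement outcomes.

import numpy as np
# The (unnormalized) Bell state |00> + |11>, written as a 4-vector.
cup = np.array([1.0, 0.0, 0.0, 1.0])   # "cup": prepares an entangled pair on wires a, b
cap = cup.copy()                        # "cap": the matching Bell-measurement outcome on wires b, c
# Yanking / snake identity: composing the cup with the cap gives the identity map
# from the incoming wire c to the outgoing wire a (up to a scalar in the real protocol).
snake = np.einsum('bc,ab->ac', cap.reshape(2, 2), cup.reshape(2, 2))
print(snake)  # [[1. 0.] [0. 1.]] -- the straightened wire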

If any of you discover a caveman drawing looking like this: we have a problem.

The Gödel Lecture

Alexander Razborov was the {21^{\textrm{st}}} Gödel Speaker at this ASL meeting. He has won so many honors already, but it is quite fitting he was selected to be the 2010 Gödel Lecturer. The past lecturers include: Per Martin-Löf, Michael O. Rabin, and Harvey Friedman. Alexander was a wonderful choice for this year.

His talk was on the complexity of proof systems—one of his main loves—as he stated. Interestingly, he started by showing the original Gödel Lost Letter, in German. And he asked,

“Is it big enough? Can everyone read it? Let me make it bigger.”

Then, he showed an English version. After a good laugh, he went over the letter in some detail. He pointed out Gödel was interested in the difficulty of finding proofs in the predicate calculus, not in propositional calculus.

Next Alexander spoke about Sam Buss’s work on a logical theory that captures polynomial time. The theory is named {S_{2}^{1}}: Alexander said he would explain the name if anyone wished to hear, but there were no takers. I know a bit about this part of logic, and in my opinion it would be well served if it used better names for what are essentially subtheories of Peano Arithmetic. Cryptography picks such great names for its concepts, complexity theory does a good job with names, but the study of weak fragments of arithmetic does not.

Buss’s logic is essentially Peano Arithmetic with a weaker induction principle:

\displaystyle  A(0) \wedge \forall x (A(x/2) \rightarrow A(x)) \rightarrow \forall x A(x).

In this system a proof of any formula of the form

\displaystyle \forall x \ \exists y A(x,y)

implies, for {A} of the appropriate bounded form, that there is a polynomial time computable function {f} so that

\displaystyle  \forall x \ A(x,f(x))

is also provable. The intuition you should have is that the correctness of {A(m)} depends on the correctness of {A(m/2)} and this depends on {\dots} This fast decreasing sequence is the reason the logic is weaker than Peano—to prove this is hard, but this is intuitively why the induction is weaker.

Alexander pointed out that if this theory could prove Fermat's Little Theorem, then there would be a factoring algorithm. I raised my hand and, after being called on, asked Alexander one of those questions, the kind you think you know the answer to but want to check: could Sam's theory prove there are an infinite number of primes? There was some discussion with Alex Wilkie, the session chair, and the conclusion was that this is not known. My question was motivated by a recent question of Ryan Williams: is there a deterministic polynomial time procedure to find a prime greater than {x}? If Sam's theory could prove there are an infinite number of primes, I believe there would be such a deterministic procedure.

Razborov then switched to what he called the "golden age" of progress on lower bounds for circuits and proof systems: the 1980's. As you might expect, he then outlined his famous work on natural proofs with Steven Rudich.

He then ended with an appeal for others to work on these foundational issues, and quite kindly offered to answer questions from anyone in the future. It is clear he is really dedicated to eventually understanding the great mystery of why proving tautologies is hard. A wonderful talk, filled with ideas, his personal insights, and hope for the future of the field.

Our Complexity Session

Our session consisted of four 30-minute talks; each speaker could easily have used an hour to explain the details of their area. I will probably discuss several, if not all, of the talks in the future.

{\bullet} Jin-Yi Cai gave a talk on his beautiful theorem characterizing the power of graph homomorphisms. I will not define what they are, but will say such homomorphisms can be used to capture many properties of graphs: for example, 3-coloring. The area was started in 1967 by László Lovász, and it rests on a natural idea: in many parts of mathematics the right thing to study is not the "objects" but the mappings between the objects. Cai's work is in this spirit, and he has proved a kind of gap theorem:

Theorem: For every property of a certain type, counting the number of configurations with the property is either in polynomial time or {\#}P-hard.

I am not getting the statement exactly right, so look here for more details.
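To make the objects concrete, here is a toy illustration of my own, not Cai's formulation: a homomorphism from {G} to {H} maps vertices to vertices so that edges go to edges, and counting homomorphisms into the triangle counts proper 3-colorings. Cai's dichotomy concerns when such counts can be computed in polynomial time; brute force, of course, is exponential.

from itertools import product

def count_homomorphisms(G_vertices, G_edges, H_vertices, H_edges):
    # Count maps f: V(G) -> V(H) sending every edge of G to an edge of H (brute force).
    H_edge_set = {frozenset(e) for e in H_edges}
    count = 0
    for f in product(H_vertices, repeat=len(G_vertices)):
        assignment = dict(zip(G_vertices, f))
        if all(frozenset((assignment[u], assignment[v])) in H_edge_set for u, v in G_edges):
            count += 1
    return count

# Proper 3-colorings of a path on 3 vertices = homomorphisms into K_3:
K3_vertices, K3_edges = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
print(count_homomorphisms([0, 1, 2], [(0, 1), (1, 2)], K3_vertices, K3_edges))  # 12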

{\bullet} Ken Regan gave a highly speculative view of some new ideas he is working on in the area of arithmetic circuit lower bounds. Ken is an expert in this area, and his idea is to add the function {x \rightarrow x^{m}} to arithmetic circuits at unit cost. His hope is that allowing this powerful operation could shed new light on old lower bounds, and perhaps yield new results. The work is just starting and it could be quite fruitful, especially since his method appears to avoid the usual major barriers to lower bounds.
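Since the work is still speculative I will not guess at details, but the model itself is easy to render; the representation below is my own toy version, not Ken's formalism. The only change from ordinary arithmetic circuits is one extra gate type, charged a single unit of size.

def eval_circuit(gates, inputs):
    # Evaluate an arithmetic circuit given as a list of gates; gate k may refer
    # only to earlier gates, and the value of the last gate is returned.
    vals = []
    for g in gates:
        if g[0] == 'input':
            vals.append(inputs[g[1]])
        elif g[0] == 'const':
            vals.append(g[1])
        elif g[0] == 'add':
            vals.append(vals[g[1]] + vals[g[2]])
        elif g[0] == 'mul':
            vals.append(vals[g[1]] * vals[g[2]])
        elif g[0] == 'pow':              # the extra gate x -> x**m, at unit cost
            vals.append(vals[g[1]] ** g[2])
    return vals[-1]

# (x + y)**5 with a single powering gate, rather than the extra multiplications
# that repeated squaring would charge:
gates = [('input', 'x'), ('input', 'y'), ('add', 0, 1), ('pow', 2, 5)]
print(eval_circuit(gates, {'x': 2, 'y': 1}))  # 243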

{\bullet} Ryan Williams gave a beautiful summary of the known lower bounds on SAT. The results he considered are all of the form: any Turing machine for SAT that uses {n^{o(1)}} space requires at least {n^{c}} time. There has been a series of results, and he has the best with {c = 1.801}.

Lately Ryan has been working with Buss, the same Buss, on extending the value of {c} beyond {1.801}. Hopefully they will have some results soon. As someone who proved some of the earlier results, I always thought one day we might at least get {c \approx 2}. But, who knows.
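By the way, the curious value {1.801} is not arbitrary: if I recall correctly, Ryan's exponent is

\displaystyle  c = 2\cos(\pi/7) \approx 1.8019.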

{\bullet} Nisheeth Vishnoi gave a masterful view of approximation theory. At the 50,000 ft level, perhaps really the 50,000 mile level, I believe this is his key insight: there is a natural way to unify approximation upper and lower bounds into one coherent technology.

I will try to explain the most high-level view of what he is doing, and I definitely plan to discuss it in much more detail in the near future. My first version was more like at the 50,000 million level, and it was also wrong, so I will quote Nisheeth:

For a problem {\Pi} and an instance {J} for it, let {f} and {g} be two ways to measure cost of {J}. For instance, {f} could be the LP value of {J}, and {g} could be the cost of an integral assignment obtained from a particular way to round the LP solution of {J}.

Then, often, it can be shown that assuming the UGC (Unique Games Conjecture), one can construct a reduction based on {J} from Unique Games to {\Pi} which outputs instances {I} of {\Pi} such that it is hard to decide between:

\displaystyle  \begin{array}{rcl}  	\mathrm{YES:} \mathsf{cost}(I) &\le& f(J) \\ 	 \mathrm{NO:} \mathsf{cost}(I) &\ge& g(J). \end{array}

Note that {I} is the instance output by the reduction and {J} is the starting instance on which the reduction is based.

[Fixed this section.]
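To make the template concrete, here is one instantiation I find helpful; the numbers come from the Max-Cut hardness result of Khot, Kindler, Mossel, and O'Donnell, and the gloss is mine, not Nisheeth's. Take {\Pi} to be Max-Cut, {f(J)} the semidefinite programming value, and {g(J)} the value of the Goemans-Williamson rounding. Then, assuming the UGC, the reduction produces instances {I} for which it is hard to decide between

\displaystyle  \begin{array}{rcl} \mathrm{YES:}\ \mathsf{cost}(I) &\ge& \frac{1-\rho}{2} - \delta \\ \mathrm{NO:}\ \mathsf{cost}(I) &\le& \frac{1}{\pi}\arccos(\rho) + \delta \end{array}

for any fixed {\rho \in (-1,0)}; since Max-Cut is a maximization problem, the inequalities point the other way from the template above. Optimizing over {\rho} recovers the Goemans-Williamson constant {\approx 0.878} as the best approximation ratio possible under the UGC.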

Open Problems

One open problem is to try to use Sam's logic to solve the prime-finding problem. Another is to try to improve Ryan's {1.801} bound on SAT. These lower bounds are quite weak: the algorithms do not even have enough memory to write down a single potential assignment to the variables. Yet it is the best we can do.
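On the first problem, here is the naive procedure for finding a prime greater than {x}, just to fix ideas; this is a sketch of mine, not anything from the meeting, and sympy's isprime simply stands in for a deterministic polynomial time primality test such as AKS.

from sympy import isprime  # stands in for a deterministic test such as AKS

def next_prime_naive(x):
    # Search upward from x+1; the number of candidates examined is the prime
    # gap after x.  Cramer's conjecture puts that gap at O((log x)^2), which
    # would make the search polynomial in the bit-length of x, but the best
    # proven gap bound is roughly x^0.525, exponential in the bit-length, so
    # no unconditional polynomial time procedure is known.
    n = x + 1
    while not isprime(n):
        n += 1
    return n

print(next_prime_naive(10**9))  # 1000000007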

12 Comments
  1. March 21, 2010 9:03 pm

    Your link to the details of Jin-Yi Cai’s work is broken.

  2. Jacob permalink
    March 21, 2010 10:48 pm

    “Logic Meets Complexity Theory”

    Here I was thinking this post would cover descriptive complexity and finite model theory, oh well.

  3. March 22, 2010 1:13 am

    Isn’t there something odd here? If you weaken only the Induction Axiom Schema of first order PA as indicated, you are postulating that, for any well-formed arithmetical formula [f] of the weakened arithmetic:

    From [f(0)] and [(Ax)(f(x)->f(2x))], we may conclude [(Ax)f(x)].

    However, take [f(x)] as [(Ez)(z \leq x & x=2*z)].

    Now, we have [f(0)] and [(Ax)(f(x)->f(2x))], but we do not have [f(1)]!

    Hence, such an axiom system can have no sound interpretation over the domain of the natural numbers.

    • Boris permalink
      March 22, 2010 3:03 am

      In the original post, “x/2” means integer division by 2. For example, P(1), P(2), and P(3) are implied by P(0), P(1), and P(1), respectively.

    • rjlipton permalink*
      March 22, 2010 6:58 am

I did not give a full definition. See Buss for that. The key is that A(x) can only use the operations + and *, and bounded quantifiers.

  4. March 23, 2010 7:41 am

    I have an ongoing tiny project of tracking uses of the phrases “proof technology”, “theorem technology”, etc. … and this post has a good one!

    Historically, this class of phrases has emerged in many disciplines, whenever methods that rely upon inspiration and intuition begin to be supplanted by methods that rely upon verification and validation; the former being a class that is (more-or-less) non-deterministic and non-constructive, the latter being a class that is (more-or-less) deterministic and constructive.

    The book A=B by Petkovsek, Wilf, and Zeilberger—with the famous introduction by Knuth—provides a paradigmatic example of a constructivist transition in proof theory.

So does the increasing usage of phrases like “proof technology” mean that complexity theory/simulation theory is beginning a constructivist transition? Could we stop it, even if we wanted to? Would it be right or wrong of us, if we did wish to stop it … or accelerate it? Or is the question moot, because such a transition can never occur in complexity theory?

What makes this blog great, IMHO, is that it doesn’t shy away from raising these wonderfully interesting questions!

  5. March 24, 2010 2:00 am

    I did not give a full definition. See Buss for that.

    Thanks. Although Buss’ definition settles the point I raised, his notation, unfortunately, masks the interesting fact that:

    (a) his `weakened’ induction axiom, say, PA_IND(2) can, more generally, be expressed in first-order Peano Arithmetic PA (without introducing any new symbols) as follows:

[{f(0) & (Ax)(f(x) -> (f(2*x) & f(2*x+1)))} -> (Ax)f(x)];

    (b) the above is a particular case of PA_IND(k):

[{f(0) & (Ax)(f(x) -> (f(k*x) & f(k*x+1) & … & f(k*x+k-1)))} -> (Ax)f(x)];

(c) [(Ax)f(x) -> {f(0) & (Ax)(f(x) -> f(x+1))}] is a theorem of PA;

(d) [{f(0) & (Ax)(f(x) -> f(x+1))} -> {f(0) & (Ax)(f(x) -> (f(k*x) & f(k*x+1) & … & f(k*x+k-1)))}] is a theorem of PA.

    In other words, for any numeral [k], PA_IND(k) is equivalent in PA to the standard Induction Axiom of PA!

    The question arises: Does the introduction of bounded quantifiers yield any computational advantage?

    Now, Buss’ stated intent is to prove that:

    If [(Ax)(Ey)f(x, y)] is provable in a bounded arithmetic, then there should be an algorithm to find y as a function of x.

    In other words, Buss’ intent is to build a bridge between bounded Arithmetic and Computability so that a Pi_k formula, say [(Ax)f(x)], is provable in his bounded arithmetic if, and only if, there is an algorithm that, for any given numeral [n], decides the Delta-(k-1) formula [f(n)] as `true’.

    The question arises: Can such a bridge be built in PA?

    The difference – I suspect the only one – between bounded arithmetic and PA seems to be that, although we can presume in a bounded arithmetic that, from a proof of [(Ey)f(n, y)], we may always conclude there is some [m] such that [f(n, m)] is provable in the arithmetic, this is not a sound conclusion in PA.

    Reason: Since [(Ey)f(n, y)] is simply a shorthand for [~(Ay)~f(n, y)], such a presumption implies that Aristotle’s particularisation holds over the natural numbers under any sound interpretation of PA.

To see that (as Brouwer steadfastly held) this may not always be the case, interpret [(Ax)f(x)] as ‘There is an algorithm that decides [f(n)] as `true’ for any given numeral [n]’.

    In such case, if [(Ax)(Ey)f(x, y)] is provable in PA, then we can only conclude that:

There is an algorithm that, for any given numeral [n], decides that it is not the case that there is an algorithm that, for any given numeral [m], decides [~f(n, m)] as `true’.

    We cannot, however, conclude – as we can in a bounded arithmetic – that:

There is an algorithm that, for any given numeral [n], decides that there is an algorithm that, for some numeral [m], decides [f(n, m)] as `true’.

    Reason: [(Ey)f(n, y)] may be a Halting-type formula for some numeral [n].

This could be the case if [(Ax)(Ey)f(x, y)] were PA-unprovable, but [(Ey)f(n, y)] PA-provable for any given numeral [n].

    Does this mean that we cannot build a bridge between Logic and Computability?

    To the contrary, I argue in the following paper (draft) that we can similarly show that:

    A Pi_k formula [(Ax)f(x)] is provable in PA if, and only if, there is an algorithm that, for any given numeral [n], decides the Delta_k formula [f(n)] as `true’.

    http://alixcomsi.com/27_Resolving_PvNP_Update.pdf

    Prima facie, we should, thus, be able to reflect all of Buss’ arguments and conclusions in PA without any loss of generality.

If so, it should not matter much whether or not bounded arithmetic can prove that there are infinitely many primes.

Since I also argue in the above paper that the above, algorithmic, interpretation of PA is both sound and finitary (read constructive), whilst the standard interpretation of PA is not (since I show that Aristotle’s particularisation does not hold over the natural numbers), the argument might interest John Sidles!

    Bhup

  6. March 27, 2010 5:41 am

    Richard,

    You might be interested in the topic of this post of mine, which is closely related to the theme of this post of yours (which is excellent and inspiring as usual).

  7. Anonymous Coward permalink
    April 6, 2010 2:56 am

    Cavemen didn’t know anything about quantum physics. They were obviously discussing category theory and adjoint functors.

    (This just reminded me of http://math.ucr.edu/home/baez/week83.html#tale)

    • April 13, 2010 8:50 am

Yes, these sorts of “bird-track” diagrams are all over the place if you know how to look for them, and they are quite useful for simplifying reasoning about tensors, category theory, logic, etc. Not only Baez, but Penrose, Cvitanovic, and lots of others use them. They deserve to be more widely known!
