
Is Complexity Theory On The Brink?

November 19, 2010


A discussion of some of the great recent results of theory

Victor Klee was one of the greatest geometry experts ever, and is known for many wonderful results and problems. For example, he was the first to raise the question now called the “art gallery problem,” and also the question now called Klee’s measure problem. The latter is especially relevant, since it is a pure computational geometry problem, that is still not completely settled.

Today I want to share a feeling that I have, a feeling that is very positive, yet is just a feeling. I think that complexity theory is on the verge of major breakthroughs of all kinds. I feel this is already starting to happen, and I wish to share the reasons I feel this way.

Our field, like most areas of science, has times of huge advances and times when the advances are small. I think that we are in a time of potentially major advances across the entire spectrum of theory. It is an exciting time, and much of that excitement is in seeing problems solved, or at least partially solved, that were open for years and sometimes decades.

I think Klee would have liked one of the new results very much. I met him only once or twice, and never had a chance to work with him. But I believe that he would be excited to see his famous lower bound on Simplex finally improved.

Three Big Results

I feel that there are many results already that are great advances, but I also feel that they point to even greater results in the near future. The next few weeks, months, and years could be a most exciting time for computational complexity.

{\bullet } Unique Games:

The result of Sanjeev Arora, Boaz Barak, and David Steurer on the Unique Games Conjecture is a clear example. Their beautiful result has proved that Unique Games can be approximately solved in time {2^{n^{\epsilon}}} for any {\epsilon>0}. Their result does leave an interesting open problem: Is the approximation problem for unique games in polynomial time?

{\bullet } Circuit Lower Bounds:

The new paper of Ryan Williams is another clear example. His beautiful result is that {\mathsf{NTIME}[2^n]} does not have non-uniform {\mathsf{ACC}^0} circuits of polynomial size. His result leaves open many problems including: Can the circuit class be improved to general polynomial size circuits?

I will not say much more about either of these, since they have both been discussed at length here and elsewhere. For example, I just discussed Ryan’s proof in some detail here.

{\bullet } Simplex: The result of Oliver Friedmann, Thomas Hansen, and Uri Zwick is another wonderful example. They have proved strong, almost exponential, lower bounds on randomized pivoting rules for the Simplex method. See Gil Kalai, who is an expert in this area, for comments on their result. Gil is also the discoverer of one of the best methods known for running Simplex: his method requires at most

\displaystyle  2^{ {\tilde O} ({\sqrt m})},

for {m} constraints.

Let me give a short history of the search for “good” pivoting rules for Simplex. The Simplex method is really a family of algorithms: at each step the algorithm essentially moves around the vertices of the polytope defined by the linear constraints. The key is the decision of where to move next: the pivot rule. In practice there are many simple rules that work extremely well; however, the complexity question is to find a rule that always works efficiently.
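
To make the role of the pivot rule concrete, here is a minimal Python sketch of how a few classic rules pick the entering variable from the reduced costs of the nonbasic variables. This is only an illustration under my own encoding; the function names are not taken from any of the papers discussed here.

    import random

    def dantzig_rule(reduced_costs):
        """The classic 'naive' rule: enter the variable with the most negative reduced cost."""
        best = min(range(len(reduced_costs)), key=lambda j: reduced_costs[j], default=None)
        return best if best is not None and reduced_costs[best] < 0 else None

    def bland_rule(reduced_costs):
        """Bland's rule: the smallest index with a negative reduced cost (avoids cycling)."""
        for j, c in enumerate(reduced_costs):
            if c < 0:
                return j
        return None

    def random_edge_rule(reduced_costs, rng=random):
        """A randomized rule: pick uniformly among all improving directions."""
        improving = [j for j, c in enumerate(reduced_costs) if c < 0]
        return rng.choice(improving) if improving else None

    print(dantzig_rule([1.0, -0.5, -3.0]))   # 2: most negative reduced cost
    print(bland_rule([1.0, -0.5, -3.0]))     # 1: first improving index

A full Simplex implementation would wrap one of these rules inside the usual tableau updates and ratio test; the complexity question is entirely about which rule gets called at this step.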

In 1972 Victor Klee and George Minty discovered a “difficult” linear program. Their program was based on polytopes that are now called Klee-Minty cubes:

\displaystyle  \begin{array}{l}  \min x_n : \\  0 \le x_1 \le 1, \\  \epsilon x_{i-1} \le x_i \le 1-\epsilon x_{i-1}, \end{array}

for {2 \le i \le n} and {0 < \epsilon < 1/2}. Here is a picture for {\epsilon = 1/3}; it is from the paper by Bernd Gärtner, Martin Henk, and Günter Ziegler.
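
As a concrete illustration, here is a small Python sketch that assembles these constraints in the standard form: minimize {c^T x} subject to {Ax \le b}. The function name klee_minty and its interface are my own choices, not taken from any of the papers.

    import numpy as np

    def klee_minty(n, eps=1.0 / 3.0):
        """Return (c, A, b) for: minimize c.x subject to A x <= b (0-indexed variables)."""
        A_rows, b = [], []

        def row(coeffs):
            r = np.zeros(n)
            for j, v in coeffs.items():
                r[j] = v
            return r

        # 0 <= x_1 <= 1
        A_rows.append(row({0: -1.0})); b.append(0.0)
        A_rows.append(row({0: 1.0}));  b.append(1.0)
        # eps * x_{i-1} <= x_i <= 1 - eps * x_{i-1}  for i = 2, ..., n
        for i in range(1, n):
            A_rows.append(row({i: -1.0, i - 1: eps})); b.append(0.0)
            A_rows.append(row({i: 1.0,  i - 1: eps})); b.append(1.0)

        c = np.zeros(n)
        c[-1] = 1.0          # objective: minimize x_n
        return c, np.vstack(A_rows), np.array(b)

For example, klee_minty(3) gives a deformed cube in three dimensions with {2n = 6} constraints; the triple (c, A, b) can then be handed to any LP solver that accepts this form.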

Klee and Minty proved that the naive pivot rule had a worst-case exponential running time on these programs. Other rules were quickly suggested, and for a long time various random rules were considered potential winners: that is, many believed these rules took expected polynomial time.

I am definitely not an expert in this area, but I am amazed that Friedmann, Hansen, and Zwick were able to prove that various random pivoting rules do not run in expected polynomial time. Finding a worst-case example for any algorithm can be hard. Finding one for a randomized algorithm can be quite difficult: the example must force a long running time not just on some computation path, but on average over the algorithm's random choices, which is often hard to show.

Finally, they exploit a connection between linear programming and certain games that seems quite important. We have long known that games and linear programming are related, but that the relationship could be used to prove a lower bound seems quite surprising—at least to me. Their result does leave an interesting open problem: Is there a pivot rule that is even a reasonable candidate to be conjectured efficient?

One More

{\bullet } Learning Theory: The result of Amir Shpilka and Avishay Tal is my last example of a great new result. They prove new bounds on the Fourier degree of symmetric boolean functions.

The truth is that I am not quite unbiased. I was part of a team that had the previous best result. Mihail Kolountzakis, Evangelos Markakis, Aranyak Mehta, Nisheeth Vishnoi, and I studied: What is the smallest {t} such that every symmetric boolean function on {k} variables—that is not a constant or a parity function—has a non-zero Fourier coefficient of order at least {1} and at most {t}? Let {\tau(k)} be the smallest such {t}. Our main result was that {\tau(k) \le 4k/\log k} for symmetric boolean functions on {k} variables. The question arises in learning theory and in particular in the special case of the Junta problem for symmetric boolean functions.
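
To make the definition of {\tau} concrete, here is a brute-force Python sketch, feasible only for small {k}, that finds the smallest order {s \ge 1} at which a given symmetric boolean function has a non-zero Fourier coefficient. The {\pm 1} encoding and the helper names are my own; this is not code from either paper.

    from itertools import product

    def min_nonzero_order(weight_values, k):
        """weight_values[w] in {-1, +1} is f's value on inputs with exactly w
        coordinates equal to -1, so f is symmetric by construction."""
        for s in range(1, k + 1):
            # By symmetry all coefficients of order s agree, so test S = {0, ..., s-1}.
            total = 0
            for x in product((1, -1), repeat=k):
                chi = 1
                for i in range(s):
                    chi *= x[i]
                total += weight_values[x.count(-1)] * chi
            if total != 0:          # the coefficient itself is total / 2**k
                return s
        return None                 # only the constant functions reach this point

    # Example: majority on k = 5 variables (value +1 iff at most 2 coordinates are -1).
    k = 5
    maj = [1 if w <= k // 2 else -1 for w in range(k + 1)]
    print(min_nonzero_order(maj, k))    # prints 1: majority has a level-1 coefficient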

Our proof involves careful analysis of the structure of a symmetric boolean function. Even though we could only prove {o(k)}, we did conjecture that the correct upper bound was much better—perhaps even {O(1)}. Shpilka and Tal prove a theorem that is much stronger than our result. Much. Further, their proof is quite pretty and uses some interesting connections between this pure complexity question and the structure of the prime numbers. Very neat.

They use a deep result from the distribution of primes. Define {\Gamma(m)} to be,

\displaystyle  \max \{\, b-a \mid 1 \le a < b \le m \text{ and there is no prime in the interval } (a,b] \,\}.

It is known that {\Gamma(m) \le m^{.525}}; that is, it is known to those who study the deep structure of the primes. It is believed that there are better bounds on {\Gamma(m)}: it has been conjectured that {\Gamma(m)} is {O(\sqrt{m})}, and perhaps even {O(\log^2 m)}.
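
Just to make the definition concrete, here is a short brute-force Python sketch that computes {\Gamma(m)} for small {m} directly from the definition; it of course says nothing about the deep asymptotic bounds quoted above.

    def gamma(m):
        """Largest b - a with 1 <= a < b <= m such that (a, b] contains no prime."""
        # Sieve of Eratosthenes up to m.
        is_prime = [False, False] + [True] * (m - 1)
        for p in range(2, int(m ** 0.5) + 1):
            if is_prime[p]:
                is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
        # For each non-prime b, the best choice of a is the largest prime below b
        # (or 1 if there is none), since any smaller a would put a prime in (a, b].
        best, a = 0, 1
        for b in range(2, m + 1):
            if is_prime[b]:
                a = b
            else:
                best = max(best, b - a)
        return best

    print(gamma(100))   # prints 7: the interval (89, 96] contains no prime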

Shpilka and Tal prove the following theorem:

Theorem: The function {\tau} satisfies {\tau(k) \le O(k^{.525})} for symmetric boolean functions with {k} inputs.

This immediately gives new bounds on learning symmetric boolean functions—see their paper for details. Their theorem does leave an interesting open problem: What is the best bound on {\tau(k)}? Can it really be {O(1)}?

Open Problems

Do you agree that there are many new exciting results being discovered in complexity theory? I think so, and I am optimistic that the trend will continue: more and more problems will be solved, or at least chipped away at.

\displaystyle  \S

The answers to the two secrets in the last post were the digits of {\pi} and that each subsection's label was a song title. Okay, I like to add some fun to the discussions.
