A Sampler From STOC 2010
Some highlights from STOC
Scott Aaronson is one of the best quantum computation experts in the world, perhaps the best. He is great at explaining concepts in his beautiful papers, in his on-line writing, in his terrific blog, and in person.
Today I want to talk about the papers from the first day of STOC, the papers from Sunday’s sessions.
I spoke to Scott on Saturday at the opening reception, where he tried to explain to me the reason storage in a region $R$ of 3-space is bounded by the surface area of $R$ and not the volume of $R$. This is the holographic principle, and should not be confused with Leslie Valiant's work on holographic computation. Scott says the prime reason for this surprising limit is: if one tries to pack too many devices into a region $R$, one would create a black hole.
I have strong biases, so here are some of the papers I like very much from the first day of talks. As a certificate that I really was at the Sunday session, at least some of it, I heard Ran Raz speak on “Tensor-Rank and Lower Bounds for Arithmetic Formulas.” This paper was scheduled to be given Tuesday, but there was a re-shuffle of the talks at the last minute.
Ran gave a wonderful and clear talk on a subject that could have been, in my opinion, difficult to follow. He defined the notion of tensor and tensor rank in a clear way, and then related them to the complexity of arithmetic formulas.
We all know what a matrix is: it is an $n$ by $m$ array of numbers. So what is a tensor? A tensor is simply a mapping

$$A : [n_1] \times [n_2] \times \cdots \times [n_k] \rightarrow \mathcal{V},$$

where $\mathcal{V}$ is a set of values. Usually $\mathcal{V}$ is a field, but it could be more general. Note, when $k = 2$ this is nothing more than the usual definition of an $n_1$ by $n_2$ matrix. So far, all is simple.
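As a concrete toy illustration (my own sketch, not from the paper), a tensor in this sense is just a function from index tuples to values; for $k = 2$ it is an ordinary matrix:

```python
from itertools import product

# A k-dimensional tensor with side length n, stored as a map from
# index tuples in [n]^k to values (here plain numbers).
def make_tensor(n, k, f):
    return {idx: f(*idx) for idx in product(range(n), repeat=k)}

# For k = 2 this is nothing more than an ordinary n-by-n matrix.
A = make_tensor(3, 2, lambda i, j: 1 if i == j else 0)  # identity matrix
assert A[(0, 0)] == 1 and A[(0, 1)] == 0

# For k = 3 we get a genuine tensor: a 3-dimensional array of values.
T = make_tensor(2, 3, lambda i, j, k: i + j + k)
assert T[(1, 1, 1)] == 3
```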
The key to all Ran’s work is the classic notion of the rank of a tensor, which generalizes the notion of the rank of a matrix. While tensor rank is “just” a generalization of matrix rank, our understanding of the behavior of tensor rank is much less than our corresponding understanding of matrix rank.
The definition of tensor rank is based on one of the definitions of matrix rank. Suppose $x$ and $y$ are non-zero $n$ by $1$ vectors; then

$$x^{T} y,$$

where $x^{T}$ is the vector transpose, is the usual inner product of the vectors. It is a $1$ by $1$ matrix, which we usually identify as a scalar value. The expression

$$y x^{T}$$

is the outer product of the vectors and is an $n$ by $n$ matrix; note the reverse order of the vectors. The outer product of non-zero vectors is a rank $1$ matrix by definition. Then, the rank of a matrix $M$ is the least $r$ so that

$$M = M_{1} + \cdots + M_{r},$$

where each $M_{i}$ is rank $1$.
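As a small sketch (my own illustration, pure Python), we can build a matrix as a sum of rank $1$ outer products and check its rank with Gaussian elimination, one of the equivalent definitions of matrix rank:

```python
# Sketch of the rank-1 decomposition view of matrix rank.

def outer(y, x):
    """Outer product y x^T of two vectors: a rank-1 matrix."""
    return [[yi * xj for xj in x] for yi in y]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def rank(A, eps=1e-9):
    """Matrix rank via Gaussian elimination, counting pivots."""
    M = [row[:] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][col]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > eps:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A sum of two rank-1 matrices has rank at most 2, and here exactly 2.
A = mat_add(outer([1, 0, 0], [1, 0, 0]), outer([0, 1, 0], [0, 1, 0]))
assert rank(outer([1, 2, 3], [4, 5, 6])) == 1
assert rank(A) == 2
```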
The reason matrices are so nice is the above definition of rank agrees with many other definitions. You can define matrix rank based on Gaussian Elimination, on determinants, on dimensions of spaces, and so on. The problem with tensor rank is it is a generalization of the above definition of matrix rank, but it has few of the beautiful other properties of matrix rank. The tensor rank of $A$ is the least $r$ so that

$$A = A_{1} + \cdots + A_{r},$$

where all $A_{i}$ have tensor rank $1$. A tensor has rank $1$ if it can be written as the tensor product of "vectors": that is, $A(i_1, \ldots, i_k) = x^{(1)}_{i_1} \cdots x^{(k)}_{i_k}$ for vectors $x^{(1)}, \ldots, x^{(k)}$. See this for more details.
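As a toy illustration (mine, not from the paper): a rank $1$ tensor is the outer product of $k$ vectors, and summing $r$ such terms gives a tensor of rank at most $r$. Unlike for matrices, computing the rank of a given tensor is known to be hard, so the code below only builds such tensors:

```python
from itertools import product

# Rank-1 tensor: the "outer product" of k vectors,
# T[i1,...,ik] = x1[i1] * x2[i2] * ... * xk[ik].
def rank_one(*vectors):
    n = len(vectors[0])
    T = {}
    for idx in product(range(n), repeat=len(vectors)):
        val = 1
        for v, i in zip(vectors, idx):
            val *= v[i]
        T[idx] = val
    return T

def add(S, T):
    return {idx: S[idx] + T[idx] for idx in S}

# A sum of r rank-1 tensors has tensor rank at most r (here r = 2).
T = add(rank_one([1, 0], [1, 0], [1, 0]),
        rank_one([0, 1], [0, 1], [0, 1]))
assert T[(0, 0, 0)] == 1 and T[(1, 1, 1)] == 1 and T[(0, 1, 0)] == 0
```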
Finally, Ran shows that an explicit tensor of high rank yields an explicit lower bound on the complexity of arithmetic formulas. Of course, see his paper for the exact statements. Again, the fact that tensor rank has none of the cool properties of matrix rank makes finding explicit high-rank tensors hard. Note, the corresponding problem is trivial for matrix rank: the $n$ by $n$ identity matrix has rank $n$.
These papers are my personal favorites. Since I am doubtful of many of the believed lower bounds, it is natural for me to like papers that attack problems whose current best bounds are exponential. Even a small chipping away at the "obvious best" algorithms is something I enjoy seeing.
Improving Exhaustive Search Implies Super-polynomial Lower Bounds Ryan Williams shows how even "small" improvements in exponential algorithms for a variety of problems would have great consequences. Here is an example result:

Theorem: If Circuit-SAT can be done in time $2^{n}/s(n)$, where $s(n)$ is super-polynomial, then $\mathsf{NEXP} \not\subseteq \mathsf{P/poly}$.
An easy result is: if P=NP, then $\mathsf{EXP} \not\subseteq \mathsf{P/poly}$. This follows by showing that if P equals NP, then by Dick Karp and my paper, EXP must have circuits of super-polynomial size. Ryan's theorem can be viewed as a huge improvement over this simple insight. His assumption is a much weaker assumption than P=NP, and it still implies that NEXP has big circuits.
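For context (my own sketch, not from Ryan's paper), the exhaustive-search baseline that his theorem asks us to beat simply tries all $2^{n}$ assignments, evaluating the circuit on each:

```python
from itertools import product

# A toy circuit: each gate is an (op, in1, in2) triple over wire
# indices; wires 0..n-1 are the inputs, and the last gate is the
# output. Exhaustive search tries all 2^n assignments; Ryan's theorem
# is about beating this bound by any super-polynomial factor.
def eval_circuit(n, gates, assignment):
    wires = list(assignment)
    for op, a, b in gates:
        if op == "AND":
            wires.append(wires[a] and wires[b])
        elif op == "OR":
            wires.append(wires[a] or wires[b])
        else:  # "NOT": the second input index is ignored
            wires.append(not wires[a])
    return wires[-1]

def circuit_sat(n, gates):
    return any(eval_circuit(n, gates, x)
               for x in product([False, True], repeat=n))

# (x0 AND x1) is satisfiable; (x0 AND NOT x0) is not.
assert circuit_sat(2, [("AND", 0, 1)]) is True
assert circuit_sat(1, [("NOT", 0, 0), ("AND", 0, 1)]) is False
```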
On The Complexity Of Circuit Satisfiability Ramamohan Paturi and Pavel Pudlák study the exponential algorithms for many NP-complete problems. They look at algorithms of this type: randomized polynomial-time algorithms for NP problems which have success probability $2^{-\epsilon n}$ for interesting constants $\epsilon < 1$. That is, they study randomized algorithms for NP which "offload" all the exponential complexity onto the success probability of the procedure. Such algorithms are known to exist for $k$-SAT, where $\epsilon$ depends on $k$. They show that if a similar type of algorithm existed for Circuit-SAT, then in fact Circuit-SAT could be solved in subexponential time, along with other surprising results.
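The "offloading" point can be made concrete with a trivial example (mine, not the paper's): guessing a uniformly random assignment is a polynomial-time randomized algorithm whose success probability on a satisfiable formula is at least $2^{-n}$, and repeating it roughly $2^{n}$ times recovers constant success probability:

```python
import random

# Trivial randomized "algorithm" in this spirit: guess a uniformly
# random assignment in polynomial time. On a satisfiable formula it
# succeeds with probability at least 2^-n, so all the exponential
# cost sits in the success probability, not in one trial's run time.
def one_trial(n, is_satisfying, rng):
    guess = [rng.random() < 0.5 for _ in range(n)]
    return is_satisfying(guess)

n = 10
p = 2.0 ** (-n)                      # per-trial success probability
trials = 2 ** n                      # repeat roughly 1/p times
overall = 1.0 - (1.0 - p) ** trials  # success probability after repetition
assert overall > 0.63                # approaches 1 - 1/e = 0.632...

# One trial against a unique-witness instance (all-True assignment).
rng = random.Random(0)
result = one_trial(n, lambda g: all(g), rng)
assert result in (True, False)
```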
A Full Characterization of Quantum Advice Scott Aaronson and Andy Drucker's paper, presented by Scott's student Andy, generalizes results on classic advice and the polynomial hierarchy. One key result is

$$\mathsf{BQP/qpoly} \subseteq \mathsf{QMA/poly}.$$
What makes this so interesting is the first advice bits are quantum, but the second advice bits are classic. They also have an interesting lemma that could be of use in other contexts. Take a look at their paper for the details.
BQP and The Polynomial Hierarchy Scott Aaronson studies perhaps one of the most important open questions in the complexity theory of quantum computing: where does BQP lie with respect to all the classic complexity classes? We still do not know if BQP is in the polynomial hierarchy. We do know that there are many "problems" on which quantum provably does better than classic, but these "problems" are not decision problems. They either are promise-type problems, or ask for values, or solve more complex types of questions. The real open question is: what is the power of BQP on pure decision problems? Scott relates this fundamental question to some neat open questions, which seem hard, but not impossible.
A Sparse Johnson-Lindenstrauss Transform Anirban Dasgupta, Ravi Kumar, and Tamas Sarlos study dimension reduction. They prove a sparse version of the classic JL result. I believe this is one of those results that could have a huge number of applications. I am trying to use it right now on an old problem.
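To give a flavor of what such a transform looks like, here is a generic sparse random projection in the spirit of the result (Achlioptas-style entries; this is my own toy sketch, not the authors' actual construction, which is sparser and comes with sharper guarantees):

```python
import math
import random

# Toy sparse random projection from dimension d down to k.
def sparse_projection(d, k, rng):
    scale = math.sqrt(3.0 / k)
    # Each entry is 0 with probability 2/3, and +scale or -scale
    # with probability 1/6 each, so the matrix is mostly zeros.
    choices = [0.0, 0.0, 0.0, 0.0, scale, -scale]
    return [[rng.choice(choices) for _ in range(d)] for _ in range(k)]

def apply_map(P, x):
    return [sum(pij * xj for pij, xj in zip(row, x)) for row in P]

rng = random.Random(42)
d, k = 100, 20
P = sparse_projection(d, k, rng)

# The matrix really is sparse: roughly two thirds of entries are zero.
zero_frac = sum(e == 0.0 for row in P for e in row) / (d * k)
assert 0.45 < zero_frac < 0.85

# Structural checks: output has dimension k, and the map is linear.
x = [rng.random() for _ in range(d)]
y = [rng.random() for _ in range(d)]
fx, fy = apply_map(P, x), apply_map(P, y)
fxy = apply_map(P, [a + b for a, b in zip(x, y)])
assert len(fx) == k
assert all(abs(s - (a + b)) < 1e-9 for s, a, b in zip(fxy, fx, fy))
```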
The main open problems are to try to solve some of the questions raised in these papers:
- Can we find explicit tensors of high rank?
- Can we find algorithms for Ryan’s theorems?
- Can we solve Scott’s open problems?
- Can we find applications for the sparse JL transform?
P.S. Sorry Andy for the errors.