A Wonderful Riff On Rank
The approximate rank of matrices—a great idea
Santosh Vempala is one of the world experts on high-dimensional geometry, and has made many beautiful contributions to this and other parts of theory. We have presented some of his results and open problems concerning high-dimensional geometry before. Many intuitions that we have (we, not Santosh) from geometry in two and three dimensions fail in high dimensions. For example, the ratio of the corner-to-corner diameter to the volume of the $n$-cube goes to infinity as $n \rightarrow \infty$. Here is a page of visualizations maintained by David Eppstein.
Today I want to talk about a great new paper from Santosh that is joint work with Noga Alon.
They have begun a systematic study of the notion of approximate rank of a matrix. There are some surprising upper bounds. That is, some matrices have approximate rank that I find surprising; perhaps you will too. Even better, these upper bounds allow Santosh and Noga to prove some cool new theorems. One is an improvement to the best known results for finding Nash equilibria of two-player nonzero-sum games.
Santosh is co-teaching this term with Gregory Abowd our Computing For Good course, which Santosh created years ago. This term they are studying practical problems faced by communities that lack the kind of computing power most of us have, and that consequently see little benefit from computing.
Something to think about as we enter the American Thanksgiving week.
The rank of a matrix is one of the most important concepts in all of linear algebra. One way to see that it is critically important is to note that there are a multitude of ways to define the rank of a matrix. Here are just a few:
Column Rank: This is the maximum number of linearly independent columns of the matrix. Recall vectors $v_1, \dots, v_k$ are linearly independent provided
$$a_1 v_1 + \cdots + a_k v_k = 0$$
implies that $a_1 = \cdots = a_k = 0$.
Row Rank: This is the maximum number of linearly independent rows of the matrix.
Dimension of Image: This is the dimension of the range of the operator defined by the matrix. Every $m \times n$ matrix $A$ defines in a natural manner a linear operator $x \mapsto Ax$, and the range of this operator is a linear space.
Decomposition Rank: This is the smallest $r$ so that the matrix $A$ can be written as $A = BC$, where $B$ is $m$ by $r$ and $C$ is $r$ by $n$.
Determinantal Rank: This is the size of the largest square minor of the matrix that has a nonzero determinant. Here is an example: the matrix
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 0 & 0 & 1 \end{pmatrix}$$
has determinant zero, but the $2 \times 2$ minor on rows $1,3$ and columns $2,3$ has determinant $2 \neq 0$, so its determinantal rank is $2$.
The list could go on, but I believe the point is clear: the rank of a matrix is a salient concept.
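To make the equivalence of these definitions concrete, here is a small numerical sketch (my own illustration in NumPy; the matrix is made up for the example): the computed rank agrees with the decomposition rank, witnessed by a truncated SVD factorization $A = BC$.

```python
import numpy as np

# A 4x3 matrix built to have rank 2: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 2.0, 3.0]])

# Column rank, row rank, and the dimension of the image all coincide;
# numpy computes this common value from the singular values.
r = np.linalg.matrix_rank(A)          # -> 2

# Decomposition rank: a truncated SVD writes A = B @ C
# with B of shape (4, r) and C of shape (r, 3).
U, s, Vt = np.linalg.svd(A)
B = U[:, :r] * s[:r]
C = Vt[:r, :]
print(r, np.allclose(A, B @ C))       # -> 2 True
```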
One of the fundamental insights in theory is if a concept is useful, then an approximate version of it may often be useful too. Sometimes in our world the approximate concept is even more useful: algorithms often need only an approximate value to work. This is a basic principle that is repeated throughout theory, so why not study the approximate rank of a matrix?
Indeed, why not. This is exactly what Santosh and Noga have done. The $\epsilon$-rank of a real matrix $A$ is defined as the minimum rank over matrices $B$ that approximate every entry of $A$ to within an additive $\epsilon$. More precisely:
$$\mathrm{rank}_{\epsilon}(A) = \min \{ \mathrm{rank}(B) : \|A - B\|_{\infty} \le \epsilon \}.$$
Note $\|A - B\|_{\infty} \le \epsilon$ is a fancy way of saying that the matrices $A$ and $B$ differ in every entry by at most $\epsilon$. As they reference in their paper, the concept is related to the notion of matrix rigidity introduced by Les Valiant in the late 1970s to study lower bounds on the complexity of linear operators, though his focus is somewhat different.
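The definition suggests a simple, if naive, way to certify an upper bound on the $\epsilon$-rank numerically. The sketch below (my own illustration, not the paper's method) tries truncated SVDs of increasing rank and checks the entrywise condition; since SVD minimizes spectral rather than entrywise error, this certifies only an upper bound, and the true $\epsilon$-rank may be smaller.

```python
import numpy as np

def eps_rank_upper_bound(A, eps):
    # Return the first rank r whose truncated-SVD candidate B satisfies the
    # entrywise condition ||A - B||_inf <= eps.  This is a certificate that
    # rank_eps(A) <= r, not a computation of the exact eps-rank.
    U, s, Vt = np.linalg.svd(A)
    for r in range(min(A.shape) + 1):
        B = (U[:, :r] * s[:r]) @ Vt[:r, :]
        if np.max(np.abs(A - B)) <= eps:
            return r
    return min(A.shape)

A = np.ones((6, 6))                          # rank-one all-ones matrix
print(eps_rank_upper_bound(A, 0.1))          # -> 1 (rank 0 leaves error 1)
print(eps_rank_upper_bound(np.eye(6), 1.0))  # -> 0 (the zero matrix is within 1)
```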
The Identity Matrix
For the special case of the $n \times n$ identity matrix $I_n$, the $\epsilon$-rank has been studied before in a previous paper of Noga's. There he shows that for fixed $0 < \epsilon < 1/2$ it is at least $\Omega\!\left(\frac{\log n}{\epsilon^2 \log(1/\epsilon)}\right)$ and at most $O\!\left(\frac{\log n}{\epsilon^2}\right)$. He uses this, and other results, to give a number of applications—take a look at the paper to see them.
I find this surprising because the rows of the identity matrix are as “independent as possible”—in the sense that they have no nonzero elements in common. This would seem to me to suggest that the approximate rank should be large. But my intuition is obviously very wrong, as the approximate rank is exponentially smaller than the rank.
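The upper bound for the identity can be made tangible with a random-sign construction in the spirit of Johnson–Lindenstrauss (my own illustration; the values of $n$, $d$, $\epsilon$ are arbitrary): the normalized Gram matrix of $n$ random sign vectors in dimension $d \ll n$ has ones on the diagonal, rank at most $d$, and, by concentration, small off-diagonal entries.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, eps = 1000, 300, 0.5

# n random sign vectors in dimension d; their normalized Gram matrix G
# has exact ones on the diagonal and rank at most d.  By Hoeffding's bound
# each off-diagonal entry exceeds eps with probability <= 2*exp(-eps^2*d/2),
# so with these parameters all of them are within eps with high probability.
V = rng.choice([-1.0, 1.0], size=(d, n))
G = (V.T @ V) / d

off_diag = np.max(np.abs(G - np.eye(n)))   # entrywise distance to I_n
rank_G = np.linalg.matrix_rank(G)          # at most d, far below n
print(off_diag <= eps, rank_G <= d)        # -> True True
```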
Santosh and Noga give many applications of their bounds on the approximate rank of various matrices. The following theorem is one of the most interesting in my opinion, which is totally biased. They prove:
Theorem: Let $A, B$ be the payoff matrices of a $2$-player game with entries bounded in absolute value by $1$. If $A + B$ is positive semidefinite, then an $\epsilon$-Nash equilibrium of the game can be computed by a randomized algorithm using $\mathrm{poly}(n)$ space and expected time $n^{O(\log(1/\epsilon)/\epsilon^2)}$.
Recall some of the terms used here: A two-person game in general is defined by two $n \times n$ matrices $A$ and $B$. These are respectively the payoff matrices for the “row” and the “column” player. A symmetric matrix is positive semidefinite provided all its eigenvalues are nonnegative.
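To pin these terms down, here is a small sketch (illustrative matrices of my own, not from the paper) of the quantity that defines an $\epsilon$-Nash equilibrium, namely the most either player could gain by deviating, together with the eigenvalue test for positive semidefiniteness:

```python
import numpy as np

def nash_eps(A, B, x, y):
    # Smallest eps for which the mixed strategies (x, y) form an eps-Nash
    # equilibrium: the larger of the two players' best deviation gains.
    # The row player receives x^T A y, the column player x^T B y.
    row_gain = np.max(A @ y) - x @ A @ y
    col_gain = np.max(x @ B) - x @ B @ y
    return max(row_gain, col_gain)

def is_psd(M, tol=1e-9):
    # A symmetric matrix is PSD iff all its eigenvalues are nonnegative.
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# Matching pennies, a zero-sum game: B = -A, so A + B = 0, which is PSD.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])        # uniform play is an exact equilibrium
print(nash_eps(A, B, x, y))         # -> 0.0
print(is_psd(A + B))                # -> True
```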
This result generalizes a nice previous result of Ravi Kannan and Thorsten Theobald, who proved a similar bound in the case that $A + B$ has constant rank. Note that familiar zero-sum games are defined by $A + B = 0$, so this result is a natural extension of the well-understood zero-sum case.
The above theorem can also handle the case when the rank of $A + B$ is $O(\log n)$. Both of these results are related to a result of Evangelos Markakis, Aranyak Mehta, and me. This shows that an $\epsilon$-Nash for any $2$-player game can be computed in time $n^{O(\log n / \epsilon^2)}$. It has been an important open question to determine whether this problem has an algorithm of complexity polynomial in $n$.
I cannot avoid noting that $A + B$ may fail to be positive semidefinite, yet there are ways to modify the game matrices of a $2$-player game without changing the game. Can these modifications serve to extend Santosh's and Noga's result? For example, if there is a convex combination of $A$ and $B$ that is positive semidefinite, then the above theorem applies.
I think this work on approximate rank could open many doors, and lead to many new results. Here is one example: Could approximate rank methods be used to prove some new results about graph isomorphism? I think there is a possible connection—more about this in the future.
One of the major mysteries in this area is to find the $\epsilon$-rank of the upper triangular matrices typified by:
$$T = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Santosh says that he is puzzled by this matrix. They prove that its approximate rank is of order at most $\log^3 n$ for fixed $\epsilon$. But surely this upper bound cannot be the best, or is it?
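The tension can at least be checked on the exact side (a quick illustration of my own): the $n \times n$ upper triangular all-ones matrix is unit triangular, so its determinant is $1$ and its exact rank is the full $n$, even though its approximate rank is far smaller.

```python
import numpy as np

# The upper triangular all-ones matrix has 1s on the diagonal, so its
# determinant is 1 and it is invertible: exact rank n, no matter how large n is.
n = 12
T = np.triu(np.ones((n, n)))
print(np.linalg.matrix_rank(T))   # -> 12: full rank
print(np.linalg.det(T))           # -> 1.0 (up to float error)
```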