The Permanent of Stable Matrices with Few Negative Entries
An approximation method for the permanent of matrices with few negative entries
Subrahmanyam Kalyanasundaram—Subruk—is one of my wonderful current graduate students at Georgia Tech. He works mostly in the area of computational complexity; for example, he is writing a draft of the promised paper on a faster method for the simulation of nondeterministic Turing machines.
One of the great things about Subruk is that he is the reason that this blog does not have as many typos as it otherwise would. He usually makes the last pass on each post, and finds all the tiny bugs that I have missed. Actually, he does much more: he often finds the not-so-tiny bugs that I have missed.
Today I want to give a short post on a problem that I raised a while ago. It has to do with the approximation of the permanent of a matrix with negative entries. I think that Subruk has found an interesting path that could solve the problem.
Another reason to make this a shorter post than usual is that I have been out of pocket for the last few days, owing to a visit from my daughter and her family. So I hope that you understand if this post is a bit short.
Mostly it is a report on the work that Subruk has done on the problem. The problem, as mentioned in one of the previous posts, is to try to extend Mark Jerrum, Alistair Sinclair, and Eric Vigoda's (JSV) theorem on approximating the permanent of non-negative matrices to matrices with negative entries.
Suppose $A$ is an $n \times n$ matrix with negative entries. Recall that JSV allows us to compute the permanent of a non-negative matrix within a factor of $1+\epsilon$, for any constant $\epsilon > 0$. But, unless the permanent can be exactly computed in polynomial time, one cannot hope to approximately compute the permanent of a matrix with even one negative entry. So in order to prove any approximation result with negative entries, we need a further restriction on the entries.
The new idea here is to break $A$ down into a number of non-negative matrices, such that the permanent of $A$ is the sum/difference of the permanents of these matrices. The method used here is directly related to Herbert Ryser's method for the evaluation of the permanent.
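For concreteness, here is a small sketch of Ryser's formula (the code and example are mine, not from the draft). It evaluates the permanent exactly by inclusion-exclusion over column subsets, in $O(2^n n^2)$ time, which is why one wants approximation methods for large $n$:

```python
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum of A[i][j] for j in S)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            term = 1
            for i in range(n):
                term *= sum(A[i][j] for j in S)
            total += (-1) ** r * term
    return (-1) ** n * total

# per([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
assert permanent_ryser([[1, 2], [3, 4]]) == 10
```

Even this clever formula is exponential in $n$, so for the approximation results below we lean on JSV instead.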
One Negative Entry
Let us first consider the case where the matrix has exactly one negative entry. We shall use $\mathrm{per}(X)$ to denote the permanent of the matrix $X$. For the sake of simplicity of the argument, let us assume that the negative entry is $-a$, with $a > 0$. Since the value of the permanent is independent of the order of rows and columns, we can assume, without loss of generality, that the $-a$ is the first entry, in the first row and first column of $A$.
Thus $A$ has the form
$$A = \begin{pmatrix} -a & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix},$$
where each $*$ denotes a non-negative entry of the matrix.
We can write the permanent of $A$ as a difference of permanents as follows:
$$\mathrm{per}(A) = \mathrm{per}(B) - \mathrm{per}(C),$$
where $B$ and $C$ are as shown below.
Here $B$ is the same as $A$ but has the $-a$ replaced with $0$. The matrix $C$ is the same as $A$ but for the first row: the entry $a$ remains intact (now with a positive sign), but the rest of the first row is replaced with $0$'s:
$$B = \begin{pmatrix} 0 & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix}, \qquad C = \begin{pmatrix} a & 0 & \cdots & 0 \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix}.$$
Expanding the permanent along the first row verifies the identity. Note that both $B$ and $C$ are matrices with only non-negative entries.
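As a sanity check, here is a tiny numerical example of the decomposition (the matrix values are my own illustration), compared against a brute-force permanent:

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Brute-force permanent over all n! permutations; fine for tiny matrices."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# A has a single negative entry -a = -5 in the top-left corner.
A = [[-5, 2, 1], [3, 4, 2], [1, 0, 6]]
B = [[ 0, 2, 1], [3, 4, 2], [1, 0, 6]]   # the -a replaced by 0
C = [[ 5, 0, 0], [3, 4, 2], [1, 0, 6]]   # first row keeps only a = 5

assert permanent(A) == permanent(B) - permanent(C)   # -76 = 44 - 120
```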
JSV can help us determine the values $\mathrm{per}(B)$ and $\mathrm{per}(C)$ up to a factor of $1+\epsilon$. So we can determine $\mathrm{per}(A)$ to within a multiplicative factor, as long as the values $\mathrm{per}(B)$ and $\mathrm{per}(C)$ are not too close to each other. This is the condition that we require.
Theorem: The permanent of a matrix $A$, with exactly one negative entry, can be computed up to a factor $1+\epsilon$, in polynomial time, provided $\mathrm{per}(B) \ge (1+\delta)\,\mathrm{per}(C)$ for some fixed constant $\delta > 0$.
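To see why some separation condition is necessary, here is a small illustration (the function and the numbers are my own, not from the draft): with worst-case $(1+\epsilon)$-factor estimates for $\mathrm{per}(B)$ and $\mathrm{per}(C)$, the relative error of the difference stays small when the two values are well separated, and swamps the answer when they are nearly equal:

```python
def worst_rel_error(pB, pC, eps):
    """Worst-case relative error of est(B) - est(C), when est(B) and
    est(C) are each within a multiplicative (1 + eps) factor of the
    true values pB and pC (as JSV-style estimates would be)."""
    exact = pB - pC
    hi = (1 + eps) * pB - pC / (1 + eps)   # overestimate B, underestimate C
    lo = pB / (1 + eps) - (1 + eps) * pC   # underestimate B, overestimate C
    return max(hi - exact, exact - lo) / exact

# Well separated, per(B) = 2 per(C): the difference remains reliable.
print(worst_rel_error(200.0, 100.0, 0.01))   # about 0.03
# Nearly equal, per(B) = 1.01 per(C): the noise swamps the difference.
print(worst_rel_error(101.0, 100.0, 0.01))   # about 2.0
```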
Now, we shall show that the stability condition that we defined last time is related to this. Let us recall the condition. Let $A_\epsilon$ be the matrix where we replace the $-a$ in $A$ with $-(1+\epsilon)a$. The stability condition was that $A$ is stable provided
$$\mathrm{per}(A_\epsilon) \ge (1 - K\epsilon)\,\mathrm{per}(A),$$
for some constant $K > 0$ and all $\epsilon > 0$ small enough.
In terms of the matrices $B$ and $C$, note that $\mathrm{per}(A_\epsilon) = \mathrm{per}(B) - (1+\epsilon)\,\mathrm{per}(C) = \mathrm{per}(A) - \epsilon\,\mathrm{per}(C)$, so the above stability condition reduces to
$$\mathrm{per}(C) \le K\,\mathrm{per}(A) = K\left(\mathrm{per}(B) - \mathrm{per}(C)\right).$$
This is the same as
$$\mathrm{per}(B) \ge \left(1 + \frac{1}{K}\right)\mathrm{per}(C),$$
which is the condition that we have in the above theorem, with $\delta = 1/K$. So we can restate the above theorem as follows.
Theorem: The permanent of a matrix $A$, with exactly one negative entry, can be computed up to a factor $1+\epsilon$, in polynomial time, provided that the stability condition is met.
Multiple Negative Entries
This can be done when the matrix has two negative entries as well, say $-a$ and $-b$. If the two negative entries are in the same row or column, we could take care of them using just two matrices as before: the matrix $C$ simply keeps both $a$ and $b$, with positive signs, and the rest of that row is replaced with $0$'s. If the two negative entries are in different rows and columns, we can permute the rows and columns to get
$$A = \begin{pmatrix} -a & * & \cdots & * \\ * & -b & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix}.$$
We can define four non-negative matrices as follows. Let $A_{00}$ be $A$ with both $-a$ and $-b$ replaced by $0$. Let $A_{10}$ have first row $(a, 0, \dots, 0)$, with the $-b$ replaced by $0$ and the other rows unchanged. Let $A_{01}$ have second row $(0, b, 0, \dots, 0)$, with the $-a$ replaced by $0$ and the other rows unchanged. Finally, let $A_{11}$ have first row $(a, 0, \dots, 0)$ and second row $(0, b, 0, \dots, 0)$, with the other rows unchanged.
As before, using the multilinearity of the permanent in the rows, we get
$$\mathrm{per}(A) = \mathrm{per}(A_{00}) - \mathrm{per}(A_{10}) - \mathrm{per}(A_{01}) + \mathrm{per}(A_{11}).$$
Again we need a condition, that the two quantities $\mathrm{per}(A_{00}) + \mathrm{per}(A_{11})$ and $\mathrm{per}(A_{10}) + \mathrm{per}(A_{01})$ are not close to each other. That is,
$$\mathrm{per}(A_{00}) + \mathrm{per}(A_{11}) \ge (1+\delta)\left(\mathrm{per}(A_{10}) + \mathrm{per}(A_{01})\right).$$
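A quick numerical check of the four-matrix identity (the example values are mine):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Brute-force permanent; fine for tiny matrices."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Negative entries -a = -2 at position (1,1) and -b = -5 at (2,2).
A   = [[-2, 1, 3], [4, -5, 1], [2, 0, 3]]
A00 = [[ 0, 1, 3], [4,  0, 1], [2, 0, 3]]   # both negatives zeroed
A10 = [[ 2, 0, 0], [4,  0, 1], [2, 0, 3]]   # first row keeps only a; -b zeroed
A01 = [[ 0, 1, 3], [0,  5, 0], [2, 0, 3]]   # -a zeroed; second row keeps only b
A11 = [[ 2, 0, 0], [0,  5, 0], [2, 0, 3]]   # both rows reduced to a and b

assert permanent(A) == (permanent(A00) - permanent(A10)
                        - permanent(A01) + permanent(A11))
```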
We could extend this argument to the case where there are $k$ negative entries. In that case, we could write $\mathrm{per}(A) = P - N$, where each of the summations $P$ and $N$ contains at most $2^{k-1}$ terms, each the permanent of a non-negative matrix. So we need a total of $2^k$ permanent computations by the JSV method. We need a condition similar to last time, that is
$$P \ge (1+\delta)\,N.$$
So we can conclude the following theorem.
Theorem: We can compute the permanent of a matrix with $k = O(\log n)$ negative entries, in polynomial time, up to a factor of $1+\epsilon$, as long as the above condition is met.
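The general expansion can be sketched as follows (illustrative code of my own, not from the draft). Grouping the negative entries by row and writing each such row as a positive part minus a negative part, multilinearity of the permanent in the rows gives $\mathrm{per}(A) = P - N$ with $2^m$ signed terms, where $m \le k$ is the number of rows containing a negative entry:

```python
from itertools import permutations, product
from math import prod

def permanent(A):
    """Brute-force permanent; fine for tiny test matrices."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def split_negatives(A):
    """Return (P, N) with per(A) = P - N, where P and N are sums of
    permanents of non-negative matrices.  Each row is written as
    pos - neg: pos keeps the non-negative entries, neg holds the
    absolute values of the negative ones."""
    n = len(A)
    bad_rows = [i for i in range(n) if any(x < 0 for x in A[i])]
    pos = {i: [max(x, 0) for x in A[i]] for i in bad_rows}
    neg = {i: [-min(x, 0) for x in A[i]] for i in bad_rows}
    P = N = 0
    for choice in product([0, 1], repeat=len(bad_rows)):
        M = [row[:] for row in A]
        for i, c in zip(bad_rows, choice):
            M[i] = neg[i] if c else pos[i]
        if sum(choice) % 2 == 0:
            P += permanent(M)   # even number of negative parts: + sign
        else:
            N += permanent(M)   # odd number of negative parts: - sign
    return P, N

A = [[-5, 2, 1], [3, -4, 2], [1, 0, 6]]
P, N = split_negatives(A)
assert P - N == permanent(A) and P >= 0 and N >= 0
```

In the brute-force check above the exact permanent is used for each term; the point of the theorem is that JSV can replace each of those $2^m$ exact computations with a $(1+\epsilon)$-factor estimate.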
Can we do better than $O(\log n)$ negative entries? Also, we need to complete the case of multiple negative entries. Right now, we have a condition under which the permanent can be computed. But it is not clear how this condition is connected to the stability condition of the previous post. Can we define the stability condition appropriately, so that it reduces to the condition given here?