Modulating the Permanent
When does it become hard to compute?
Thomas Muir coined the term “permanent” as a noun in his treatise on determinants in 1882. He took it from Augustin Cauchy’s distinction in 1815 between symmetric functions that alternate when rows of a matrix are interchanged versus ones that “stay permanent.” To emphasize that all terms of the permanent have positive sign, he modified the contemporary notation for the determinant of a matrix into a variant notation for the permanent. Perhaps we should be glad that this notation did not become permanent.
Today Ken and I wish to highlight some interesting results on computing the permanent modulo some integer value.
Recall the permanent of an $n \times n$ matrix $A$ is the function defined as follows by summing over all permutations $\sigma$ of $\{1,\dots,n\}$, that is, over all members of the symmetric group $S_n$:

$$\mathrm{perm}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.$$

It looks simpler than the determinant formula,

$$\det(A) \;=\; \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},$$
but soon acquired a reputation as being ‘strangely unfriendly’ compared to the determinant. We owe to Les Valiant the brilliant explanation that computing the permanent exactly, even restricted to matrices with all entries 0 or 1, is likely to be very hard, whereas the determinant is easy to compute.
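For concreteness, the two defining sums can be transcribed directly into Python. This brute-force sketch (function names ours) iterates over all $n!$ permutations and is only feasible for tiny $n$; the sole difference between the two functions is the sign factor:

```python
from itertools import permutations

def perm_naive(A):
    """Permanent: sum over all n! permutations, every term with sign +1."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

def det_naive(A):
    """Determinant: the same sum, but each term weighted by sgn(sigma),
    computed here by counting inversions of the permutation."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total
```

For example, on `[[1, 2], [3, 4]]` the permanent is $1\cdot4 + 2\cdot3 = 10$ while the determinant is $1\cdot4 - 2\cdot3 = -2$.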
Muir is known for rigorizing a lemma by Arthur Cayley involving another matrix polynomial that at first looks hard to compute but turns out to be easy. The Pfaffian of a $2n \times 2n$ matrix $A$ is defined by

$$\mathrm{pf}(A) \;=\; \frac{1}{2^n\, n!} \sum_{\sigma \in S_{2n}} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{\sigma(2i-1),\sigma(2i)}.$$

This vanishes unless $A$ is skew-symmetric, meaning $A^T = -A$, whereupon Muir following Cayley proved the relation

$$\mathrm{pf}(A)^2 \;=\; \det(A).$$
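Cayley's relation is easy to check on small examples. The following sketch (ours) computes the Pfaffian of an even-order skew-symmetric matrix by expansion along the first row, and compares its square to a brute-force determinant:

```python
from itertools import permutations

def det_naive(A):
    """Determinant via the signed sum over all permutations (tiny n only)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

def pfaffian(A):
    """Pfaffian of an even-order skew-symmetric matrix, by Laplace-style
    expansion along the first row: pf(A) = sum_j (-1)^j a_{1j} pf(A_{1,j}),
    where A_{1,j} deletes rows and columns 1 and j (1-based)."""
    m = len(A)
    if m == 0:
        return 1
    total = 0
    for j in range(1, m):
        sgn = 1 if j % 2 == 1 else -1   # (-1)^(j+1) in 1-based indexing
        keep = [k for k in range(m) if k not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += sgn * A[0][j] * pfaffian(minor)
    return total
```

On the $4 \times 4$ skew-symmetric matrix with upper entries $a_{12}=1, a_{13}=2, a_{14}=3, a_{23}=4, a_{24}=5, a_{34}=6$ this gives $\mathrm{pf} = 1\cdot6 - 2\cdot5 + 3\cdot4 = 8$ and $\det = 64$.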
The Pfaffian, and its use in the FKT algorithm for counting matchings in planar graphs, figures large in Valiant’s 2006 discovery that some generally-hard computations become easy modulo certain primes. All this is background to our query about which matrices might have easy permanent computations modulo which primes.
Herbert Ryser found an inclusion-exclusion method to compute the permanent in $O(2^n n)$ operations:

$$\mathrm{perm}(A) \;=\; (-1)^n \sum_{S \subseteq \{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^{n} \sum_{j \in S} a_{ij}.$$

This was found in 1963 and still stands as the best exact method. Note that it is exponential but is still better than the naive method, which would sum $n!$ terms. David Glynn recently found a different formula giving the same order of performance.
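Ryser's formula is short enough to code directly. This sketch (ours) enumerates column subsets as bitmasks; as written it costs $O(2^n n^2)$, and visiting the subsets in Gray-code order is what brings it down to $O(2^n n)$:

```python
def perm_ryser(A):
    """Ryser's inclusion-exclusion formula for the permanent.
    Each bitmask encodes a nonempty subset S of the columns."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):
        prod = 1
        for i in range(n):
            # row sum restricted to the columns in S
            s = sum(A[i][j] for j in range(n) if mask >> j & 1)
            prod *= s
        bits = bin(mask).count("1")
        total += (-1) ** bits * prod
    return (-1) ** n * total
```

For `[[1, 2], [3, 4]]` it returns 10, agreeing with the defining sum, while touching $2^n - 1$ subsets instead of $n!$ permutations.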
Mark Jerrum, Alistair Sinclair, and Eric Vigoda found an approximation method for non-negative matrices that runs in probabilistic polynomial time. This award-winning result is based on a delicate analysis of certain random walks. It fails for a matrix with even one negative entry, since they show that such matrices can have permanents that are NP-hard to approximate.
Modulo 2, of course, the determinant and permanent of integer matrices are the same. It seems to be less well known that the permanent is easy modulo any power of 2. Modulo $2^k$, the known time is $O(n^{4k-3})$, and this too was proved in Valiant’s famous paper. However, subsequent to that paper, computing the permanent modulo any other integer was shown to be NP-hard under randomized reductions.
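The $k = 1$ case is immediate: since signs vanish mod 2, the permanent equals the determinant there, and the determinant can be computed by Gaussian elimination over GF(2). A minimal sketch (ours):

```python
def perm_mod2(A):
    """perm(A) mod 2 equals det(A) mod 2, computed by Gaussian
    elimination over GF(2) in O(n^3) bit operations."""
    n = len(A)
    M = [[A[i][j] & 1 for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a pivot row with a 1 in this column
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return 0          # singular mod 2, so the permanent is even
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            if M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return 1                   # full rank: determinant is 1 mod 2
```

For instance the all-ones $2 \times 2$ matrix has permanent 2, and the routine returns 0.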
But wait. There are some special cases modulo $3$ that we would like to point out that actually are easy to compute—that take only polynomial time.
Permanent Modulo 3
Grigory Kogan wrote a paper in FOCS 1996 that addresses this issue. It is titled “Computing Permanents over Fields of Characteristic 3: Where and Why It Becomes Difficult.” His main positive result was the following:
Theorem 1 Let $\mathbb{F}$ be a field of characteristic $3$. Let $A$ be an $n \times n$ matrix over $\mathbb{F}$ such that $AA^T = I$. Then $\mathrm{perm}(A)$ can be computed in polynomial time.
Further, he gave a slightly worse polynomial time for computing $\mathrm{perm}(A)$ when $AA^T - I$ is a matrix of rank one. When $AA^T - I$ has rank two, however, computing the permanent mod 3 remains randomly hard, indeed complete for $\mathsf{Mod}_3\mathsf{P}$ under randomized reductions.
The details in the full version involve using mod 3 to regard certain matrices as skew-symmetric and hence work with their Pfaffians. The proof also uses extension fields in which $\sqrt{-1}$ exists, and the theorem holds over any such field.
We wonder what similar tricks might be available modulo other primes. One advantage of working modulo a prime $p$ is that the permanent becomes randomly self-reducible with no loss in numerical accuracy.
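The self-reduction rests on the fact that $t \mapsto \mathrm{perm}(A + tB)$ is a polynomial of degree at most $n$ over $\mathbb{F}_p$, so $n+1$ values at nonzero points recover $\mathrm{perm}(A)$ by interpolation, while each queried matrix $A + tB$ is uniformly random. Here is an illustrative sketch (ours, with a brute-force permanent standing in for the average-case oracle, and the prime $p = 101$ chosen arbitrarily to exceed $n$):

```python
import random
from itertools import permutations

P = 101  # any prime larger than n works

def perm_mod(A, p=P):
    """Brute-force permanent mod p; plays the role of the oracle."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term = term * A[i][sigma[i]] % p
        total = (total + term) % p
    return total

def perm_by_self_reduction(A, p=P):
    """Recover perm(A) mod p from oracle calls on random matrices:
    query perm(A + tB) at n+1 nonzero points t, then Lagrange-interpolate
    the degree-<=n polynomial back at t = 0."""
    n = len(A)
    B = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    pts = list(range(1, n + 2))
    vals = []
    for t in pts:
        At = [[(A[i][j] + t * B[i][j]) % p for j in range(n)]
              for i in range(n)]
        vals.append(perm_mod(At, p))   # each At is uniformly random
    # Lagrange interpolation at t = 0
    total = 0
    for i, ti in enumerate(pts):
        num, den = 1, 1
        for j, tj in enumerate(pts):
            if j != i:
                num = num * (-tj) % p
                den = den * (ti - tj) % p
        total = (total + vals[i] * num * pow(den, -1, p)) % p
    return total
```

The interpolation is exact whatever $B$ is drawn, which is the sense in which nothing is lost in numerical accuracy; over the reals one would instead have to worry about precision.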
Let’s look at using this idea to answer questions about the permanent of the famous Hadamard matrices. As we have remarked before, the original ones of order $n = 2^k$ were previously defined by Joseph Sylvester:

$$H_1 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad H_{k+1} = \begin{bmatrix} H_k & H_k \\ H_k & -H_k \end{bmatrix}.$$
Jacques Hadamard gave the functional equation for any $n \times n$ matrix $H$ bearing his name,

$$H H^T \;=\; n I_n,$$

which for $n > 2$ is known to require $n$ to be a multiple of $4$. Whether such matrices exist is known when $n$ is a power of $2$, when $n - 1$ is a prime power, or when $n = 2(q+1)$ and $q \equiv 1 \pmod{4}$ is a prime power. The case $n = 668$ avoids these, since neither $667 = 23 \cdot 29$ nor $333 = 3^2 \cdot 37$ is a prime power, and remains unknown.
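Sylvester's doubling construction and Hadamard's functional equation are easy to exercise in code; this sketch (function names ours) builds $H_k$ as plain nested lists and checks $HH^T = nI$ entrywise:

```python
def sylvester(k):
    """Sylvester's doubling construction: returns H_k of order 2^k."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def is_hadamard(H):
    """Check Hadamard's functional equation H * H^T = n * I."""
    n = len(H)
    for i in range(n):
        for j in range(n):
            dot = sum(H[i][k] * H[j][k] for k in range(n))
            if dot != (n if i == j else 0):
                return False
    return True
```

Distinct rows of $H_k$ agree in exactly half their positions, which is why the off-diagonal dot products vanish.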
If $n \equiv 1 \pmod{3}$ then $A = n^{-1/2} H$ satisfies Kogan’s theorem. If $n \equiv 2 \pmod{3}$ then it appears we can still leverage the proof by working with the extension field $\mathrm{GF}(9)$ instead. However—and this is by way of caution—if $n$ is a multiple of 3 then every such $H$ is nilpotent mod 3, and it follows that $\mathrm{perm}(H) \equiv 0 \pmod{3}$. Nevertheless, all this means that we can compute the permanent of any Hadamard-type matrix modulo $3$ in polynomial time.
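Kogan's algorithm is what gives polynomial time here, but for small orders one can simply experiment. The following sketch (ours) runs Ryser's formula with all arithmetic reduced mod 3 on Sylvester's matrices:

```python
def sylvester(k):
    """Sylvester's doubling construction: returns H_k of order 2^k."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def perm_mod3(A):
    """Ryser's inclusion-exclusion formula, with arithmetic mod 3.
    Still exponential in n -- this is for experimentation, not a
    substitute for Kogan's polynomial-time algorithm."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):
        prod = 1
        for i in range(n):
            s = sum(A[i][j] for j in range(n) if mask >> j & 1)
            prod = prod * s % 3
        total = (total + (-1) ** bin(mask).count("1") * prod) % 3
    return (-1) ** n * total % 3
```

For instance $\mathrm{perm}(H_1) = 0$ outright, while the order-4 Sylvester matrix has permanent $8 \equiv 2 \pmod{3}$.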
A further open question is whether there exists a Hadamard matrix with $\mathrm{perm}(H) = 0$. This is open even for Sylvester’s matrices, and is known to require going beyond the small orders that have been checked. Of course, for the permanent to vanish, it must vanish mod 3. We wonder how far gaining knowledge about behavior modulo 3 and other primes might help with these problems.
Some of these papers treat related questions for other matrices of entries $\pm 1$, perhaps with some $0$’s and/or a normalizing constant factor. Greater reasons for interest in questions about permanents have come recently from boson sampling, for which we also recommend these online notes scribed by Ernesto Galvão of lectures given by Scott Aaronson in Rio de Janeiro a year ago. A main issue is whether the randomized equivalence of worst and average case for permanents mod $p$ can be carried over to the kinds of real-or-complex matrices that arise.
Can we do better? Can we compute the permanent for a larger class of orthogonal matrices?