
Modulating the Permanent

December 21, 2014


When does it become hard to compute?


Thomas Muir coined the term “permanent” as a noun in his treatise on determinants in 1882. He took it from Augustin Cauchy’s distinction in 1815 between symmetric functions that alternate when rows of a matrix are interchanged versus ones that “stay permanent.” To emphasize that all terms of the permanent have positive sign, he modified the contemporary notation {\left| A \right|} for the determinant of a matrix {A} into

\displaystyle  \overset{+}{|} A \overset{+}{|}

for the permanent. Perhaps we should be glad that this notation did not become permanent.

Today Ken and I wish to highlight some interesting results on computing the permanent modulo some integer value.

Recall that the permanent of an {n \times n} matrix {A} is defined by summing over all permutations of {\{1,\dots,n\}}, that is, over all members of the symmetric group {S_n}:

\displaystyle  \mathrm{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}.

It looks simpler than the determinant formula,

\displaystyle  \det(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)},

but soon acquired a reputation as being ‘strangely unfriendly’ compared to the determinant. We owe to Les Valiant the brilliant explanation that computing the permanent exactly, even restricted to matrices with all entries 0 or 1, is likely to be very hard, whereas the determinant is easy to compute.
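
To make the comparison concrete, here is a minimal brute-force sketch in Python of both expansions straight from the definitions above. It is only meant to show that the formulas differ by the single factor {\mathrm{sgn}(\sigma)}; of course it takes {n!} time and is useless beyond tiny matrices.

```python
# Brute-force permanent and determinant straight from the definitions.
# Only for tiny matrices: both loops run over all n! permutations.
from itertools import permutations
from math import prod

def sign(sigma):
    """Sign of a permutation (a tuple of 0..n-1), by counting inversions."""
    return (-1) ** sum(sigma[i] > sigma[j]
                       for i in range(len(sigma))
                       for j in range(i + 1, len(sigma)))

def perm_brute(A):
    """Permanent: sum over S_n of products of entries, no signs."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def det_brute(A):
    """Determinant: the same expansion with sgn(sigma) restored."""
    n = len(A)
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

A = [[1, 2], [3, 4]]
print(perm_brute(A), det_brute(A))   # 1*4 + 2*3 = 10 and 1*4 - 2*3 = -2
```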

Muir is known for rigorizing a lemma by Arthur Cayley involving another matrix polynomial that at first looks hard to compute but turns out to be easy. The Pfaffian of a {2n \times 2n} matrix {A} is defined by

\displaystyle  \mathrm{pf}(A)= \frac{1}{2^n n!}\sum_{\sigma\in S_{2n}} \mathrm{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(2i-1),\sigma(2i)}.

The sum vanishes whenever {A} is symmetric; the case of interest is {A} skew-symmetric, meaning {A^T = -A}, for which Muir, following Cayley, rigorously proved the relation

\displaystyle  \mathrm{pf}^2(A) = \det(A).
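
Here is a small numerical sanity check of that relation, not Muir’s proof: compute the Pfaffian directly from the sum over {S_{2n}} above for a random {4 \times 4} skew-symmetric integer matrix and confirm that its square equals the determinant. Everything is exact integer arithmetic.

```python
# Check pf(A)^2 = det(A) on a random 4x4 skew-symmetric integer matrix,
# computing both from their permutation expansions (exact integers).
import random
from itertools import permutations
from math import prod, factorial

def sign(sigma):
    return (-1) ** sum(sigma[i] > sigma[j]
                       for i in range(len(sigma))
                       for j in range(i + 1, len(sigma)))

def pf_brute(A):
    """Pfaffian of a 2n x 2n skew-symmetric matrix from the definition."""
    m = len(A)                     # m = 2n
    n = m // 2
    total = sum(sign(s) * prod(A[s[2 * i]][s[2 * i + 1]] for i in range(n))
                for s in permutations(range(m)))
    q, r = divmod(total, 2 ** n * factorial(n))
    assert r == 0                  # exactly divisible for skew-symmetric A
    return q

def det_brute(A):
    n = len(A)
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# Random skew-symmetric 4x4: fill the strict upper triangle, mirror with minus.
A = [[0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i + 1, 4):
        A[i][j] = random.randint(-5, 5)
        A[j][i] = -A[i][j]

assert pf_brute(A) ** 2 == det_brute(A)
print(pf_brute(A), det_brute(A))
```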

The Pfaffian, through its use in the FKT algorithm for counting perfect matchings in planar graphs, figures prominently in Valiant’s 2006 discovery that some generally-hard counting problems become easy modulo certain primes. All this is background to our query about which matrices might have easy permanent computations modulo which primes.

Permanent Computations

Herbert Ryser found an inclusion-exclusion method to compute the permanent in {O(2^{n}n)} operations:

\displaystyle  \mathrm{perm} (A) = (-1)^n \sum_{S\subseteq\{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j\in S} a_{ij}

This was found in 1963 and still stands as the asymptotically best known exact method. It is exponential, but far better than the naive expansion, which sums all {n!} terms. David Glynn recently found a different formula with the same order of performance.
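
Here is a straightforward transcription of Ryser’s formula into Python, as a sketch only: it loops over the subsets directly, so it runs in {O(2^n n^2)} time rather than the {O(2^n n)} that Gray-code ordering of the subsets achieves, and it is checked against the {n!}-term expansion.

```python
# Ryser's inclusion-exclusion formula, transcribed directly (no Gray code),
# checked against the brute-force n!-term expansion on a small 0/1 matrix.
from itertools import combinations, permutations
from math import prod

def perm_ryser(A):
    n = len(A)
    total = 0
    for k in range(n + 1):                       # subsets S with |S| = k
        for S in combinations(range(n), k):
            row_sums = [sum(A[i][j] for j in S) for i in range(n)]
            total += (-1) ** k * prod(row_sums)
    return (-1) ** n * total

def perm_brute(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

A = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
assert perm_ryser(A) == perm_brute(A)            # both give 2
print(perm_ryser(A))
```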

Mark Jerrum, Alistair Sinclair, and Eric Vigoda found an approximation method for non-negative matrices that runs in probabilistic polynomial time. This award-winning result is based on a delicate analysis of certain random walks. It fails for matrices with even one negative entry, since they show that such matrices can have permanents that are NP-hard to approximate.

Modulo 2, of course, the determinant and permanent of integer matrices are the same. It seems to be less well known that the permanent is easy modulo any power of 2. Modulo {2^k}, the known time is {O(n^{4k-3})}, and this too was proved in Valiant’s famous paper. However, subsequent to that paper, computing the permanent modulo any other integer was shown to be NP-hard under randomized reductions.
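
The mod-2 observation is easy to see in code: over GF(2) the factor {\mathrm{sgn}(\sigma)} disappears, so the permanent equals the determinant, and the determinant can be computed by Gaussian elimination in polynomial time. A minimal sketch, checked against brute force on small random 0/1 matrices:

```python
# perm(A) = det(A) (mod 2): determinant via Gaussian elimination over GF(2),
# compared with the brute-force permanent on small random 0/1 matrices.
import random
from itertools import permutations
from math import prod

def det_mod2(A):
    """Determinant mod 2 by Gaussian elimination over GF(2)."""
    M = [[x % 2 for x in row] for row in A]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0                       # singular over GF(2)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            if M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return 1                               # all pivots are 1 over GF(2)

def perm_brute(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

for _ in range(100):
    n = random.randint(1, 6)
    A = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    assert perm_brute(A) % 2 == det_mod2(A)
print("perm = det (mod 2) on all random trials")
```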

But wait. There are some special cases modulo {3} that we would like to point out that actually are easy to compute—that take only polynomial time.

Permanent Modulo 3

Grigory Kogan wrote a paper in FOCS 1996 that addresses this issue. It is titled “Computing Permanents over Fields of Characteristic 3: Where and Why It Becomes Difficult.” His main positive result was the following:

Theorem 1 Let {F} be a field of characteristic {3}. Let {U} be an {n \times n} matrix over {F} such that {UU^{T}=I_{n}}. Then {\mathrm{perm}(U)} can be computed in time {O(n^{4})}.

Further, he gave a slightly worse polynomial running time for computing {\mathrm{perm}(U)} when {UU^{T} - I_{n}} has rank one. When {UU^{T} - I_{n}} has rank two, however, computing the permanent mod 3 remains hard for {\mathsf{NP}} under randomized reductions, and indeed is complete for {\mathsf{Mod3P}}.

The details in the full version involve using mod 3 to regard certain matrices as skew-symmetric and hence work with their Pfaffians. The proof also uses extension fields in which {\sqrt{2} = \sqrt{-1}} exists, and the theorem holds over any such field.

We wonder what similar tricks might be available modulo other primes. One advantage of working modulo {p} is that the permanent becomes randomly self-reducible with no loss in numerical accuracy.
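
Here is a minimal sketch of that self-reduction mod {p}, with the brute-force permanent standing in for a hypothetical oracle that is only guaranteed to be correct on most matrices. Since {\mathrm{perm}(A + xR)} is a polynomial of degree {n} in {x} for a random matrix {R}, querying the oracle at {n+1} points on this random line and interpolating back at {x = 0} recovers {\mathrm{perm}(A) \bmod p}; this needs {p > n}.

```python
# Random self-reducibility of the permanent mod p (a sketch).  The `oracle`
# argument stands in for a procedure assumed correct on most matrices; here
# we just pass the brute-force permanent.  Requires p > n.
import random
from itertools import permutations
from math import prod

def perm_mod(A, p):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n))) % p

def worst_to_average(A, p, oracle):
    """Recover perm(A) mod p from oracle calls on matrices A + x*R."""
    n = len(A)
    R = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    xs = list(range(1, n + 2))                 # n+1 distinct nonzero points
    ys = [oracle([[(A[i][j] + x * R[i][j]) % p for j in range(n)]
                  for i in range(n)], p) for x in xs]
    # Lagrange-interpolate the degree-n polynomial perm(A + xR) at x = 0.
    result = 0
    for k, xk in enumerate(xs):
        term = ys[k]
        for m, xm in enumerate(xs):
            if m != k:
                term = term * (-xm) % p
                term = term * pow(xk - xm, -1, p) % p
        result = (result + term) % p
    return result

p = 101
A = [[random.randrange(p) for _ in range(4)] for _ in range(4)]
assert worst_to_average(A, p, perm_mod) == perm_mod(A, p)
print("recovered perm(A) mod p by interpolation")
```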

Possible Ideas

Let’s look at using this idea to answer questions about the permanents of the famous Hadamard matrices. As we have remarked before, the original ones, of order {n = 2^k}, were defined earlier by James Joseph Sylvester:

\displaystyle  \begin{array}{rcl}  H_1 &=& \begin{bmatrix} 1 \end{bmatrix},\\ H_2 &=& \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},\\ H_{2^k} &=& \begin{bmatrix} H_{2^{k-1}} & H_{2^{k-1}}\\ H_{2^{k-1}} & -H_{2^{k-1}}\end{bmatrix}. \end{array}

Jacques Hadamard gave the functional equation for any matrix bearing his name,

\displaystyle  HH^T = nI,

which for {n \geq 4} is known to require {n} to be a multiple of {4}. Put {n = 2^k m} with {m} odd and {k \geq 2}. Such matrices are known to exist when {m^2 < 2^k}, when {n - 1} is a prime power, or when {k=2} and {2m - 1} is a prime power. The case {n = 668 = 4\cdot 167} avoids all of these, since {667 = 23\cdot 29} and {333 = 9 \cdot 37}, and remains unknown.
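
For the Sylvester family, at least, everything is easy to experiment with. Here is a quick sketch that builds {H_{2^k}} via the Kronecker product, which is equivalent to the recursion displayed above, and checks Hadamard’s equation {HH^T = nI}; the use of numpy and the function name are our own choices.

```python
# Sylvester-Hadamard matrices via the Kronecker product, with a check
# of Hadamard's equation H H^T = n I for n = 2, 4, 8, 16, 32.
import numpy as np

def sylvester(k):
    """Return the 2^k x 2^k Sylvester-Hadamard matrix."""
    H = np.array([[1]])
    block = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(block, H)      # [[H, H], [H, -H]]
    return H

for k in range(1, 6):
    H = sylvester(k)
    n = 2 ** k
    assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
print("HH^T = nI verified for n = 2, 4, 8, 16, 32")
```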

If {n \equiv 1 \bmod{3}} then {H} satisfies the hypothesis of Kogan’s theorem. If {n \equiv 2 \bmod{3}} then it appears we can still leverage the proof by working with {U = \sqrt{-1} H} instead. However, and this is by way of caution, if {n} is a multiple of 3 then every {H} is nilpotent mod 3, and it follows that {\mathrm{perm}(H) \equiv 0 \bmod{3}}. Nevertheless, all this means that we can compute the permanent of any Hadamard-type matrix modulo {3} in polynomial time.
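
As a small sanity check on the {n \equiv 1} and {n \equiv 2} cases, one can brute-force the permanents of the first few Sylvester matrices modulo 3. This only verifies values for {n = 4} and {n = 8}; it does not implement Kogan’s polynomial-time algorithm, and the {n!}-term expansion rules out going much further.

```python
# Brute-force perm(H) mod 3 for the Sylvester matrices of orders 4 and 8
# (4 = 1 mod 3 and 8 = 2 mod 3).  Values only; not Kogan's algorithm.
from itertools import permutations
from math import prod
import numpy as np

def sylvester(k):
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return H

def perm_mod(A, p):
    n = len(A)
    return sum(prod(int(A[i][s[i]]) for i in range(n))
               for s in permutations(range(n))) % p

for k in (2, 3):                       # n = 4 and n = 8
    H = sylvester(k)
    n = 2 ** k
    print("n =", n, " n mod 3 =", n % 3, " perm(H) mod 3 =", perm_mod(H, 3))
```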

A further open question is whether there exists a Hadamard matrix {H} with {\mathrm{perm}(H) = 0}. This is open even for Sylvester’s matrices, and is known to need {n \geq 32}. Of course, to vanish, the permanent must vanish mod 3. We wonder how far gaining knowledge about behavior modulo 3 and other primes might help with these problems.

Some of these papers treat related questions for other matrices with entries {\pm 1}, perhaps with some {0}s and/or a normalizing constant factor. Greater reason for interest in questions about permanents has come recently from boson sampling, for which we also recommend these online notes scribed by Ernesto Galvão of lectures given by Scott Aaronson in Rio de Janeiro a year ago. A main issue is whether the randomized equivalence of worst and average case for permanents mod {p} can be carried over to the kinds of real-or-complex matrices that arise.

Open Problems

Can we do better? Can we compute the permanent for a larger class of orthogonal matrices?

Prover’s Block

December 16, 2014


What to do when afraid to see if what you want is true

Cropped from Canadian Bergler Society source

Edmund Bergler coined the term in 1947; the great writers Francis Fitzgerald (F. Scott to most) and Joseph Conrad, among many others, suffered from it, as did the great cartoonist Charles Schulz. The problem is writer’s block.

Today Ken and I want to write about something that I wonder if any of you have ever had.
Read more…

Chats

December 13, 2014


Error correction for chatting


Bernhard Haeupler is an Assistant Professor in Computer Science at CMU. He previously spent a year as a postdoc at Microsoft Research, Silicon Valley, and a year visiting Bob Tarjan at Princeton. He is currently hard at work on the Program Committee of STOC 2015.

Today we wish to talk about the problem of error correction for chatting.
Read more…

Quantum Algorithms Via Linear Algebra

December 6, 2014


Announcing publication of our textbook with MIT Press

By permission of Nataly Meerson, artist: source

Richard Feynman had a knack for new ways of seeing. His Feynman diagrams not only enabled visualizing subatomic processes, they also rigorously encapsulated an alternative formalism that cross-validated the equations and procedures of quantum field theory. His 1948 path-integral formulation sprang out of work by Paul Dirac that re-interpreted a continuous Lagrangian operator as a matrix multiplication. Fast forward to his 1985 article “Quantum Mechanical Computers” (a followup to his 1981/82 keynote speech “Simulating Physics With Computers”) and there are only matrices and circuit diagrams to be seen.

Today, December 5 as Dick and I write, is the US publication day of our textbook with MIT Press, titled Quantum Algorithms Via Linear Algebra: A Primer. It is also available from Amazon. Both places offer it for less than two adult IMAX tickets to see “Interstellar.” Publication abroad is on 1/1/15.
Read more…

Susan Horwitz 1955–2014

December 2, 2014


Susan and a paradigm shift in software engineering


Susan Horwitz was—it is hard to type “was”—a computer scientist who did important work in the area of software engineering. She passed away this summer on June 11th.

Today Ken and I wish to talk about Susan’s work.
Read more…

Cornell CS at 50

November 26, 2014


Plus a long-promised discussion on diagonalization

TRUST security source

Dexter Kozen has been on the faculty of computer science at Cornell for almost 30 of the department’s 50 years. He first came to Cornell 40 years ago as a graduate student and finished a PhD under Juris Hartmanis in just over 2 years. He was named to the Joseph Newton Pew, Jr., professorship 20 years ago, and celebrated his 60th birthday in 2012. Besides many research achievements he co-authored an award-winning book on Dynamic Logic.

Today we salute the 50th anniversary of Cornell’s department and keep a promise we made 5 years ago to talk about a diagonalization theorem by Dexter. It yields what may be an interesting finite puzzle.
Read more…

Two Versus Three

November 19, 2014


The role of 2 and 3 in mathematics

Electrika source

Margaret Farrar was the first crossword puzzle editor for The New York Times. Ken fondly recalls seeing her name while watching his father do the daily and weekly NYT puzzles—they were under Farrar’s byline as editor until 1969 when she retired from the Times. More than a maker of countless puzzles, she also created many of the meta-rules for crossword puzzles, which are still used today in modern puzzle design.

Today Ken and I wish to discuss a light topic: how 2 and 3 are different in many parts of theory and mathematics.
Read more…
