## Modulating the Permanent

*When does it become hard to compute?*

Thomas Muir coined the term “permanent” as a noun in his treatise on determinants in 1882. He took it from Augustin Cauchy’s distinction in 1815 between symmetric functions that alternate when rows of a matrix are interchanged versus ones that “stay permanent.” To emphasize that all terms of the permanent have positive sign, he modified the contemporary notation for the determinant of a matrix into a special bracketing for the permanent. Perhaps we should be glad that this notation did not become permanent.

Today Ken and I wish to highlight some interesting results on computing the permanent *modulo* some integer value.

Recall the permanent of an $n \times n$ matrix $A = (a_{ij})$ is the function defined as follows by summing over all permutations $\sigma$ of $\{1,\dots,n\}$, that is, over all members of the symmetric group $S_n$:

$$\mathrm{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.$$

It looks simpler than the determinant formula,

$$\det(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},$$
but soon acquired a reputation as being ‘strangely unfriendly’ compared to the determinant. We owe to Les Valiant the brilliant explanation that computing the permanent exactly, even restricted to matrices with all entries 0 or 1, is likely to be very hard, whereas the determinant is easy to compute.
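To make the two definitions concrete, here is a minimal brute-force sketch that evaluates both sums directly, one term per permutation; the function names are our own illustrative choices, and of course this takes $n!$ steps.

```python
from itertools import permutations

def sign(sigma):
    """Sign of a permutation (given as a tuple), by counting inversions."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def perm_naive(A):
    """Permanent: sum over all sigma of prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

def det_naive(A):
    """Determinant: the same sum, but each term weighted by sgn(sigma)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = sign(sigma)
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

A = [[1, 2], [3, 4]]
print(perm_naive(A))  # 1*4 + 2*3 = 10
print(det_naive(A))   # 1*4 - 2*3 = -2
```

The only difference between the two loops is the sign factor, which makes the gulf in their computational complexity all the more striking.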

Muir is known for rigorizing a lemma by Arthur Cayley involving another matrix polynomial that at first looks hard to compute but turns out to be easy. The Pfaffian of a $2n \times 2n$ matrix $A$ is defined by

$$\mathrm{Pf}(A) = \frac{1}{2^n n!} \sum_{\sigma \in S_{2n}} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{\sigma(2i-1),\sigma(2i)}.$$

This vanishes unless $A$ is *skew-symmetric*, meaning $A^T = -A$, whereupon Muir following Cayley proved the relation

$$\det(A) = \mathrm{Pf}(A)^2.$$
The Pfaffian, and its use in the FKT algorithm for counting matchings in planar graphs, figures large in Valiant’s 2006 discovery that some generally-hard computations become easy modulo certain primes. All this is background to our query about *which* matrices might have easy permanent computations modulo *which* primes.
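The Cayley–Muir relation $\det(A) = \mathrm{Pf}(A)^2$ can be sanity-checked by brute force. The sketch below (all names are our own) evaluates the Pfaffian sum over $S_4$ for a generic $4 \times 4$ skew-symmetric matrix and compares with the determinant computed from the Leibniz sum.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def sign(sigma):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def pfaffian_sum(A):
    """Pf(A) = 1/(2^n n!) * sum over sigma in S_{2n} of
    sgn(sigma) * prod_i A[sigma(2i-1)][sigma(2i)], 0-indexed pairs."""
    m = len(A)
    n = m // 2
    total = 0
    for sigma in permutations(range(m)):
        term = sign(sigma)
        for i in range(n):
            term *= A[sigma[2 * i]][sigma[2 * i + 1]]
        total += term
    return Fraction(total, 2 ** n * factorial(n))

def det_sum(A):
    """Determinant straight from the Leibniz sum."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = sign(sigma)
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

# A generic 4x4 skew-symmetric matrix with entries a..f above the diagonal.
a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
A = [[0, a, b, c], [-a, 0, d, e], [-b, -d, 0, f], [-c, -e, -f, 0]]
pf = pfaffian_sum(A)
print(pf)                      # a*f - b*e + c*d = 8
assert det_sum(A) == pf ** 2   # Cayley/Muir: det equals Pf squared
```

In the $4 \times 4$ case the Pfaffian collapses to the familiar three-term expression $af - be + cd$, which is what makes it cheap where the determinant sum looks expensive.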

## Permanent Computations

Herbert Ryser found an inclusion-exclusion method to compute the permanent in $O(2^n n)$ arithmetic operations:

$$\mathrm{perm}(A) = (-1)^n \sum_{S \subseteq \{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^{n} \sum_{j \in S} a_{ij}.$$

This was found in 1963 and still stands as the best exact method. Note that it is exponential but is still better than the naive method which would sum $n!$ terms. David Glynn recently found a different formula giving the same order of performance.
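Ryser's formula translates directly into code. The following sketch, with our own illustrative naming, iterates over subsets explicitly rather than using the Gray-code ordering, so it runs in $O(2^n n^2)$ rather than Ryser's $O(2^n n)$.

```python
from itertools import combinations

def perm_ryser(A):
    """Ryser: perm(A) = (-1)^n * sum over column subsets S of
    (-1)^{|S|} * prod_i (sum of row i restricted to S)."""
    n = len(A)
    total = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            p = 1
            for i in range(n):
                p *= sum(A[i][j] for j in S)
            total += (-1) ** r * p
    return (-1) ** n * total

A = [[1, 2], [3, 4]]
print(perm_ryser(A))  # 10, matching the n!-term definition
```

Enumerating subsets in Gray-code order lets each row sum be updated by a single addition or subtraction per subset, which is where the factor of $n$ is saved.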

Mark Jerrum, Alistair Sinclair, and Eric Vigoda found an approximation method for non-negative matrices that runs in probabilistic polynomial time. This award-winning result is based on delicate analysis of certain random walks. It fails for matrices with even one negative entry, since they show that such matrices can have permanents that are NP-hard to approximate.

Modulo 2, of course, the determinant and permanent of integer matrices are the same. It seems to be less well known that the permanent is easy modulo any power of 2. Modulo $2^k$, the known time is $O(n^{4k-3})$, and this too was proved in Valiant’s famous paper. However, subsequent to that paper, computing the permanent modulo any integer that is not a power of 2 was shown to be NP-hard under randomized reductions.
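The mod-2 coincidence is easy to see concretely: since $\mathrm{sgn}(\sigma) \equiv 1 \pmod 2$, Gaussian elimination over GF(2) computes the determinant and hence the permanent mod 2 in cubic time. A small sketch with our own naming, cross-checked against the brute-force definition:

```python
import math
import random
from itertools import permutations

def det_mod2(A):
    """Determinant mod 2 by Gaussian elimination over GF(2).
    Since sgn(sigma) = ±1 is always 1 mod 2, this is also the
    permanent mod 2, computed in O(n^3) time."""
    n = len(A)
    M = [[x % 2 for x in row] for row in A]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return 0  # singular mod 2, so det = perm = 0 mod 2
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            if M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return 1  # every pivot is 1 over GF(2)

def perm_naive(A):
    """Brute-force permanent, for cross-checking only."""
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

random.seed(1)
for _ in range(20):
    A = [[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
    assert det_mod2(A) == perm_naive(A) % 2
print("det and perm agree mod 2 on all samples")
```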

But wait. There are some special cases modulo $3$ that we would like to point out that actually are easy to compute—that take only polynomial time.

## Permanent Modulo 3

Grigory Kogan wrote a paper in FOCS 1996 that addresses this issue. It is titled “Computing Permanents over Fields of Characteristic 3: Where and Why It Becomes Difficult.” His main positive result was the following:

**Theorem 1** *Let $K$ be a field of characteristic $3$. Let $A$ be an $n \times n$ matrix over $K$ such that $AA^T = I$. Then $\mathrm{perm}(A)$ can be computed in polynomial time.*

Further, he gave a slightly worse polynomial time bound for computing $\mathrm{perm}(A)$ when $AA^T - I$ is a matrix of rank one. When $AA^T - I$ has rank two, however, computing the permanent mod 3 remains randomly hard, indeed complete for $\mathrm{Mod}_3\mathsf{P}$ under randomized reductions.

The details in the full version involve using mod 3 to regard certain matrices as skew-symmetric and hence work with their Pfaffians. The proof also uses extension fields in which $i = \sqrt{-1}$ exists, and the theorem holds over any such field.

We wonder what similar tricks might be available modulo other primes. One advantage of working modulo a prime $p$ is that the permanent becomes randomly self-reducible with no loss in numerical accuracy.
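The self-reduction can be sketched in a few lines: for a random matrix $B$, the map $x \mapsto \mathrm{perm}(A + xB)$ is a polynomial of degree at most $n$, each matrix $A + xB$ is uniformly random, and interpolation through $n+1$ points recovers the value at $x = 0$. In the sketch below, which assumes $p > n+1$ and uses our own naming throughout, brute force stands in for the average-case oracle; $p = 13$ is an arbitrary small prime.

```python
import random
from itertools import permutations

p = 13  # a prime larger than n + 1, so enough distinct evaluation points exist

def perm_mod(A, p):
    """Brute-force permanent mod p, standing in for the average-case oracle."""
    n = len(A)
    t = 0
    for s in permutations(range(n)):
        v = 1
        for i in range(n):
            v = v * A[i][s[i]] % p
        t = (t + v) % p
    return t

def perm_by_self_reduction(A, p):
    """Evaluate perm(A + x*B) at x = 1..n+1 for random B, then recover
    the degree-<=n polynomial's value at x = 0 by Lagrange interpolation."""
    n = len(A)
    B = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    xs = list(range(1, n + 2))
    ys = [perm_mod([[(A[i][j] + x * B[i][j]) % p for j in range(n)]
                    for i in range(n)], p) for x in xs]
    total = 0
    for k, xk in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != k:
                num = num * (-xj) % p
                den = den * (xk - xj) % p
        # Fermat's little theorem gives the modular inverse of den.
        total = (total + ys[k] * num * pow(den, p - 2, p)) % p
    return total

random.seed(0)
A = [[random.randrange(p) for _ in range(3)] for _ in range(3)]
assert perm_by_self_reduction(A, p) == perm_mod(A, p)
print("self-reduction recovers perm(A) mod", p)
```

Because every query matrix $A + xB$ is uniformly distributed, an algorithm correct on most matrices mod $p$ yields one correct on all of them, which is the worst-case-to-average-case equivalence referred to below.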

## Possible Ideas

Let’s look at using this idea to answer questions about the permanent of the famous Hadamard matrices. As we have remarked before, the original ones of order $n = 2^k$ were previously defined by Joseph Sylvester:

$$H_1 = \begin{bmatrix} 1 \end{bmatrix}, \qquad H_{2n} = \begin{bmatrix} H_n & H_n \\ H_n & -H_n \end{bmatrix}.$$

Jacques Hadamard gave the functional equation for any $n \times n$ matrix $H$ bearing his name,

$$H H^T = n I,$$

which for $n > 2$ is known to require $n$ to be a multiple of $4$. Put $n = 2^k m$ with $m$ odd and $k \geq 2$. Whether such matrices exist is known when $m = 1$, when $n - 1$ is a prime power, or when $k = 2$ and $n/2 - 1$ is a prime power. The case $n = 668$ avoids these since $667 = 23 \cdot 29$ and $333 = 9 \cdot 37$, and remains unknown.
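Sylvester’s doubling construction and Hadamard’s functional equation are easy to check by machine. This short sketch, with illustrative names of our own, builds $H_n$ for $n = 1, 2, 4, 8$ and verifies $HH^T = nI$:

```python
def sylvester(k):
    """Sylvester's doubling: H_1 = [1], H_{2n} = [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def gram(H):
    """The Gram matrix H * H^T."""
    n = len(H)
    return [[sum(H[i][k] * H[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

for k in range(4):
    H = sylvester(k)
    n = len(H)
    G = gram(H)
    # Hadamard's equation: rows are pairwise orthogonal with norm n.
    assert all(G[i][j] == (n if i == j else 0)
               for i in range(n) for j in range(n))
print("H H^T = n I holds for orders 1, 2, 4, 8")
```

Reducing the Gram matrix $nI$ mod 3 is exactly what decides which of the cases below applies.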

If $n \equiv 1 \pmod{3}$ then $H$ satisfies Kogan’s theorem. If $n \equiv 2 \pmod{3}$ then it appears we can still leverage the proof by working with $iH$ instead. However (and this is by way of caution), if $n$ is a multiple of 3 then every such $H$ is nilpotent mod 3, and it follows that $\det(H) \equiv 0 \pmod{3}$. Nevertheless, all this means that we can compute the permanent of any Hadamard-type matrix modulo $3$ in polynomial time.

A further open question is whether there exists a Hadamard matrix with $\mathrm{perm}(H) = 0$. This is open even for Sylvester’s matrices. Any example is known to need $n$ to be large. Of course, for the permanent to vanish, it must vanish mod 3. We wonder how far gaining knowledge about behavior modulo 3 and other primes might help with these problems.

Some of these papers treat related questions for other matrices of $\pm 1$ entries, perhaps with some $0$’s and/or a normalizing constant factor. Greater reason for interest in questions about permanents has come recently from boson sampling, for which we also recommend these online notes scribed by Ernesto Galvão of lectures given by Scott Aaronson in Rio de Janeiro a year ago. A main issue is whether the randomized equivalence of worst and average case for permanents mod $p$ can be carried over to the kinds of real-or-complex matrices that arise.

## Open Problems

Can we do better? Can we compute the permanent for a larger class of orthogonal matrices?

## Prover’s Block

*What to do when afraid to see if what you want is true*


Edmund Bergler coined the term in 1947. The great writers Francis Fitzgerald—F. Scott to most—and Joseph Conrad among many others suffered from it, as did the great cartoonist Charles Schulz. The problem is writer’s block.

Today Ken and I want to write about something that I wonder if any of you have ever had.

Read more…

## Chats

*Error correction for chatting*

Bernhard Haeupler is an Assistant Professor in Computer Science at CMU. He previously spent a year as a postdoc at Microsoft Research, Silicon Valley, and a year visiting Bob Tarjan at Princeton. He is currently hard at work on the Program Committee of STOC 2015.

Today we wish to talk about the problem of error correction for chatting.

Read more…

## Quantum Algorithms Via Linear Algebra

*Announcing publication of our textbook with MIT Press*

By permission of Nataly Meerson, artist.

Richard Feynman had a knack for new ways of seeing. His Feynman diagrams not only enabled visualizing subatomic processes, they also rigorously encapsulated an alternative formalism that cross-validated the equations and procedures of quantum field theory. His 1948 *path-integral* formulation sprang out of work by Paul Dirac that re-interpreted a continuous Lagrangian operator as a *matrix multiplication*. Fast forward to his 1985 article “Quantum Mechanical Computers” (a followup to his 1981/82 keynote speech “Simulating Physics With Computers”) and there are only matrices and circuit diagrams to be seen.

Today, December 5 as Dick and I write, is the US publication day of our textbook with MIT Press, titled *Quantum Algorithms Via Linear Algebra: A Primer*. It is also available from Amazon. Both places offer it for less than two adult IMAX tickets to see “Interstellar.” Publication abroad is on 1/1/15.

Read more…

## Susan Horwitz 1955–2014

*Susan and a paradigm shift in software engineering*

Susan Horwitz was—it is hard to type “was”—a computer scientist who did important work in the area of software engineering. She passed away this summer on June 11th.

Today Ken and I wish to talk about Susan’s work.

Read more…

## Cornell CS at 50

*Plus a long-promised discussion on diagonalization*


Dexter Kozen has been on the faculty of computer science at Cornell for almost 30 of the department’s 50 years. He first came to Cornell 40 years ago as a graduate student and finished a PhD under Juris Hartmanis in just over 2 years. He was named to the Joseph Newton Pew, Jr., professorship 20 years ago, and celebrated his 60th birthday in 2012. Besides many research achievements he co-authored an award-winning book on Dynamic Logic.

Today we salute the 50th anniversary of Cornell’s department and keep a promise we made 5 years ago to talk about a diagonalization theorem by Dexter. It yields what may be an interesting finite puzzle.

Read more…

## Two Versus Three

*The role of 2 and 3 in mathematics*


Margaret Farrar was the first crossword puzzle editor for The New York Times. Ken fondly recalls seeing her name while watching his father do the daily and weekly NYT puzzles—they were under Farrar’s byline as editor until 1969 when she retired from the Times. More than a maker of countless puzzles, she also created many of the meta-rules for crossword puzzles, which are still used today in modern puzzle design.

Today Ken and I wish to discuss a light topic: how 2 and 3 are different in many parts of theory and mathematics.

Read more…