Rabin Meets Lagrange
On Rabin’s recent talks at Tech
Jeffrey Shallit is a computational number theorist, with many wonderful results. He is also well known for his work as an advocate for civil liberties on the Internet, and also against intelligent design (ID). Indeed he was at one point slated to testify in the 2005 Dover case until a stand-down by the ID side accomplished his purpose. Ken once used his survey with Wesley Elsberry in a graduate seminar on cellular automata and various forms of complexity, not to say anything about ID, but just as a source of well-written definitions and relevant examples.
Today I would like to talk about Michael Rabin’s talks at Tech this week and their connection to Jeff.
Both of Rabin’s talks were great, no surprise. They were based on his recent paper with Silvio Micali in February’s issue of the CACM. Rather than talk about details I will focus on one aspect that used a “trick” I particularly liked and thought I would share with you. The trick is related to an old paper with Jeff.
The paper appears on the ACM website with the first author named “J.O. Rabin.” Those are Jeff’s initials, with the ‘O’ interestingly standing for “Outlaw.” Michael also has ‘O’ as his middle initial, so it was an easier mistake to make. O well.
Secret Straight-line Computations
Rabin showed, roughly, how to perform straight-line computations in a manner that keeps the values hidden from others. His computations allowed addition, multiplication, and comparison. The last is what I found quite interesting, since hiding ring operations is not too difficult, but hiding a highly non-linear operation like comparison seems quite different. Even multiplication, while non-linear, is bilinear, and has a lot of linear structure that can be exploited.
The basic approach is to use the split value representation of a number. A large prime $p$ is fixed, and each value $x$ is represented by a pair $(u, v)$ so that $x \equiv u + v \pmod{p}$. If the split is done randomly, then knowing one of the pair gives no information at all about the value of $x$. This idea has been used before, of course, in many crypto protocols, often multi-party protocols. Rabin’s work uses this method to make auctions secret, and it does the same for electronic elections. See their paper for the details of what type of security he and Micali are able to achieve.
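Here is a concrete sketch of the split representation. The particular prime, the function names, and the operations shown are my own illustration, not taken from the paper:

```python
import random

P = 2**61 - 1  # a large prime; this Mersenne prime is an arbitrary choice

def split(x):
    """Represent x as a random pair (u, v) with u + v = x (mod P)."""
    u = random.randrange(P)
    return u, (x - u) % P

def reveal(pair):
    """Recombine a split pair into the value it hides."""
    u, v = pair
    return (u + v) % P

def add_split(p1, p2):
    """Addition is 'free' on split values: add componentwise mod P."""
    return (p1[0] + p2[0]) % P, (p1[1] + p2[1]) % P

# Either component alone is a uniformly random element mod P,
# so by itself it carries no information about x.
x, y = 1234, 5678
assert reveal(add_split(split(x), split(y))) == (x + y) % P
```

Only the linear operations come this cheaply; multiplication and comparison on split values require real protocol work, which is exactly why comparison is the interesting case.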
Let’s go back to the comparison operation. In order to perform it using split pairs, Rabin needs to use Lagrange’s Four Square Theorem, which I state in a moment. The reason is that he can reduce comparison to checking that some value $x$ lies in a certain range, but he needs that the value of $x$ not be too large. Lagrange’s theorem allows him to do that.
Joseph Lagrange in 1770 proved the wonderful theorem:
Theorem: Every natural number is the sum of four squares.
Thus given $n$ there are natural numbers $a, b, c, d$ so that

$$n = a^2 + b^2 + c^2 + d^2.$$

Note, some or all of $a, b, c, d$ can be zero.
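For small numbers the theorem is easy to confirm by exhaustive search. The following naive finder is purely illustrative (and hopeless at scale):

```python
import math

def four_squares_brute(n):
    """Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n by brute force,
    searching in sorted order a <= b <= c."""
    for a in range(math.isqrt(n) + 1):
        for b in range(a, math.isqrt(n - a * a) + 1):
            for c in range(b, math.isqrt(n - a * a - b * b) + 1):
                rest = n - a * a - b * b - c * c
                d = math.isqrt(rest)
                if d * d == rest:
                    return a, b, c, d
    return None  # unreachable, by Lagrange's theorem

q = four_squares_brute(310)
assert sum(t * t for t in q) == 310
```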
There are two key ideas in Lagrange’s proof of his theorem. The first is that if $m$ and $n$ can both be written as a sum of four squares, then so can their product $mn$.
The second is that, therefore, one needs only to prove that every prime number can be written as a sum of four squares. But this can be done using properties of primes, and we are done.
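The first idea is Euler’s classical four-square identity, which is easy to state and check in code (the function name is mine):

```python
def four_square_product(a, b):
    """Euler's four-square identity: combine four-square representations
    of two numbers into a four-square representation of their product.
    Entries may come out negative; only their squares matter."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 + a2*b2 + a3*b3 + a4*b4,
            a1*b2 - a2*b1 + a3*b4 - a4*b3,
            a1*b3 - a2*b4 - a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 - a4*b1)

# 5 = 1 + 4 and 3 = 1 + 1 + 1, so we get a representation of 15.
c = four_square_product((1, 2, 0, 0), (1, 1, 1, 0))
assert sum(t * t for t in c) == 15
```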
Effective Lagrange’s Theorem
Not quite. Over two hundred years later (1770 to 1985) the rules have changed. We now are interested not just in the existence of a representation of a number as the sum of four squares, but in finding it efficiently. In this light Lagrange’s proof breaks down immediately, since he works on primes and therefore needs the factorization of $n$ in order to find its representation as four squares. This means that his proof cannot directly lead to an efficient algorithm—one must avoid factorization.
Enter our ‘O’ duo. In their paper from 1985 they devised a randomized algorithm that runs in polynomial time and finds a four square representation of a number $n$. They actually give several proofs of this statement, including additional results about other representation theorems that they can make effective. Jeff himself commented in a MathOverflow item on possible extensions, and their paper itself was outlined on StackExchange last September.
One method uses the following neat idea: Suppose that you want to represent $n$ as the sum of four squares. Pick random squares $x^2$ and $y^2$ and assume that

$$n - x^2 - y^2 = p, \qquad (*)$$

where $p$ is a prime that is congruent to $1$ modulo $4$. The key is that such primes are always the sum of two squares. So they build a subroutine that can solve this problem in randomized polynomial time: a procedure that finds such a representation of $p$ provided it is a prime. As usual the procedure could be lucky and still work if $p$ is composite. But if the procedure fails they pick new $x$ and $y$ and try again. A deep and amazing theorem of Yuri Linnik shows that every $n$ has many representations of the form (*). This leads to the main result.
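Here is a toy rendering of this idea. I use trial-division primality testing and the classical Euclidean descent (Hermite–Serret) for writing a prime $p \equiv 1 \pmod 4$ as two squares; the real Rabin–Shallit algorithm uses fast probabilistic primality testing, so treat this only as a sketch of the method’s shape (all names are mine):

```python
import math
import random

def is_prime(m):
    """Trial division: fine for a demo, far too slow at crypto sizes."""
    if m < 2:
        return False
    for d in range(2, math.isqrt(m) + 1):
        if m % d == 0:
            return False
    return True

def two_squares(p):
    """Write a prime p = 1 (mod 4) as a^2 + b^2 via Euclidean descent."""
    # Find z with z^2 = -1 (mod p); any quadratic non-residue a works.
    for a in range(2, p):
        z = pow(a, (p - 1) // 4, p)
        if (z * z) % p == p - 1:
            break
    # Run the Euclidean algorithm on (p, z); the first two remainders
    # below sqrt(p) are the roots of the two squares.
    u, v = p, z
    while v * v > p:
        u, v = v, u % v
    return v, u % v

def four_squares(n, trials=10000):
    """Randomized four-square representation, Rabin-Shallit style.
    Assumes n is not tiny, so that (*) has many solutions."""
    for _ in range(trials):
        x = random.randrange(math.isqrt(n) + 1)
        y = random.randrange(math.isqrt(n - x * x) + 1)
        p = n - x * x - y * y
        if p % 4 == 1 and is_prime(p):   # got lucky: (*) holds
            a, b = two_squares(p)
            return x, y, a, b
    raise RuntimeError("unlucky beyond reason")
```

For example, `four_squares(2014)` returns some quadruple whose squares sum to 2014; which quadruple depends on the random choices.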
A Weak Lagrange Theorem
Suppose that we want to write $n$ as the sum of a few squares, but perhaps more than four. The ‘few’ must be small because it affects the cost of other computations, and we also want to find the squares efficiently. We could use the Rabin–Shallit method, which is more than fast enough. But there is a very simple direct method that I thought I would share.
Let’s suppose that $n$ is the number to be represented and apply the “greedy” method; it is always a good idea to try the simplest ideas first.
Set $m = \lfloor \sqrt{n} \rfloor$. Let $r = n - m^2$. Then

$$r \le 2m \le 2\sqrt{n}.$$
Then repeat this on $r$ until it is small enough to do by table lookup. Since $r \le 2\sqrt{n}$, the bit-length of the remainder roughly halves at each step, so the number of iterations is bounded by a double logarithm in $n$. So this yields an expression for $n$ as a sum of at most $O(\log \log n)$ squares. For the range used in Rabin’s applications this value is seven rather than four.
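The greedy method is only a few lines of code. The cutoff and the names are my choices, and the small-case “table lookup” is simulated here by a brute-force search:

```python
import math

def greedy_squares(n, cutoff=500):
    """Greedily peel off the largest square until the remainder is small,
    then finish the small remainder exactly (a stand-in for table lookup)."""
    parts = []
    while n > cutoff:
        m = math.isqrt(n)
        parts.append(m)
        n -= m * m          # new remainder is at most 2*sqrt(old n)
    # Brute-force four squares for the small remainder.
    for a in range(math.isqrt(n) + 1):
        for b in range(a, math.isqrt(n - a * a) + 1):
            for c in range(b, math.isqrt(n - a * a - b * b) + 1):
                rest = n - a * a - b * b - c * c
                d = math.isqrt(rest)
                if d * d == rest:
                    return parts + [a, b, c, d]

rep = greedy_squares(123456789)
assert sum(t * t for t in rep) == 123456789
```

Each pass roughly halves the bit-length of the remainder, which is where the double-logarithmic iteration count comes from.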
I like the weak effective Lagrange Theorem. If anyone knows a reference for it I would like to know. Perhaps it can be used in some other algorithms because it is extremely fast. For a $b$-bit number the work is dominated by the integer square roots, each costing $O(M(b))$ where $M(b)$ is the time to perform multiplication on $b$-bit numbers, and only a double-logarithmic number of iterations are needed; the Rabin–Shallit algorithm, while polynomial time, is considerably slower.