The Other Pi Day
It’s 3/14/15, do you know how much your Π costs?
Larry Shaw apparently created the concept of Pi Day in 1988. He was then a physicist who worked at the San Francisco Exploratorium. He and his colleagues initially celebrated by marching around in circles and then eating pies—that is, fruit pies. As Homer Simpson would say: Mmm.
Today Ken and I want to add to some of the fun of Pi Day, and come back to a different Pi that has occupied us.
The extra excitement is that this year the date is 3/14/15. This uses the American “month first” convention. The international “date first” convention would go 31.4.15 but April does not have 31 days. The “year first” convention will have to wait another 1,126 years to hit 3141-5-9, and even so they pad with 0’s which makes it impossible. So month-first it has to be. We can get more digits by raising a toast at exactly 9:26:53 in the morning or evening.
Recall that $\pi$ can be defined in many ways. One is that it is the reciprocal of the number

$$\frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!\,(1103+26390k)}{(k!)^4\,396^{4k}}.$$

This is due to Srinivasa Ramanujan.
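Ramanujan’s series converges astonishingly fast—roughly eight new correct digits per term. A short sanity-check script (the function name is mine) makes this concrete:

```python
from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms=3):
    """Approximate pi via Ramanujan's 1914 series for 1/pi."""
    getcontext().prec = 50
    s = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += num / den
    # 1/pi = (2*sqrt(2)/9801) * s, so pi is the reciprocal
    return 1 / (Decimal(2) * Decimal(2).sqrt() / 9801 * s)

print(ramanujan_pi(3))
```

Even the single $k=0$ term, $\frac{2\sqrt{2}}{9801}\cdot 1103$, already gives $1/\pi$ to six places.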
The classic definition is based on the fact that the circumference of a circle is a constant multiple of its radius, and half of that constant is $\pi$. This must be proved and is not trivial. It was known to Euclid—see here for a detailed discussion.
The Other Pi
The symbol $\Pi$ is like the symbol $\prod$.
One is a capital Greek letter and one is a product symbol: can you tell which is which? Perhaps you can, but they are pretty close, and their derivation is the same. So let’s talk about the product symbol during this wonderful Pi Day.
The product is quite basic to mathematics and theory too. We note that

$$\prod_{k=1}^{m} a_k$$

can be used to define various important functions. But the simplest such function is already an important open problem. That’s right: simply multiplying two integers, $x \cdot y$.
What is the complexity of integer product? There are multiple views on multiplication, going back to when it was first formulated as a problem.
The Original View of Multiplication
In 1960 the great Andrey Kolmogorov conjectured that integer multiplication required quadratic time. That is, the simple high school method was optimal. This conjecture was plausible, especially back then when the field of computer theory was just beginning. So he organized a seminar at Moscow State University to study his conjecture with the hope to prove it. Within a week one of the students, Anatoly Karatsuba, found a clever algorithm that ran in time order-of

$$n^{\log_2 3} \approx n^{1.585}$$

for $n$-bit number multiplication.
The idea is to let $m = \lceil n/2 \rceil$ and write $x = x_1 2^m + x_0$ and $y = y_1 2^m + y_0$. Then we have

$$xy = z_2 2^{2m} + z_1 2^m + z_0,$$

where $z_2 = x_1 y_1$ and $z_0 = x_0 y_0$ and $z_1 = x_1 y_0 + x_0 y_1$. The multiplications by $2^{2m}$ and $2^m$ are just bit-shifts to known locations independent of the values of $x$ and $y$, so they don’t affect the time much. It is neat that $z_2$ and $z_0$ need just one recursive multiplication of $m$-bit numbers each, but the two such multiplications for $z_1$ would remove all the advantage and still give $\Theta(n^2)$ time.
What Karatsuba noted instead was that

$$z_1 = (x_1 + x_0)(y_1 + y_0) - z_2 - z_0.$$
This needs just one more multiplication. End of conjecture. Boom: the conjectured lower bound was wrong. Kolmogorov explained the beautiful result at the next meeting, and then terminated the seminar.
Pretty cool to terminate a seminar—no?
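Karatsuba’s recursion is short enough to sketch in full. Here is a minimal Python version, with the cutoff to built-in multiplication chosen arbitrarily:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers via Karatsuba's three-multiplication recursion."""
    if x < 2**32 or y < 2**32:           # base case: numbers small enough to multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)  # x = x1 * 2^m + x0
    y1, y0 = y >> m, y & ((1 << m) - 1)  # y = y1 * 2^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    # Karatsuba's trick: one extra multiplication instead of two
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0
```

The shifts at the end are the “multiplications by $2^{2m}$ and $2^m$” that cost essentially nothing.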
SODA is the premier conference on algorithms, which has the wonderful properties of being held in January and usually at a warm locale.
The SODA view is to optimize the algorithm. The simplest meaning of this is to optimize the asymptotic running time. Andrei Toom, 5 years younger than Karatsuba, took up the task and in a 1963 paper gave a hierarchy of improved exponents approaching but not reaching 1. The next step, called “Toom-3,” breaks $x$ and $y$ into 3 equal pieces not 2, and by reducing 9 recursive multiplications to 5, achieves running time order-of

$$n^{\log_3 5} \approx n^{1.465}.$$
Steve Cook streamlined the description and the analysis in his doctoral thesis in 1966 and often gets joint credit.
If you break into $k$ pieces the running time is proportional to $c(k)\,n^{\log(2k-1)/\log k}$, whose exponent behaves roughly like $1 + \frac{1}{\log_2 k}$ and approaches $1$. For any fixed $k$ the term $c(k)$ is a constant, and one could be tempted to ignore it. However, as Wikipedia’s article avers,

[T]he function $c(k)$ unfortunately grows very rapidly.

One can try to improve the algorithm by making $k$ depend on $n$ in the recursion, but as the article notes in the next sentence, this is an open research problem.
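The hierarchy of exponents is easy to tabulate; a few lines of Python (the helper name is mine) show how slowly they creep toward 1:

```python
from math import log

def toom_exponent(k):
    """Exponent of the Toom-k running time: log(2k-1)/log(k).
    k=2 is Karatsuba, k=3 is Toom-3."""
    return log(2 * k - 1) / log(k)

for k in range(2, 8):
    print(k, round(toom_exponent(k), 4))
```

The exponent for $k=2$ is Karatsuba’s $1.585$, for $k=3$ it is $1.465$, and each further step buys less while $c(k)$ explodes.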
The $n^{1+\epsilon}$ barrier was broken by Arnold Schönhage and Volker Strassen in 1971. They achieved a running time of the form

$$n \log n \cdot \ell(n)$$

with $\ell(n) = O(\log\log n)$. The constant in the $O$ is high but not astronomical, and their algorithm has been programmed to give superior performance to Cook-Toom for numbers of over 30,000 bits.
The latest major part of the history is that Martin Fürer in 2007 improved the $\log\log n$ factor to $2^{O(\log^* n)}$. This looks like more than $\log\log n$ but becomes less as $n$ grows. Alas the constant hidden in the $O$ is so high that the algorithm is galactic. Work last year by David Harvey, Joris van der Hoeven, and Grégoire Lecerf improving the constant in the exponent has not changed that.
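To see why the crossover only happens for galactic inputs, one can compare the two factors numerically; a small sketch (function name mine):

```python
from math import log2

def log_star(n):
    """Iterated logarithm log*(n): how many times log2 must be applied
    before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = log2(n)
        count += 1
    return count

n = 2**64
print(log2(log2(n)))      # log log n = 6.0
print(2 ** log_star(n))   # 2^(log* n) = 32, still far bigger at this scale
```

Since $\log^* n$ grows only when $n$ is iterated-exponentially larger, $2^{\log^* n}$ stays ahead of $\log\log n$ until $n$ is far beyond anything physical.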
Fürer’s paper appeared in STOC not SODA, but I have a different picture in my mind when I think of STOC.
STOC is, of course, one of the top theory conferences; it is held in late spring or early summer, and where it is held follows a complex secret formula. I have no idea what that formula is, but according to Wikipedia:
STOC is traditionally held in a different location each year.
Well not exactly: what does “different” mean here? STOC has repeated sites, just never at the same site two years in a row. Ken attended STOC in Montreal twice in fairly close succession: 1994 and 2002.
Okay let’s look at multiplication from a STOC view, that is from a more foundational view. We have already discussed the relationship between the cost of multiplication on Turing Machines and the famous conjecture of Juris Hartmanis that for any algebraic irrational number $\alpha$, the first $n$ digits of $\alpha$ cannot be computed on a multitape Turing machine in $O(n)$ time—and could even require substantially more time.
Part of the significance of this question is that it impacts the separation between deterministic and nondeterministic linear time, given that integer multiplication is almost linear time. We know they are different, but only by a hair—less than a $\log^* n$ factor even. The idea is that if the two classes are really that close, then a common alternation method used to check integer multiplication can be simulated deterministically.
The method using alternation is to guess $z = x \cdot y$ and check it by selecting a prime $p$ of about $\log n$ bits. Then we must check $z \equiv x \cdot y \pmod{p}$. This is easy if one can compute an $n$-bit number mod $p$ fast. This can be done by a block structure trick, using alternation again to only have to check that one block is okay. We haven’t worked this all out, but clearly integer multiplication is involved in a fundamental way in complexity class relations. Another use of similar structure ideas is in this paper on checking integer multiplication by Dima Grigoriev and Gérald Tenenbaum.
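The modular fingerprint at the heart of this check is simple to demonstrate, leaving aside the alternation machinery. Here is a sketch under my own choices of prime bound and helper names: a wrong product $z \neq x \cdot y$ survives only if every sampled prime divides $z - xy$, which is unlikely when primes are drawn up to about $n^2$ for $n$-bit inputs.

```python
import random

def is_prime(p):
    """Trial division; fine for the small primes used as fingerprints."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def check_product(x, y, z, trials=10):
    """Probabilistically check z == x*y by comparing residues mod random primes."""
    n = max(x.bit_length(), y.bit_length(), 2)
    bound = max(n * n, 50)            # primes up to ~n^2 catch z != x*y w.h.p.
    for _ in range(trials):
        p = random.randrange(2, bound)
        while not is_prime(p):
            p = random.randrange(2, bound)
        if z % p != (x % p) * (y % p) % p:
            return False              # residues disagree: z is certainly wrong
    return True                       # all fingerprints matched
```

The point is that each trial needs only mod-$p$ arithmetic on small numbers, which is where the “compute an $n$-bit number mod $p$ fast” step does the real work.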
View From the Tropics?
Ken has found a different way this week to escape to a warm locale, by workshop rather than by conference. There is in fact a new branch of algebra and geometry named for the tropics, in which the binary minimum (or maximum) function plays the role of $+$, and $+$ replaces multiplication. Then the integer product problem goes away, perhaps.
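In the min-plus version of tropical arithmetic, “multiplication” really is just addition, and the tropical matrix product computes shortest paths. A small illustrative sketch (names mine):

```python
INF = float("inf")

def tropical_matmul(A, B):
    """Min-plus matrix product: entry (i,j) = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weighted adjacency matrix of a 3-node graph (INF = no edge, 0 on the diagonal).
# Tropically squaring it gives shortest-path lengths using at most two edges.
A = [[0,   5,   INF],
     [INF, 0,   2],
     [1,   INF, 0]]
A2 = tropical_matmul(A, A)
print(A2[0][2])  # 7: the path 0 -> 1 -> 2 costs 5 + 2
```

Note that no integer multiplication ever occurs—only additions and comparisons—which is exactly the sense in which the product problem “goes away.”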
Ken has a different idea, however. Perhaps we can avoid doing multiplication altogether. We don’t even need to use integers. Let $D$ stand for any “decoding” function from strings to integers, which need not be 1-to-1: an integer might have many string codes. The idea is to ask:
Are there linear-time computable binary functions $f$, $g$, and $h$ such that for some decoding function $D$—of any complexity or maybe not even computable—and all strings $x$ and $y$:

$$D(f(x,y)) = D(x) + D(y),\qquad D(g(x,y)) = D(x)\cdot D(y),\qquad D(h(x,y)) = \max(D(x), D(y))?$$
The last clause allows us to simulate a comparison function, so that the strings emulate a discretely ordered ring whose operations are all computable in linear time. Is there such a fish? We don’t care so much about the complexity of $D$, since it would only need to be applied once after a plethora of efficient $f$, $g$, and $h$ operations, and in view of $h$ it might not need to be computed at all.
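To make the question concrete, here is a deliberately silly toy (all names hypothetical and mine): decode a string as the sum of a list of numbers. Then “addition” on codes is mere concatenation, which is linear time—but the naive “multiplication” blows up the code length quadratically, which is exactly the clause that makes the question hard.

```python
def D(s):
    """Toy decoding: a code is a ';'-separated list of decimals, decoded to their sum.
    It is many-to-one: '7' and '3;4' both decode to 7, as the question allows."""
    return sum(int(t) for t in s.split(";"))

def f(x, y):
    """'Addition' on codes in linear time: concatenate the two lists."""
    return x + ";" + y

def g(x, y):
    """Naive 'multiplication' on codes via distributivity: all cross products.
    Correct, but the output length is quadratic, so this g is NOT linear time."""
    return ";".join(str(int(a) * int(b)) for a in x.split(";") for b in y.split(";"))

print(D(f("3;4", "10")))  # 17 = 7 + 10
print(D(g("3;4", "10")))  # 70 = 7 * 10
```

Whether some cleverer encoding admits a genuinely linear-time $g$—and an $h$ for comparisons—is the open question being posed.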
What is the complexity of $\prod$? The operation, not the number.
[fixed radius/circumference in intro]