It’s 3/14/15, do you know how much your Π costs?

Larry Shaw apparently created the concept of Pi Day in 1988. He was then a physicist at the San Francisco Exploratorium. He and his colleagues initially celebrated by marching around in circles and then eating pies, that is, fruit pies. As Homer Simpson would say: Mmm.

Today Ken and I want to add some fun of our own to Pi Day, and come back to a different Pi that has occupied us.

See here and here for many of the more exotic celebrations that revolve around Pi Day. Of course Pi Day, or ${\pi}$ day, is based on the famous number $\displaystyle {\bf 3.1415}9265358979323846264338327950288419716939937510\dots$

The extra excitement is that this year the date is 3/14/15. This uses the American “month first” convention. The international “day first” convention would give 31.4.15, but April does not have 31 days. The “year first” convention will have to wait another 1,126 years to hit 3141-5-9, and even then that convention pads with 0’s (3141-05-09), which spoils the digits. So month-first it has to be. We can get more digits by raising a toast at exactly 9:26:53 in the morning or evening.

Recall ${\pi}$ can be defined in many ways. One is that it is the reciprocal of the number $\displaystyle \frac{2 \sqrt 2}{9801} \sum_{k=0}^\infty \frac{(4k)!(1103+26390k)}{k!^4(396^{4k})}.$
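One can sanity-check this series with a few lines of Python using the standard `decimal` module; the precision setting and term count below are our own choices, and the series famously adds roughly eight correct digits per term:

```python
from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms: int, digits: int = 50) -> Decimal:
    """Approximate pi by truncating Ramanujan's series for 1/pi."""
    getcontext().prec = digits + 10  # extra guard digits
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    inv_pi = (Decimal(2) * Decimal(2).sqrt() / 9801) * total
    return 1 / inv_pi

print(ramanujan_pi(3))  # already correct to over 20 digits
```

Even the single term ${k=0}$ gives ${\pi}$ correct to six decimal places.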

This is due to Srinivasa Ramanujan.

The classic definition is based on the fact that the ratio of a circle’s circumference to its radius is a constant, and half of that constant is ${\pi}$. That the ratio is constant must be proved and is not trivial. It was known to Euclid; see here for a detailed discussion.

## The Other Pi

The symbol Π is like $\displaystyle \prod$

One is a capital ${\pi}$ and one is a product symbol: can you tell which is which? Perhaps you can, but they are pretty close, and their derivation is the same. So let’s talk about the product symbol on this wonderful Pi Day.

The product is quite basic to mathematics and theory too. We note that $\displaystyle \prod_{k=1}^{n} f(x,k)$

can be used to define various important functions. But the simplest such function is already an important open problem. That’s right: simply multiplying two integers, ${x\cdot y}$.

What is the complexity of integer product? There are multiple views on multiplication, going back to when it was first formulated as a problem.

## The Original View of Multiplication

In 1960 the great Andrey Kolmogorov conjectured that integer multiplication required quadratic time. That is, the simple high school method was optimal. This conjecture was plausible, especially back then when the field of computer theory was just beginning. So he organized a seminar at Moscow State University to study his conjecture with the hope of proving it. Within a week one of the students, Anatoly Karatsuba, found a clever algorithm that ran in time order-of $\displaystyle n^{\log_{2}3}\approx n^{1.585},$

for ${n}$-bit number multiplication.

The idea is to let ${k = \lceil n/2\rceil}$ and write ${x = x_1 2^k + x_0}$ and ${y = y_1 2^k + y_0}$. Then we have $\displaystyle xy = 2^{2k} A + 2^k B + C$

where ${A = x_1 y_1}$ and ${C = x_0 y_0}$ and ${B = x_1 y_0 + x_0 y_1}$. The multiplications by ${2^k}$ and ${2^{2k}}$ are just bit-shifts to known locations independent of the values of ${x}$ and ${y}$, so they don’t affect the time much. It is neat that ${A}$ and ${C}$ need just one recursive multiplication of ${k}$-bit numbers each, but the two such multiplications for ${B}$ would remove all the advantage and still give ${\Theta(n^2)}$ time.

What Karatsuba noted instead was that $\displaystyle B = (x_0 + x_1)\cdot (y_0 + y_1) - A - C.$

This needs just one more multiplication. End of conjecture. Boom: the conjectured lower bound was wrong. Kolmogorov explained the beautiful result at the next meeting, and then terminated the seminar.
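The whole scheme fits in a dozen lines. Here is a sketch in Python; the base-case cutoff, where we fall back on built-in multiplication, is an arbitrary choice of ours, whereas real implementations tune it empirically:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers via Karatsuba's three recursive products."""
    if x < (1 << 64) or y < (1 << 64):   # arbitrary cutoff: multiply small cases directly
        return x * y
    k = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> k, x & ((1 << k) - 1)  # x = x1*2^k + x0
    y1, y0 = y >> k, y & ((1 << k) - 1)  # y = y1*2^k + y0
    A = karatsuba(x1, y1)
    C = karatsuba(x0, y0)
    B = karatsuba(x0 + x1, y0 + y1) - A - C   # the one extra multiplication
    return (A << (2 * k)) + (B << k) + C      # shifts, not multiplications
```

Three recursive calls on numbers of roughly half the length give the recurrence ${T(n) = 3T(n/2) + O(n)}$, whence the exponent ${\log_2 3}$.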

Pretty cool to terminate a seminar—no?

## SODA View

SODA is the premier conference on algorithms; it has the wonderful properties of being held in January and usually in a warm locale.

The SODA view is to optimize the algorithm. The simplest meaning of this is to optimize the asymptotic running time. Andrei Toom, five years younger than Karatsuba, took up the task and in a 1963 paper gave a hierarchy of improved exponents approaching, but never reaching, 1. The next step, called “Toom-3,” breaks each number into 3 equal pieces rather than 2, and by reducing 9 recursive multiplications to 5 achieves running time order-of $\displaystyle n^{\log_{3}5}\approx n^{1.465}.$

Steve Cook streamlined the description and the analysis in his doctoral thesis in 1966 and often gets joint credit.

If you break into ${r}$ pieces the running time is proportional to ${c(r)\, n^{\log_r(2r - 1)}}$, whose exponent equals ${1 + \log_r(2 - \frac{1}{r})}$ and so approaches ${1}$ as ${r}$ grows. For any fixed ${r}$ the ${c(r)}$ term is a constant, and one could be tempted to ignore it. However, as Wikipedia’s article avers,

[T]he function ${c}$ unfortunately grows very rapidly.

One can try to improve the algorithm by making ${r}$ depend on ${n}$ in the recursion, but as the article notes in the next sentence, this is an open research problem.
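For concreteness, the first few exponents in the hierarchy can be tabulated directly; this snippet only illustrates the formula ${\log_r(2r-1)}$, not the algorithm itself:

```python
from math import log

# Toom-r splits each factor into r pieces and replaces the naive r*r = r^2
# recursive products with 2r - 1 of them, giving exponent log_r(2r - 1).
for r in range(2, 8):
    exponent = log(2 * r - 1, r)
    print(f"Toom-{r}: n^{exponent:.4f}")
```

Toom-2 is exactly Karatsuba’s ${n^{1.585}}$; the exponents creep toward ${1}$ while the hidden constant ${c(r)}$ explodes.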

The ${n^{1+\epsilon}}$ barrier was broken by Arnold Schönhage and Volker Strassen in 1971. This achieved a running time of the form $\displaystyle O(n \log n \cdot d(n))$

with ${d(n) = \log\log n}$. The constant in the ${O}$ is high but not astronomical, and their algorithm has been programmed to give superior performance to Cook-Toom for ${n}$ over 30,000 bits.

The latest major part of the history is that Martin Fürer in 2007 improved ${d(n)}$ to ${2^{O(\log^* n)}}$. This looks like more than ${\log\log n}$ but becomes less as ${n}$ grows. Alas the crossover ${n}$ is so high that the algorithm is galactic. Work last year by David Harvey, Joris van der Hoeven, and Grégoire Lecerf improving the constant in the exponent has not changed that.

Fürer’s paper appeared in STOC not SODA, but I have a different picture in my mind when I think of STOC.

## STOC View

STOC is, of course, one of the top theory conferences; it is held in late spring or early summer, and where it is held follows a complex secret formula. I have no idea what that formula is, but according to Wikipedia:

STOC is traditionally held in a different location each year.

Well, not exactly: what does “different” mean here? STOC has repeated sites, just never the same site two years in a row. Ken attended STOC in Montreal twice in fairly close succession: in 1994 and 2002.

Okay let’s look at multiplication from a STOC view, that is from a more foundational view. We have already discussed the relationship between the cost of multiplication on Turing Machines and the famous conjecture of Juris Hartmanis that for any algebraic irrational number ${a}$, the first ${n}$ digits of ${a}$ cannot be computed on a multitape Turing machine in ${O(n)}$ time—and could even require ${\Omega(n\log n)}$ time.

Part of the significance of this question is that it impacts the separation between ${\mathsf{NTIME}(n)}$ and ${\mathsf{DTIME}(n)}$, given that integer multiplication is almost linear time. We know these classes are different, but only by a hair: by less than a factor of ${e(n) = \log^* n}$ even. The idea is that if ${\mathsf{NTIME}(n)}$ and ${\mathsf{DTIME}(n\, e(n))}$ are really that close, then a common alternation method used to check integer multiplication can be simulated deterministically.

The method using alternation is to guess ${z = xy}$ and check it by selecting a prime ${p}$ of about ${\log n}$ bits. Then we must check ${xy \equiv z \bmod p}$. This is easy if one can compute an ${n}$-bit number mod ${p}$ fast. This can be done by a block-structure trick, using alternation again so that only one block needs to be checked. We haven’t worked this all out, but clearly integer multiplication is involved in a fundamental way in relations between complexity classes. Another use of similar structural ideas is in this paper on checking integer multiplication by Dima Grigoriev and Gérald Tenenbaum.
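The heart of that check, verifying a claimed product modulo a random small prime, is easy to sketch sequentially; the prime-size bound and trial count below are our own illustrative choices:

```python
import random

def trial_prime(bound: int) -> int:
    """Return a random prime below bound, by rejection with trial division."""
    while True:
        p = random.randrange(2, bound)
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):
            return p

def check_product(x: int, y: int, z: int, trials: int = 25) -> bool:
    """Fingerprint test of the claim z == x*y.

    If z != x*y, the nonzero difference z - x*y has few prime divisors,
    so a random small prime exposes the lie with good probability per trial.
    """
    n = x.bit_length() + y.bit_length() + 2
    return all((x % p) * (y % p) % p == z % p
               for p in (trial_prime(n * n) for _ in range(trials)))
```

In the alternation-based argument the reduction of an ${n}$-bit number mod ${p}$ is itself delegated to further guesses; here we just compute it directly.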

## View From the Tropics?

Ken has found a different way this week to escape to a warm locale, by workshop rather than by conference. There is in fact a new branch of algebra and geometry named for the tropics, in which the binary minimum (or maximum) function plays the role of ${+}$ and ${+}$ replaces multiplication. Then the integer product problem goes away, perhaps.

Ken has a different idea, however. Perhaps we can avoid doing multiplication altogether. We don’t even need to use integers. Let ${d: \Sigma^* \rightarrow \mathbb{N}}$ stand for any “decoding” function from strings to integers, which need not be 1-to-1: an integer might have many string codes. The idea is to ask:

Are there linear-time computable binary functions ${a(x,y)}$, ${m(x,y)}$, and ${g(x,y)}$ such that for some decoding function ${d}$, of any complexity or maybe not even computable, and all strings ${x,y}$: $\displaystyle \begin{array}{rcl} d(a(x,y)) &=& d(x) + d(y)\\ d(m(x,y)) &=& d(x)*d(y)\\ g(x,y) = 1 &\Leftrightarrow& d(x) > d(y)? \end{array}$

The last clause allows us to simulate a comparison function, so that the strings emulate a discretely ordered ring whose operations are all in linear time. Is there such a fish? We don’t care so much about the complexity of ${d}$ since it would only need to be applied once after a plethora of efficient ${a}$ and ${m}$ operations, and in view of ${g}$ might not need to be computed at all.

## Open Problems

What is the complexity of ${\Pi}$? The operation, not the number.

1. March 14, 2015 11:41 am

Actually, the circumference of a circle is π times its DIAMETER.

• March 14, 2015 5:07 pm

Oops, thanks; fixed. One of us is a 6.28-pi-tarian anyway.

• March 14, 2015 10:05 pm

There are some who call me… Tau.

2. March 14, 2015 12:52 pm

I said that 5 hours ago; why does Google give you credit for it?

3. March 14, 2015 9:06 pm

Donald Knuth is known to use the number Pi as a version numbering system for his TeX typesetting program. Besides, a paragraph of his book “The Art of Computer Programming” happens to be entitled “How Fast Can We Multiply”… Just a coincidence?

4. March 15, 2015 1:05 am

“Tropical Semirings” and their algebraic generalizations were studied by the Hungarian-born Brazilian computer scientist Imre Simon. French algebraists/formal language folks then decided on the “tropical” name (even though Imre lived and worked in Sao Paulo, which is on the Tropic of Capricorn…)

The first major result in the area, proved by Imre, may appeal to computer scientists. The question is: given a regular set R, are there a number k and a regular set S such that
R* = (R + \lambda)^k?
[lambda denotes the empty string, + denotes union]
In other words, R* is the infinite union lambda + R + R^2 + … + R^n + …; can we decide whether it can be described as a finite union?

The surprisingly difficult affirmative answer was proven by Imre in FOCS 1978 in two different ways: algebraic and combinatorial. The algebraic proof is basically of the form “when is an object finitely generated?” in the context of these weird semirings. Once a difficult question is solved, it is the nature of math for people to explore the subarea, so today we have not only tropical semirings, but tropical geometry, etc.

• March 15, 2015 1:06 am

Sorry for the typo:

I meant

R* = (S + \lambda)^k
