# One Man’s Floor Is Another Man’s Ceiling

*Factoring again, always factoring…*

Peter Borwein is among those who carry forward the title mission of Alan Turing’s famous paper, “On Computable Numbers…” With David Bailey and Simon Plouffe, he found a new kind of fast formula for computing individual digits of $\pi$ without having to calculate all preceding ones, as we covered here and here. His brother Jonathan Borwein is likewise among the “fathers” of experimental mathematics as a discipline. Peter is currently the Founding Director of the IRMACS Centre at Simon Fraser University.

Today we discuss a recent survey paper by Peter and his co-author Joe Hobart on the power of division.

There are other talented pairs of brothers in mathematics and computer science theory. The Lenstras—Jan, Hendrik, Arjen, and Andries—account for six pairs all by themselves. I (Dick) have been a colleague of Michael Fischer, whose late brother Patrick is being honored with a symposium this coming November 5th at the University of Michigan. The Chudnovsky brothers, Gregory and David, also do experimental mathematics. Gregory Chudnovsky won a MacArthur “genius grant” in 1981, and this reminds us to congratulate graph theorist Maria Chudnovsky (alongside Daniel Spielman) on her just-announced MacArthur award, but she is not related. My colleague Dana Randall is the sister of physicist Lisa Randall, but I don’t know any comparable sister combinations within theory.

The survey paper by Borwein and Hobart is in the August/September issue of the American Mathematical Monthly, and is titled “The Extraordinary Power Of Division in Straight Line Programs.”

> The point of this note is to advertise [a] lovely circle of ideas… that if it is possible to divide quickly, then it is possible to factor quickly.

Releasing the power of division, however, requires teaming it with the humble floor function.

## The Floor Function

Ken and I both remember courses in high school and/or college in which the integer floor of a real number $x$ was written $[x]$. This notation goes back, of course, to Carl Friedrich Gauss. But it wasn’t named for Gauss, and perhaps that made it liable to replacement by someone with a better idea. Kenneth Iverson split Gauss’s brackets by introducing separate **floor** and **ceiling** functions:

$$\lfloor x \rfloor = \max\{m \in \mathbb{Z} : m \le x\}, \qquad \lceil x \rceil = \min\{m \in \mathbb{Z} : m \ge x\}.$$

He introduced those symbols into his programming language APL, and the worth of good notation proved out in their now-universal adoption. To quote the singer-songwriter Paul Simon:

> There’s been some hard feelings here
>
> About some words that were said
>
> Been some hard feelings here
>
> And what is more
>
> There’s been a bloody purple nose
>
> And some bloody purple clothes
>
> That were messing up the lobby floor
>
> It’s just apartment house rules
>
> So all you ‘partment house fools
>
> Remember: one man’s ceiling
>
> Is another man’s floor
>
> One man’s ceiling
>
> Is another man’s floor.
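
Simon’s refrain is literally true of Iverson’s functions: negation swaps floor and ceiling. A minimal Python sanity check (our own illustration, not from the paper):

```python
import math

# One man's ceiling is another man's floor: negating the argument
# (and the result) turns ceiling into floor, and vice versa.
for x in [2.5, -2.5, 3.0, -0.1]:
    assert math.ceil(x) == -math.floor(-x)
    assert math.floor(x) == -math.ceil(-x)

# Python's // is floor division, so a ceiling-divide on integers
# can be written using floors alone:
a, b = 7, 3
assert -((-a) // b) == math.ceil(a / b)  # both equal 3
```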

## A Smale Problem

In 1998 Stephen Smale published his list of great problems for the next century, this century. After leading off with the Riemann Hypothesis, Poincaré Conjecture, and P vs. NP, he stated:

> **Problem 4:** Integer zeros of a polynomial of one variable.

Given a polynomial $f$ with integer coefficients, define $\tau(f)$ to be the number of steps needed to create a formula for $f$ starting with $1$ and $x$, where each step is addition, subtraction, or multiplication of two previously-created formulas. A sequence of such steps is called a **straight-line program** for $f$. Let $Z(f)$ be the number of zeroes of $f$ that are integers (could be none, could be many). The problem is simple:

> Is there a fixed $c$ such that for all $f$, $Z(f) \le \tau(f)^c$?

If $f$ has no integer zeroes this says nothing, but if $f$ has many integer zeroes, then this says $f$ is hard to compute. An example of such an $f$ is the factorial polynomial,

$$f_n(x) = (x-1)(x-2)\cdots(x-n) = \prod_{k=1}^{n}(x-k).$$

In terms of the number of bits of $n$, this has exponentially many integer zeroes, so an affirmative answer to the problem implies an exponential lower bound on the size of straight-line programs for this $f_n$. This might not be surprising, but then Smale goes on to state a simpler, concrete problem.

> Is there a constant $c$ such that for all $n$-bit numbers $m$, $\tau(m!) \le n^c$?

A strong negative answer would be a family of $n$-bit numbers $m$ such that $\tau(m!) = 2^{\Omega(n)}$, giving an exponential lower bound in terms of the number of bits.

Here $\tau(m)$ for an integer $m$ means the number of steps required by a straight-line program that starts with just $1$, no variables. With Michael Shub, Smale showed that if there is no sequence of nonzero integers $a_k$ that “help” in the sense of $\tau(a_k \cdot k!) = (\log k)^{O(1)}$, then the problem analogous to $\mathsf{NP}$ in their model with Lenore Blum of computation over $\mathbb{C}$ is intractable, giving a $\mathsf{P} \neq \mathsf{NP}$-style consequence for that model.

## Information Overload and Extraction

In one sense the algebraic models are weak, and in another they are super powerful. They do not allow you to treat numbers as strings of bits, so as to encode arbitrary Boolean functions in them. They do not give you arbitrary coefficients on the polynomials $f$, but only ones you can build up from $1$ and $x$ by $+$, $-$, $\times$, which is why even computing integers like $n!$ is a challenge.

However, they allow storing values with unlimited precision, and repeated squaring can create some huge values. If we square an $n$-digit number $x$, and do it $n$ times, then we have the number $x^{2^n}$, which has about $n \cdot 2^n$ digits. There is no way a “real” polynomial-time program can enumerate all those digits, although a straight-line program is allowed to have them. The question is, how may we access them?
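
To see the blow-up concretely, here is a small Python sketch (our own illustration): squaring a one-digit number just ten times already yields a number with hundreds of digits.

```python
x, n = 5, 10        # a 1-digit number, squared n times
y = x
for _ in range(n):
    y = y * y       # each squaring doubles the exponent
assert y == x ** (2 ** n)   # y = x^(2^n)
print(len(str(y)))          # prints 716: the digit count after 10 squarings
```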

We have often on this blog considered exponential-size structures that are *succinct*, meaning there is a polynomial-size object that allows extracting any specified entry of the structure. For instance, the structure can be a long string or number and the object can be a small circuit $C$ such that for all $i$, $C(i)$ returns the $i$th bit. We could modify this idea to create a circuit returning the bits between places $i$ and $j$, provided $j - i$ is polynomially bounded. (We index the least significant bit by $0$.)

We do not know whether short straight-line programs preserve succinctness. Besides the connection to factoring, this could even be related to issues about the complexity of integer multiplication that we raised here. But division and the floor function do the same job nicely. Namely, to get the bits of a big number $N$ in the $2^i$ place through the $2^{j-1}$ place, which we denote by $N[i..j)$, compute:

$$N[i..j) = \left\lfloor \frac{N}{2^i} \right\rfloor \bmod 2^{j-i}.$$

The mod trick here is general: to compute $a \bmod b$, do $a - b\lfloor a/b \rfloor$.
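
In Python, where integer division plays the floor role, both tricks are one-liners (a sketch with names of our own choosing):

```python
def bits(N, i, j):
    """Bits of N in places i (inclusive) through j (exclusive), LSB at place 0."""
    return (N // 2**i) % 2**(j - i)

def mod_via_floor(a, b):
    # the general trick: a mod b = a - b * floor(a/b)
    return a - b * (a // b)

N = 0b110101100011
assert bits(N, 4, 8) == 0b0110          # the middle four bits
assert mod_via_floor(100, 7) == 100 % 7
```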

## Factoring Via Factorials

The next trick noted by Borwein and Hobart is to compute a binomial coefficient by powering and bit-extraction.

**Theorem 1** In standard binary notation, with $B = 2^{n+1}$,

$$\binom{n}{k} = \left\lfloor \frac{(B+1)^n}{B^k} \right\rfloor \bmod B.$$

What happens is that the products of $(B+1)$ align the $1$’s in columns so that their sums give each binomial coefficient, and since each $\binom{n}{k} < 2^{n+1} = B$, the blocks of $n+1$ bits do not overlap; the offset of $B^k$ from the right locates the sum for $\binom{n}{k}$. Even if $n$ is a big number like $2^\ell$, this can be done in $O(\ell)$ steps by repeated squaring.

In fact, raising to the $n$-th power when $n$ is not a power of $2$ involves the problem of finding an efficient straight-line program to compute $n$ itself. But in any event, when $n$ is exponential in the complexity parameter $\ell$, the process is no worse than polynomial in $\ell$, even though the theorem creates a number with about $n^2$ bits. The range of bits extracted, usually with some leading $0$s, still has exponential size, however.

If we can manipulate those bits, then we can factor. The trick is the following:

**Theorem 2** If $n$ is even, $n = 2m$, then $n! = \binom{2m}{m}\,(m!)^2$.

By recursing from $n$ to $m = n/2$, or when $n$ is odd, doing $n! = n \cdot (n-1)!$ and recursing from $n$ to $n-1$, we can write a standard straight-line program that computes $n!$ in time $O(b(n) \log n)$, where $b(n)$ is the time to compute the needed binomial coefficient. If $b(n) = O(\log n)$, as above when division and floor are allowed to extract bits, then $n!$ has an $O(\log^2 n)$-sized straight-line program.
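
The recursion of Theorem 2 can be sketched in Python, using `math.comb` as a stand-in for the binomial step that the straight-line program would get from Theorem 1:

```python
import math

def fact(n):
    # n! via halving: n! = C(2m, m) * (m!)^2 for even n = 2m,
    # and n! = n * (n-1)! to reduce odd n to even.
    if n <= 1:
        return 1
    if n % 2 == 1:
        return n * fact(n - 1)
    m = n // 2
    return math.comb(n, m) * fact(m) ** 2

assert fact(10) == math.factorial(10)   # only O(log n) levels of recursion
```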

Finally, to factor an $n$-bit Blum integer of the form $N = pq$ with primes $p < q$, we can use Newton’s method to compute an integer $s$ close enough to $\sqrt{N}$ to assure $p \le s < q$, so that $\gcd(N, s!) = p$. In the absence of a direct comparison operation for large numbers, Euclid’s algorithm can still be implemented using division and floor, as shown in the paper. The result is a $\mathrm{poly}(n)$-sized straight-line program that does factoring.
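
For intuition, here is a brute-force Python sketch of the gcd trick. It loops $s$ times, so it is emphatically not the polynomial-size straight-line program; it only illustrates why $\gcd(N, s!) = p$.

```python
from math import gcd, isqrt

def factor_semiprime(N):
    # N = p*q with primes p < q. Take s = floor(sqrt(N)), so p <= s < q.
    # Then p divides s! but q does not, hence gcd(N, s! mod N) = p.
    s = isqrt(N)
    f = 1
    for k in range(2, s + 1):
        f = (f * k) % N      # keep things reduced mod N
    p = gcd(N, f)
    return p, N // p

assert factor_semiprime(77) == (7, 11)        # 77 = 7 * 11
assert factor_semiprime(10403) == (101, 103)  # 10403 = 101 * 103
```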

## Wiz or Swiz?

I am stopping in England on my way to Doha, and Ken gave me some prep about the natives. The British have a word “swiz,” which means something that doesn’t turn out as advertised. Division and floor are easy functions, so saying that allowing them in programs means we can factor *means we can factor*, no? The “swindle” is that intermediate values like $(B+1)^n$ and $s!$ are exponential-sized numbers, which the straight-line programs are not being fairly charged for.

Still, we may not need to compute **all parts** of these numbers. Right off the bat, we only need to compute $s! \bmod N$, and then $\gcd(N, s! \bmod N)$ still yields $p$. While computing $s!$, we can try to keep things reduced mod $N$. We have blogged recently about cases where a “lazy” approach can be more efficient. Perhaps a breakthrough in the “easy” problem of integer multiplication can find new regularities in powering that help. If the straight-line model used by Shub and Smale can simulate integer division, even just for restricted cases, all these factors come into play.

## Open Problems

Borwein and Hobart themselves ask,

> Given a straight-line program for (…), is it possible to find a nontrivial factorization of the number it computes, just given the information in the program?

The paper gives other open problems, of which the last is whether the known polynomial-time algorithms for factoring polynomials can be implemented by comparably efficient straight-line programs.

[clarified that paper is a survey]

I haven’t been able to access the paper you are discussing. Do you know of any freely available version of it? Thank you in advance!

Correct me if I am wrong, but it seems to me that these are exactly the same ideas as in Adi Shamir’s work on factoring integers in O(log n) arithmetic steps, available here: http://www.sciencedirect.com/science/article/pii/0020019079900875

I haven’t read the paper, and don’t have access to a copy, but I think Adi Shamir proved a very related result in

A. Shamir, “Factoring Numbers in O(log n) Arithmetic Steps,” Inf. Process. Lett., 1979, pp. 28–31.

Borwein and Hobart are doing a survey of the area. The result showing that factoring is possible in this model is, as Borwein and Hobart note in the paper, due to Shamir (as Boaz suspects).

Regarding the above comments, thanks. I’ve edited the intro to clarify that it is a survey paper. Alas we ourselves were unable to view it online; Dick used his hardcopy. I converted the details from base 10 to base 2 and added more attention to the size of ranges being indexed.

What if the complexity of every problem was eventually explained by quantum mechanics? This would give rise to a nice theory – low probability being converted into high complexity – and also more substance to the idea that computer science lies at the intersection of math and physics. It sounds good to me, however weird it may seem.

Alternatively: given an $n$-bit floating-point number with an $m$-bit fractional portion, what is the shortest straight-line program that could compute the ceiling or floor, and what is its length? What if one knows the leading fractional bit is $0$?

Are floor and ceiling valid straight-line operations in general?