
One Man’s Floor Is Another Man’s Ceiling

October 16, 2012


Factoring again, always factoring…

Peter Borwein is among those who carry forward the title mission of Alan Turing’s famous paper, “On Computable Numbers…” With David Bailey and Simon Plouffe, he found a new kind of fast formula for computing individual digits of {\pi} without having to calculate all preceding ones, as we covered here and here. His brother Jonathan Borwein is likewise among the “fathers” of experimental mathematics as a discipline. Peter is currently the Founding Director of the IRMACS Centre at Simon Fraser University.

Today we discuss a recent survey paper with his co-author Joe Hobart on the power of division.

There are other talented pairs of brothers in mathematics and computer science theory. The Lenstras—Jan, Hendrik, Arjen, and Andries—account for six pairs all by themselves. I (Dick) have been a colleague of Michael Fischer, whose late brother Patrick is being honored with a symposium this coming November 5th at the University of Michigan. The Chudnovsky brothers, Gregory and David, also do experimental mathematics; Gregory Chudnovsky won a MacArthur “genius grant” in 1981. This reminds us to congratulate graph theorist Maria Chudnovsky (alongside Daniel Spielman) on her just-announced MacArthur award, though she is not related to them. My colleague Dana Randall is the sister of physicist Lisa Randall, but I don’t know of any comparable sister pairs within theory.

The survey paper by Borwein and Hobart is in the August/September issue of the American Mathematical Monthly, and is titled “The Extraordinary Power Of Division in Straight Line Programs.”

The point of this note is to advertise [a] lovely circle of ideas… that if it is possible to divide quickly, then it is possible to factor quickly.

Releasing the power of division, however, requires teaming it with the humble floor function.

The Floor Function

Ken and I both remember courses in high school and/or college in which the integer floor of a real number {x} was written {[x]}. This notation goes back, of course, to Carl Friedrich Gauss. But it wasn’t named for Gauss, and perhaps that made it liable to replacement by someone with a better idea. Kenneth Iverson split Gauss’ brackets by introducing separate floor and ceiling functions:

\displaystyle \lfloor\pi\rfloor = 3;\quad \lceil\pi\rceil = 4.

He introduced those symbols into his programming language APL, and the worth of good notation proved out in their now-universal adoption. To quote the singer-songwriter Paul Simon:


There’s been some hard feelings here
About some words that were said
Been some hard feelings here
And what is more
There’s been a bloody purple nose
And some bloody purple clothes
That were messing up the lobby floor
It’s just apartment house rules
So all you ‘partment house fools
Remember: one man’s ceiling
Is another man’s floor
One man’s ceiling
Is another man’s floor.

A Smale Problem

In 1998 Steven Smale published his list of great problems for the next century—this century. After leading off with the Riemann Hypothesis, Poincaré Conjecture, and P vs. NP, he stated:

Problem 4: Integer zeros of a polynomial of one variable.

Given a polynomial {f(x)} with integer coefficients, define {\tau(f)} to be the number of steps needed to create a formula for {f} starting with {1} and {x}, where each step is addition, subtraction, or multiplication of two previously-created formulas. A sequence of such steps is called a straight-line program for {f}. Let {Z(f)} be the number of zeroes of {f} that are integers—could be none, could be many. The problem is simple:

Is there a fixed {\epsilon > 0} such that for all {f}, {\tau(f) = \Omega(Z(f)^{\epsilon})}?

If {f} has no integer zeroes this says nothing, but if {f} has many integer zeroes, then this says {f} is hard to compute. An example of such an {f} is the factorial polynomial,

\displaystyle f(x) = (x-1)(x-2)(x-3)\cdots(x-N).

In terms of the number {n} of bits of {N}, this has exponentially many integer zeroes, so an affirmative answer to the problem implies an exponential lower bound on the size of straight-line programs for this {f}. This might not be surprising, but then Smale goes on to state a simpler, concrete problem.

Is there a constant {c > 0} such that for all {n}-bit numbers {N}, {\tau(N!) \leq n^c}?

A strong negative answer would be {\epsilon > 0} such that {\tau(N!) > N^{\epsilon}}, giving an exponential lower bound in terms of the number of bits.

Here {\tau(k)} for an integer {k} means the number of steps required by a straight-line program that starts with just {1} and no variables. With Michael Shub, Smale showed that if there is no sequence {i_N} of integers that “helps” in the sense of {\tau(i_N N!) \leq n^c}, then the analogue of {\mathsf{SAT}} in the Blum–Shub–Smale model of computation over {\mathbb{C}} (developed with Lenore Blum) is intractable, giving a {\mathsf{P \neq NP}}-style consequence for that model.
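
To make the straight-line model concrete, here is a minimal Python sketch (our own illustration, not from the paper) of a program that builds an integer {N} from the constant {1} using only {+,-,*}, by reading off the binary expansion of {N}. It witnesses the easy upper bound {\tau(N) = O(\log N)}; the hard question above is whether a target like {N!} admits anything comparably short.

def slp_for_integer(N):
    # Build N from the constant 1 using only +, -, *: Horner's rule
    # on N's binary expansion.  The returned list of steps shows
    # tau(N) <= 2 log2(N), far below the N - 1 naive additions.
    steps = []
    val = 1                         # the single allowed starting constant
    for bit in bin(N)[3:]:          # skip '0b' and the leading 1-bit
        steps.append(("+", "val", "val"))    # double the running value
        val = val + val
        if bit == "1":
            steps.append(("+", "val", "one"))   # then add the constant 1
            val = val + 1
    assert val == N
    return steps

print(len(slp_for_integer(1000)))   # 14 steps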

Information Overload and Extraction

In one sense the algebraic models are weak, and in another they are super powerful. They do not allow you to treat numbers as strings of bits, so as to encode arbitrary Boolean functions in them. They do not give you arbitrary coefficients on the polynomials {f(x)}, but only ones you can build up from {1} by {+,-,*}, which is why even computing integers is a challenge.

However, they allow storing values with unlimited precision, and repeated squaring can create some huge values. If we square a {k}-digit number {r}, and do it {n} times, then we have the number {R = r^{2^n}}, which has about {k \cdot 2^n} digits. There is no way a “real” {\mathsf{poly}(n)}-time program can enumerate all those digits, although a straight-line program is allowed to have them. The question is, how may we access them?
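
A quick sanity check on the sizes involved, in plain Python (which happily pays the exponential cost that the straight-line model does not charge for):

# n squarings of a k-digit number r give roughly k * 2**n digits.
r, n = 123, 10                 # k = 3 digits, squared n = 10 times
R = r
for _ in range(n):
    R = R * R                  # one straight-line step per squaring
print(len(str(R)))             # 2141 digits, near the bound 3 * 2**10 = 3072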

We have often on this blog considered exponential-size structures that are succinct, meaning there is a polynomial-size object that allows extracting any specified entry {j} of the structure. For instance, the structure can be a long string or number and the object can be a small circuit {C} such that for all {j}, {C(j)} returns the {j}th bit. We could modify this idea to create a circuit {C(i,j)} returning the bits between {i} and {j}, provided {j - i = \mathsf{poly}(n)}. (We index the least significant bit by {0}.)
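
As a toy example of succinctness (ours, not from the paper), take the Thue–Morse string of length {2^n}: exponentially long, yet any polynomial-size window of its bits is computable by a tiny program.

def C(j):
    # The small "circuit": returns bit j of an exponentially long string.
    # Here bit j is the parity of the binary weight of j (Thue-Morse).
    return bin(j).count("1") & 1

def C_range(i, j):
    # Bits i through j, fine as long as j - i is poly(n).
    return [C(t) for t in range(i, j + 1)]

print(C_range(0, 7))           # [0, 1, 1, 0, 1, 0, 0, 1]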

We do not know whether short straight-line programs preserve succinctness. Besides the connection to factoring, this could even be related to issues about the complexity of integer multiplication that we raised here. But division and the floor function do the same job nicely. Namely, to get the bits of a big number {R} in the {2^i} place through the {2^j} place, which we denote by {r = R[i...j]}, compute:

\displaystyle  \begin{array}{rcl}  \ell &=& \lfloor R/2^i\rfloor\\ r &=& \ell - \lfloor\ell/2^{j-i+1}\rfloor*2^{j-i+1}. \end{array}

The mod trick here is general: to compute {R} mod {N} do {R - \lfloor{R/N}\rfloor*N}.
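
The same two lines in Python, a minimal sketch using only floor division (the helper name extract_bits is ours):

def extract_bits(R, i, j):
    # Bits of R in the 2**i place through the 2**j place (i <= j),
    # using only division and floor, no bitwise operators.
    ell = R // 2**i                  # floor(R / 2^i) shifts right by i
    w = 2**(j - i + 1)               # width of the window
    return ell - (ell // w) * w      # the general mod trick: ell mod w

print(bin(extract_bits(0b10110110, 2, 5)))   # bits 2..5 -> 0b1101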

Factoring Via Factorials

The next trick noted by Borwein and Hobart is to compute a binomial coefficient by powering and bit-extraction.

Theorem 1 In standard binary notation,

\displaystyle \binom{m}{k} = R[mk...mk+m-1]\quad\text{where}\quad R = (1 + 2^m)^m.

What happens is that the products of {10000...0001} (the binary form of {1 + 2^m}) align the {1}‘s in columns so that their sums give the binomial coefficients, just as {101^2 = 10201} displays the row {1,2,1} in base ten. Since each {\binom{m}{k} < 2^m}, the blocks of {m} bits never interfere, and the offset of {mk} from the right locates the block for {\binom{m}{k}}. Even if {m} is a big number like {2^n}, this can be done in {\mathsf{poly}(n)} steps by repeated squaring.

In fact, raising to the {m}-th power when {m} is not a power of {2} involves finding a short addition chain for the exponent, essentially the problem of computing {m} itself by an efficient straight-line program. In any event, when {m} is exponential in the complexity parameter {n}, the process is no worse than polynomial in {n}, even though the theorem creates a number with about {m^2} digits. The block of {m} bits extracted—usually with some leading {0}s—still has exponential length, however.
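
Theorem 1 in executable form, a minimal sketch that lets Python’s big integers stand in for the straight-line steps (the helper name binom_by_powering is ours):

from math import comb

def binom_by_powering(m, k):
    # C(m,k) read off from R = (1 + 2^m)^m: with the least significant
    # bit indexed 0, it occupies bits m*k .. m*k + m - 1 of R.
    R = (1 + 2**m)**m               # poly(log m) steps by repeated squaring
    ell = R // 2**(m*k)             # floor-divide: shift the block down
    return ell - (ell // 2**m) * 2**m   # mod trick keeps the low m bits

m = 20
assert all(binom_by_powering(m, k) == comb(m, k) for k in range(m + 1))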

If we can manipulate those {m} bits, however, then we can factor. The trick is the following:

Theorem 2 If {m} is even, {m = 2k}, then {m! = \binom{m}{k}(k!)^2}.

By recursing from {m} to {m/2}, or when {m} is odd, doing {m! = m(m-1)!} and recursing from {m-1} to {(m-1)/2}, we can write a standard straight-line program that computes {m!} in time {O(nt)}, where {n = O(\log m)} and {t} is the time to compute {\binom{m}{m/2}}. If {t = \mathsf{poly}(n)}, as above when division and floor are allowed to extract bits, then {m!} has a {\mathsf{poly}(n)}-sized straight-line program.
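
The recursion in code, again our own sketch; in the division-and-floor model the call to comb would be replaced by the bit extraction of Theorem 1.

from math import comb, factorial

def factorial_by_binomials(m):
    # m! via Theorem 2: (2k)! = C(2k,k) * (k!)^2, recursing m -> m//2
    # and peeling off one factor when m is odd; only O(log m) levels.
    if m <= 1:
        return 1
    if m % 2 == 1:
        return m * factorial_by_binomials(m - 1)
    k = m // 2
    half = factorial_by_binomials(k)
    return comb(m, k) * half * half

assert factorial_by_binomials(30) == factorial(30)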

Finally, to factor an {n}-bit Blum integer of the form {N = pq} with primes {p < q}, we can use Newton’s method to compute {m} close enough to {\sqrt{N}} to ensure {p \leq m < q}, so that {\text{\it gcd}(m!,N) = p}: the prime {p} divides {m!} but {q} does not. In the absence of a direct comparison operation for {2^n}-bit numbers like {m!}, Euclid’s algorithm can still be implemented using division and floor, as shown in the paper. The result is a {\mathsf{poly}(n)}-sized straight-line program that does factoring.
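
A toy demonstration of the endgame, with numbers small enough that {m!} is actually computable; the point of the post is that straight-line programs with division get the same effect without the blowup.

from math import gcd, isqrt, factorial

p, q = 103, 107                # a Blum integer N = pq with p < q
N = p * q                      # 11021
m = isqrt(N)                   # 104; in the model this comes from Newton's method
assert p <= m < q
print(gcd(factorial(m), N))    # 103 = p, since p divides m! but q does not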

Wiz or Swiz?

I am stopping in England on my way to Doha, and Ken gave me some prep about the natives. The British have a word “swiz,” which means something that doesn’t turn out as advertised. Division and floor are easy functions, so allowing them in programs means we can factor easily, no? The “swindle” is that {m!} and {\binom{m}{m/2}} are exponential-sized numbers, which the straight-line programs are not being fairly charged for.

Still, we may not need to compute all parts of these numbers. Right off the bat, we only need to compute {M = m!\text{~mod~}N}, and then {\text{\it gcd}(M,N)} still yields {p}. While computing {m!}, we can try to keep things reduced mod {N}. We have blogged recently about cases where a “lazy” approach can be more efficient. Perhaps a breakthrough in the “easy” problem of integer multiplication can find new regularities in powering that help. If the straight-line model used by Shub and Smale can simulate integer division, even just for restricted cases, all these factors come into play.

Open Problems

Borwein and Hobart themselves ask,

Given a straight line program for {n} (…), is it possible to find a nontrivial factorization of {n} just given the information in the program?

The paper gives other open problems, of which the last is whether the known polynomial-time algorithms for factoring polynomials can be implemented by comparably efficient straight-line programs.

[clarified that paper is a survey]

9 Comments
  1. October 16, 2012 4:13 pm

    I haven’t been able to access the paper you are discussing. Do you know of any freely available version of it? Thank you in advance!

  2. Neeraj Kayal permalink
    October 16, 2012 4:24 pm

    Correct me if I am wrong, but it seems to me that these are exactly the same ideas as in Adi Shamir’s work on factoring integers in O(log n) arithmetic steps, available here: http://www.sciencedirect.com/science/article/pii/0020019079900875

  3. Boaz Barak permalink
    October 16, 2012 5:25 pm

    I haven’t read the paper, and don’t have access to a copy, but I think Adi Shamir proved a very related result in

    A. Shamir, “Factoring Numbers in O(log n) Arithmetic Steps,” Inf. Process. Lett. 8(1), 1979, pp. 28–31.

  4. Michael Forbes permalink
    October 16, 2012 5:53 pm

    Borwein and Hobart are doing a survey of the area. The result that factoring is possible in this model is, as Borwein and Hobart note in the paper, due to Shamir (as Boaz suspects).

  5. October 16, 2012 11:24 pm

    Ad above comments, thanks—I’ve edited the intro to clarify that it is a survey paper. Alas we ourselves were unable to view it online—Dick used his hardcopy. I converted the details from base 10 to base 2 and added more attention to the size of ranges being indexed.

  6. Serge permalink
    October 17, 2012 9:01 am

    What if the complexity of every problem was eventually explained by quantum mechanics? This would give rise to a nice theory – low probability being converted into high complexity – and also more substance to the idea that computer science lies at the intersection of math and physics. It sounds good to me, however weird it may seem.

    • On Morgenstern's model permalink
      June 22, 2013 1:59 am

      Alternatively: given an $n$-bit floating point number with an $m$-bit fractional portion, what is the length of the shortest straight-line program that computes its ceiling or floor? What if one knows the leading fractional bit is $0$?

  7. On Morgenstern's model permalink
    June 22, 2013 1:57 am

    Are floor and ceil valid straight line operations in general?

