
Making An Algorithm An Algorithm—BBP

July 14, 2010


Finding the bits of \pi in logspace

Chee Yap is one of the top experts in various aspects of computational geometry. He has worked on problems ranging from upper and lower bounds for comparison problems, to algorithms in geometric complexity, to methods for controlling rounding errors in geometric computations. One of his specialties is numerical computation from a complexity viewpoint. He was a Ph.D. student of mine many years ago, so I may be biased in the following discussion.

Today I want to talk about a recent paper of Chee on a question I raised earlier here. He has solved the problem and I am quite happy to see that these discussions have helped raise some open problems that are getting solved.

The LEDA project was started by Kurt Mehlhorn years ago to implement geometric algorithms and data structures. It has grown into a major software system that is used by many in research, in education, and in companies.

Kurt told me he almost immediately ran into a serious problem with their first implementation project, which was to implement an algorithm for finding the convex hull in two dimensions. This algorithm takes any set of {n} points in the plane and finds their bounding convex hull in time {O(n \log n)}. The algorithm is due to Ron Graham and is really a type of sorting algorithm. Kurt’s group tested the algorithm on some examples. It worked, and they moved on to more complex algorithms.
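
Graham’s scan really is sorting plus a linear sweep. Here is a minimal Python sketch of the idea, my own illustration rather than LEDA code: sort the points by angle around the lowest point, then walk through them, popping any point that fails to make a left turn.

```python
# A minimal Graham-scan sketch (my own illustration, not LEDA code).
# Sort the points by angle around the lowest point, then sweep,
# keeping only left turns; the O(n log n) cost is the sort.
import math

def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b); positive means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pivot = min(points, key=lambda p: (p[1], p[0]))      # lowest point, ties by x
    rest = sorted((p for p in points if p != pivot),
                  key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                                 (p[0] - pivot[0])**2 + (p[1] - pivot[1])**2))
    hull = [pivot]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                                    # drop points that do not make a left turn
        hull.append(p)
    return hull

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
# -> [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]
```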

Shortly afterward, someone used their algorithm as a subroutine in a more complex computation—let’s call it C. They quickly noticed that C did not always work; sometimes it gave the right answer, but sometimes it did not.

What was going on? Kurt’s group looked into the problem and discovered an issue with their implementation of the convex hull: sometimes the “convex hull” returned was not convex. This is of course impossible—the hull by definition is convex. This meant that their implementation was incorrect. They quickly discovered the problem: certain examples caused the implementation of the algorithm to round incorrectly. Graham’s algorithm was fine, but it assumed infinite precision. They redesigned the implementation and fixed the bug. Now C worked fine.
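
Here is the kind of failure in miniature. The heart of a hull code is an orientation test, the sign of a two-by-two determinant, and in double precision that sign can come out wrong for nearly collinear points. The points below are my own construction, chosen so the rounding is visible; they are not LEDA’s actual failing input.

```python
# Why a computed "convex hull" can fail to be convex: the orientation test
# is the sign of a 2x2 determinant, and rounding can get that sign wrong.
# The points below are my own construction, not LEDA's actual failing input.
from fractions import Fraction

def orient_float(o, a, b):
    # Cross product in double precision: positive = left turn, negative = right turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def orient_exact(o, a, b):
    # The same formula over exact rationals.
    o, a, b = (tuple(map(Fraction, p)) for p in (o, a, b))
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

o = (0.0, 0.0)
a = (2.0**27 + 1, 2.0**27)
b = (2.0**27, 2.0**27 - 1)

print(orient_float(o, a, b))   # 0.0 -- rounding says "collinear"
print(orient_exact(o, a, b))   # -1  -- the points actually make a right turn
```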

Kurt said that this accuracy problem became a major issue for the whole LEDA project. They have to be sure that their implementations work given that they only have finite precision. They use a variety of tricks: sometimes they use high precision, sometimes they check the answers, and sometimes they change the algorithms.

Let’s now turn to the problem Chee worked on, which is related to the difficulty that Kurt faced in LEDA.

The BBP Algorithm

In 1995, David Bailey, Peter Borwein, and Simon Plouffe discovered a beautiful algorithm, now called BBP, that computes the {n^{th}} bit of the number {\pi} without first computing any of the previous bits. From the viewpoint of complexity theory they showed that the problem of computing the {n^{th}} bit of {\pi} is in {\mathsf{SC}}. The latter is “Steve’s Class,” named after Steve Cook, and consists of problems computable in polynomial time and polylogarithmic space.
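
For concreteness, here is a short Python transcription of the standard BBP digit-extraction scheme (the formula and method are BBP’s; the code and names are mine). It computes the fractional part of {16^n \pi}, using modular exponentiation so that the numbers in the head of the series stay small.

```python
# A Python transcription of the standard BBP hex-digit extraction for pi.
# It runs in ordinary double precision, so it carries exactly the
# finite-precision caveat discussed in the next section.

def _series(j, n):
    """Fractional part of the sum over k >= 0 of 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                 # head: modular exponentiation keeps numbers small
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    t, k = 0.0, n + 1
    while True:                            # tail: terms shrink by a factor of 16 each step
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        t += term
        k += 1
    return (s + t) % 1.0

def pi_hex_digit(n):
    """Hex digit of pi at position n+1 after the point (n = 0 gives '2')."""
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

print("".join(pi_hex_digit(n) for n in range(10)))   # pi = 3.243f6a8885... in hex
```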

The BBP result was quite unexpected—see my previous discussion on their great algorithm.

The BBP Algorithm Is Not An Algorithm

Their algorithm is wonderful. Or is it? The following remarkable quote is from Wikipedia:

“There is a vanishingly small possibility that a particular computation will be akin to failing to add a small number (e.g. {1}) to the number {999999999999999} and that the error will propagate to the most significant digit, but being near this situation is obvious in the final value produced.”

Excuse me, this is nonsense. The statement “vanishingly small possibility” is a statement of probability theory, yet there is nothing random here at all. I agree that it seems the problem may never arise in practice, but that is a different matter. The last point of the quote is correct. But that means that BBP is not really an algorithm.
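
To be clear, the hazard the quote points at is real, even though nothing about it is random: in fixed precision an addition can simply fail to register. A one-line illustration in double precision, my own and not tied to any particular BBP computation:

```python
# The lost-carry hazard in miniature: near 1e16 the gap between consecutive
# IEEE doubles is 2, so adding 1 has no effect at all.
x = 1e16
print(x + 1.0 == x)   # prints True
```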

For a theorist an algorithm must have two basic properties: it must always compute the correct answer, and it must have a provable running time. I am leaving aside randomized algorithms, which have the above properties but with only probabilistic assurances. If your method works on “most” inputs or runs “fast” in practice, then it may be extremely useful, but it is not an algorithm in the strict sense. Thus the BBP algorithm is not an algorithm in our sense.

Yap’s Theorem

Chee has improved the BBP result in two fundamental ways. He shows that the bits of {\pi} can be computed not just in {\mathsf{SC}} but in logspace. Further, he shows that his algorithm really works—not just that it works in practice. Note that Chee’s algorithm is different from BBP’s algorithm, and it remains open whether BBP’s exact algorithm works as claimed.

What is neat is that Chee uses the notion of irrationality measure to prove that his algorithm works, which is an idea I suggested in the original discussion.

The irrationality measure of a number controls how closely it can be approximated by rational numbers with small denominators. The rational numbers are dense, so given any irrational number {\alpha} there is a rational number {p/q} so that

\displaystyle  | \alpha - p/q|

is as small as desired. However, not all irrational numbers are created equal when it comes to how small this approximation can be. If

\displaystyle  | \alpha - p/q| > 1/q^3

for all {p} and {q}, then to get a close approximation to {\alpha} one must use a large denominator {q}.

The standard definition is that {\alpha} has irrationality measure {\kappa>0} provided

\displaystyle  | \alpha - p/q | \le 1/q^{\kappa}

has only a finite number of solutions {p} and {q}. We would prefer to have the reverse inequality {| \alpha - p/q | > 1/q^{\kappa}} hold for all {p,q}. But we usually cannot prove this for all, so the definition settles for almost all—that is, for all {q} large enough.
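
To see the definition in action, here is a small double-precision check, my own illustration, using the classical continued-fraction convergents of {\pi}: 22/7, 333/106, 355/113, and 103993/33102. Convergents always satisfy {| \pi - p/q | < 1/q^2}; the irrationality measure asks how much further the exponent can be pushed while such good approximations remain plentiful.

```python
# Double-precision illustration of how the classical convergents approximate pi.
# For continued-fraction convergents, q**2 * |pi - p/q| is always below 1;
# the irrationality measure asks how far the exponent can be pushed while
# such good approximations remain plentiful.
from fractions import Fraction
import math

convergents = [Fraction(22, 7), Fraction(333, 106),
               Fraction(355, 113), Fraction(103993, 33102)]

for pq in convergents:
    q = pq.denominator
    err = abs(math.pi - pq.numerator / q)
    print(f"p/q = {str(pq):>12}   q^2*err = {q**2 * err:.4f}   q^3*err = {q**3 * err:.3e}")
```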

Theorem: Any number with a positive irrationality measure and a BBP series has an algorithm that computes its {n^{th}} bit in polynomial time and space {O(\log n )}.

See Yap’s paper for the details.

Open Problems

What numbers can be computed in logspace? Can Chee’s theorem be generalized? His paper solves an open problem from this blog and I hope there will be many more in the future.

24 Comments
  1. Brenton Bostick
    July 14, 2010 2:04 pm

    The irrationality of \pi implies that any run of digits will eventually stop, correct?

    BBP is guaranteed to give correct answers in finite time; we just do not have a bound on how long it will take.

    From a computability standpoint, I am comfortable with calling it an “algorithm”, but I also consider the process that generates Goodstein sequences an algorithm!

    • rjlipton
      July 14, 2010 2:21 pm

      Yes, that sounds right. But there is no bound, so it would not be in polynomial time.

  2. Sniffnoy
    July 14, 2010 4:55 pm

    One nitpick: You’ve written “positive irrationality measure” instead of “finite irrationality measure” in the statement of the theorem.

  3. Anonymous
    July 15, 2010 12:26 am

    A slightly tangential question. Can this be considered an ‘open’ problem? I am not an expert in this field by any means, but I have some general familiarity with theory, and I never came across this as an ‘open’ problem. While that’s probably just my ignorance, the general question is: what can be considered an ‘open’ problem? Is there any qualitative metric on how much interest/activity it needs to generate, or how many experts need to try and fail, perhaps? 🙂

    • rjlipton
      July 15, 2010 8:53 am

      A good question. It was not considered open. I noticed there was a problem with BBP and raised the problem. So I guess it was a hidden open problem.

  4. Japeth
    August 24, 2012 3:46 am

    I’m not sure this shows that the algorithm is not an algorithm. I’ve seen this exact problem in other algorithms, and the solution is just to look for runs of 9’s early (before they could change the digit you are working on), and to calculate more bits until the run of 9’s is over and you know a carry can no longer change your result. This does throw a spanner in the works of one of the useful parts of this algorithm: if you do have to deal with a run of 9’s of any significant length, you must use bignums. Does that mean it’s not really in SC?

  5. Axel
    January 14, 2013 4:59 am

    What about a link between the irrationality measure of a transcendental number and the possibility of expressing its digits in BBP form?

    Borwein 1987 IM(e)=2

    and Brent-Salamin AGM of 1/1 and 1/sqrt(2)?

  6. Axel
    February 1, 2013 11:33 am

    Chee Yap : π is in SC (2,16) and k=1 Lspace because µ(π)≥2

    Borwein : µ(e) = 2 in P

    µ(ℚ[√]) = 2 Theodorus spiral

    Wiles’s proof of Fermat’s Last Theorem

    ∑n1!= ∑n2 = 153 n1= 5 = 4+0! n2 =17 = 16+1

    Radix e π

    1/2 ====> Gamma (1/2) = √ π p 3 modulo 4 Gaussian primes
    p 0 or 1 modulo 4 no

    (4,3,5) (16,1,17) “Vitruve” QM and HM Consistency

    QM(3,4) / HM (3,4) = 96,89 % RE Turing degree of the halting problem.

    Primes : π(n) = n/Ln 3,11… %

  7. Axel
    February 5, 2013 10:33 am

    Zeta (x) / zeta (1-x) = 2^x /pi^(1-x) . sin (pi.x/2) . gamma (1-x)

    x = 1/2 gamma (1-x) / pi^(1-x) = Zeta (x) / Zeta (1-x) = 2.x . sin (pi.x/2) = 1

    pi is BBP 1-degree
    pi^2 is BBP 2-degree Zeta (2)
    zeta (3) is BBP 3 degree

    SC k=1 LSPACE

    zeta -2 4 -6 …
    gamma -1, -3, -5

    p. 3 mod 4

    gamma (1/2) = pi^1/2

  8. Axel
    February 5, 2013 10:53 am

    if 1/2 then zeta (x) / zeta (x-1) = 1 = Tan (pi.x/2)

    – mu (e and x^y) = 2
    – pi is BBP deg 1
    – pi ^2 is BBP deg 2
    – pi ^3 and Apery is BBP deg 3

  9. Axel
    February 13, 2013 1:18 pm

    Continued Fraction [] / Engel set ][ :

    2^1/2 = [1;2,2,2…] ===> ]1;2,2,2…[ = 2
    5^1/2 = [2;4,4,4…] ===> ]2;4,4,4…[ = 2/3

    (-1+13^2)/2 = [1;3,3,3…] ===> ]1;3,3,3..[ = 3/2

    (1+5^1/2)/2 = [1;1,1,1…] ===> ?
    1+2^1/2 = [2;2,2,2…] ===> ]2;2,2,2…[ = 1
    (3+13^1/2) = [3;3,3,3…] ===> ]3;3,3,3…[ = 1/2
    1+ 5^1/2 = [4;4,4,4…] ===> ]4;4,4,4…[ = 1/3

    Pi/2 = [1;1/1;1/2;1/3…] with 1/1=]2;2,2,2..[ ; 1/2=]3;3,3,3…[ etc… bit is SC k=1 (LSpace)

    e = ]1;1,2,3,…[ with 1=]2;2,2,2…[ ; 2 = ]1;2,2,2…[ ; 3 = ]1,1,2,2,2…[ mu(e) = 2

    NC = AC = TC

    3^2+4^2=5^2 3*4 = 12 (Fermat-Wiles)

    HM(3,4)/QM(3,4) = 3,02… % Turing bound of the Halting problem ?

    Zéta/Gamma ?

    A- pi hot line ! SC2 k=1

    1/2
    tan 1/2 (pi/2) = 1
    2^1/2 ===> 2

    B- e hot line ! mu=1

    1/2
    tan (pi^1/2)^2 = 0
    gamma (0 + 1/2) . gamma (1 – 1/2) = pi
    (3+2)^1/2 ===> 2/3

    C- SC / NC Halting problem hot line ! mu (2^1/2) = 2

    1/2
    tan 1/2 pi = ind.

    2^1/2 = [1;2,2,2…] ===> ]1;2,2,2…[ = 2
    5^1/2 = [2;4,4,4…] ===> ]2;4,4,4…[ = 2/3

    2 = (1+i)(1-i)
    5 = (2+i)(2-i)

    0/1 mod 4
    3 mod.4

    (1^2+1^2).mod (3+4) = 2^1/2 mod (3+4)

    e-bits are 3,02 % faster than pi-bits !

    mu(pi) = 2 * 1,0302…

    v(p 3 mod 4) is 3,02 % faster than v(p 0-1 mod 4) !

  10. Axel
    February 3, 2014 5:10 am

    Vitruvian man : 8/5 with (3,4,5) and (1,4, 17^1/2) 1!+…+5! = 1+…+17 =153 (Theodorus spiral on C : 17^1/2 i and 5)

    u(e)=2
    pi SC Log
    Wiles-Fermat

    u(pi) = u(e) + (1 – u(e)(Q(3,4) / H(3,4)) with Q = quadratic mean and H = Harmonic mean ?

    ==> Godel incompletness “1.1” = “real is Q(3,4) / H(3,4) computable” ?

  11. Golem
    May 15, 2014 2:06 am

    pi n digit : n+lb (n) and for Gelfond-Schneider constant e^pi ?

  12. François
    February 14, 2015 7:37 am

    “a positive irrationality measure”?

    What about an irrationality measure = 0, like algebraic irrationals or e?

  13. July 16, 2019 1:11 pm

    Two comments to try and keep this discussion real: (1) The ‘gotcha’ with Plouffe’s pi algorithm is not about strings of 9s; since the result is hexadecimal, it is about strings of f’s, as in ffffffff… (strings of 1s, if you like to think of it that way). (2) Moreover, assuming pi to be normal, all strings of 1s are of finite length, and so if the 2nd sum in each of the four terms is continued past where the string of 1s ends (assuming for the moment that we are computing on an ideal machine with infinite word length), this will take finite time. In summary, this shows that Plouffe’s pi algorithm *is* indeed an algorithm, and because of the infinite top limit on the second part of the 0-to-infinity split sum, it is a species of algorithm known collectively as infinite algorithms. Thus it is a true algorithm, and also, thanks to Plouffe, who alone discovered both the formula and the algorithm, there is no finite hex digit position it cannot compute in finite time using finite-precision arithmetic (we just keep on adding precision until we get past the 1s, which we are allowed to do in an infinite algorithm). The time to calculate any particular hex digit will always be the same for a given code and hardware.

Trackbacks

  1. It’s Still the Slime Mold Story « Gödel’s Lost Letter and P=NP
  2. Transcendental Aspects of Turing Machines « Gödel’s Lost Letter and P=NP
  3. One Man’s Floor Is Another Man’s Ceiling « Gödel’s Lost Letter and P=NP
  4. Irrationality of $pi$ from the spigot algorithm? - MathHub
  5. Bailey–Borwein–Plouffe Formula – A2M1N
  6. The Value of Pi | Gödel's Lost Letter and P=NP
  7. Are These the Last Digits of Pi? | Gödel's Lost Letter and P=NP
  8. Is Pi Informative? | Gödel's Lost Letter and P=NP
