
Miracle Numbers Such as “56”

October 19, 2010

A polynomial size constant depth circuit for testing if a group is solvable

Richard Feynman was one of the greatest physicists of all time, and was probably one of the most interesting. He shared the 1965 Nobel Prize in Physics with Julian Schwinger and Sin-Itiro Tomonaga, for their work on QED—quantum electrodynamics. Feynman used his now famous Feynman diagrams to understand how electrons and photons interacted, instead of using complex calculations. He is of course known to the general public for his tremendously popular series of books and textbooks—such as Surely You’re Joking, Mr. Feynman! He could make hard concepts accessible in a way that no one else could, and he played the bongos:

Below is one of his diagrams.

Today I want to talk about some strange numbers that arise in mathematics and theory. These numbers are usually connected to results that seem like miracles. I will also present a new polynomial time algorithm whose exponent is very large—another strange result.

Such numbers occur in physics too. One such miracle number is approximately {1/137} and is called the fine structure constant in physics. In the right units the constant is

\displaystyle  \alpha = \frac{e^2}{\hbar c}

where {e} is the elementary charge, {\hbar} is the reduced Planck’s constant, and {c} is the speed of light in a vacuum. Feynman spoke about it this way:

Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly!

Let’s look at some miracle numbers.

Some Numerology

I will call a number a miracle number if there is some result attached to it that is surprising, unexpected, or just strange.

{\bullet } The number two. Okay this is not exactly the fine structure constant. But there are many examples in mathematics where {2} is special. I guess being the lone even prime number makes you special. A simple example of this that plays an important role in complexity theory is: For any square matrix {A} over a field of characteristic two,

\displaystyle \mathsf{det}(A) = \mathsf{perm}(A).

This relies on the trivial but important fact that in such a field

\displaystyle  -1 = 1.
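
A quick brute-force sketch (my own illustration, not from the post) makes this concrete: both quantities are expanded by the Leibniz formula, and mod 2 the permutation signs vanish, so the two sums agree term by term.

```python
import itertools
import random

def det_perm_mod2(A):
    """Leibniz expansions of det and perm of A, both reduced mod 2.

    In characteristic two -1 = 1, so the signs in det vanish and
    det(A) = perm(A)."""
    n = len(A)
    det = perm = 0
    for sigma in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        # sign of sigma is (-1)^(number of inversions)
        inversions = sum(sigma[i] > sigma[j]
                         for i in range(n) for j in range(i + 1, n))
        det += (-1) ** inversions * prod
        perm += prod
    return det % 2, perm % 2

random.seed(2010)
matrices = [[[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
            for _ in range(25)]
results = [det_perm_mod2(A) for A in matrices]
```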

{\bullet } The number one hundred and sixty three. This number has many miracles associated with it. There is the famous polynomial, due to Leonhard Euler,

\displaystyle  n^2 + n + 41,

which is prime for {n=0,\dots,39}. Note that any polynomial with constant term {c>1} is periodic modulo {c}. So having one be prime for {40} values with a constant term of {41} is pretty impressive. There is a deep reason for this polynomial’s behavior, which is related to a notion of Kurt Heegner. The key is that

\displaystyle  163 = 4\cdot 41 -1

is a Heegner Number—see this for more details.
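
A minimal sketch, using nothing but trial division, confirms both the streak of primes and the inevitable failure once the periodicity modulo {41} kicks in:

```python
def is_prime(n):
    """Trial division; fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Prime for n = 0, ..., 39 ...
prime_streak = all(is_prime(n * n + n + 41) for n in range(40))

# ... but the polynomial is periodic mod 41, so it must fail:
# at n = 40 the value is exactly 41^2, and at n = 41 it is again
# divisible by 41.
value_at_40 = 40 * 40 + 40 + 41
value_at_41 = 41 * 41 + 41 + 41
```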

The number {163} was also used by the renowned Martin Gardner, who claimed that

\displaystyle  e^{\pi \sqrt{163}}

was an integer in an April Fool’s column. It is not. It is equal to

\displaystyle  262,537,412,640,768,743.999 999 999 999 25 \dots

Pretty close, but not an integer.
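
One can watch the near miss with only the standard library. The sketch below borrows the {\pi} and exponential recipes from the Python decimal module’s documentation; at 60 digits of working precision the run of twelve nines is plainly visible.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough digits to expose the run of twelve nines

def pi():
    """Compute pi (recipe from the decimal module documentation)."""
    getcontext().prec += 2
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

def exp(x):
    """Taylor series for e**x (recipe from the decimal module documentation)."""
    getcontext().prec += 2
    i, lasts, s, fact, num = 0, 0, 1, 1, 1
    while s != lasts:
        lasts = s
        i += 1
        fact *= i
        num *= x
        s += num / fact
    getcontext().prec -= 2
    return +s

ramanujan = exp(pi() * Decimal(163).sqrt())
nearest = 262537412640768744        # the integer it narrowly misses
gap = nearest - ramanujan           # roughly 7.5e-13
```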

{\bullet} The number one thousand seven hundred and twenty nine. The story of Godfrey Hardy and Srinivasa Ramanujan and the taxicab number 1,729 is well known, in regard to its being the smallest number that can be written as the sum of two integer cubes in two different ways ({1^3 + 12^3 = 9^3 + 10^3}). But this is not its only notable property.
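
A small brute-force search (my own sketch) recovers both representations, along with the next taxicab-like numbers:

```python
from collections import defaultdict

def two_cube_sums(limit):
    """Map n -> all ways to write n = a^3 + b^3 with 1 <= a <= b."""
    ways = defaultdict(list)
    a = 1
    while 2 * a ** 3 <= limit:
        b = a
        while a ** 3 + b ** 3 <= limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return ways

# keep only the numbers with at least two representations
taxicab = {n: w for n, w in two_cube_sums(20000).items() if len(w) >= 2}
smallest = min(taxicab)   # 1729, with (1, 12) and (9, 10)
```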

In the digits of {\pi}, a sequence of ten consecutive places containing all ten digits occurs fairly quickly, at the end of

\displaystyle  3.14159265358979323846264338327950288419716939937510582097494\mathbf{4592307816}

If you look for the first such sequence in the digits of {e}, however, you will not find one until {\dots} the 1,729th decimal place.

The number 1,729 is also remarkable as the third Carmichael number, after 561 and 1,105. These and other remarkable facts about 1,729—including one also involving Feynman—can be found here.
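
Carmichael numbers are easy to find with Korselt’s criterion: {n} is Carmichael exactly when it is composite, squarefree, and {p-1} divides {n-1} for every prime {p} dividing {n}. A short sketch of this check:

```python
def prime_factorization(n):
    """Trial-division factorization as {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n):
    """Korselt's criterion: composite, squarefree, and p - 1 | n - 1
    for every prime p dividing n."""
    f = prime_factorization(n)
    if len(f) < 2 or any(e > 1 for e in f.values()):
        return False  # prime, prime power, or not squarefree
    return all((n - 1) % (p - 1) == 0 for p in f)

first_three = [n for n in range(2, 2000) if is_carmichael(n)]  # 561, 1105, 1729
```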

One might rebut the idea of “miracle numbers” by asserting that by looking hard enough one can find remarkable facts about any number. On the other hand, from the probabilistic viewpoint one might expect the distribution of short, easy-to-state facts to be “lumpy,” favoring some numbers while shunning others. If the number of facts worth noting is about {\sqrt{N}} to {N} where {N} is the largest integer under consideration, then this is like a form of the “birthday paradox.” Of course, the distribution of facts about integers is not really random—so all we can do is appreciate those “miracles” that we observe. One more such observation is what prompted this post:

{\bullet } The number fifty six. This number arises in a relatively recent result on group theory. The following wonderful result of John Wilson was proved in 2005:

Theorem: Let {G} be a finite group. Then {G} is soluble if and only if it satisfies the following: that no non-trivial element {g} is a product of {56} commutators {[x, y]} with entries {x, y} conjugate to {g}.

His proof uses John Thompson’s classification of the minimal simple groups. This is a deep result, but is less than the full classification theorem for simple groups. See the paper of Ari Koponen and Kerkko Luosto for related results on other properties of groups.

A Result

John Wilson’s theorem allows us to prove the following theorem:

Theorem: Given a finite group, there is a uniform {\mathsf{AC^0}} algorithm that determines whether the group is solvable or not.

Recall the notion of solvable—some like Wilson say soluble. Consider a group {G}. The commutator subgroup {G'} of {G} is the subgroup generated by all the commutators of the group: elements of the form

\displaystyle  [x,y] = x y x^{-1} y^{-1}.

{G} is solvable if the action of taking successive commutator subgroups results in the trivial subgroup in a finite number of steps.

Checking a group to see if it is solvable is clearly computable in polynomial time. Form all the commutators and build up the subgroup they generate. If this is {G}, reject; if it is the trivial group, accept. Otherwise, continue. This is easily seen to run in polynomial time, but it is not obvious how to do it in small space, let alone in {\mathsf{AC^0}}. But, as we will see, that is possible.
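
Here is a minimal sketch of exactly this derived-series loop, for groups presented by multiplication tables; it confirms that {S_3} is solvable while {A_5} is not.

```python
from itertools import permutations

def is_solvable(table):
    """Derived-series test on a group given by its multiplication table:
    repeatedly pass to the commutator subgroup until it stabilizes
    (not solvable) or collapses to the identity (solvable)."""
    n = len(table)
    e = next(i for i in range(n) if all(table[i][j] == j for j in range(n)))
    inv = [next(j for j in range(n) if table[i][j] == e) for i in range(n)]

    def derived(H):
        # generators: all commutators [x, y] = x y x^-1 y^-1 with x, y in H
        sub = {table[table[x][y]][table[inv[x]][inv[y]]] for x in H for y in H}
        sub.add(e)
        frontier = list(sub)
        while frontier:  # close under products (enough in a finite group)
            a = frontier.pop()
            for b in list(sub):
                for c in (table[a][b], table[b][a]):
                    if c not in sub:
                        sub.add(c)
                        frontier.append(c)
        return sub

    H = set(range(n))
    while True:
        D = derived(H)
        if D == {e}:
            return True
        if D == H:
            return False  # the series has stalled above the identity
        H = D

def perm_table(elems):
    """Multiplication table for a set of permutations under composition."""
    compose = lambda a, b: tuple(a[b[i]] for i in range(len(b)))
    idx = {p: i for i, p in enumerate(elems)}
    return [[idx[compose(a, b)] for b in elems] for a in elems]

def is_even(p):
    return sum(p[i] > p[j] for i in range(len(p))
               for j in range(i + 1, len(p))) % 2 == 0

s3_table = perm_table(list(permutations(range(3))))
a5_table = perm_table([p for p in permutations(range(5)) if is_even(p)])
```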

The First-Order Connection

The title of Wilson’s paper references the consequence that solvable groups have a finite definition in first-order logic. A group can be represented as a relational structure {(G,M)} where {M} is the graph of the group operation: {M(x,y,z)} means {x\cdot y = z}. The structure defines a group if and only if it satisfies the following axioms:

  1. {(\forall x)(\forall y)(\exists! z): M(x,y,z)} (multiplication is a function and the group is closed under it).
  2. {(\exists! e)(\forall y): M(e,y,y) \wedge M(y,e,y)} (identity).
  3. {(\forall t,u,v,w,x,y,z) : (M(x,y,t) \wedge M(t,z,u) \wedge M(y,z,v) \wedge M(x,v,w)) \longrightarrow w = u} (associativity).
  4. {(\forall x)(\exists y) : (M(x,y,e) \wedge M(y,x,e))} (inverses).

One can use axiom 2. to quantify the reference to {e} in axiom 4., or even combine all four axioms into one statement. What Wilson did is find one more axiom that is satisfied by, and only by, those groups that are solvable.
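
As a sanity check on these axioms, one can test a concrete relation {M} against them directly. The sketch below (my own illustration) accepts addition modulo {4} and rejects multiplication modulo {4}, which lacks inverses for {0} and {2}:

```python
def satisfies_group_axioms(universe, M):
    """Check axioms 1-4 on a ternary relation M, where M contains the
    triples (x, y, z) with x * y = z."""
    universe = list(universe)
    # Axiom 1: for every x, y there is exactly one z with M(x, y, z)
    prod = {}
    for (x, y, z) in M:
        if (x, y) in prod:
            return False  # z not unique
        prod[(x, y)] = z
    if len(prod) != len(universe) ** 2:
        return False      # some product missing
    # Axiom 2: a unique two-sided identity
    ids = [e for e in universe
           if all(prod[(e, y)] == y and prod[(y, e)] == y for y in universe)]
    if len(ids) != 1:
        return False
    e = ids[0]
    # Axiom 3: associativity, written through the graph of the operation
    if any(prod[(prod[(x, y)], z)] != prod[(x, prod[(y, z)])]
           for x in universe for y in universe for z in universe):
        return False
    # Axiom 4: every x has a two-sided inverse
    return all(any(prod[(x, y)] == e and prod[(y, x)] == e for y in universe)
               for x in universe)

Z4 = range(4)
M_add = {(x, y, (x + y) % 4) for x in Z4 for y in Z4}
M_mul = {(x, y, (x * y) % 4) for x in Z4 for y in Z4}
```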

There is a well-known general theorem, by David Mix Barrington, Neil Immerman, and Howard Straubing building on work by Miklos Ajtai and others, to the effect that a language of finite relational structures is first-order definable if and only if it is in {\mathsf{AC^0}}. This would prove our result about solvable groups. However, the theorem itself does not immediately answer some important questions:

  • (a) How must we represent the languages of relational structures, in order to use the theorem?
  • (b) How big are the {\mathsf{AC^0}} circuits that we get—what degree polynomial bounds their size?

Regarding (a), note how cumbersome our associativity axiom is. We would rather use a functional symbol {\bullet} for the group operation and write the axiom simply as {(\forall x,y,z): (x \bullet y) \bullet z = x \bullet (y \bullet z)}. For logical meaning this makes no difference, but for complexity and the mechanics of proofs the representation is surprisingly important.

How To Represent A Group

A way to represent a finite group is as a multiplication table. Such a table is called a Cayley table, after Arthur Cayley. Here is a simple example, the table of addition modulo {4}:

+ | 0 1 2 3
--+--------
0 | 0 1 2 3
1 | 1 2 3 0
2 | 2 3 0 1
3 | 3 0 1 2

A Cayley table can be written as a string {t} of values in row-major order, say separated by commas. Given {x} and {y}, one must look up the entry in the table corresponding to {x\cdot y}.

The main alternative is to present the group as a series of triples

\displaystyle  (x,y,z)

where {x \cdot y = z}, basically writing out the relation {M} above. The key to this encoding is that an {\mathsf{AC^0}} circuit can (more-)easily check whether or not {x \cdot y = z} in polynomial size and constant depth.

I believe that this encoding facilitates the construction of circuits that come from John Wilson’s theorem.

The Proof

The proof that {\mathsf{AC^0}} can determine whether a group is solvable is now clear. We “simply” have to test the assertion in Wilson’s theorem. Hence we build a circuit that fans out over all {g} in the group, and further over all sequences of 112 elements among those that are conjugate to {g}, and test whether the 56 commutators made from the latter multiply out to {g}. Since the number of conjugates may be linear in the size {n} of the group, we may be dealing with {n^{112}} such sequences. Taking in {g} gives order-of {n^{113}} cases to check. The size of the sub-circuit for each case adds 1 or 2 to the overall exponent. Thus the final algorithm is constant depth and is polynomial, but not any polynomial I would like to use.
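
For tiny groups, and tiny values of {k} in place of {56}, the brute-force test can actually be run. Of course Wilson’s theorem is specifically about {56} commutators, so passing at small {k} proves nothing; the sketch below (my own illustration) only shows the shape of the enumeration the circuit performs.

```python
from itertools import permutations

def wilson_criterion_holds(table, k):
    """Brute force: is there NO non-trivial g that is a product of k
    commutators [x, y] with x, y conjugate to g?  Wilson's theorem is
    the case k = 56, hopelessly out of reach for this enumeration."""
    n = len(table)
    e = next(i for i in range(n) if all(table[i][j] == j for j in range(n)))
    inv = [next(j for j in range(n) if table[i][j] == e) for i in range(n)]
    for g in range(n):
        if g == e:
            continue
        # full conjugacy class of g: all z g z^-1
        conj = {table[table[z][g]][inv[z]] for z in range(n)}
        # all commutators [x, y] with x, y conjugate to g
        comms = {table[table[x][y]][table[inv[x]][inv[y]]]
                 for x in conj for y in conj}
        prods = {e}
        for _ in range(k):  # all products of exactly k such commutators
            prods = {table[p][c] for p in prods for c in comms}
        if g in prods:
            return False
    return True

# S_3 as a table, and the cyclic group Z_4
compose = lambda a, b: tuple(a[b[i]] for i in range(len(b)))
elems = list(permutations(range(3)))
idx = {p: i for i, p in enumerate(elems)}
s3 = [[idx[compose(a, b)] for b in elems] for a in elems]
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
```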

Wilson notes in his paper that “56” might be reducible after all, and that this would follow if certain combinatorial problems arising in his proof can be solved. Independent of this, we can ask whether the degree of the polynomial size of {\mathsf{AC^0}} circuits deciding solvability can be reduced from 113-or-so to a reasonable number, like 3 or 4.

Open Problems

What applications of solvability being in {\mathsf{AC^0}} can we find in complexity theory? Can we reduce the degree of the polynomial bounding the circuit size?

What are some other great numbers with miracle results attached to them?


41 Comments
  1. steve uurtamo permalink
    October 19, 2010 4:47 pm

    i think that 4 and 5 have the most impressive miracle facts about them.

    the two miracle facts about 5 are (separately) the structure of S5, and that 5 pieces suffice for the banach-tarski paradox.

    the miracle fact about 4 is that diophantine equations can cross over into undecidability at degree 4. they are all decidable at degree 2, and nobody knows about degree 3. but for now, 4 is miracle enough simply for being so low. 9 is interesting as well because it’s the least number of variables known to be required (although the degree jumps in this case).

    although all of these numbers are small, they’re super special in ways that can’t change (except perhaps for 4 and 9).


    • Josh permalink
      October 19, 2010 6:38 pm

    Interestingly, a similar phenomenon happens for the classification of manifolds with the exact same numbers. Namely, 2-dimensional manifolds are easily classified. And if the manifolds are given by triangulations, it’s very easy to determine if two are homeomorphic: determine genus, orientability, and number of boundary components. Four-dimensional manifolds have no computable classification; that is, telling whether two triangulations of 4D manifolds are homeomorphic is an uncomputable problem. For 3D manifolds the problem seems difficult but at least possible, though not too much is known.

  2. Farbod Shokrieh permalink
    October 19, 2010 5:41 pm

    Number 56 appears in the theory of Lie groups, e.g. the dimension of the fundamental representation of E_7 is 56.
    I wonder whether there is a connection here…

  3. Albert Atserias permalink
    October 19, 2010 6:19 pm

    I didn’t know about Wilson’s result and its consequence for deciding if a given group is solvable. But after reading your post, I think I can claim that the degree of the polynomial that bounds the size of the {\rm AC}^0 circuit can be made small (something like 5 or 6 and perhaps smaller), at the cost of making the depth a small multiple of 56.

    One way to see this is by first realizing an analogy with the problem of telling if a directed graph, given by its adjacency matrix E(x,y), has a path of length 56. Obviously this problem is in {\rm AC}^0: for every possible sequence of 56 vertices we check if every vertex in the sequence except the first is a successor of the previous one, and take the big disjunction. This gives a depth-2 circuit of size O(n^{56}). The key point is that one can do much better at the cost of increasing the depth from 2 to 112. Here is how: Let P_i(x,y) be the predicate telling whether there is a path of length i from x to y in the digraph. The natural recursive way of expressing this is as follows: P_1(x,y) = E(x,y) and P_{i+1}(x,y) = \exists z (E(x,z) \wedge P_{i}(z,y)). When you replace every existential quantifier by a big disjunction over the vertices of the graph and unfold the recursion, what you get is a circuit of depth 2i and size O(n \cdot i). The proof is complete by taking the circuit that takes a big disjunction of the P_{56}(x,y)-circuits as x and y range over all pairs of vertices. This has size O(n^3) and depth 112.

    Now back to the solvable groups problem. From Wilson’s Theorem we only need to check if there exists a non-trivial g and a sequence of 56 pairs (x,y) such that each x and y is a conjugate of g and the product of the [x,y]'s is g. Let’s generalize and define the predicate Q_k(g,h) which is true if there exists a sequence of k pairs (x_1,y_1),\ldots,(x_k,y_k) such that each x_i and y_i is a conjugate of g and the product h \cdot \prod_{i=1}^k [x_i,y_i] is g. We can express this recursively as follows: Q_0(g,h) = (g = h) and Q_{k+1}(g,h) = \exists x \exists y (C(x,g) \wedge C(y,g) \wedge Q_k(g,h \cdot [x,y])), where C(a,b) is the predicate that tells if a is a conjugate of b. We can write C(a,b) by existentially quantifying a z and its inverse z^{-1}, checking that indeed z \cdot z^{-1} = 1, and checking that z \cdot a \cdot z^{-1} = b. We can also write Q_k(g,h\cdot[x,y]) by quantifying the inverses x^{-1} and y^{-1} of x and y and the product h' = h \cdot x \cdot y \cdot x^{-1} \cdot y^{-1}, and then requiring Q_k(g,h'). Finally, when you replace every existential quantifier by a big disjunction over the elements of the group and unfold the recursion, what you get is a circuit of depth O(k) and size O(k n^5) for Q_k(g,h). Now, the group is non-solvable if there exists a non-trivial g such that Q_{56}(g,1). The resulting circuit has depth a small multiple of 56 and size O(n^6).

    For those familiar with bounded variable logics in finite model theory, all this could be explained in one sentence: “Wilson’s property can be written in first-order logic with 6 re-usable variables”. This is so because it is known that {k}-variable first-order logic translates to constant-depth circuits of size {O(n^k)} (where {n} is the cardinality of the universe of the structure).

    • Albert Atserias permalink
      October 20, 2010 5:08 am

      On second reading, perhaps I should have been more explicit when I say “unfold the recursion”. In the first case, paths-of-length-k, by unfold the recursion I mean that we build the circuit for each P_{i+1}(x,y) by layers, using the circuits for P_i from the layer below it. Thus, at layer i+1 we have n^2 big disjunctions (indexed by pairs x,y) over all possible z, each fed by a small conjunction that puts E(x,z) and P_i(z,y) from the layer below together. The size s_i of the first i layers is n^2 \cdot n + s_{i-1}, with s_1 \leq n^2. Overall this gives s_{56} = O(n^3) as claimed (but the previous derivation was not quite right).

      In the solvable group case, again we build each Q_{k+1}(g, h) by layers, using the circuits for Q_k from the layer below it. At layer k+1 we have n^2 big disjunctions (indexed by pairs g, h) over all possible pairs x, y, each fed by a small conjunction that puts C(x), C(y) and Q_k(g, h \cdot [x,y]) together. The recurrence for the size of the first k layers is now something like s_k = O(n^2 \cdot n^2) + s_{k-1}, which gives s_{56} = O(n^4).

      Just a bit hard to visualize, but nothing non-standard.

    • rjlipton permalink*
      October 20, 2010 7:05 am

      Albert Atserias,

      I will think about this, but seems correct. The depth goes up but the size is much smaller.


    • October 20, 2010 1:41 pm

      Albert, thank you! While looking for a link to a paper by Neil Immerman on the FO #-of-variables connection, I was reminded of a recent STOC 2008 paper by Benjamin Rossman, which Neil had mentioned in his reply to Vinay Deolalikar presented Aug 12 on the blog, which has a lot of references in turn:

      Rossman’s paper.

      Re-using variables is a curious design pattern from the algorithmic rather than logical point of view—I wonder what general circuit constructions and tradeoffs can be improved by this idea.

      • Albert Atserias permalink
        October 21, 2010 2:07 am

        Yes, Rossman’s result was motivated by an unsolved problem in finite model theory and he solved it precisely by exploiting this connection between number of variables and circuit size.

        > Re-using variables is a curious design pattern from the algorithmic rather than logical point of view—I wonder what general circuit constructions and tradeoffs can be improved by this idea.

        I agree that re-using variables sounds algorithmic. But please note that the number of (re-usable) variables in a first-order formula \phi coincides with the maximum number of free-variables of the subformulas of \phi when you don’t re-use variables (and use a different one at each quantification point). Some people call this “the width” of the first-order formula. Putting it all together, a first-order formula of width k translates into a bounded-depth circuit of size O(n^k) (where n is the size of the universe of the structure). And yes: this width is tightly related to Robertson and Seymour tree-width, in case you wonder (for more on this see the paper Dalmau, Kolaitis and Vardi “Constraint Satisfaction, Bounded Treewidth, and Finite-Variable Logics”)

  4. October 19, 2010 6:47 pm

    Last year I gave a lengthy lecture on the number three, my favorite number.

    First, as the readers of this blog know, deciding the satisfiability of CNF formulas with 2 variables per clause is complete for NL and can be done quickly – yet, deciding the satisfiability of CNF formulas with 3 variables per clause is complete for NP, and describes the power of theorem proving in existential second order logic! The small step from 2 to 3 is a huge transition. As Neil might have said, “That’s one small step for clause size; one giant leap for computational complexity.”

    This isn’t the only place three shows up as a threshold. Random walks in two dimensions are recurrent – in three they are transient! Shizuo Kakutani succinctly puts it: “A drunk man will find his way home, but a drunk bird may get lost forever.” So too, does Kepler’s 2-body problem escape a closed form solution when generalized to the three-body or n-body problem. The Pythagorean identity and the 3,4,5 family of triangles (and of course others) offer solutions to a particular Diophantine equation of the second degree. Generalizing this problem to the third degree and higher provides us the humbling spectre of FLT which escaped our understanding for so very long.

    About one year ago I spoke with Professor Morgan, who was once the head of undergraduate mathematics at MIT, has now found himself surrounded by great faculty at Williams College, and is one of the world’s frontier experts on bubbles. He is known for his work proving the long-standing “Double Bubble Conjecture,” which posits that the lowest energy configuration of two bubbles making contact is exactly the configuration nature finds itself in (with the smaller of the two bubbles bowing slightly into the larger bubble). I remember asking Professor Morgan about the “Three Bubble Conjecture,” at which point he threw his hands up in an animated way and voiced, “Ahh! It’s hopeless!”

    Three is a threshold. It is a wonderful number. I was surprised to see it skipped in your list.

    • Hajduk permalink
      October 21, 2010 8:22 am

      Not to forget the problem of graph coloring with the same threshold …

  5. anon permalink
    October 19, 2010 7:28 pm

    I’m slightly confused here: is the running time measured in the size of the group, or the logarithm of the group order (as is the usual convention in, e.g., cryptography)? It seems that here you allow the former.

    • rjlipton permalink*
      October 21, 2010 12:37 pm


      We allow tables, so it’s polynomial in the size of the group, not in the log. It’s the best we can do.

  7. October 19, 2010 8:27 pm

    miracle number is approximately and is called the fine structure constant in physics

  8. October 19, 2010 10:33 pm

    \alpha, the fine structure constant, that you wrote down is not 137 but is (approximately) 1/137. The fact that it is much less than one is one of the reasons the perturbation series that Feynman diagrams represent is such a good approximation.

    • rjlipton permalink*
      October 20, 2010 7:03 am


      Thanks. Of course 1/137.

  9. October 20, 2010 1:54 am

    > 1,729 is well known, in regard to its being the smallest number that can be written as the sum of two integer cubes in two different ways.

    Minor correction:

    1729 is the smallest natural number that can be written as sum of two positive integer cubes; otherwise
    the answer would be 91.

    91 = 6^3 + (−5)^3 = 4^3 + 3^3

  10. Anonymous permalink
    October 20, 2010 11:23 am

    Number 4 and 20.

    Following is the abstract of a paper that received a lot of attention (I believe it was published in Nature, but I am not sure).

    Replication of DNA and synthesis of proteins are studied from the view-point of quantum database search. Identification of a base-pairing with a quantum query gives a natural (and first ever) explanation of why living organisms have 4 nucleotide bases and 20 amino acids. It is amazing that these numbers arise as solutions to an optimisation problem. Components of the DNA structure which implement Grover’s algorithm are identified, and a physical scenario is presented for the execution of the quantum algorithm. It is proposed that enzymes play a crucial role in maintaining quantum coherence of the process. Experimental tests that can verify this scenario are pointed out.

    • October 21, 2010 3:22 am

      If this is not kookery, this is indeed amazing. Could someone knowledgeable in quantum computing check this paper and give his opinion?
      (or dismiss it out of hand :-) )

      • Mitchell Porter permalink
        October 21, 2010 7:07 am

        His second hypothesis in Section VI – “Enzymes are able to create superposed states of chemically distinct molecules” – is too much to believe. Normally, a qudit (generalization of a qubit, with d basis states) consists of a materially isolated system with d energy levels. But Patel’s “basis states” are supposed to be the DNA bases (AGCT), which contain different numbers of atoms! So, one has to suppose that the actual atoms present are a superset (set-theoretic union) of all those required for each individual base type, and the A state (for example) is not just “adenine”, but “adenine plus the unused atoms from the superset”.

        A priori it’s wildly implausible that you could have an enzymatic system capable of coherently interacting with that superset of atoms so as to create the necessary base+leftover states, and I imagine that looking at the known details of DNA and tRNA interactions in the nucleus would make it concretely implausible too. Patel’s idea is that nucleotides get paired up appropriately by a quantum search through AGCT superpositions, but I would think thermal interaction is sufficient and also the only available option in any case, and that he just got misled by some numerology to do with Grover’s algorithm.

      • Anonymous permalink
        October 21, 2010 1:33 pm

        I am not sure about the state of acceptance of this work, but I had heard from some of my physicist friends that the author is quite respected.

  11. SteveS permalink
    October 20, 2010 5:57 pm

    I think my favorite is the point (x, y) = (294, 5040) on the elliptic curve y2 = x3-36*x; or, put another way, the fact that the sum 12+22+…+242 = 702; or, put another way, the fact that the vector (1, 2, 3, …, 24, 70) is null in R24,1, the coincidence that (arguably) makes the Leech Lattice possible. (For more, of course, consult the excellent Sphere Packings, Lattices, and Groups from Conway and Sloane)

    • SteveS permalink
      October 20, 2010 5:58 pm

      Bah – those are supposed to be exponents, of course; y^2 = x^3-36*x, and 1^2+2^2+…+24^2 = 70^2.

  12. October 20, 2010 5:59 pm

    How accurately is the fine-structure constant known? Given that there are clever algorithms that take real numbers and tell you if they can be very closely approximated by simple expressions involving other numbers, I would imagine that someone has plugged in the fine-structure constant, just in case it turns out to be unexpectedly close to e\pi/1170 or something like that.

    • October 20, 2010 11:11 pm

      I tried entering “0.00729735253” into No luck.

    • rjlipton permalink*
      October 21, 2010 12:44 pm


      I think they must have, but I will ask around. Great idea.

    • October 22, 2010 6:33 am

      In practice, the fine-structure constant is determined by experiments that measure a dimensionless ratio called “g-2”, which is (essentially) the ratio of the magnetic moment of a physical electron to the magnetic moment that it would have if it were not “dressed” by interactions with other quantum fields.

      For electrons, the value of g-2 is dominated by interactions with the electromagnetic field, but at the level of precision attained in present experiments, interactions with other fields (quarks and gluons) have to be taken into account as finite corrections. The calculation of these corrections is hugely complicated, for the physical reason that the quantum fields associated to an electron, at very short length scales, are a dynamical mix of a dozen or so interacting fields.

      The point is that even if some elegant underlying theory predicted a “bare” fine structure constant of precisely 1/137, in practice it is not feasible to turn off all the other interactions that substantially modify the observed value of g-2.

      High-precision g-2 experiments are wonderfully sophisticated, and involve months-long quantum measurements on isolated single electrons that are held in a vacuum chamber. Here in Seattle, I have often visited the laboratory of Bob van Dyck and Hans Dehmelt, where these single-electron experiments were pioneered, for the practical reason that our quantum spin microscopes (dynamically speaking) are a practical embodiment, for biomedical imaging purposes, of their high-precision g-2 experiments. In essence, their noise is our signal, and vice versa.

      For these reasons, it is unlikely that a simple look-up of numerical values of g-2 will provide much fundamental insight … unless it is accompanied by some concrete mathematical ideas … although it’s worth trying nonetheless … mostly because it’s fun.

    • Greg permalink
      October 23, 2010 12:11 pm

      Another formula to calculate the fine-structure constant is:

      e^2 \over (4\pi\epsilon_0){\hbar}c

      which already incorporates both \pi and e .

      A more formidable obstacle to calculating the fine-structure constant as a mathematical expression is that the value of the FS constant might not in fact be constant, but could be changing (very slowly) over time.

  13. October 20, 2010 6:28 pm

    I’m sure you must have heard this terrible math joke, but just in case someone hasn’t. (I first heard this joke in a talk by Ron Graham. Credit where credit is due.)

    Did you know that all numbers are interesting? What’s that? You don’t believe me? Well I have a proof. Suppose not every number is interesting. Then let n be the smallest uninteresting number. That’s a rather interesting property isn’t it?

    • Sniffnoy permalink
      October 21, 2010 1:59 am

      Apply this to ordinals naively and you get a form of the Berry Paradox!

    • Cmdr. Awesome permalink
      December 1, 2010 9:35 am

      The sad thing is I immediately started to ask myself “Hmmm…would proof by contradiction or induction work better for this?”

  14. Tony Sterrett permalink
    October 21, 2010 2:18 pm

    Please don’t forget to include 108. The Integer is manifested in the Universe in several ways both Sacred and profane. According to experts (Wikipedia of course):

    It is the hyper-factorial of 3.

    108 is a refactorable number.

    The interior angles of a regular pentagon measure 108 degrees each.

    There are 108 free polyominoes of order 7.

    In base 10, it is a Harshad number and a self number.

    108 is the sum of 9 adjacent numbers: 8 + 9 + 10 + 11 + 12 + 13 + 14 + 15 + 16 = 108

    It appears in expressions involving the golden ratio.

    In ancient India, Vedic seers had calculated the following distances which modern scientific measurements have reconfirmed:

    * The distance between the Earth and Moon is 108 times the diameter of the Moon (true)

    * The distance between the Earth and Sun is 108 times the diameter of the Sun (true)

    * The diameter of the Sun is 108 times the diameter of the Earth (close – the number is closer to 109.2 )

    Is holy in a number of ways to both Buddhist and Hindus.

    profane (Latin: “in front of”, “outside the temple”)
    Sacred (inside the temple?)


  15. dmfdmf permalink
    October 22, 2010 2:29 am

    I was once thinking about Einstein’s thought experiment of the elevator accelerating in deep space to defend his equivalence principle. While Einstein was making a different point regarding mass, it occurred to me that the only way to experimentally determine if your elevator was on the surface of the earth (1g field) or accelerating in deep space at 1g was to wait. If you are accelerating at 1g then eventually you would reach the speed of light… so before that happens your acceleration would have to slow, which would be detectable. So I did the math and calculated how long it takes to reach the speed of light if you could accelerate at a constant 1g. It turns out to be about 96.94% of a year. (Here is the calculation at Wolfram Alpha.) Anyway, this seemed like a rather astonishing number to me. Maybe there is nothing to this and it’s just a fluke, but I always hoped someone would find a reason behind this ratio that is *almost* 1 year.

    • Ørjan Johansen permalink
      October 22, 2010 8:48 am

      It’s not that simple, because you need to adjust for the fact that velocities add differently in relativity. Using special relativity, when you accelerate at constant a for a time T both measured in your (accelerating) frame of reference, you end up with a velocity c tanh (aT/c) relative to your starting speed. This approaches c in the limit as T goes to infinity. See

      • dmfdmf permalink
        October 22, 2010 11:17 am

        Thanks Orjan. I am aware of the way that velocities are added in Relativity, and using that theory the answer is infinite time to reach speed c. However, I thought it was clear from the context that I was intentionally taking a naive (or pre-Relativity) approach, since the elevator experiment was used to justify Relativity, so I wanted to avoid any question begging. This is probably not the blog to get into a discussion about physics, but I just wanted to share what was a “miracle number” to me that I came across many years ago.

  16. October 22, 2010 7:25 am

    Is Feynman one of the greatest physicists? He certainly was an interesting one.

    An interesting number should leap out from the laws of physics naturally. So, ….

  17. El Gooog permalink
    October 25, 2010 11:14 pm

    e^{\pi \sqrt{163}} is amazing.

    “I guess being the lone even prime number makes you special.” – I’ve always felt uneasy about statements about 2 being the only even prime number. Let’s define threeven to mean divisible by 3. Then being the lone threeven prime number also makes you special!

  18. Mark A Thomas permalink
    March 31, 2011 6:17 pm

    It may be a bit late to comment on this, but here is something that is hard to deny is interesting. It requires two miracle numbers, e^{pi sqrt163} and the famous 70^2 = 1^2+2^2+3^2…+22^2+23^2+24^2, which from a Lorentzian 26d space leads to the null vector construction of the 24d Leech lattice (or something like that). This involves square forms: e^(2pi sqrt163)70^2 = 337736875876935471466319632506024463200.00000080231935662524957710441240659… which is a very large ‘near integer’. This is a pure mathematical form which is very nearly equal to a physics form, e^(2pi sqrt163)70^2 ~ hc/piGm^2 where m = neutron mass and using Codata 2006, hc/piGm^2 = 3.37700(34)… x 10^38 (which is dimensionless). Wow! This is exceptionally close. Furthermore, I did say square forms are involved, as it can also be expressed as e^(2pi sqrt163)70^2 ~ 2Mpl^2/m^2 where Mpl = Planck mass and m = neutron mass. There seem to be some things going for this which make it less likely to be a coincidence. The square forms make it less arbitrary. The precision to Codata 2006 is razor sharp. Also, the target space of the calculation is a very large number as opposed to a small number, so why 3.3377… x 10^38, which is so close to the physics form value? Then finally, it has computational power, as its inverse can be utilised as the dimensionless weak gravitational coupling constant. You can reference this at OEIS A161771, A160515, A162916.


