# Miracle Numbers Such as “56”

*A polynomial-size constant-depth circuit for testing if a group is solvable*

Richard Feynman was one of the greatest physicists of all time, and was probably one of the most interesting. He shared the 1965 Nobel Prize in Physics with Julian Schwinger and Sin-Itiro Tomonaga, for their work on QED—quantum electrodynamics. Feynman used his now famous Feynman diagrams to understand how electrons and photons interacted, instead of using complex calculations. He is of course known to the general public for his tremendously popular series of books and textbooks—such as Surely You’re Joking, Mr. Feynman! He could make hard concepts accessible in a way that no one else could, and he played the bongos.


Today I want to talk about some strange numbers that arise in mathematics and theory. These numbers are usually connected to results that seem like miracles. I will also present a new polynomial time algorithm whose exponent is very large—another strange result.

Such numbers occur in physics too. One such miracle number is approximately $1/137$ and is called the fine structure constant in physics. In the right units the constant is

$$\alpha = \frac{e^2}{\hbar c},$$

where $e$ is the elementary charge, $\hbar$ is the reduced Planck’s constant, and $c$ is the speed of light in a vacuum. Feynman spoke about it this way:

Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly!

Let’s look at some miracle numbers.

**Some Numerology**

I will call a number a miracle number if there is some result attached to it that is surprising, unexpected, or just strange.

**The number two.** Okay this is not exactly the fine structure constant. But there are many examples in mathematics where $2$ is special. I guess being the lone even prime number makes you special. A simple example of this that plays an important role in complexity theory is: For any square matrix $A$ over a field of characteristic two,

$$\det(A) = \operatorname{perm}(A).$$

This relies on the trivial but important fact that in such a field

$$-1 = +1.$$
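This identity is easy to confirm by brute force. A minimal Python sketch (my own illustration, not from the post) computes the determinant and the permanent via their Leibniz expansions and compares them modulo 2:

```python
from itertools import permutations

def parity(perm):
    # Sign of a permutation: +1 for even, -1 for odd (computed by swaps).
    sign, perm = 1, list(perm)
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def det_and_perm_mod2(A):
    # Leibniz expansions: det sums sign(p) * product, perm omits the sign.
    n = len(A)
    det = perm = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        det += parity(p) * prod
        perm += prod
    return det % 2, perm % 2

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
d, p = det_and_perm_mod2(A)
assert d == p  # in characteristic two the sign never matters
```

Since $-1 = +1$ in characteristic two, the sign in front of each term of the determinant vanishes and the two expansions coincide term by term.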

**The number one hundred and sixty three.** This number has many miracles associated with it. There is the famous polynomial, due to Leonhard Euler,

$$n^2 + n + 41,$$

which is prime for $n = 0, 1, \dots, 39$. Note, any polynomial with a constant coefficient $c$ is periodic modulo $c$. So having one be prime for $40$ values with a constant coefficient of $41$ is pretty impressive. There is a deep reason for this polynomial’s behavior, which is related to a notion of Kurt Heegner. The key is that

$$163 = 4 \cdot 41 - 1$$

is a Heegner Number—see this for more details.
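The streak of primes is short enough to check directly; a quick Python verification of my own:

```python
def is_prime(m):
    # Trial division; plenty fast for numbers this small.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Prime for n = 0, 1, ..., 39 ...
assert all(is_prime(n * n + n + 41) for n in range(40))

# ... but the streak must stop: p(40) = 40^2 + 40 + 41 = 1681 = 41^2,
# an instance of the periodicity of p(n) modulo its constant coefficient 41.
assert not is_prime(40 * 40 + 40 + 41)
```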

The number $163$ also was used by the renowned Martin Gardner who claimed that

$$e^{\pi\sqrt{163}}$$

was an integer in an April Fool’s column. It is not. It is equal to

$$262{,}537{,}412{,}640{,}768{,}743.99999999999925\ldots$$

Pretty close, but not an integer.

**The number one thousand seven hundred and twenty nine.** The story of Godfrey Hardy and Srinivasa Ramanujan and the taxicab number 1,729 is well known, in regard to its being the smallest number that can be written as the sum of two integer cubes in two different ways ($1729 = 1^3 + 12^3 = 9^3 + 10^3$). But this is not its only notable property.

In the digits of $\pi$, a sequence of ten consecutive places that have all the digits $0$–$9$ occurs fairly quickly, at the end of

$$3.14159\ldots4592307816,$$

that is, by the 69th decimal place.

If you look for the first such sequence in the digits of $e$, however, you will not find one until the 1,729th decimal place.
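With the standard decimal digits of $\pi$ hard-coded, the claim is a two-line search; a small Python check of my own:

```python
# First 100 decimal digits of pi (a standard constant, hard-coded here).
PI_DIGITS = ("1415926535897932384626433832795028841971"
             "6939937510582097494459230781640628620899"
             "86280348253421170679")

def first_pandigital_window(digits):
    # 1-indexed decimal place where the first run of ten consecutive
    # places containing all ten digits begins, or None if absent.
    for start in range(len(digits) - 9):
        if len(set(digits[start:start + 10])) == 10:
            return start + 1
    return None

start = first_pandigital_window(PI_DIGITS)
print(start, PI_DIGITS[start - 1:start + 9])   # -> 60 4592307816
```

The window occupies decimal places 60 through 69 of $\pi$.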

The number 1,729 is also remarkable as the third Carmichael number, after 561 and 1,105. These and other remarkable facts about 1,729—including one also involving Feynman—can be found here.
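Both properties are mechanical to check. A Python sketch of my own, using Korselt’s criterion to recognize Carmichael numbers:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def squarefree_prime_factors(n):
    # Prime factors of n, or None if n is not squarefree.
    factors, m, p = [], n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return None
            factors.append(p)
        else:
            p += 1
    if m > 1:
        factors.append(m)
    return factors

def is_carmichael(n):
    # Korselt's criterion: n is Carmichael iff n is composite, squarefree,
    # has at least three prime factors, and p - 1 divides n - 1 for each.
    factors = squarefree_prime_factors(n)
    if is_prime(n) or factors is None or len(factors) < 3:
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

def cube_pairs(n):
    # Ways to write n = a^3 + b^3 with 0 < a <= b.
    top = round(n ** (1 / 3)) + 1
    return [(a, b) for a in range(1, top) for b in range(a, top)
            if a ** 3 + b ** 3 == n]

assert cube_pairs(1729) == [(1, 12), (9, 10)]
assert all(len(cube_pairs(k)) < 2 for k in range(1, 1729))      # smallest such
assert [n for n in range(2, 2000) if is_carmichael(n)] == [561, 1105, 1729]
```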

One might rebut the idea of “miracle numbers” by asserting that by looking hard enough one can find remarkable facts about any number. On the other hand, from the probabilistic viewpoint one might expect the distribution of short, easy-to-state facts to be “lumpy,” favoring some numbers while shunning others. If the number of facts worth noting is about $N$, where $N$ is the largest integer under consideration, then this is like a form of the “birthday paradox.” Of course, the distribution of facts about integers is not really random—so all we can do is appreciate those “miracles” that we observe. One more such observation is what prompted this post:

**The number fifty six.** This number arises in a relatively recent result on group theory. The following wonderful result of John Wilson was proved in 2005:

**Theorem:** Let $G$ be a finite group. Then $G$ is soluble if and only if it satisfies the following: that no non-trivial element $g$ is a product of $56$ commutators with entries conjugate to $g$.

His proof uses John Thompson’s classification of the minimal simple groups. This is a deep result, but it is less than the full classification theorem for simple groups. See the paper of Ari Koponen and Kerkko Luosto for related results on other properties of groups.

**A Result**

John Wilson’s theorem allows us to prove the following theorem:

**Theorem:** Given a finite group, there is a uniform $\mathsf{AC}^0$ algorithm that determines whether the group is solvable or not.

Recall the notion of *solvable*—some like Wilson say soluble. Consider a group $G$. The commutator subgroup of $G$ is the subgroup generated by all the commutators of the group: elements of the form

$$[x,y] = x^{-1}y^{-1}xy.$$

$G$ is solvable if the action of taking successive commutator subgroups results in the trivial subgroup in a finite number of steps.

Checking a group to see if it is solvable is clearly computable in polynomial time. Form all the commutators and build up the subgroup they generate. If this is the same subgroup we started with, reject; if it is the trivial group, accept. Otherwise, continue with the new subgroup. This easily is seen to run in polynomial time, but it is not obvious how to do it in small space, let alone in $\mathsf{AC}^0$. But, as we will see, that is possible.
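A minimal Python sketch of this procedure (my own illustration, with elements represented as permutation tuples rather than by a Cayley table):

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations are tuples on {0, ..., n-1}.
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens, identity):
    # Close a set under composition; inverses come free in a finite group.
    S = {identity} | set(gens)
    changed = True
    while changed:
        changed = False
        for a in list(S):
            for b in list(S):
                c = compose(a, b)
                if c not in S:
                    S.add(c)
                    changed = True
    return S

def is_solvable(elements):
    # Take successive commutator subgroups until trivial (solvable)
    # or until the derived series stalls (not solvable).
    identity = tuple(range(len(next(iter(elements)))))
    current = set(elements)
    while True:
        comms = {compose(compose(compose(inverse(x), inverse(y)), x), y)
                 for x in current for y in current}
        derived = closure(comms, identity)
        if derived == {identity}:
            return True
        if derived == current:
            return False
        current = derived

def parity(p):
    return sum(p[i] > p[j] for i in range(len(p))
               for j in range(i + 1, len(p))) % 2

S3 = set(permutations(range(3)))
A5 = {p for p in permutations(range(5)) if parity(p) == 0}
assert is_solvable(S3)          # S_3 is solvable
assert not is_solvable(A5)      # A_5 is the smallest non-solvable group
```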

**The First-Order Connection**

The title of Wilson’s paper references the consequence that solvable groups have a finite definition in first-order logic. A group can be represented as a *relational structure* $(G, R)$ where $R$ is the graph of the group operation: $R(a,b,c)$ means $ab = c$. The structure defines a group if and only if it *satisfies* the following axioms:

1. $\forall a\,\forall b\,\exists! c\; R(a,b,c)$ (multiplication is a function and the group is closed under it).
2. $\exists u\,\forall a\; (R(a,u,a) \wedge R(u,a,a))$ (identity).
3. $\forall a,b,c,d,e,f\;\big((R(a,b,d) \wedge R(d,c,e) \wedge R(b,c,f)) \rightarrow R(a,f,e)\big)$ (associativity).
4. $\forall a\,\exists b\;(R(a,b,u) \wedge R(b,a,u))$ (inverses).

One can use axiom 2. to quantify the reference to the identity $u$ in axiom 4., or even combine all four axioms into one statement. What Wilson did is find one more axiom that is satisfied by, and only by, those groups that are solvable.

There is a well-known general theorem, by David Mix Barrington, Neil Immerman, and Howard Straubing building on work by Miklos Ajtai and others, to the effect that a language of finite relational structures is first-order definable if and only if it is in uniform $\mathsf{AC}^0$. This would prove our result about solvable groups. However, the theorem itself does not immediately answer some important questions:

- (a) How must we represent the languages of relational structures, in order to use the theorem?
- (b) How big are the circuits that we get—what degree polynomial bounds their size?

Regarding (a), note how cumbersome our associativity axiom is. We would rather use a functional symbol for the group operation and write the axiom simply as $(ab)c = a(bc)$. For logical *meaning* this makes no difference, but for complexity and the mechanics of proofs the representation is surprisingly important.

**How To Represent A Group**

A way to represent a finite group is as a multiplication table. Such a table is called a Cayley table, after Arthur Cayley. Here is a simple example, the table of addition modulo $3$:

| + | 0 | 1 | 2 |
|---|---|---|---|
| 0 | 0 | 1 | 2 |
| 1 | 1 | 2 | 0 |
| 2 | 2 | 0 | 1 |

A Cayley table can be written as a string of values in row-major order, say separated by commas. Given $a$ and $b$, one must look up the entry in the table corresponding to the pair $(a,b)$.

The main alternative is to present the group as a series of triples

$$(a, b, c)$$

where $ab = c$, basically writing out the relation $R$ above. The key to this encoding is that an $\mathsf{AC}^0$ circuit can (more-)easily *check* whether or not $ab = c$, in polynomial size and constant depth.
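To make the two encodings concrete, here is a small Python sketch of my own that turns a Cayley table into the triples encoding; checking $ab = c$ then becomes a single membership test, the software analogue of the constant-depth comparison:

```python
def cayley_to_triples(mul):
    # mul[a][b] = a*b; the triples are exactly the graph R of the operation.
    return {(a, b, mul[a][b]) for a in mul for b in mul[a]}

# Addition modulo 5 as a Cayley table.
n = 5
mul = {a: {b: (a + b) % n for b in range(n)} for a in range(n)}
R = cayley_to_triples(mul)

# The constant-depth-style check: is a*b = c?  Test (a, b, c) against R.
assert (2, 4, 1) in R       # 2 + 4 = 6 = 1 (mod 5)
assert (2, 4, 0) not in R
```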

I believe that this encoding facilitates the construction of circuits that come from John Wilson’s theorem.

**The Proof**

The proof that $\mathsf{AC}^0$ can determine whether a group is solvable is now clear. We “simply” have to test the assertion in Wilson’s theorem. Hence we build a circuit that fans out over all $g$ in the group, and further over all sequences of 112 elements among those that are conjugate to $g$, and test whether the 56 commutators made from the latter multiply out to $g$. Since the number of conjugates may be linear in the size $n$ of the group, we may be dealing with $n^{112}$ such sequences. Taking in $g$ gives order-of-$n^{113}$ cases to check. The size of the sub-circuit for each case adds 1 or 2 to the overall exponent. Thus the final algorithm is constant depth and is polynomial, but not any polynomial I would like to use.
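The circuit tests this condition by sheer fan-out; in software the same condition can be evaluated by a dynamic program over products of commutators. A Python sketch of my own follows; it exploits the fact that $[a,a]$ is the identity, so shorter products can always be padded up to length 56:

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations are tuples.
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def wilson_witness(elements, m=56):
    # Look for a non-trivial g that is a product of m commutators [a, b]
    # with a, b conjugate to g; by Wilson's theorem such a g exists
    # exactly when the group is not solvable.
    identity = tuple(range(len(next(iter(elements)))))
    for g in elements:
        if g == identity:
            continue
        conj = {compose(compose(inverse(h), g), h) for h in elements}
        comms = {compose(compose(compose(inverse(a), inverse(b)), a), b)
                 for a in conj for b in conj}
        # [a, a] = identity is always in comms, so the reachable sets
        # below only grow, and we may stop once they stabilize.
        reach = {identity}
        for _ in range(m):
            nxt = {compose(p, k) for p in reach for k in comms}
            if nxt == reach:
                break
            reach = nxt
        if g in reach:
            return g
    return None

def parity(p):
    return sum(p[i] > p[j] for i in range(len(p))
               for j in range(i + 1, len(p))) % 2

S3 = set(permutations(range(3)))
A5 = {p for p in permutations(range(5)) if parity(p) == 0}
assert wilson_witness(S3) is None        # solvable: no witness exists
assert wilson_witness(A5) is not None    # non-solvable: witness found
```

The circuit avoids this sequential product by brute-force fan-out over all $n^{112}$ sequences, which is exactly where the huge exponent comes from.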

Wilson notes in his paper that “56” might be reducible after all, and that this would follow if certain combinatorial problems arising in his proof can be solved. Independent of this, we can ask whether the degree of the polynomial size of circuits deciding solvability can be reduced from 113-or-so to a reasonable number, like 3 or 4.

**Open Problems**

What applications of solvability being in $\mathsf{AC}^0$ can we find in complexity theory? Can we reduce the degree of the polynomial bounding the circuit size?

What are some other great numbers with miracle results attached to them?


i think that 4 and 5 have the most impressive miracle facts about them.

the two miracle facts about 5 are (separately) the structure of S5, and that 5 pieces suffice for the banach-tarski paradox.

the miracle fact about 4 is that diophantine equations can cross over into undecidability at degree 4. they are all decidable at degree 2, and nobody knows about degree 3. but for now, 4 is miracle enough simply for being so low. 9 is interesting as well because it’s the least number of variables known to be required (although the degree jumps in this case).

although all of these numbers are small, they’re super special in ways that can’t change (except perhaps for 4 and 9).

s.

Interestingly, a similar phenomenon happens for the classification of manifolds with the exact same numbers. Namely, 2-dimensional manifolds are easily classified. And if the manifolds are given by triangulations, it’s very easy to determine if two are homeomorphic: determine genus, orientability, and number of boundary components. Four-dimensional manifolds have no computable classification; that is, telling whether two triangulations of 4D manifolds are homeomorphic is an uncomputable problem. For 3D manifolds the problem seems difficult but at least possible, though not too much is known.

Number 56 appears in the theory of Lie groups, e.g. the dimension of the fundamental representation of E_7 is 56.

http://en.wikipedia.org/wiki/E7_%28mathematics%29

I wonder whether there is a connection here…

I didn’t know about Wilson’s result and its consequence for deciding if a given group is solvable. But after reading your post, I think I can claim that the degree of the polynomial that bounds the size of the circuit can be made small (something like 5 or 6 and perhaps smaller), at the cost of making the depth a small multiple of 56.

One way to see this is by first realizing an analogy with the problem of telling if a directed graph, given by its adjacency matrix $A$, has a path of length $\ell$. Obviously this problem is in $\mathsf{AC}^0$: for every possible sequence of $\ell+1$ vertices we check if every vertex in the sequence except the first is a successor of the previous one, and take the big disjunction. This gives a depth-2 circuit of size $O(n^{\ell+1})$. The key point is that one can do much better at the cost of increasing the depth from $2$ to $O(\ell)$. Here is how: Let $P_\ell(x,y)$ be the predicate telling whether there is a path of length $\ell$ from $x$ to $y$ in the digraph. The natural recursive way of expressing this is as follows: $P_1(x,y) = A(x,y)$ and $P_{\ell+1}(x,y) = \exists z\,(P_\ell(x,z) \wedge A(z,y))$. When you replace every existential quantifier by a big disjunction over the vertices of the graph and unfold the recursion, what you get is a circuit of depth $O(\ell)$ and size $O(\ell n^3)$. The proof is complete by taking the circuit that takes a big disjunction of the $P_\ell(x,y)$-circuits as $x$ and $y$ range over all pairs of vertices. This has size $O(\ell n^3)$ and depth $O(\ell)$.
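The layered construction is easy to mimic in software. A Python sketch of my reading of the recursion, with one layer per unit of path length:

```python
def path_of_length(A, length):
    # P_1 = A; P_{l+1}(x, y) = OR_z ( P_l(x, z) AND A(z, y) ).
    # Each pass below plays the role of one "layer" of the circuit.
    n = len(A)
    P = A
    for _ in range(length - 1):
        P = [[any(P[x][z] and A[z][y] for z in range(n)) for y in range(n)]
             for x in range(n)]
    # Final big disjunction over all pairs (x, y).
    return any(P[x][y] for x in range(n) for y in range(n))

# A simple path 0 -> 1 -> 2 -> 3.
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
assert path_of_length(A, 3)        # 0 -> 1 -> 2 -> 3
assert not path_of_length(A, 4)    # no walk of length 4 exists
```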

Now back to the solvable groups problem. From Wilson’s Theorem we only need to check if there exists a non-trivial $x$ and a sequence of $56$ pairs $(a_i, b_i)$ such that each $a_i$ and $b_i$ is a conjugate of $x$ and the product of the commutators $[a_i, b_i]$ is $x$. Let’s generalize and define the predicate $Q_m(x, w)$ which is true if there exists a sequence of $m$ pairs $(a_i, b_i)$ such that each $a_i$ and $b_i$ is a conjugate of $x$ and the product $[a_1,b_1]\cdots[a_m,b_m]$ is $w$. We can express this recursively as follows: $Q_1(x,w) = \exists a\,\exists b\,(C(a,x) \wedge C(b,x) \wedge w = [a,b])$ and $Q_{m+1}(x,w) = \exists u\,\exists v\,(Q_m(x,u) \wedge Q_1(x,v) \wedge w = uv)$, where $C(a,x)$ is the predicate that tells if $a$ is a conjugate of $x$. We can write $C(a,x)$ by existentially quantifying a $y$ and its inverse $z$, checking that indeed $yz = 1$, and checking that $a = zxy$. We can also write $w = [a,b]$ by quantifying the inverses of $a$ and $b$ and the intermediate products of $a^{-1}b^{-1}ab$, and then requiring $w$ to be the final product. Finally, when you replace every existential quantifier by a big disjunction over the elements of the group and unfold the recursion, what you get is a circuit of depth $O(m)$ and size $O(m n^4)$ for $Q_m$. Now, the group is non-solvable if there exists a non-trivial $x$ such that $Q_{56}(x,x)$. The resulting circuit has depth a small multiple of $56$ and size $O(n^4)$.

For those familiar with bounded variable logics in finite model theory, all this could be explained in one sentence: “Wilson’s property can be written in first-order logic with 6 re-usable variables”. This is so because it is known that $k$-variable first-order logic translates to constant-depth circuits of size $O(n^k)$ (where $n$ is the cardinality of the universe of the structure).

On second reading, perhaps I should have been more explicit when I say “unfold the recursion”. In the first case, paths-of-length-$\ell$, by unfold the recursion I mean that we build the circuit for each $P_t$ by layers, using the circuits for $P_{t-1}$ from the layer below it. Thus, at layer $t$ we have big disjunctions (indexed by pairs $(x,y)$) over all possible $z$, each fed by a small conjunction that puts $P_{t-1}(x,z)$ from the layer below together with the input bit $A(z,y)$. The size of the first $t$ layers is $S(t) = S(t-1) + O(n^3)$, with $S(1) = O(n^2)$. Overall this gives $O(\ell n^3)$ as claimed (but the previous derivation was not quite right).

In the solvable group case, again we build each $Q_t$ by layers, using the circuits for $Q_{t-1}$ from the layer below it. At layer $t$ we have big disjunctions (indexed by pairs $(x,w)$) over all possible pairs $(u,v)$, each fed by a small conjunction that puts $Q_{t-1}(x,u)$, $Q_1(x,v)$ and the check $w = uv$ together. The recurrence for the size of the first $t$ layers is now something like $S(t) = S(t-1) + O(n^4)$, which gives $O(n^4)$ for the constant $m = 56$.

Just a bit hard to visualize, but nothing non-standard.

Albert Atserias,

I will think about this, but it seems correct. The depth goes up but the size is much smaller.

Neat

Albert, thank you! While looking for a link to a paper by Neil Immerman on the FO #-of-variables connection, I was reminded of a recent STOC 2008 paper by Benjamin Rossman, which Neil had mentioned in his reply to Vinay Deolalikar presented Aug 12 on the blog, which has a lot of references in turn:

Rossman’s paper.

Re-using variables is a curious design pattern from the algorithmic rather than logical point of view—I wonder what general circuit constructions and tradeoffs can be improved by this idea.

Yes, Rossman’s result was motivated by an unsolved problem in finite model theory and he solved it precisely by exploiting this connection between number of variables and circuit size.

> Re-using variables is a curious design pattern from the algorithmic rather than logical point of view—I wonder what general circuit constructions and tradeoffs can be improved by this idea.

I agree that re-using variables sounds algorithmic. But please note that the number of (re-usable) variables in a first-order formula coincides with the maximum number of free variables of the subformulas of the formula when you don’t re-use variables (and use a different one at each quantification point). Some people call this “the width” of the first-order formula. Putting it all together, a first-order formula of width $k$ translates into a bounded-depth circuit of size $O(n^k)$ (where $n$ is the size of the universe of the structure). And yes: this width is tightly related to Robertson and Seymour tree-width, in case you wonder (for more on this see the paper of Dalmau, Kolaitis and Vardi, “Constraint Satisfaction, Bounded Treewidth, and Finite-Variable Logics”).

Last year I gave a lengthy lecture on the number three, my favorite number.

First, as the readers of this blog know, deciding the satisfiability of CNF formulas with 2 variables per clause is complete for NL and can be done quickly – yet, deciding the satisfiability of CNF formulas with 3 variables per clause is complete for NP, and describes the power of theorem proving in existential second order logic! The small step from 2 to 3 is a huge transition. As Neil might have said, “That’s one small step for clause size; one giant leap for computational complexity.”

This isn’t the only place three shows up as a threshold. Random walks in two dimensions are recurrent – in three they are transient! Shizuo Kakutani succinctly puts it: “A drunk man will find his way home, but a drunk bird may get lost forever.” So too does Kepler’s 2-body problem escape a closed-form solution when generalized to the three-body or n-body problem. The Pythagorean identity and the 3,4,5 family of triangles (and of course others) offer solutions to a particular Diophantine equation of the second degree. Generalizing this problem to the third degree and higher provides us the humbling spectre of FLT, which escaped our understanding for so very long.

About one year ago I spoke with Professor Morgan, who was once the head of undergraduate mathematics at MIT, has now found himself surrounded by great faculty at Williams College, and is one of the world’s frontier experts on bubbles. He is known for his work proving the long-standing “Double Bubble Conjecture,” which posits that the lowest-energy configuration of two bubbles making contact is exactly the configuration nature finds itself in (with the smaller of the two bubbles bowing slightly into the larger bubble). I remember asking Professor Morgan about the “Triple Bubble Conjecture,” at which point he threw his hands up in an animated way and voiced, “Ahh! It’s hopeless!”

Three is a threshold. It is a wonderful number. I was surprised to see it skipped in your list.

Not to forget the problem of graph coloring with the same threshold …

I’m slightly confused here: do you measure running time in terms of the group order, or the logarithm of the group order (as is standard, e.g., in cryptography)? It seems that here you allow the former.

Anon,

We allow tables so it’s polynomial in the size of the group, not in the log. It’s the best we can do.

> miracle number is approximately $137$ and is called the fine structure constant in physics

The $\alpha$, the fine structure constant, that you wrote down is not $137$ but is (approximately) $1/137$. The fact that it is much less than one is one of the reasons the perturbation series that Feynman diagrams represent is such a good approximation.

Dave,

Thanks. Of course 1/137.

> 1,729 is well known, in regard to its being the smallest number that can be written as the sum of two integer cubes in two different ways.

Minor correction:

1729 is the smallest natural number that can be written as a sum of two *positive* integer cubes; otherwise the answer would be 91.

91 = 6^3 + (−5)^3 = 4^3 + 3^3

Numbers 4 and 20.

Following is an abstract from this paper “http://arxiv.org/abs/quant-ph/0002037” that had received a lot of attention (I believe it was published in nature, but I am not sure).

Replication of DNA and synthesis of proteins are studied from the view-point of quantum database search. Identification of a base-pairing with a quantum query gives a natural (and first ever) explanation of why living organisms have 4 nucleotide bases and 20 amino acids. It is amazing that these numbers arise as solutions to an optimisation problem. Components of the DNA structure which implement Grover’s algorithm are identified, and a physical scenario is presented for the execution of the quantum algorithm. It is proposed that enzymes play a crucial role in maintaining quantum coherence of the process. Experimental tests that can verify this scenario are pointed out.

If this is not kookery, this is indeed amazing. Could someone knowledgeable in quantum computing check this paper and give his opinion?

(or dismiss it out of hand )

His second hypothesis in Section VI – “Enzymes are able to create superposed states of chemically distinct molecules” – is too much to believe. Normally, a qudit (generalization of a qubit, with d basis states) consists of a materially isolated system with d energy levels. But Patel’s “basis states” are supposed to be the DNA bases (AGCT), which contain different numbers of atoms! So, one has to suppose that the actual atoms present are a superset (set-theoretic union) of all those required for each individual base type, and the A state (for example) is not just “adenine”, but “adenine plus the unused atoms from the superset”.

A priori it’s wildly implausible that you could have an enzymatic system capable of coherently interacting with that superset of atoms so as to create the necessary base+leftover states, and I imagine that looking at the known details of DNA and tRNA interactions in the nucleus would make it concretely implausible too. Patel’s idea is that nucleotides get paired up appropriately by a quantum search through AGCT superpositions, but I would think thermal interaction is sufficient and also the only available option in any case, and that he just got misled by some numerology to do with Grover’s algorithm.

I am not sure about the state of acceptance of this work, but I had heard from some of my physicist friends that the author is quite respected.

I think my favorite is the point $(x, y) = (294, 5040)$ on the elliptic curve $y^2 = x^3 - 36x$; or, put another way, the fact that the sum $1^2 + 2^2 + \cdots + 24^2 = 70^2$; or, put another way, the fact that the vector $(1, 2, 3, \dots, 24, 70)$ is null in $\mathbf{R}^{24,1}$, the coincidence that (arguably) makes the Leech Lattice possible. (For more, of course, consult the excellent *Sphere Packings, Lattices, and Groups* from Conway and Sloane.)

How accurately is the fine-structure constant known? Given that there are clever algorithms that take real numbers and tell you if they can be very closely approximated by simple expressions involving other numbers, I would imagine that someone has plugged in the fine-structure constant, just in case it turns out to be unexpectedly close to some simple closed form or something like that.

I tried entering “0.00729735253” into http://ddrive.cs.dal.ca/~isc/standard.html. No luck.

gowers,

I think they must have, but I will ask around. Great idea.

In practice, the fine-structure constant is determined by experiments that measure a dimensionless ratio called “g-2”, which is (essentially) the ratio of the magnetic moment of a physical electron to the magnetic moment that it would have if it were not “dressed” by interactions with other quantum fields.

For electrons, the value of g-2 is dominated by interactions with the electromagnetic field, but at the level of precision attained in present experiments, interactions with other fields (quarks and gluons) have to be taken into account as finite corrections. The calculation of these corrections is hugely complicated, for the physical reason that the quantum fields associated to an electron, at very short length scales, are a dynamical mix of a dozen or so interacting fields.

The point is that even if some elegant underlying theory predicted a “bare” fine structure constant of precisely 1/137, in practice it is not feasible to turn off all the other interactions that substantially modify the observed value of g-2.

High-precision g-2 experiments are wonderfully sophisticated, and involve months-long quantum measurements on isolated single electrons that are held in a vacuum chamber. Here in Seattle, I have often visited the laboratory of Bob van Dyck and Hans Dehmelt, where these single-electron experiments were pioneered, for the practical reason that our quantum spin microscopes are (dynamically speaking) a practical embodiment, for biomedical imaging purposes, of their high-precision g-2 experiments. In essence, their noise is our signal, and *vice versa*.

For these reasons, it is unlikely that a simple look-up of numerical values of g-2 will provide much fundamental insight … unless it is accompanied by some concrete mathematical ideas … although it’s worth trying nonetheless … mostly because it’s fun.

Another formula to calculate the fine-structure constant is:

which already incorporates both $\pi$ and $e$.

A more formidable obstacle toward calculating the fine-structure constant as a mathematical expression is that the value of the FS constant might not in fact be constant, but could be changing (very slowly) over time.

I’m sure you must have heard this terrible math joke, but just in case someone hasn’t. (I first heard this joke in a talk by Ron Graham. Credit where credit is due.)

Did you know that all numbers are interesting? What’s that? You don’t believe me? Well I have a proof. Suppose not every number is interesting. Then let n be the smallest uninteresting number. That’s a rather interesting property isn’t it?

Apply this to ordinals naively and you get a form of the Berry Paradox!

The sad thing is I immediately started to ask myself “Hmmm…would proof by contradiction or induction work better for this?”

Please don’t forget to include 108. The Integer is manifested in the Universe in several ways both Sacred and profane. According to experts (Wikipedia of course):

It is the hyper-factorial of 3.

108 is a refactorable number.

The interior angles of a regular pentagon measure 108 degrees each.

There are 108 free polyominoes of order 7.

In base 10, it is a Harshad number and a self number.

108 is the sum of 9 adjacent numbers: 8 + 9 + 10 + 11 + 12 + 13 + 14 + 15 + 16 = 108

It is a component in formulas for the golden ratio (via its connection with the regular pentagon).
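Several of the arithmetic claims in this list are one-liners to verify; a quick Python check of my own:

```python
# Hyperfactorial of 3: 1^1 * 2^2 * 3^3.
assert 1 ** 1 * 2 ** 2 * 3 ** 3 == 108

# Interior angle of a regular pentagon: (5 - 2) * 180 / 5 degrees.
assert (5 - 2) * 180 / 5 == 108.0

# Sum of the nine adjacent numbers 8 through 16.
assert sum(range(8, 17)) == 108

# Harshad in base 10: divisible by its digit sum.
assert 108 % (1 + 0 + 8) == 0

# Refactorable: divisible by its own number of divisors.
num_divisors = sum(1 for d in range(1, 109) if 108 % d == 0)
assert num_divisors == 12 and 108 % num_divisors == 0
```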

In ancient India, Vedic seers had calculated the following distances which modern scientific measurements have reconfirmed:

* The distance between the Earth and Moon is 108 times the diameter of the Moon (true)

* The distance between the Earth and Sun is 108 times the diameter of the Sun (true)

* The diameter of the Sun is 108 times the diameter of the Earth (close – the number is closer to 109.2 )

Is holy in a number of ways to both Buddhist and Hindus.

profane (Latin: “in front of”, “outside the temple”)

Sacred (inside the temple ?)

-tony

I was once thinking about Einstein’s thought experiment of the elevator accelerating in deep space to defend his equivalence principle. While Einstein was making a different point regarding mass, it occurred to me that the only way to experimentally determine if your elevator was on the surface of the earth (1g field) or accelerating in deep space at 1g was to wait. If you are accelerating at 1g then eventually you would reach the speed of light… so before that happens your acceleration would have to slow, which would be detectable. So I did the math and calculated how long it takes to reach the speed of light if you could accelerate at a constant 1g. It turns out to be about 96.94% of a year. (Here is the calculation at Wolfram Alpha http://tinyurl.com/33tkrfo) Anyway, this seemed like a rather astonishing number to me. Maybe there is nothing to this and it’s just a fluke, but I always hoped someone would find a reason behind this ratio that is *almost* 1 year.
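The arithmetic is simple to reproduce in Python; note that the exact percentage depends on which values one takes for g and for the length of a year:

```python
c = 299_792_458.0            # speed of light, m/s
g = 9.80665                  # standard gravity, m/s^2
year = 365.25 * 24 * 3600    # Julian year, s

t = c / g                    # naive (pre-relativistic) time to reach c, s
print(t / year)              # roughly 0.97 of a year
```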

It’s not that simple, because you need to adjust for the fact that velocities add differently in relativity. Using special relativity, when you accelerate at constant a for a time T both measured in your (accelerating) frame of reference, you end up with a velocity c tanh (aT/c) relative to your starting speed. This approaches c in the limit as T goes to infinity. See http://www.phys.ncku.edu.tw/mirrors/physicsfaq/Relativity/SR/acceleration.html

Thanks Orjan. I am aware of the way that velocities are added in Relativity, and using that theory the answer is infinite time to reach speed c. However, I thought it was clear from the context that I was intentionally taking a naive (or pre-Relativity) approach, since the elevator experiment was used to justify Relativity, so I wanted to avoid any question begging. This is probably not the blog to get into a discussion about physics, but I just wanted to share what was a “miracle number” to me that I came across many years ago.

Is Feynman one of the greatest physicists? He was certainly a most interesting one.

An interesting number should leap out from the laws of physics naturally. So, …

e^{\pi \sqrt{163}} is amazing.

“I guess being the lone even prime number makes you special.” – I’ve always felt uneasy about statements about 2 being the only even prime number. Let’s define threeven to mean divisible by 3. Then being the lone threeven prime number also makes you special!

It may be a bit late to comment on this, but here is something that it is hard to deny is interesting. It requires two miracle numbers: e^{pi sqrt163} and the famous 70^2 = 1^2+2^2+3^2+…+22^2+23^2+24^2, which from a Lorentzian 26d space leads to the null vector construction of the 24d Leech lattice (or something like that). This involves square forms: e^(2pi sqrt163)·70^2 = 337736875876935471466319632506024463200.00000080231935662524957710441240659… which is a very large ‘near integer’. This is a pure mathematical form which is very nearly equal to a physics form, e^(2pi sqrt163)·70^2 ~ hc/piGm^2 where m = neutron mass; using Codata 2006, hc/piGm^2 = 3.37700(34)… x 10^38 (which is dimensionless). Wow! This is exceptionally close. Furthermore, I did say square forms are involved, as it can also be expressed as e^(2pi sqrt163)·70^2 ~ 2Mpl^2/m^2 where Mpl = Planck mass and m = neutron mass. There seem to be some things going for this which make it less likely to be a coincidence. The square forms make it less arbitrary. The precision to Codata 2006 is razor sharp. Also, the target space of the calculation is a very large number as opposed to a small number, so why 3.3377… x 10^38, which is so close to the physics form value? Then finally, it has computational power, as its inverse can be utilised as the dimensionless weak gravitational coupling constant. You can reference this at OEIS A161771, A160515, A162916.