Miracle Numbers Such as “56”
A polynomial size constant depth circuit for testing if a group is solvable
Richard Feynman was one of the greatest physicists of all time, and was probably one of the most interesting. He shared the 1965 Nobel Prize in Physics with Julian Schwinger and Sin-Itiro Tomonaga, for their work on QED—quantum electrodynamics. Feynman used his now famous Feynman diagrams to understand how electrons and photons interacted, instead of using complex calculations. He is of course known to the general public for his tremendously popular series of books and textbooks—such as Surely You’re Joking, Mr. Feynman! He could make hard concepts accessible in a way that no one else could, and he played the bongos.
Below is one of his diagrams.
Today I want to talk about some strange numbers that arise in mathematics and theory. These numbers are usually connected to results that seem like miracles. I will also present a new polynomial time algorithm whose exponent is very large—another strange result.
Such numbers occur in physics too. One such miracle number is approximately $1/137$ and is called the fine structure constant in physics. In the right units the constant is

$\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137},$

where $e$ is the elementary charge, $\hbar$ is the reduced Planck’s constant, and $c$ is the speed of light in a vacuum. Feynman spoke about it this way:
Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly!
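As a sanity check, the number can be computed directly from standard SI constants; this snippet is just arithmetic, with the constant values taken from the 2019 SI definitions and CODATA:

```python
# Numerical check of the fine structure constant in SI units:
# alpha = e^2 / (4*pi*eps0*hbar*c).
import math

e    = 1.602176634e-19    # elementary charge (C, exact in SI)
hbar = 1.054571817e-34    # reduced Planck constant (J*s)
c    = 299792458          # speed of light (m/s, exact)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m, measured)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)   # ~137.036 -- the "magic number" Feynman describes
```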
Let’s look at some miracle numbers.
I will call a number a miracle number if there is some result attached to it that is surprising, unexpected, or just strange.
The number two. Okay this is not exactly the fine structure constant. But there are many examples in mathematics where $2$ is special. I guess being the lone even prime number makes you special. A simple example of this that plays an important role in complexity theory is: for any square matrix $A$ over a field of characteristic two,

$\det(A) = \mathrm{perm}(A).$

This relies on the trivial but important fact that in such a field

$1 = -1.$
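One way to see this concretely: over a field of characteristic two the signs in the permutation expansion of the determinant vanish, so brute-force computations of both quantities must agree. A small sketch:

```python
# Over GF(2), since 1 = -1, the signs in the determinant's permutation
# expansion disappear, so det(A) = perm(A) for any square matrix A.
from itertools import permutations

def det_perm_mod2(A):
    """Return (det mod 2, perm mod 2) via the permutation expansion."""
    n = len(A)
    det = perm = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        det  = (det  + prod) % 2   # the sign of sigma is +-1 = 1 mod 2
        perm = (perm + prod) % 2
    return det, perm

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(det_perm_mod2(A))  # the two entries agree
```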
The number one hundred and sixty three. This number has many miracles associated with it. There is the famous polynomial, due to Leonhard Euler,

$n^2 + n + 41,$

which is prime for $n = 0, 1, \dots, 39$. Note, any polynomial with a constant coefficient $c$, is periodic modulo $c$. So having one be prime for $40$ values with a constant coefficient of $41$ is pretty impressive. There is a deep reason for this polynomial’s behavior, which is related to a notion of Kurt Heegner. The key is that

$163 = 4 \cdot 41 - 1$

is a Heegner Number—see this for more details.
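The claims about Euler’s polynomial are easy to verify mechanically; here is a quick check, with a deliberately naive primality test:

```python
# Check Euler's polynomial n^2 + n + 41: prime for n = 0, ..., 39,
# and (as for any integer polynomial) periodic mod its constant term 41.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

p = lambda n: n * n + n + 41
assert all(is_prime(p(n)) for n in range(40))   # 40 primes in a row
assert not is_prime(p(40))                      # p(40) = 41^2 = 1681
assert all(p(n + 41) % 41 == p(n) % 41 for n in range(100))
```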
The number $163$ also was used by the renowned Martin Gardner who claimed that

$e^{\pi\sqrt{163}}$

was an integer in an April Fool’s column. It is not. It is equal to

$262{,}537{,}412{,}640{,}768{,}743.99999999999925\dots$
Pretty close, but not an integer.
The number one thousand seven hundred and twenty nine. The story of Godfrey Hardy and Srinivasa Ramanujan and the taxicab number 1,729 is well known, in regard to its being the smallest number that can be written as the sum of two integer cubes in two different ways ($1729 = 1^3 + 12^3 = 9^3 + 10^3$). But this is not its only notable property.
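This is easy to confirm by brute force:

```python
# Brute-force confirmation that 1729 is the smallest number expressible
# as a sum of two positive cubes in two different ways.
from collections import defaultdict

ways = defaultdict(list)
for a in range(1, 20):
    for b in range(a, 20):
        ways[a**3 + b**3].append((a, b))

taxicab = min(n for n, reps in ways.items() if len(reps) >= 2)
print(taxicab, ways[taxicab])  # 1729 [(1, 12), (9, 10)]
```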
In the digits of $\pi$, a sequence of ten consecutive places that have all the digits $0$ through $9$ occurs fairly quickly, within the first 70 decimal places.

If you look for the first such sequence in the digits of $e$, however, you will not find one until the 1,729th decimal place.
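A quick search over the first 80 decimal places of $\pi$ (hardcoded below) locates the first such window; it begins at the 60th decimal place:

```python
# The first 80 decimal places of pi, hardcoded. Find the first window of
# ten consecutive digits that contains all of 0-9.
PI80 = ("14159265358979323846" "26433832795028841971"
        "69399375105820974944" "59230781640628620899")

first = next(i for i in range(len(PI80) - 9)
             if len(set(PI80[i:i + 10])) == 10)
print(first + 1)  # 1-indexed decimal place: 60
```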
The number 1,729 is also remarkable as the third Carmichael number, after 561 and 1,105. These and other remarkable facts about 1,729—including one also involving Feynman—can be found here.
One might rebut the idea of “miracle numbers” by asserting that by looking hard enough one can find remarkable facts about any number. On the other hand, from the probabilistic viewpoint one might expect the distribution of short, easy-to-state facts to be “lumpy,” favoring some numbers while shunning others. If the number of facts worth noting is about $\sqrt{N}$, where $N$ is the largest integer under consideration, then this is like a form of the “birthday paradox.” Of course, the distribution of facts about integers is not really random—so all we can do is appreciate those “miracles” that we observe. One more such observation is what prompted this post:
The number fifty six. This number arises in a relatively recent result in group theory. The following wonderful result of John Wilson was proved in 2005:
Theorem: Let $G$ be a finite group. Then $G$ is soluble if and only if it satisfies the following: that no non-trivial element $g$ is a product of $56$ commutators with entries conjugate to $g$.
His proof uses John Thompson’s classification of the minimal simple groups. This is a deep result, but is less than the full classification theorem for simple groups. See the paper of Ari Koponen and Kerkko Luosto for related results on other properties of groups.
John Wilson’s theorem allows us to prove the following theorem:
Theorem: There is a uniform $\mathsf{AC}^0$ algorithm that, given a finite group, determines whether the group is solvable or not.
Recall the notion of solvable—some like Wilson say soluble. Consider a group $G$. The commutator subgroup $[G,G]$ of $G$ is the subgroup generated by all the commutators of the group: elements of the form

$[x,y] = x^{-1}y^{-1}xy.$

$G$ is solvable if the action of taking successive commutator subgroups results in the trivial subgroup in a finite number of steps.
Checking a group to see if it is solvable is clearly computable in polynomial time. Form all the commutators and build up the subgroup they generate. If this is all of $G$, reject; if it is the trivial group, accept. Otherwise, continue with the new subgroup. This easily is seen to run in polynomial time, but it is not obvious how to do it in small space, let alone in $\mathsf{AC}^0$. But, as we will see, that is possible.
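The naive polynomial-time test is easy to code. The sketch below represents a group by its Cayley table, assumes element 0 is the identity, and repeatedly forms the commutator subgroup until it stabilizes or hits the trivial group; $S_3$ is built as the worked example:

```python
# Derived-series test for solvability. The group is given by its Cayley
# table (element 0 is the identity); S3 is constructed as an example.
from itertools import permutations

def inv(table, x):
    return next(y for y in range(len(table)) if table[x][y] == 0)

def closure(table, gens):
    """Subgroup generated by gens (finite, so closure under * suffices)."""
    sub = {0} | set(gens)
    frontier = list(sub)
    while frontier:
        x = frontier.pop()
        for y in list(sub):
            for z in (table[x][y], table[y][x]):
                if z not in sub:
                    sub.add(z)
                    frontier.append(z)
    return sub

def is_solvable(table):
    current = set(range(len(table)))
    while True:
        comms = {table[table[inv(table, x)][inv(table, y)]][table[x][y]]
                 for x in current for y in current}   # [x,y] = x^-1 y^-1 x y
        derived = closure(table, comms)
        if derived == {0}:
            return True            # series reached the trivial subgroup
        if derived == current:
            return False           # series stabilized at a nontrivial group
        current = derived

# Build S3's Cayley table; the identity permutation sorts first (index 0).
elems = sorted(permutations(range(3)))
index = {p: i for i, p in enumerate(elems)}
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
table = [[index[compose(p, q)] for q in elems] for p in elems]
print(is_solvable(table))  # S3 is solvable
```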
The First-Order Connection
The title of Wilson’s paper references the consequence that solvable groups have a finite definition in first-order logic. A group can be represented as a relational structure $(G, R)$ where $R$ is the graph of the group operation: $R(a,b,c)$ means $a \cdot b = c$. The structure defines a group if and only if it satisfies the following axioms:
1. $\forall a, b \,\exists! c: R(a,b,c)$ (multiplication is a function and the group is closed under it).
2. $\exists e \,\forall a: R(a,e,a) \wedge R(e,a,a)$ (there is an identity element $e$).
3. $\forall a,b,c,x,y,z: (R(a,b,x) \wedge R(b,c,y) \wedge R(x,c,z)) \rightarrow R(a,y,z)$ (multiplication is associative).
4. $\forall a \,\exists b: R(a,b,e)$ (every element has an inverse).
One can use axiom 2. to quantify the reference to $e$ in axiom 4., or even combine all four axioms into one statement. What Wilson did is find one more axiom that is satisfied by, and only by, those groups that are solvable.
There is a well-known general theorem, by David Mix Barrington, Neil Immerman, and Howard Straubing building on work by Miklos Ajtai and others, to the effect that a language of finite relational structures is first-order definable if and only if it is in uniform $\mathsf{AC}^0$. This would prove our result about solvable groups. However, the theorem itself does not immediately answer some important questions:
- (a) How must we represent the languages of relational structures, in order to use the theorem?
- (b) How big are the circuits that we get—what degree polynomial bounds their size?
Regarding (a), note how cumbersome our associativity axiom is. We would rather use a functional symbol for the group operation and write the axiom simply as $(a \cdot b) \cdot c = a \cdot (b \cdot c)$. For logical meaning this makes no difference, but for complexity and the mechanics of proofs the representation is surprisingly important.
How To Represent A Group
A way to represent a finite group is as a multiplication table. Such a table is called a Cayley table, after Arthur Cayley. Here is a simple example, the table of addition modulo $3$:

| $+$ | $0$ | $1$ | $2$ |
|---|---|---|---|
| $0$ | $0$ | $1$ | $2$ |
| $1$ | $1$ | $2$ | $0$ |
| $2$ | $2$ | $0$ | $1$ |
A Cayley table can be written as a string of values in row-major order, say separated by commas. Given $a$ and $b$, one must look up the entry in the table corresponding to the product $a \cdot b$.
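A minimal sketch of this encoding and the lookup it requires, using addition modulo 4 as the group:

```python
# Row-major string encoding of a Cayley table, and the lookup it needs.
# The group here is addition modulo 4.
n = 4
table = [[(a + b) % n for b in range(n)] for a in range(n)]

# Flatten to a comma-separated string, row by row.
encoding = ",".join(str(table[a][b]) for a in range(n) for b in range(n))

def lookup(encoding, n, a, b):
    """Recover a*b from the flat string: it is entry number a*n + b."""
    return int(encoding.split(",")[a * n + b])

print(lookup(encoding, n, 3, 2))  # 3 + 2 = 1 (mod 4)
```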
The main alternative is to present the group as a series of triples

$(a, b, c)$

where $a \cdot b = c$, basically writing out the relation $R$ above. The key to this encoding is that an $\mathsf{AC}^0$ circuit can (more-)easily check whether or not $a \cdot b = c$, in polynomial size and constant depth.
I believe that this encoding facilitates the construction of circuits that come from John Wilson’s theorem.
The proof that $\mathsf{AC}^0$ can determine whether a group is solvable is now clear. We “simply” have to test the assertion in Wilson’s theorem. Hence we build a circuit that fans out over all $g$ in the group, and further over all sequences of 112 elements among those that are conjugate to $g$, and test whether the 56 commutators made from the latter multiply out to $g$. Since the number of conjugates may be linear in the size $n$ of the group, we may be dealing with $n^{112}$ such sequences. Taking in $g$ gives order-of $n^{113}$ cases to check. The size of the sub-circuit for each case adds 1 or 2 to the overall exponent. Thus the final algorithm is constant depth and is polynomial, but not any polynomial I would like to use.
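The shape of this test can be sketched in code. Below the number of commutators is a parameter `k`; Wilson’s theorem needs `k = 56`, hopelessly beyond brute force, so the demonstration runs `k = 1` on $S_3$ merely to illustrate the case structure (the function names are my own, and `k = 1` does not characterize solvability):

```python
# Sketch of the brute-force test behind the circuit: for each nontrivial
# g, range over sequences of 2k elements conjugate to g and check whether
# the k commutators they form multiply out to g.
from itertools import product, permutations

def conjugates(table, inv, g):
    """All elements h*g*h^-1 for h in the group."""
    return {table[table[h][g]][inv[h]] for h in range(len(table))}

def witness_exists(table, inv, k=1):
    n = len(table)
    for g in range(1, n):                       # nontrivial elements only
        C = conjugates(table, inv, g)
        for seq in product(C, repeat=2 * k):    # ~n^(2k) sequences
            prod = 0                            # identity element
            for i in range(k):
                x, y = seq[2 * i], seq[2 * i + 1]
                comm = table[table[inv[x]][inv[y]]][table[x][y]]
                prod = table[prod][comm]
            if prod == g:
                return g                        # g is such a product
    return None

# Build S3's Cayley table; element 0 is the identity permutation.
elems = sorted(permutations(range(3)))
idx = {p: i for i, p in enumerate(elems)}
table = [[idx[tuple(p[q[i]] for i in range(3))] for q in elems]
         for p in elems]
inv = [next(y for y in range(6) if table[x][y] == 0) for x in range(6)]
print(witness_exists(table, inv, k=1))  # None: no witness in S3 for k=1
```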
Wilson notes in his paper that “56” might be reducible after all, and that this would follow if certain combinatorial problems arising in his proof can be solved. Independent of this, we can ask whether the degree of the polynomial bounding the size of the circuits deciding solvability can be reduced from 113-or-so to a reasonable number, like 3 or 4.
What applications of solvability being in $\mathsf{AC}^0$ can we find in complexity theory? Can we reduce the degree of the polynomial bounding the circuit size?
What are some other great numbers with miracle results attached to them?