# A Challenge From Dyson

*A reversal question*

Freeman Dyson celebrated his 90th birthday last December. He is world famous for his work in both physics and mathematics. Dyson proved, in work joint with Andrew Lenard and independent of two others, that the main reason a stack of children’s blocks doesn’t coalesce into pulp is the exclusion principle of quantum mechanics opposing gravity. He shaved a factor of √2 off the exponent for bounds on rational approximation of algebraic irrationals, before the result was given its best-known form by Klaus Roth. He has received many honors; most recently, in 2012, he was awarded the Henri Poincaré Prize at the International Congress on Mathematical Physics.

Today Ken and I want to talk about one of his relatively recent ideas, which is more mathematics than physics. Perhaps even more theory than mathematics.

It is about an interesting challenge he made, and its significance for knowledge in general. In his popular books on science and public policy, Dyson has issued many challenges to the scientific community and society at large, daring to “disturb the universe” as one of his books is titled. In this challenge he disturbs mathematics—at least we feel that way. Let’s look at it now.

## The Challenge

Dyson was one of over a hundred invited respondents to the 2005 edition of the “EDGE Annual Question”:

“What Do You Believe Is True Even Though You Cannot Prove It?”

His son, science historian George Dyson—two of whose books we covered—was also a respondent, and gave a social example that would have interested Ken and his wife Debbie on their trip to Vancouver this July. George suspects that well-documented differences between calls of ravens in different parts of the Pacific Northwest influenced First Nations languages in these regions. His father, however, kept it strictly mathematical, aiming for the most needling contrast between what we believe and what we can prove.

Ken may have his own answer; mine would have been something like *Fermat’s Last Theorem* or the *Four Color Theorem*, since in both cases I believe the results, but **I** cannot prove them.

## Dyson’s Response

Dyson answered as follows:

“Since I am a mathematician, I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is.

Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and so on. Numbers that are exact powers of five are 5, 25, 125, 625 and so on. Given any number such as 131072 (which happens to be a power of two), the reverse of it is 270131, with the same digits taken in the opposite order. Now my statement is: it never happens that the reverse of a power of two is a power of five.

The digits in a big power of two seem to occur in a random way without any regular pattern. If it ever happened that the reverse of a power of two was a power of five, this would be an unlikely accident, and the chance of it happening grows rapidly smaller as the numbers grow bigger. If we assume that the digits occur at random, then the chance of the accident happening for any power of two greater than a billion is less than one in a billion. It is easy to check that it does not happen for powers of two smaller than a billion. So the chance that it ever happens at all is less than one in a billion. That is why I believe the statement is true.

But the assumption that digits in a big power of two occur at random also implies that the statement is unprovable. Any proof of the statement would have to be based on some non-random property of the digits. The assumption of randomness means that the statement is true just because the odds are in its favor. It cannot be proved because there is no deep mathematical reason why it has to be true. (Note for experts: this argument does not work if we use powers of three instead of powers of five. In that case the statement is easy to prove because the reverse of a number divisible by three is also divisible by three. Divisibility by three happens to be a non-random property of the digits).

It is easy to find other examples of statements that are likely to be true but unprovable. The essential trick is to find an infinite sequence of events, each of which might happen by accident, but with a small total probability for even one of them happening. Then the statement that none of the events ever happens is probably true but cannot be proved.”

See this and this for some discussion about his answer. Of course there is the general issue of his “other examples,” but can we tackle this particular one head-on?
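Dyson’s remark that the base cases are easy to check can be reproduced in a few lines. Here is a sketch in Python; the function name and the explicit billion bound are our own framing:

```python
def reversal_is_power_of_five(x, limit=10**9):
    """Return True if the decimal reversal of x is a power of five (5, 25, ...)."""
    fives = set()
    p = 5
    while p < limit:
        fives.add(p)
        p *= 5
    # The reversal has no more digits than x, so it stays below limit.
    return int(str(x)[::-1]) in fives

# Dyson's base case: no power of two below a billion
# reverses to a power of five.
two = 2
while two < 10**9:
    assert not reversal_is_power_of_five(two)
    two *= 2
print("verified for 2^1 through 2^29")
```

For contrast, `reversal_is_power_of_five(521)` returns `True`, since the reversal 125 is 5^3.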

## Our Challenge

One can easily try numbers of the form 2^n for modest size n to check that Dyson’s intuition is correct. We wonder whether it is possible to at least check his intuition for n of vastly larger size.

Our challenge is:

Is there an efficient algorithm that, given an n, determines whether the reversal of 2^n as a decimal number is a power of 5?

Of course we want the algorithm to run in time polynomial in the length of n. This would at least allow us to check Dyson’s intuition for extremely large numbers. Not all large numbers, but any particular large number. This seems like a plausible challenge.

Indeed we will now sketch an attack on how one might do this efficiently. The sketch is not a proof—we have not had the time to work out all the details, but we believe that it might be made into a real algorithm.

Of course, being able to check the conjecture for exponentially wider ranges is not the same as proving it for all n. But making the machinery more efficient is a good way to understand the problem. We can try to work in some related ideas, however vague. One is encoding numbers into matrices; there the reversal would perhaps be just the transpose. Another is that the reversal of a regular language remains regular: given a deterministic finite automaton M recognizing a language L, one can create an NFA by adding a new start state that nondeterministically transits to some final state of M, reversing each arrow of M, and declaring M’s start state the new final state. This NFA can then be converted back to a DFA. The languages of powers of 2 or 5 in decimal are not regular, but they are sparse enough that the idea might still help.
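The reversal construction for regular languages can be made concrete. A minimal sketch in Python; the dictionary encoding of automata and the example language are our own illustration:

```python
def reverse_nfa(dfa_trans, start, finals):
    """Reverse a DFA given as {(state, symbol): state}.
    Returns an NFA {(state, symbol): set_of_states} whose start states
    are the DFA's final states and whose sole final state is the DFA's
    start state."""
    nfa = {}
    for (q, a), r in dfa_trans.items():
        nfa.setdefault((r, a), set()).add(q)   # flip each arrow
    return nfa, set(finals), {start}

def nfa_accepts(nfa, starts, finals, word):
    cur = set(starts)
    for a in word:
        cur = set().union(*(nfa.get((q, a), set()) for q in cur))
    return bool(cur & finals)

# Toy DFA over {0,1} accepting strings that end in "01".
trans = {('s', '0'): 'a', ('s', '1'): 's',
         ('a', '0'): 'a', ('a', '1'): 'b',
         ('b', '0'): 'a', ('b', '1'): 's'}
nfa, starts, finals = reverse_nfa(trans, 's', {'b'})
# The reversed language is the strings that *begin* with "10".
assert nfa_accepts(nfa, starts, finals, "10110")
assert not nfa_accepts(nfa, starts, finals, "0110")
```

Determinizing the resulting NFA via the subset construction then gives the DFA for the reversed language.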

## An “Algorithm”

Let R(x) be the reversal of the number x when written in decimal. Thus, if x has decimal digits d_{m-1}d_{m-2}···d_0, then R(x) has digits d_0d_1···d_{m-1}; for example, R(131072) = 270131.

The reversal operator is nasty, non-linear, and hard to understand. But there is hope.

Suppose that x = 2^n is a number that we wish to check, to see whether R(x) = 5^m for some m. The idea is as follows:

- Compute the top few decimal digits of x: call them y.
- Then check whether the reversal of y is a possible pattern of low-order digits for a power of 5.
- If it is not, then conclude that Dyson is right about x.

Several comments are in order about this potential algorithm. Clearly, since y is just a few digits, computing its reversal is easy. Another point is that if x is “random,” then there is a high probability that the reversal of y will not be a possible pattern for the low-order digits of a power of 5. This depends on the key fact that powers of 5 have very constrained decimal patterns in their low-order digits. Of course all powers of 5 end in 5, but much more is true. Here is a picture of how they behave:

Note: these cycles come in lengths of powers of two (see here for details). The critical point is that the cycle lengths grow exponentially more slowly than the number of decimal digit patterns. So if we can compute the top digits of x, and they are random, we are likely to be able to show that R(x) is not a power of 5 *without looking at many digits*. This is an important, but simple, insight. Note that among the five lowest-order digits, only eight out of 100,000 patterns are possible.
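These cycles can be generated directly. A quick Python sketch of our own that lists the possible low-order patterns, using the fact that for m ≥ d the last d digits of 5^m repeat with period 2^(d-2) when d ≥ 3:

```python
def low_order_patterns(d):
    """Possible values of the last d digits of 5^m for m >= d.
    For d >= 3 these form a single cycle of length 2^(d-2)."""
    mod = 10 ** d
    seen, p = [], pow(5, d, mod)   # 5^d is already inside the cycle
    while p not in seen:
        seen.append(p)
        p = p * 5 % mod
    return sorted(seen)

pats = low_order_patterns(5)
print(len(pats), pats)   # 8 patterns out of the 100000 five-digit strings
```

Running this for d = 5 prints the eight patterns 03125, 15625, 28125, 40625, 53125, 65625, 78125, 90625.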

## Computing The Top Digits

Next we need to show that we can compute the high-order digits of 2^n when it is written in decimal. Clearly,

2^n = d_{m-1}·10^{m-1} + d_{m-2}·10^{m-2} + ··· + d_1·10 + d_0,

where the d_i’s are decimal digits and d_{m-1} is nonzero. Let’s try to obtain just the lead digit d_{m-1}. The simple approach is to use the fact that we can compute the logarithm of 2 to polynomially many bits of precision in polynomial time. The idea is then to take logs base ten of 2^n and try to read off d_{m-1}. This almost works, but numbers like

19999…9, which sit just below a digit boundary, could make the precision required more than polynomial. We believe that we can make this work by using the following idea: use logarithms to tell whether 2^n ≥ 2·10^{m-1}, then whether 2^n ≥ 3·10^{m-1}, and so on. The point is that this should be able to avoid the above problem.
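A rough version of the logarithm idea can be coded with Python’s decimal module. This is only a sketch: the precision is set generously rather than analyzed, and making it provably sufficient near digit boundaries is exactly the part left open above.

```python
from decimal import Decimal, getcontext

def leading_digits(n, m, slack=10):
    """First m decimal digits of 2^n, read off from the fractional part
    of n*log10(2).  Caution: near exact digit boundaries the truncation
    can be off by one -- that borderline is the open difficulty."""
    getcontext().prec = len(str(n)) + m + slack
    t = Decimal(n) * Decimal(2).ln() / Decimal(10).ln()  # log10(2^n)
    frac = t - int(t)
    return int(Decimal(10) ** (frac + m - 1))

print(leading_digits(102, 1))   # -> 5, i.e. 2^102 starts with a 5
```

For instance, `leading_digits(20, 4)` gives 1048, the first four digits of 2^20 = 1048576.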

Whether this can be made to work we will leave for the future.

## Open Problems

Is Dyson right? Can we make our “algorithm” work?

### Trackbacks

- Some Strange Math Facts | Gödel's Lost Letter and P=NP
- More on Testing Dyson’s Conjecture | Gödel's Lost Letter and P=NP
- The Shapes of Computations | Gödel's Lost Letter and P=NP
- Open Problems That Might Be Easy | Gödel's Lost Letter and P=NP
- Terence Tao magic breakthrough again with EDP, Erdos Discrepancy Problem | Turing Machine

I quoted Dyson here. http://arxiv.org/abs/math/0312309

The Collatz Conjecture and the Riemann Hypothesis are also examples of problems that, as Chaitin says, seem to be “true for no reason.”

http://arxiv.org/abs/1105.2103

http://arxiv.org/abs/cs/0507008

Most math problems fall into this category. Gregory Chaitin proves this.

I made a video about this here: http://www.youtube.com/watch?v=iALl05kUgy4&feature=share

In Greek (translated): “I would like an answer as to whether the following proposition has been proved or not: the exception confirms the rule. I would appreciate a reply; I consider it a great challenge.”

He’s asking about the phrase, “the exairesis epibaboons the canon.” In Latin:

exceptio probat regulam, which means “the exception tests the rule.” The Romans didn’t understand proofs as well as the ancient Greeks, but they understood the role of exceptions better than saying they “prove” the rule. Though of course, if the rule holds then it wasn’t an exception. Hence I like my phonetic version of the Greek: an exception lies outside, “epi-,” the domain of the rule, but makes a monkey out of it, so it “epibaboons” the rule.

Problems that depend on a particular base representation tend to be less interesting, number-theoretically speaking. For instance, the no-go theorem is much easier to prove in base 2.

While this seems true in general, *this problem* seems to be rescued by the fact that we work in base ten, and 10 = 2 × 5. In other words, if we worked in base 15 it seems reasonable (to me) to ask this question about powers of 3 and 5.

I don’t know if this helps but from this link:

http://www.vaughns-1-pagers.com/computer/powers-of-2.pdf

It seems like any power of 2 that starts with a 5 has an exponent that ends with a 9, such as 2^9, 2^19, 2^29, … It might not continue that way, but if it did, it seems like it would reduce the search space considerably.

Paul.

@phomer: Alas, 2^{102} starts with a 5.

I sort of expected that 🙂

Any idea what 2^{109} starts with? Or 2^{99} starts with?

Paul.

@phomer: 2^99 and 2^109 both start with 6. Here’s an explanation for your initial pattern and why it breaks down. 2^9 is a little more than 500 and 2^10 is a little more than 1000. So 2^19 is a little more than 500 thousand, 2^29 is a little more than 500 million, etc. The only problem is that the roundoff errors start to accumulate, so eventually in your geometric sequence the start digit becomes 6 (and then 7…).

The proportion of powers 2^n whose m leading digits are some number k, with 10^{m-1} ≤ k < 10^m, is log_10(1 + 1/k). This page http://www.mathnerds.com/best/digit/ gives a proof for m=1. So it seems a small but positive proportion of powers of 2, when reversed, end in one of the sequence cycles listed. E.g., p(R(2^n) ends with 53125) = p(2^n begins with 52135) = log_10(1 + 1/52135) ≈ 0.0000083.
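This density claim can be sanity-checked without computing any huge numbers, via equidistribution of the fractional parts of n·log10(2). A sketch; the prefix 52 and the sample size are our own choices, and the standard statement of the generalized Benford law uses logarithms base 10:

```python
import math

def benford(k):
    """Generalized Benford density: fraction of powers of 2 whose
    decimal expansion starts with the digits of k."""
    return math.log10(1 + 1 / k)

# 2^n starts with "52" exactly when frac(n*log10(2)) lies in
# [log10(5.2), log10(5.3)).
log2 = math.log10(2)
lo, hi = math.log10(5.2), math.log10(5.3)
N = 100_000
hits = sum(1 for n in range(1, N + 1) if lo <= (n * log2) % 1 < hi)
print(hits / N, benford(52))   # the two values should be close (~0.0083)
```

The same interval trick is what makes the leading-digit computations in the post feasible for enormous exponents.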

this reminds me somewhat of the collatz conjecture although on the other hand a lot of open conjectures do, hah. anyway there seems to be a deep sense in which random math questions reduce to proving “infinite” properties of random walks. they can also be formulated easily as halting problems where a machine runs indefinitely if the property is proven, and eventually halts if it finds a counterexample. am thinking of blogging on this at some point. remarkably many open conjectures fit that pattern incl the Riemann hypothesis.

above dyson conjecture seems somewhat similar to a conjecture of Erdős that the base-3 expansion of 2^n contains the digit “2” for every n > 8, i.e. avoids that digit only finitely often.

Note that for a search, you don’t need to check every power of 2, because powers of 2 with the right leading digits are sparse.

For example, if the first two digits of 2^m are 52 then the next power of 2 starting with 52 is one of 2^(m+93), 2^(m+103), or 2^(m+196), because 2^93, 2^103, and 2^196 are the smallest powers of 2 with leading digits close enough to 1000…

It’s a bit more complicated to skip from a power with an allowed prefix of length k>2 to the next, because there are 2^(k-2) allowed prefixes. For each prefix, look up a list of powers, and use the first one that results in an allowed prefix. Building the table of lists requires a search of its own, but it only needs to be done once. Each list should be short, but proving that may be hard (or impossible without the randomness assumption).
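The exponents 93, 103, and 196 quoted above are exactly those n for which 2^n lies unusually close to a power of ten, i.e., the fractional part of n·log10(2) is near 0 or 1. A sketch of our own that recovers them:

```python
import math

log2 = math.log10(2)

def dist(n):
    """Distance from n*log10(2) to the nearest integer."""
    f = (n * log2) % 1
    return min(f, 1 - f)

# Rank exponents up to 200 by how close 2^n is to a power of ten.
best = sorted(range(1, 201), key=dist)[:5]
print(best)   # 93, 103, and 196 are among the five closest
```

These n correspond to denominators of good rational approximations to log10(2), which is why the skip lengths recur.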

You don’t need to calculate the first m digits exactly; it suffices to calculate the first m digits approximately (i.e., calculate 39999 and say with certainty that the first 5 digits are 39998, 39999, or 40000), which can certainly be done in polynomial time (in m and log(k), if calculating the m-prefix of 2^k). Because the m-prefixes that coincide with power-of-5 m-prefixes are exponentially sparse (there are 2^{m-1} of them, out of all the 10^{m} possible m-prefixes), also checking their neighbors doesn’t change that sparsity.

This proves how unethical the computer science community is

http://qbnets.wordpress.com/2014/09/13/academic-peer-review/

Don’t let stuff like this get you down, rrtucci. Just do your work and the truth will prevail in the end.

I like the analysis presented here. But I think Dyson has chosen a poor example, for three reasons. First, it is intended to be “simple enough to be understood by people who are not mathematicians.” But the argument presupposes that the reader understands that the sum of an infinite series of positive numbers can be finite (and small). That is a difficult concept for non-mathematicians (or even for mathematicians such as Zeno). Second, it argues not about truth, but probable truth. That both weakens the argument and makes it more complicated. Third, the argument depends on the assumption that the digits actually are correctly modelled by a random process. We know that the digits are not actually random: the digits of 2^{n+1} are not independent of those of 2^n. The question is whether they act as if they are random for the purposes of the proof. Dyson presents no argument whatsoever to say that they are. It is as if Dyson argued that the Fermat primality test could always “prove” a number was prime. We know that the test fails for two reasons: first, if we randomly happen to have very bad luck in choosing the witness numbers, and second if the tested number is a Carmichael number, which non-randomly resists the Fermat test. I agree with Dyson that the expected number of solutions R(2^n) = 5^m is very close to zero for random reasons, but I have no clear intuition whether there are non-random Carmichael-like numbers that are exceptions.

Peter, agreed on the first point about Dyson’s argument for believing the statement, though he meant that the statement itself is simple. Your second point is more against our argument than Dyson’s, which is squarely about apprehending the truth of the assertion. Your third is a deep matter, and thanks for the extended description.

In our subsequent post on the Ulam spiral (and other “Strange Math Facts”), we drew attention to pros and cons of the “random” interpretation of the primes in particular. Your Carmichael example is another way to confound it—incidentally last year we posed a speculative idea of using Carmichael numbers for factoring. As we hint without “telling” in the Ulam post, we are planning to address this more deeply. Dyson might justify his “unlikely accident” assertion by saying there should be no material connection between a power of 2 and its reversal being a power of 5, but as commenter “anon” noted above, that 2*5 = 10 makes a plausible connection. The idea that low-complexity strings give rise to a greater number of “acausal” connections (or “screwups”) than truly random ones underlies work by Li-Vitanyi and Miltersen on “malign distributions”, which we both have made some efforts to explore.

This comment by “Pip” comes from both of us, Dick and Ken.

I just have a quasi-philosophical comment about the “randomness” of the decimal digits in a power of 2. Even if a sequence looks truly random, according to (say) an algorithmic test of some kind, it may well look highly non-random when viewed some other way. One example is the digits in the decimal expansion of the square root of a non-square integer, which look random indeed, vs. the highly non-random expression of that same square root as a continued fraction. So it may well be that the problem becomes considerably simpler when looked at differently. As a commenter above mentioned, the fact that 2×5 = 10 might provide one such different viewpoint.