# More on Testing Dyson’s Conjecture

*Applying deep number theory to algorithms*


Alan Baker won the Fields Medal in 1970 for his work on algebraic and transcendental numbers. He greatly extended a theorem of Aleksandr Gelfond and Theodor Schneider, which had solved the seventh problem on David Hilbert’s famous list. Baker’s extension says that if $\alpha_1, \dots, \alpha_k$ are algebraic numbers other than $0$ or $1$, and if $\beta_1, \dots, \beta_k$ are irrational algebraic numbers that together with $1$ are linearly independent over the rationals, then the product of $\alpha_i^{\beta_i}$ for $i = 1$ to $k$ is transcendental. Hilbert had stated the $k = 1$ case, which Gelfond and Schneider solved, and believed it would be harder than the Riemann Hypothesis.

Today Ken and I want to talk about computing numbers to high precision and their relationship to our recent discussion of Freeman Dyson’s challenge.

Recall that Dyson argued that the following is true, but unlikely to ever be proved:

For any $n$, let $N = 2^n$. Reverse the digits of $N$ in decimal. The result is never a power of $5$.

For example, $2^{19} = 524288$, and its reversal is $882425$, which at least ends in five. But dividing it by $25$ yields $35297$, so it is not a power of $5$.
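For readers who want to experiment, here is a minimal brute-force sketch of this check in Python (the function name is ours, not from any paper):

```python
def reversed_power_of_two_is_power_of_five(n):
    """Reverse the decimal digits of 2**n and test whether the
    result is a power of 5 (brute force, so only for modest n)."""
    r = int(str(2 ** n)[::-1])
    while r % 5 == 0:       # strip factors of 5
        r //= 5
    return r == 1           # a pure power of 5 reduces to exactly 1

# 2**19 = 524288 reverses to 882425 = 25 * 35297, not a power of 5.
assert not reversed_power_of_two_is_power_of_five(19)
```

This is exponential in the number of bits of $n$, of course; the whole point of the post is whether it can be done in polynomial time.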

To tell that the answer is *no* may only need computing a handful of the leading digits of $2^n$. This is because when we reverse $2^n$, its leading digits become trailing digits, and we compare them with the low-order digits of a power of $5$. It is easy to compute recursions for the low-order digits, in any base. Getting high-order digits, however, is a problem of approximation. Baker’s theorem, in the form that drove his transcendence result, gives a lower bound on how close a “nice” quantity can be to “nasty” values, which in turn provides an upper bound on the error of the approximations we need.
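The low-order recursion is just modular exponentiation; in Python the three-argument `pow` does the repeated squaring for us, so a sketch is one line:

```python
def low_digits(base, n, k):
    """Last k decimal digits of base**n. Modular exponentiation by
    repeated squaring runs in time polynomial in the bit-length of n."""
    return pow(base, n, 10 ** k)

# The last six digits of 2**(10**100), a number we could never write out in full.
print(low_digits(2, 10 ** 100, 6))
```

Contrast this with the high-order digits, where no such exact recursion is available.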

## Checking Dyson’s Challenge

At the end of our discussion I pointed out that it seemed likely that we could devise an algorithm to test Dyson’s claim for $N = 2^n$ in time polynomial in the number of bits of $n$. Thus we should be able to check that even a doubly exponential power of two, reversed in decimal, is not a power of five. The point is that such an algorithm would be interesting, since it would allow the checking of the claim for huge numbers, numbers that we cannot even write down in decimal notation. The central issue in this was the ability to compute the leading few decimal digits of such large numbers. A theorem of Richard Brent allows the computation of the logarithm of such numbers in time polynomial in the number of digits of precision requested. But it does not obviously allow the computation of the top digits. See our discussion for why the top few digits may be sufficient.
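To see where the difficulty lives, here is a naive sketch of the leading-digit computation via the fractional part of $n \log_{10} 2$, using Python’s `decimal` module. The fixed precision guard is a heuristic of ours; making it rigorous (ruling out values too close to a digit boundary) is exactly where Baker-style bounds enter.

```python
from decimal import Decimal, getcontext

def leading_digits_of_power_of_two(n, k, guard=30):
    """First k decimal digits of 2**n, computed from the fractional
    part of n*log10(2) without ever forming 2**n itself.
    The 'guard' padding of the working precision is a heuristic."""
    getcontext().prec = len(str(n)) + k + guard
    log10_2 = Decimal(2).ln() / Decimal(10).ln()
    frac = (Decimal(n) * log10_2) % 1
    # 10**(frac + k - 1) lies in [10**(k-1), 10**k); truncating gives k digits.
    return int(Decimal(10) ** (frac + k - 1))

print(leading_digits_of_power_of_two(10 ** 50, 5))
```

This runs in time polynomial in the number of digits of $n$, but without an error bound it could in principle round to the wrong side of a boundary.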

## The Answer in Sight?

I ended the previous discussion with an implicit question:

Whether this can be made to work we will leave for the future.

Thanks to Eric Allender, I believe that we can **almost** close the loop and prove that there is an algorithm for checking Dyson’s claim. Almost.

I asked Eric how hard it is to compute the leading decimal digits of $2^n$ for large $n$, and he immediately said that it was known. He pointed out that in 2010 (so relatively recently) Mika Hirvensalo, Juhani Karhumäki, and Alexander Rabinovich proved that the top few digits of such numbers can indeed be computed in time polynomial in the size of $n$. Their paper (and an older TR) states this for base $3$, but their results on powers of $2$ appear to work for any base, even even bases. The paper title states the point nicely: “Computing partial information out of intractable: Powers of algebraic numbers as an example.”

What strikes us as their key insight is to connect the computation with Baker’s theorem. Algebraic numbers have two common measures of complexity: the *degree* of the minimum polynomial equation they solve, and their *height*, which depends on the size (or rather the lengths) of the coefficients of this equation. The theorem says that provided the algebraic numbers $\beta_0, \beta_1, \dots, \beta_n$ are not all zero, and no $\alpha_i$ is $0$ or $1$, then

$$\left|\beta_0 + \beta_1 \log \alpha_1 + \cdots + \beta_n \log \alpha_n\right| > H^{-C},$$

where $H$ is the maximum of the heights of the $\beta_i$, and $C$ is computable in terms of $n$, the degrees, and the choices of complex logarithm branches. When $\beta_0 = 0$, as the paper intends, this also needs the logarithms $\log \alpha_1, \dots, \log \alpha_n$ to be linearly independent over the rationals. (Independence for the $\beta_i$ appears needed only for the transcendence deduction; meanwhile, the fact of the sum being nonzero makes $1$ and the logarithms collectively linearly independent over the algebraic numbers.)

The paper cites later bounds given by Baker for computing $C$ in the case $n = 2$. This is used to control the amount of precision needed to determine the leading digits of the power of $2$. This is all very clever, and another example of how results in numerical approximation can be the fulcrum of algorithms.
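To make the connection concrete, here is a schematic version of how such a bound controls precision; the constants and the exact form are placeholders of ours, not the paper’s actual values.

```latex
% Knowing the leading k digits of 2^n amounts to knowing the fractional
% part of n\log_{10}2 to within about 10^{-k}. The danger is that this
% fractional part sits too close to a digit boundary \log_{10}d.
% Consider the linear form in logarithms
\[
  \Lambda \;=\; n\log 2 \;-\; m\log 10 \;-\; \log d ,
\]
% with integers n, m and a digit prefix d. If \Lambda \neq 0, a
% Baker-type bound gives
\[
  |\Lambda| \;>\; H^{-C}, \qquad H = \max(n, m, \text{heights}) ,
\]
% so about C\log_{10}H decimal digits of working precision suffice to
% decide on which side of the boundary we are, i.e., an amount of
% precision polynomial in the size of n, as required.
```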

## Open Problems

The reason the algorithm is not completely worked out is that the authors prove only that one can get the top two digits, and they restrict their results to odd bases. We believe, but have not yet verified, that they should be able to handle a few more digits. To check Dyson’s conjecture efficiently for a given $n$, we believe it should be sufficient to check a few of the leading digits of $2^n$. It is also unclear how much having a base that is not relatively prime to $2$ will interfere. We will leave this for the future, again.
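One way to see why a few leading digits of $2^n$ can already be decisive: after reversal they become trailing digits, and the trailing $k$-digit patterns of powers of $5$ modulo $10^k$ form a small, enumerable set. A sketch of this observation (our own illustration, not the paper’s algorithm):

```python
def power_of_five_can_end_with(t, k):
    """Is t (mod 10**k) the last-k-digit pattern of some power of 5?
    The residues 5**m mod 10**k are eventually periodic, so we can
    simply collect them all."""
    mod = 10 ** k
    seen, r = set(), 1
    while r not in seen:
        seen.add(r)
        r = (r * 5) % mod
    return t % mod in seen

# The leading three digits of 2**19 are 524; reversed they give 425,
# and no power of 5 ends in 425, so the reversal cannot be a power of 5.
assert not power_of_five_can_end_with(425, 3)
```

The catch is that the set of residues grows with $k$, so this alone does not settle how many leading digits are enough in general.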

Another problem that could be like Dyson’s Conjecture: the famous Navier-Stokes existence and uniqueness conjecture. http://arxiv.org/pdf/math/0404302v7.pdf

If true, it’s due to probability, not logic. It’s unprovable.

That does not follow. Unprovability follows from an independence proof, through forcing, satisfiability solving, or model construction. Otherwise the best you can say is ‘not proven yet.’

Unprovability means you can’t prove it. My paper shows that you can’t prove the Navier-Stokes conjecture.

No it doesn’t. You conjecture it based on what looks like numerology, but you never prove independence. At best you demonstrate why a particular avenue is unavailable for proof.

Read up on proof theory from Gödel, Church to Cohen on how to prove something is ‘unprovable.’ Which you haven’t done.

Numerology? What are you talking about?

Also, the proof techniques of Gödel, Church, and Cohen aren’t the only ways to prove that something is unprovable.

That particular avenue that I show in my paper is unavailable for proof happens to be a necessary and sufficient avenue for proof that smooth solutions to the NS equations always exist. The fact that you can’t do it that way implies you can’t prove it.

I’m pretty sure proof theory is the only way to prove something is unprovable. Otherwise you’re not proving anything, you’re just arguing. You prove something is unprovable from a particular set of axioms with satisfiability, model construction, or forcing. If you prove it’s independent of the axioms of your theory, then you can construct a model where your statement is ‘true’ and you can construct a model where the negation of your statement is ‘true.’

Now if you can demonstrate that there is a model for which NS conjecture holds and a model for which NS fails, sure that works. You might even demonstrate that NS fails only in nonstandard models of arithmetic, which would be analogous to the Goodstein theorem.

You don’t do that.

All I am doing in my paper is arguing the same way Freeman Dyson argues about his power of 2/power of 5 conjecture: There is no logical reason why a solution to the Galerkin approximate solutions cannot leave the trapping region given in the proof of my paper just as there is no logical reason why a power of 2 cannot also be a power of 5 when its digits are reversed. Hence, you cannot prove either conjecture (in any reasonable axiom system). If they are true, they are true only because of probability.

You are right that my argument doesn’t consider proof theory or different axiom systems.

You have to prove it when you use phrases like “there is no logical reason.” That’s sort of how math works.

If there is no logical reason, there is no logical reason.

The Collatz 3n+1 Conjecture is also unprovable, because there is no logical reason why the Collatz 3n+1 sequence can’t go to infinity. There’s nothing stopping it from getting odd numbers the majority of the time (which would cause it to go to infinity). This is improbable but not impossible.

The 5n+1 Collatz sequence appears to diverge to infinity for many starting values. But why does the sequence appear to always converge to one for the one problem and diverge to infinity for the other? The answer is *probability*: on average each iteration of the 3n+1 sequence multiplies the value by 3/4, while on average each iteration of the 5n+1 sequence multiplies it by 5/4.
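That heuristic drift is easy to watch numerically. A small sketch (the starting value and step cutoff are arbitrary choices of ours, and of course a finite run proves nothing either way):

```python
def trajectory_peak(start, a, max_steps):
    """Iterate n -> n//2 if n is even else a*n + 1, and record the
    largest value reached within max_steps (or before hitting 1)."""
    n = peak = start
    for _ in range(max_steps):
        if n == 1:
            break
        n = n // 2 if n % 2 == 0 else a * n + 1
        peak = max(peak, n)
    return peak

# Starting from 7: the 3n+1 orbit peaks at 52 and falls to 1, while the
# 5n+1 orbit climbs steeply and shows no sign of returning.
print(trajectory_peak(7, 3, 1000), trajectory_peak(7, 5, 1000))
```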

Dyson’s conjecture is the tip of the iceberg. I bet a great number of the outstanding mathematics conjectures are probably true but unprovable, just like Dyson’s conjecture.

You know what proof is right? When you make statements, you have to prove them. Otherwise it’s just conjecture.

There may be ‘unprovable’ things that are ‘true’ in the standard model of arithmetic… that’s sort of what was demonstrated in Gödel’s incompleteness theorems, and more succinctly demonstrated by Church’s theorem. But you don’t find out they’re unprovable. You only find out that you haven’t found a proof.

You can prove a theorem isn’t a consequence of the axioms with an independence proof. You don’t. You can prove Collatz is undecidable if you manage to make a map between the generator function and problems that are provably undecidable, if for instance you can demonstrate a morphism with the generator and a Turing machine and leverage the proof of the undecidability of the halting problem. Or you can formally define the generator in a first-order system and demonstrate that a proof of the conjecture is a proof of unprovability by referencing its Gödel number. You don’t do that either.

What you’re doing is just saying “I can’t imagine how you can prove it, so it’s unprovable. Also, probability.” In math you have to do more than that. If you want to say that something is unprovable in some system, you have to demonstrate that construction of a proof leads to a contradiction.

Tell that to Dyson. He’s doing exactly the same thing as me.

No he isn’t. He stated a conjecture and his opinion about it. He didn’t say he proved it was unprovable.

He said, “It cannot be proved because there is no deep mathematical reason why it has to be true.” That means he’s claiming it’s unprovable. Similarly, I claim that there is no mathematical reason why the Collatz sequence can’t diverge, so the Collatz Conjecture is unprovable.

Yeah, then he’s just wrong, and simply doesn’t understand proof theory. Appeal to large numbers is tempting, but it isn’t proof. Goldbach’s conjecture is a similar appeal to large numbers. It doesn’t mean that the statement is true; it’s a good reason to suspect it’s true, but counterexamples can be very large. There’s simply no way to know the truth value of a statement in the absence of a model or a proof. Further, a proof can wander down some very unexpected roads; there appeared to be no deep mathematical reason why Fermat’s conjecture was true either, until we developed the mathematics to understand it.

I don’t understand all of the details of his argument either, but I would give Freeman Dyson the benefit of doubt. He’s pretty smart from what I hear.

This post reminded me of something I had read a while back.

From Essay Six in Ingenuity in Mathematics by Ross Honsberger:

“The remarkable theorem that we prove in this essay settles this question; it asserts that there exist powers of 2 beginning with any given sequence of digits.”

More precisely, let (d1, d2, d3, …, dk) be any sequence of decimal digits. Then it is guaranteed that for some n, the decimal expansion of 2^n begins d1d2d3…dk…
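That theorem is easy to probe for short prefixes; a naive search in Python (fine for small cases, though nothing in this sketch bounds how far it must look):

```python
def power_of_two_with_prefix(prefix):
    """Smallest n >= 1 such that the decimal expansion of 2**n starts
    with the given digits; the theorem guarantees termination."""
    p = str(prefix)
    n = 1
    while not str(2 ** n).startswith(p):
        n += 1
    return n

print(power_of_two_with_prefix(81))   # 2**13 = 8192 starts with 81
```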

Is this relevant somehow in this discussion?

SAA

Did not know this theorem. Seems related. It does say that we will have to be able to check more than a constant number of digits at the top of a power of 2.

Thanks for the pointer. Cool

That book is filled with some really cool easily accessible mathematics. I highly recommend it.

Very similar to the Collatz 3n+1 conjecture. There is a theorem which says for any sequence of binary values there exists a Collatz sequence of odds and evens which start with those values. See http://www.cecm.sfu.ca/organics/papers/lagarias/paper/html/node4.html#SECTION00022000000000000000

am planning on blogging on this topic also. alas problems like this seem to go in circles. roughly in the form “if TM A1 halts then TM A2 halts. if TM A2 halts, then TM A3 halts…” ad infinitum. and unfortunately the “efficiency” of any of the TMs involved does not seem to help crack the problem. think there is some deeper theory waiting to be uncovered. have been banging on it for years, even decades. the problems seem to have similar attributes in eg the sense of appearing to be variations on proving some universal property associated with random walks. they also seem to have fractal natures. my idea was to look for problems that find universal properties of fractals or random walks for some tieins.

To review, here I show how numbers are used as if they were software.

https://www.academia.edu/8809357/UNIVERSAL_EQUATION_-_Proof_of_P_NP