The ABC Conjecture And Cryptography
A missed point in the history of cryptography
Shinichi Mochizuki recently released a series of four papers (I, II, III, IV) totaling 512 pages that claim to contain a proof of the ABC Conjecture of number theory. He is a strong mathematician with an excellent track record, so he may well be correct. Of course such long papers will take a serious effort by the experts to check, and given the depth and importance of the ABC Conjecture, we all hope that they are correct.
Today I want to talk about a connection that I see between his work and modern cryptography.
No, there is no direct mathematical connection: the conjecture does not break any crypto-system—at least I do not see how it could be used to do that. Rather there is an interesting—I believe—philosophical connection between his approach and cryptography. I will explain later.
I have already told the story of a joint paper of mine with Jin-Yi Cai and Zeke Zalcstein proving that given three commuting matrices $A$, $B$, and $C$, one could decide in polynomial time whether or not there were natural numbers $x$ and $y$ so that

$$A^x B^y = C.$$
This is essentially an exponential Diophantine equation, and was a nice result—but was not a millionth as important as the ABC Conjecture. It was accepted at a top conference and we used the term “ABC” in the title. Mistake. We were quickly flooded with requests for a preprint, and soon realized that we needed to change the name of the paper. This shows the public awareness of the importance of what Mochizuki has done, if correct.
Update 9/18/2012: Today’s New York Times Science Tuesday section has an article on Mochizuki and his papers, which also links to a post by Jordan Ellenberg, in which Brian Conrad, Terence Tao, and Noam Elkies among others make comments.
You may know the conjecture, but I thought for completeness I would state it here, along with its curious history. The best source for this, I think, is the beautiful book of Serge Lang, Math Talks for Undergraduates. Lang has written many, many books, some easy to read, some, well, not so easy to read, but this softcover book is a gem—perhaps over-priced and under-typo-checked, but still a gem.
I will follow Lang’s version of the story. Richard Mason in 1983 discovered a new property of polynomials. The property is quite neat, and as Lang points out, it came as a surprise that no one earlier had noticed such a basic property of polynomials. That’s not quite true: Walter Stothers had proved it two years previously, but it was unknown to many in the 1980’s. As Lang said:
“So the history of mathematics does not always flow smoothly.”
We need a few definitions about polynomials first to state their beautiful theorem. If $f(x)$ is a polynomial, let $\deg(f)$ be the degree of the polynomial and let $n_0(f)$ be the number of distinct zeroes of $f$.
Theorem: Let $a(x)$, $b(x)$, and $c(x)$ be relatively prime polynomials over the complex numbers, not all constant, so that $a(x) + b(x) = c(x)$. Then,

$$\max\{\deg(a), \deg(b), \deg(c)\} \;\le\; n_0(abc) - 1.$$
See this for a simple proof due to Noah Snyder, who found it while still in high school. One cool consequence of this theorem is that there are no non-trivial polynomials $x(t)$, $y(t)$, $z(t)$ so that

$$x(t)^n + y(t)^n = z(t)^n,$$

where $n \ge 3$. That is, Fermat’s Last Theorem is true for polynomials.
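The derivation from the Mason-Stothers theorem is short. Here is a sketch, writing $n_0(f)$ for the number of distinct roots of $f$ and $D = \deg(x) + \deg(y) + \deg(z)$:

```latex
Suppose $x(t)^n + y(t)^n = z(t)^n$ with $x, y, z$ relatively prime and not
all constant. Apply the theorem to the triple $(x^n, y^n, z^n)$:
\[
  n\,\deg(x) \;=\; \deg(x^n) \;\le\; n_0(x^n y^n z^n) - 1
  \;=\; n_0(xyz) - 1 \;\le\; \deg(x) + \deg(y) + \deg(z) - 1,
\]
since $x^n y^n z^n$ and $xyz$ have the same distinct roots, and a polynomial
has at most as many distinct roots as its degree. The same bound holds with
$x$ replaced by $y$ or by $z$; summing the three inequalities gives
\[
  n D \;\le\; 3D - 3,
\]
so $n \le 3 - 3/D < 3$. Hence there are no non-trivial solutions once $n \ge 3$.
```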
The Game Change
Not long after the Mason-Stothers work, Joseph Oesterlé and David Masser made a conjecture of the same spirit, but replacing polynomials by integers. Of course they needed to replace the degree by something that made sense for integers, and to replace the number of zeroes as well.
The degree was not too hard, since it measures the “size” of a polynomial, so for an integer the size becomes its absolute value. However, what plays the role of the number of zeroes? They found that the radical of an integer is the correct idea: the radical of $n$, denoted by $\mathrm{rad}(n)$, is the product of all the distinct primes that divide $n$.
The ABC Conjecture then is:
Given $\varepsilon > 0$, there exists a constant $C_\varepsilon$ such that for all relatively prime integers $a$, $b$, $c$ with $a + b = c$, we have the inequality

$$c \;\le\; C_\varepsilon \cdot \mathrm{rad}(abc)^{1+\varepsilon}.$$
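To get a feel for the inequality, one can compute radicals directly. A minimal sketch in Python (the function names are my own; trial division is fine at this scale):

```python
from math import gcd, log

def rad(n: int) -> int:
    """Product of the distinct primes dividing n (the radical of n)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:  # strip repeated factors: only distinct primes count
                n //= p
        p += 1
    if n > 1:  # whatever remains is a single prime factor
        r *= n
    return r

def quality(a: int, b: int) -> float:
    """log(c) / log(rad(abc)) for the triple a + b = c; above 1 beats the bare bound."""
    c = a + b
    assert gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

print(rad(1 * 8 * 9))            # rad(72) = 2 * 3 = 6, which is less than c = 9
print(quality(1, 8))             # about 1.226: the triple 1 + 8 = 9 beats rad(abc)
print(quality(2, 3**10 * 109))   # about 1.630: the famous triple 2 + 3^10*109 = 23^5
```

Triples with quality above 1 are rare; the conjecture says the excess is controlled by the $\varepsilon$ in the exponent.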
As we noted for polynomials, the conjecture can be used to prove Fermat’s Last Theorem for all $n$ large enough. This is due to the constant $C_\varepsilon$. If we knew $C_\varepsilon$ explicitly, then this could be a way to prove Fermat for all $n$, since the theorem is known to hold, independently of Andrew Wiles’s work, for very large $n$.
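The sketch behind this is short, assuming the conjecture with some known constant $C_\varepsilon \ge 1$:

```latex
Suppose $a^n + b^n = c^n$ with $a, b, c$ coprime positive integers. Apply the
conjecture to the triple $(a^n, b^n, c^n)$, using
$\mathrm{rad}(a^n b^n c^n) = \mathrm{rad}(abc) \le abc \le c^3$:
\[
  c^n \;\le\; C_\varepsilon\,\mathrm{rad}(abc)^{1+\varepsilon}
      \;\le\; C_\varepsilon\, c^{3(1+\varepsilon)}.
\]
Taking logarithms, and using $c \ge 2$,
\[
  n \;\le\; 3(1+\varepsilon) + \frac{\log C_\varepsilon}{\log 2},
\]
so an explicit $C_\varepsilon$ yields an explicit bound on $n$, beyond which
no Fermat counterexample can exist.
```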
The above conjecture is the amazing ABC Conjecture that may now be proved by Mochizuki.
Why Is ABC Amazing?
The integers are closed under addition and multiplication. Their structure under one of these operations alone can be non-trivial, but it is relatively well understood. Under addition the integers just form an infinite abelian group with one generator. Under multiplication they form an abelian semigroup with all the primes as the generators.
A precise measure of how “easy” the additive structure is comes from Mojżesz Presburger’s Theorem. This theorem shows that the first-order theory of addition is decidable: any question about addition that can be stated in the first-order theory can in principle be decided. There is no Gödel Incompleteness Theorem to worry about. The same is true for the first-order theory of multiplication: it is also decidable.
Thus questions about only addition or only multiplication are at least in principle decidable. The lower bounds on proofs in these respective theories are not friendly, so automatic answers are in general impractical, but at least we know that answers exist.
Everything changes when we ask questions about the interplay between addition and multiplication. The first-order theory of the integers with both operations is undecidable and incomplete. There are countless simple-to-state questions mixing the two structures that are beyond our current abilities. A simple example is the Twin Prime Conjecture: are there infinitely many primes $p$ so that $p+2$ is also prime? Note how it mixes primality (which depends on multiplication) with addition.
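A quick illustration of how cheap the question is to state, compared to how hard it is to answer: a few lines of Python list every twin prime pair below 100 (helper names are my own), yet whether the list goes on forever is open.

```python
def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

ps = set(primes_up_to(100))
twins = [(p, p + 2) for p in sorted(ps) if p + 2 in ps]
print(twins)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```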
This is also the reason that the ABC Conjecture is so deep. It mixes in a critical way the additive structure

$$a + b = c$$

with the multiplicative structure, the radical of $abc$. This mixture is at the core of the conjecture’s difficulty.
A disclaimer: I am a complete innocent when it comes to the type of number theory that is being used by Mochizuki. So is Ken. I am completely in the dark, but that never stops us here at GLL from trying to explain things. So the following analogy could be way off and misleading. That said, I will push on and try to give you some idea of perhaps how his breakthrough work can be explained without mentioning:
the theory of semi-graphs of anabelioids, Frobenioids, the étale theta function, log-shells, and more.
Books and texts are messages. In the pre-modern era of cryptography, messages were text strings. Letters could be changed from one to another. Or they could be re-ordered. But that was all. This meant that cryptosystems were limited in what they could do to text to attempt to encode it.
For example, the German Enigma machine used rotors and plugboards to change a letter to another letter in a potentially complex manner. It was broken, as we know, by the Allies, and this helped save many lives, though it was a quite difficult feat.
The real modern era started when the famous paper of Whitfield Diffie and Martin Hellman (DH) was published. It contains the great idea of a public-key asymmetric cipher system. This is what we think about when we credit them for their great work.
The Message is the Medium
But in my opinion a greater advance was the realization that messages had a richer natural structure. Messages could be mapped one-to-one to integers, and this simple idea opened whole new vistas. Previously a book could be encoded by changing its letters or re-ordering them. Now new things could be done. One could multiply two messages together, or add them, or raise them to a power modulo a prime, or a composite. This made no sense previously: what is the square of Charles Dickens’s A Tale of Two Cities? I have no idea. But now there were magic things that we could do to text. This opened the door for modern cryptography.
In my opinion it was a huge breakthrough and was just as important, if not more important, than the concept of public key. Suppose DH had written a paper that simply said:
Messages are numbers; they can be manipulated as numbers in any way you like.
I believe one could argue that this would have been a huge advance, and that shortly later many great ideas would have come forward.
I may be in the minority here, but I do believe that the idea that messages have rich hidden mathematical structure changed cryptography forever. This is the connection with the work on the ABC Conjecture. Really. From what I hear, what Mochizuki has done is create new mathematical structures that allow Diophantine problems to be attacked in new ways. It is immensely more difficult, more abstract, and more technically demanding than what DH did. But I would argue it is in the same spirit: take ancient objects, in their case messages, and embed them in a richer structure that allows entirely new ideas to be used. This revolutionized cryptography, and I believe that the same may be happening now in number theory.
Apart from whether the claimed proof is correct, is the above connection cogent? Okay, I am out on a limb, and I hope that what I said today is not too silly. I hope that it helped a little in our understanding of the ABC Conjecture and of cryptography. If not, then so be it.