As a CMU alum I think the Steelers are a lock, but those Packers $\dots$

Joseph Oesterlé and David Masser are two mathematicians who are world renowned for their wonderful conjecture, the ABC conjecture. It has been called “one of the best conjectures of the century” by Serge Lang, who was referring to the twentieth century, but it may still be true this century.

Today I really want to talk about the Super Bowl, since I am a huge football fan. But I think that this is hard to fit under the umbrella of our usual topics, so instead I will talk about game changing.

The ABC conjecture is one of those surprises that changes the way that people think about an area, which in this case is number theory. One shock, I believe, is that the conjecture was only made late last century, in 1985. That a conjecture of such importance could be discovered so recently, especially a conjecture about integers, is extremely exciting. Perhaps there are other basic conjectures lurking out there, conjectures that will change the “game”—help change the way that we think of an entire area of mathematics. Perhaps.

I must add a story of how I once caused a stir on the ABC conjecture, without any intention of doing so. Jin-Yi Cai and Zeke Zalcstein and I had proved a result on the following problem: Suppose that ${A,B,C}$ are matrices over the rationals that commute pairwise:

$\displaystyle AB = BA,\ AC = CA,\ BC = CB.$

We were able to show that it is decidable, actually in polynomial time, whether or not there are natural numbers ${i,j,k}$ so that

$\displaystyle A^i B^j = C^k.$

This is a question essentially about the word problem for commutative semigroups of matrices. The result really comes down to the study of certain exponential Diophantine equations.
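The flavor of the problem is easy to convey with a small brute-force search. To be clear, this is only an illustration of the question, not the polynomial-time algorithm from our paper; the function names and the exponent bound are my own.

```python
def matmul(X, Y):
    # Multiply two square matrices given as lists of rows.
    n = len(X)
    return [[sum(X[r][t] * Y[t][c] for t in range(n)) for c in range(n)]
            for r in range(n)]

def matpow(X, e):
    # X^e by repeated multiplication, starting from the identity.
    n = len(X)
    R = [[int(r == c) for c in range(n)] for r in range(n)]
    for _ in range(e):
        R = matmul(R, X)
    return R

def find_exponents(A, B, C, bound=10):
    # Search for positive i, j, k with A^i B^j = C^k, all below
    # `bound`.  Exponents start at 1 to skip the trivial identity
    # solution i = j = k = 0.
    for i in range(1, bound):
        for j in range(1, bound):
            L = matmul(matpow(A, i), matpow(B, j))
            for k in range(1, bound):
                if L == matpow(C, k):
                    return (i, j, k)
    return None
```

For example, with $A = 2I$, $B = 3I$, $C = 6I$ the search returns $(1,1,1)$, since $2I \cdot 3I = 6I$. The exhaustive search of course decides nothing in general; the whole point of our result was that the unbounded question is decidable, and in polynomial time.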

Our mistake, innocent but a mistake nevertheless, was to title the paper initially “The ABC question resolved,” or something like that. Well, this caused a great deal of email to be sent to us asking for the paper. A lot of email. Our result has nothing to do with the real ABC conjecture, and eventually all was worked out. One obvious suggestion: be careful how you title your paper—we knew about the ABC conjecture, we knew it was not the “ABC problem,” but we messed up. The conference version, in FOCS 1994, was changed to “The Complexity of the Membership Problem for ${2}$-generated Commutative Semigroups of Rational Matrices.” Strangely, the final paper in the SIAM Journal on Computing had the older title: “The Complexity of the A B C Problem.” Maybe the spaces between the letters helped not to confuse anyone.

Change The Game

One of the traditions in mathematics when faced with a resistant problem is to change the problem. Perturb it a bit, change the definitions, or the model—make a change. The art is to make the change small enough so that the new problem is close to the “real” problem, yet large enough so that you can make progress.

Football—I had to work this in somehow—is one sport that often changes the game. The NFL Rules Committee regularly makes small but tangible changes in the rules of play, enough to change the results of games. For example, the NFL recently abolished the old “force-out rule”—now a receiver must always get both feet in-bounds even if he is man-handled over the line. This came close to being a factor the last time the Steelers were in the Super Bowl in 2009, when Santonio Holmes just barely got his feet in for the game-winning catch. It is not clear these changes make the game better, or fairer, but they are changes that the NFL feels are important.

${\bullet }$ Integers ${\rightarrow}$ polynomials: This is a classic switch that is used in number theory. I will discuss it in more detail in a moment. But it is a powerful idea: integers share many properties with polynomials over the integers, but polynomials are usually much easier to handle. One simple example: the famous Goldbach conjecture is solved for polynomials but is still open for integers. The first solution was due to David Hayes who proved in 1965:

Theorem: Every polynomial of degree at least one over the integers can be written as the sum of two irreducible polynomials.

${\bullet }$ Graphs ${\rightarrow}$ trees: Many problems, perhaps most interesting problems, on graphs are difficult. They are often ${\mathsf{NP}}$-complete, and cannot be solved efficiently—at least not today. One way to change the game is to restrict the class of graphs that we work on to a tractable subclass. Trees are often a great class, many problems that are hard in general are easy for trees. Another way to change the game is to move to graphs that have bounded tree-width, graphs that are planar or have low genus, and many other restrictions.
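As a small illustration of the switch to trees (my own sketch, not from any particular paper): maximum independent set is ${\mathsf{NP}}$-hard on general graphs, yet on a tree a linear-time dynamic program over subtrees solves it exactly.

```python
def tree_mis(adj, root=0):
    # Size of a maximum independent set in a tree, by dynamic
    # programming: incl[v] / excl[v] = best value achievable in v's
    # subtree with v taken / not taken.  An iterative post-order
    # traversal avoids Python's recursion limit on deep trees.
    incl, excl = {}, {}
    stack = [(root, None, False)]
    while stack:
        v, parent, done = stack.pop()
        if not done:
            stack.append((v, parent, True))
            for w in adj[v]:
                if w != parent:
                    stack.append((w, v, False))
        else:
            incl[v], excl[v] = 1, 0
            for w in adj[v]:
                if w != parent:
                    incl[v] += excl[w]
                    excl[v] += max(incl[w], excl[w])
    return max(incl[root], excl[root])
```

On the path $0\text{–}1\text{–}2\text{–}3$ this returns ${2}$, and on a star with three leaves it returns ${3}$: the hard global choice on graphs collapses into two local cases per vertex on trees.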

${\bullet }$ General circuits ${\rightarrow}$ monotone: There still are essentially no non-linear lower bounds for general non-uniform circuits. One way to change the game here is to replace general circuits by monotone or by other restricted classes of circuits. Then one can often obtain strong lower bounds, via proofs that are still quite hard. It is notoriously unclear how well they help us understand the general problem of circuit lower bounds.

${\bullet }$ Turing machines ${\rightarrow}$ machines with oracles: I have discussed this at length before, and am on the record as not being a fan of oracle results. I do agree that many oracle results are very technically challenging, and many oracle questions remain open. Perhaps this is the best message of this game change: even replacing Turing machines by machines with oracle access does not make some separation results easy. If with this game change the problems are still hard, then it seems plausible that the “real” problem without oracles must be very difficult. Oracles also play a major role in Scott Aaronson and Alex Arkhipov’s recent important work on quantum complexity.

${\bullet }$ Exact ${\rightarrow}$ approximate: There are countless problems that we would like to solve, but cannot solve exactly. For some we are blocked because they are ${\mathsf{NP}}$-complete or worse; others are in polynomial time, but we cannot find a fast exact algorithm. Examples of the former include solving integer programs, non-linear optimization problems, and many others; the latter include solving near neighbor problems, eigenvalue computations, and again many others. In both cases the game change is to accept an approximate answer, which often turns an intractable problem into a feasible one.
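A classic concrete instance of this switch, sketched here only for illustration: minimum vertex cover is ${\mathsf{NP}}$-hard to solve exactly, yet taking both endpoints of a greedily built maximal matching gives a cover at most twice optimal.

```python
def vertex_cover_2approx(edges):
    # Greedy maximal matching: grab both endpoints of any edge not
    # yet covered.  The chosen edges are vertex-disjoint, and any
    # cover must contain at least one endpoint of each of them, so
    # the returned set has size at most 2 * OPT.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On a triangle the algorithm returns a cover of size ${2}$, which here happens to be optimal; in the worst case it is off by the promised factor of two.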

${\bullet }$ Worst case ${\rightarrow}$ average case: Changing the model of how we measure complexity is another important game changing idea. Average case is one that sometimes is insightful, but for many problems the choice of the distribution is critical. Another way to change the basic game is to use smoothed analysis, which seems to be quite powerful. This brilliant idea is due to Dan Spielman and Shang-Hua Teng—I will discuss it in more detail in the future.

The honest point is this: no one actually wants a solution to the changed problem; we switch the game so that “near” solutions become acceptable. But no one really wants these—people want the answer to the original problem.

The ABC Conjecture

The ABC conjecture connects the additive structure of the integers with their multiplicative structure. It has its own home page, created by Abderrahmane Nitaj. Results that connect these two structures are usually deep; many of the open problems in number theory concern exactly the interplay of addition and multiplication. For example, the Twin Prime Conjecture does exactly this: given an ${x}$, is there a prime ${p>x}$ so that ${p+2}$ is also prime? The addition of ${2}$ to a prime is the reason this conjecture is so hard.
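To make the question concrete, here is a tiny search (an illustration of the statement, not evidence for the conjecture; the function name is mine). The Twin Prime Conjecture asserts exactly that this loop halts for every ${x}$.

```python
def next_twin_prime(x):
    # Smallest prime p > x such that p + 2 is also prime.
    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return False
            f += 2
        return True

    p = x + 1
    while not (is_prime(p) and is_prime(p + 2)):
        p += 1
    return p
```

For instance the search finds the twin pair ${(11, 13)}$ past ${x = 5}$ and ${(101, 103)}$ past ${x = 100}$; that it always succeeds is precisely what no one can prove.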

Define ${N_0(m)}$ to be:

$\displaystyle \prod_{ p | m} p.$

This is called the radical of the number ${m}$. It is the product of all the primes that divide ${m}$, but taken with multiplicity ${1}$. Thus ${N_0(9) = 3}$ and ${N_0(60) = 30}$. The ABC conjecture is:

Given ${\epsilon>0}$ there is a constant ${K}$ so that for all non-zero relatively prime integers ${a,b,c}$ with ${a+b = c}$, we have the inequality

$\displaystyle \max \left( |a|,|b|,|c| \right) \le K (N_0(abc))^{1+\epsilon}.$
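Both the radical and the rarity of “hits” are easy to experiment with. Below is a sketch with my own function names: `radical` computes ${N_0}$ by trial division, and `abc_hits` lists coprime triples ${a+b=c}$ with ${c > N_0(abc)}$, such as ${1+8=9}$, whose radical is ${N_0(72)=6}$. The conjecture says the quality ${\log c / \log N_0(abc)}$ of such hits stays bounded.

```python
from math import gcd, log

def radical(m):
    # N_0(m): the product of the distinct primes dividing m.
    m = abs(m)
    r, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * m if m > 1 else r

def abc_hits(limit):
    # Coprime triples a + b = c (c < limit) with c > N_0(abc),
    # reported with their quality log(c) / log(N_0(abc)).
    hits = []
    for c in range(3, limit):
        for a in range(1, c // 2 + 1):
            b = c - a
            if gcd(a, b) == 1:
                r = radical(a * b * c)
                if c > r:
                    hits.append((a, b, c, log(c) / log(r)))
    return hits
```

A quick run confirms the examples from the text, ${N_0(9)=3}$ and ${N_0(60)=30}$, and turns up the hits ${(1,8,9)}$ and ${(5,27,32)}$ below ${50}$; hits exist, but high-quality ones are strikingly scarce.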

One simple demonstration of the immense power of this conjecture is that it implies Fermat’s Last Theorem is true for all ${n}$ large enough. Suppose that

$\displaystyle x^n + y^n = z^n,$

where ${x,y,z}$ are non-zero and relatively prime. Then let ${a=x^n}$, ${b=y^n}$, and, you guessed it, ${c=z^n}$. The key insight is that the radical of ${abc}$ is very small:

$\displaystyle N_0(abc) = N_0(x^n y^n z^n) = N_0(xyz) \le |xyz|.$

A simple calculation using the ABC Conjecture shows that this is a contradiction for ${n}$ large enough: taking ${0 < x,y < z}$, the conjecture gives ${z^n \le K (N_0(abc))^{1+\epsilon} \le K (xyz)^{1+\epsilon} < K z^{3(1+\epsilon)}}$, which is impossible once ${n}$ exceeds a bound depending only on ${K}$ and ${\epsilon}$. Pretty impressive.

The ABC Problem Changed

Walter Stothers in 1981 and independently Richard Mason in 1983 discovered a basic theorem, not a conjecture, about polynomials, that looks like the ABC Conjecture for integers. Since ${1981 < 1983 < 1985}$, their game changing idea preceded the integer ABC Conjecture. I am not sure whether there was actual intellectual synergy between the two forms or whether the timing was a coincidence—I welcome comments on this. In any event, the polynomial version is provable, and this really is the point that I wish to make. Often the polynomial version of a hard integer problem can be solved.

Theorem: If ${f(t),g(t),h(t)}$ are relatively prime univariate polynomials over the complex numbers, not all constant, such that ${f+g=h}$, then

$\displaystyle \max \{ \deg(f), \deg(g), \deg(h) \} \le n_0(fgh)-1,$

where ${n_0(q(t))}$ is the number of distinct roots of ${q(t)}$.

I will not give a proof—see this for a well-written version of one. I will say that one of the many reasons this works for polynomials is that they have additional structure: for instance, we can form the derivative of a polynomial, but there seems to be no counterpart for the integers.
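The derivative is also what makes the count ${n_0}$ effectively computable: the number of distinct roots of ${q}$ equals ${\deg q - \deg \gcd(q, q')}$, and the gcd needs only rational arithmetic. Here is a small sketch (helper names are my own) that checks the theorem on the identity ${1 + (t^2 + 2t) = (t+1)^2}$.

```python
from fractions import Fraction as F

def trim(p):
    # Drop trailing zero coefficients (coefficients run low-to-high).
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def deg(p):
    return len(p) - 1

def mul(p, q):
    # Product of two polynomials.
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def deriv(p):
    return trim([i * c for i, c in enumerate(p)][1:] or [F(0)])

def polymod(p, q):
    # Remainder of p divided by q (long division).
    p = trim(p)
    while deg(p) >= deg(q) and any(c != 0 for c in p):
        c, s = p[-1] / q[-1], deg(p) - deg(q)
        for i, b in enumerate(q):
            p[s + i] -= c * b
        p = trim(p)
    return p

def polygcd(p, q):
    # Monic gcd via the Euclidean algorithm.
    p, q = trim(p), trim(q)
    while any(c != 0 for c in q):
        p, q = q, polymod(p, q)
    return trim([c / p[-1] for c in p])

def n0(p):
    # Number of distinct complex roots: deg p - deg gcd(p, p').
    return deg(p) - deg(polygcd(p, deriv(p)))
```

With ${f=1}$, ${g=t^2+2t}$, ${h=(t+1)^2}$ the product ${fgh}$ has the four roots ${0, -2, -1, -1}$, three of them distinct, so ${n_0(fgh)=3}$ and indeed ${\max \deg = 2 \le 3 - 1}$.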

Though it is a “second prize” compared to the original ABC Conjecture, this theorem can still be used to solve many questions. In particular it can be used to prove:

Theorem: Let ${n \ge 3}$ be an integer.

$\displaystyle x(t)^n + y(t)^n = z(t)^n,$

has no solution in non-constant polynomials over the complex numbers.

The proof is similar to the above deduction of Fermat-for-large-${n}$ from ABC. Note that since there are no unknown constants here, the proof can handle all ${n \ge 3}$.

Open Problems

What other game changing methods can we use in complexity theory? Are the results from the game changing really insightful? Or not? What do you think?

1. February 6, 2011 7:58 pm

As regards the P vs. NP problem, one way of changing the game could be to view it as a physical problem and no more as a mathematical one. The time needed to solve an NP-complete problem would be similar to the energy that is required to reduce the entropy of the system it represents – in a sense that we still have to discover. And in that new theory the case P = NP, although unfeasible in practice, would be the analogous to the so called “reversible processes” of a system that happen with null entropy increment. Finding a polynomial algorithm for an NP-complete problem would be a sort of ideal situation: no one could do it, but no one could prove that it can’t be done either…

2. February 6, 2011 8:54 pm

$\mathb {N}_0(60)+30$?

• February 6, 2011 8:55 pm

$\mathb{N}_0=30$?

3. February 6, 2011 9:00 pm

$\mathcal{N}_0(60)=30$?

4. February 6, 2011 9:02 pm

does $\mathcal{N}_0(60)=30$ hold?

sorry for making so many unparsable formula in latex

• February 6, 2011 11:36 pm

Seems like “yes”: 60=2^2*3*5, so (by definition, taking multiplicity 1) N-naught(60) = 2*3*5 = 30.

• February 7, 2011 2:42 am

By the way, there is a rumour which says that the ABC is just a joke made by the mathematician

5. February 7, 2011 6:21 am

Gödel’s Lost Letter asks: One of the traditions in mathematics when faced with a resistant problem is to change the problem […] What game-changing methods can we use in complexity theory?

In any given decade, the history of mathematics tells us that the most interesting game-changing processes are already underway, but have not yet been formalized, or even widely recognized as game-changing processes. For example, Nathaniel Bowditch’s 1807 edition of The New American Practical Navigator: Being an Epitome of Navigation anticipates by two decades Gauss’ 1827 Disquisitiones generales circa superficies curvas: with Bowditch’s early exposition of post-Euclidean geometry being eminently practical; and Gauss’ later exposition being natural and universal (both Bowditch’ and Gauss’ texts are on-line).

Nowadays similar game-changing mathematical processes are underway, and as in past decades (and centuries), our practical grasp of this game-change presently is stronger than our natural and universal grasp … which is very good news and a wonderful opportunity for young mathematicians.

What might these modern game-changes be? It seems to me that one theme is the merging of our understanding of classical and quantum complexity theory.

With respect to quantum complexity theory, nowadays we are abundantly blessed with magisterial texts like Michael Nielsen and Isaac Chuang’s Quantum Computation and Quantum Information and John Watrous’ Theory of Quantum Information: Lecture Notes from 2008. These texts reveal to us the crystalline perfection of Hilbert space, just as Euclid’s Elements revealed to Bowditch and Gauss the crystalline perfection of Newtonian space.

Despite this perfection of postulates, the practical challenges of navigation and surveying led Bowditch and Gauss to abandon the rigid postulates of Euclid and Newton, at first in practice (Bowditch) and later axiomatically (Gauss and Riemann). Similarly today the practical challenges of quantum technologies are leading us to abandon the rigid geometries of Hilbert space, at first in practice (tensor network manifolds) and later axiomatically (string theory?) … although the best form for these post-Hilbert postulates is not yet known.

If 20th century quantum complexity theory was (perhaps) too rigid, we are now beginning to appreciate that 20th century classical complexity theory (perhaps) was too fluid. Already in working with the complexity class P we encounter obstructions: given a Turing machine M, in general only an oracle can tell us whether it belongs to P … and even if an oracle assures us that M∈P, we are still unable even to upper-bound the runtime of M (we are assured the upper bound exists, but are unable to calculate what it is).

Once we appreciate how large and mysterious the class P is, then we are less surprised at our inability to separate P from NP, and more willing to embrace game-changing restrictions on P that make separations easier to prove … although (as in post-Hilbert quantum dynamics) the best form for these post-Turing complexity theory postulates is not yet known.

Thus, the present decade is exciting precisely because in practice we are already living in a post-Hilbert, post-Turing world, in which post-Hilbert dynamical flows are well-adapted to describe post-Turing computational processes … but we have not yet crystallized the natural and universal mathematical postulates that describe this world. If the history of mathematics is any guide, this crystallization will not be too long in coming … and it will be plenty exciting … because the net effect is to carry us toward a world whose dynamical and computational complexities we grasp within a more natural and universal mathematical context.

In summary, the mathematical games that are associated to complexity theory are not going to change; in the ways that matter most, these games already have changed. Good! :)

• February 7, 2011 3:03 pm

Partly as a followup to the above, but mainly as an example of how the (wonderfully enjoyable) questions asked on Gödel’s Lost Letter and P=NP can inspire concrete conjectures that are fun to think about, let us consider the following question:

Let R be an oracle that computes the maximal runtime f(M,n) that is associated to a polynomial-time Turing machine M over all inputs of length n. What is the power of P^R? In particular, is R low for P?

The motivation here is that we have the strong intuition that R is low for P … or at least, it should be low for P … for any definition of P that is natural for engineering purposes.

Thus WLOEG (“without loss of engineering generality”) we are led to define a maximal subset P’⊆P such that the restricted oracle R’ = f(M∈P’,n) is low for the restricted class P’. Moreover, we are even tempted to weaken the restriction, such that the oracle R returns only a (polynomial) upper bound on the maximal runtime (rather than the precise maximal runtime).

Is P’ an interesting class? Is it weaker than P for any practical purpose? More generally, are runtime oracle questions mathematically interesting? Heck … don’t ask me! But if you think these questions are interesting … or think they’re trivial (and you know the answers) … then I’ll happily post this comment as a MathOverflow/TCS StackExchange question, in whatever amended form(s) that Gödel’s Lost Letter readers may suggest … so that you can earn reputation points by your answer(s). :)

• February 8, 2011 9:30 am

Here maybe I should add, the restricted oracle R’ is constructed with the idea that Dick’s suggested change, (Turing machines)→(Turing machines with oracles), might perhaps be reversed to (Turing machines with oracles)→(Turing machines).

In effect, the oracle R’ bootstraps itself into non-existence relative to P’. The intuition is that when we are proving separations, it is mighty handy to have oracles around … and yet, when it comes time to instantiate concrete calculations, it is nice not to be obstructed by oracular elements in the algorithm … and so it is natural to seek the best of both worlds.

So when we ask “when do oracles disappear in final results?”, one answer is “bootstrap-low oracles disappear from the complexity classes that they induce” … of which the runtime oracle R’ is one example, and P’ is the “bootstrap class” that it induces.

This is the narrative that is (slowly) crystallizing around the TCS StackExchange question “Do runtimes for P require EXP resources to upper-bound? … are concrete examples known?”