
Pretending In Number Theory

October 30, 2013


A definition is often more important than a theorem

Unknown

Andrew Granville is a British mathematician and a world expert in analytic number theory. He came to Canada for his PhD, and after a postdoc in Toronto and time at IAS in Princeton, joined the faculty of another Georgia university, Tech’s arch-rival in football, the “Bulldogs” of the University of Georgia. After serving as Chair of Mathematics there he went back to Canada in 2002 to join the Université de Montréal, where he is today. Interestingly, his university also has a football team, where “football” does not mean soccer as in Britain, but does not quite mean American football either. It plays Canadian football, which uses a wider and longer field, plays 12 to a side rather than 11, and allows only 3 downs to advance 10 yards. Is that football? It depends on how you define “football.” Anyway, his new team is not a rival of ours.

Today I wish to discuss the role of the right definition in advancing mathematics and theory in general, and Granville’s relatively new, brilliant definition in particular.

Granville is both a solver of hard open problems and a writer of beautiful papers. He takes time to write insightful expository articles and has won several prizes for them. Five years ago he won the coveted Chauvenet Prize from the Mathematical Association of America for his paper “It is easy to determine whether a given integer is prime,” which surveys the developments ushered in by the polynomial congruence method of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, the method that yielded the first deterministic polynomial-time algorithm for primality.

He has a short survey on his website with the provocative title

“Don’t be seduced by the zeros!”

—about alternatives to the zeta function as tools for analysis. He is also the keeper of an indispensable web resource—a summary of many of the properties of binomial coefficients. If you need to know about them, check out this page.

Definitions

A definition, as used in mathematics, cannot be wrong. Every definition is legal, is allowed, none can be wrong. A definition can, however, be useless, unproductive, or just plain uninteresting.

A definition really is just a shorthand for some concept, and the definition itself can always be eliminated. Yet a definition can be amazingly powerful. The right definition suggests the right conjectures and the right arguments, and it can make understanding a proof much easier. A definition can be of immense power.

Consider a simple example from number theory:

\displaystyle  \mathsf{Even}(x): \exists y \ (x = 2y).

We know that this concept is of great importance, and is used in basic results and in advanced results throughout math. The classic proof that the square root of {2} is not a rational number is based on an argument that uses this concept. We could prove this result without this definition; we could, but it would be much harder to follow. That is the beauty of the right definition.
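
As a reminder of just how the definition enters, here is the standard argument in outline; nothing in it is new. Suppose {\sqrt{2} = a/b} with the fraction in lowest terms. Squaring gives

\displaystyle  a^2 = 2b^2,

so {\mathsf{Even}(a^2)} holds, which forces {\mathsf{Even}(a)}, since an odd number has an odd square. Writing {a = 2c} gives {b^2 = 2c^2}, so {\mathsf{Even}(b)} holds too, contradicting the assumption that the fraction was in lowest terms. One small predicate carries the whole argument.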

Consider another example: the notion of a graph. Leonhard Euler in 1736 invented graph theory, with the publication of his famous paper on the Seven Bridges of Königsberg—here it is in Latin. Below is a diagram from the original paper:

[Figure: Euler’s diagram from the original Königsberg bridges paper]

And here is how we view it today:

[Figure: the Königsberg bridges drawn as a modern graph]

One could also argue that he invented topology, but he certainly invented graph theory. He essentially defined the notion of a graph, and opened a new chapter in mathematics. Euler did a mountain of great work and solved many hard open problems, but this definition was one of his greatest results.

Before going on to discuss a relatively new definition due to Granville, I note that mathematical logicians care about the elimination of definitions. They know that definitions can always be removed, although sometimes the removal can lead to an increase in the length of a proof. See for example the paper by Jeremy Avigad on the cost of eliminating definitions from proofs in first-order logic. Spoiler alert: the cost in first-order logic is at most a polynomial increase in length.

Pretenders

Andrew Granville and Kannan Soundararajan have created a beautiful definition that has already made important changes to number theory. They have defined the notion of when one function pretends to be another one. It allows them, for example, to give a new proof of the prime number theorem, which they call

“A pretentious proof of the prime number theorem.”

The definition is wonderful; the name is clever; and the notion suggests lots of new directions.

A meta-comment before I give their main definition. The idea that one math object can “pretend” to be another is really related to the notion of distance. Instead of number-theoretic functions, consider a simple pair of vectors, say {x} and {y}, that lie in some Euclidean space. We could say that {x} “pretends” to be {y} if the distance between {x} and {y} is small. Certainly this notion of pretending makes good sense: if {x} is near {y}, then often statements about {x} also hold for {y}—especially if the statements are stable under small perturbations. Many general statements about football also apply to Canadian football, such as the importance of having a good field-goal kicker, the nature of various passing plays, and the need to gain 10 yards. Other differences matter more, such as the end zones being twice as deep as in the US game, which compensates somewhat for having only 3 downs when the offense is close to the end zone.

Granville and his colleague Soundararajan have noted that {\dots} well, let me just quote the abstract of one of their short papers on the notion of pretending:

We note how several central results in multiplicative number theory may be rephrased naturally in terms of multiplicative functions {f} that pretend to be another multiplicative function {g}. We formalize a ‘distance’ which gives a measure of such pretentiousness, and as one consequence obtain a curious inequality for the zeta-function.
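
For readers who want to see the distance itself: for multiplicative functions {f} and {g} taking values in the unit disc, their measure of pretentiousness is, up to normalization, the quantity

\displaystyle  \mathbb{D}(f,g;x)^2 = \sum_{p \le x} \frac{1 - \mathrm{Re}\, f(p)\overline{g(p)}}{p},

and, roughly speaking, {f} pretends to be {g} when this distance stays bounded as {x \rightarrow \infty}.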

The inequalities they get are indeed curious:

[Image: their curious inequality for the zeta function]

See their paper for full details. Also look at this page, which is the main page of a course that Granville gave in 2011. The title of the course is: Pretentious Introduction to Analytic Number Theory.

EDP

What I really find exciting about their notion of pretenders is its relationship to the Erdős Discrepancy Problem (EDP). We covered EDP last spring, and it was a PolyMath project.

One version of the problem is this: Suppose that {f(n)} takes on values {\pm 1} and is multiplicative. Erdős asked years ago, and placed a $500 prize on the answer, whether it is possible (for any such function) that the partial sums of {f} are bounded.
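
To get a feel for the question, here is a small Python sketch of mine, not from the post and not a proof of anything: it computes the partial sums of the Liouville function {\lambda(n) = (-1)^{\Omega(n)}}, a standard example of a completely multiplicative {\pm 1} function, so one can watch how far the sums wander.

    # Partial sums of the Liouville function lambda(n) = (-1)^Omega(n),
    # a completely multiplicative function taking only the values +1 and -1.

    def liouville(n):
        """Return (-1)^Omega(n), where Omega counts prime factors with multiplicity."""
        count = 0
        d = 2
        while d * d <= n:
            while n % d == 0:
                n //= d
                count += 1
            d += 1
        if n > 1:
            count += 1
        return -1 if count % 2 else 1

    def max_partial_sum(bound):
        """Return the largest |lambda(1) + ... + lambda(x)| over all x <= bound."""
        s, best = 0, 0
        for k in range(1, bound + 1):
            s += liouville(k)
            best = max(best, abs(s))
        return best

    print(max_partial_sum(10000))  # the sums wander; they do not settle down

For the Liouville function the partial sums are in fact known to be unbounded; Erdős’s question was whether some cleverer multiplicative choice of signs could keep them bounded.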

The development, shown last year in a paper by Joseph Vandehey, is that if we use the notion of a pretender, then the resulting analogue of this still-open problem has a positive answer.

Here is what he proved. For a function {f(n)} to be multiplicative (indeed completely multiplicative) it suffices to stipulate that

\displaystyle  f(p)f(m) = f(pm),

for every prime {p} and every positive integer {m}. The key insight is to weaken this definition. Call a function {f(n)} a multiplicative pretender if the above holds only for the primes in some fixed finite set {P}. Then Vandehey can prove that there do exist such functions {f(n)=\pm 1} so that

\displaystyle  \left| \sum_{1 \le k \le x} f(k) \right| = O(1),

for all {x}. The proof is a clever case analysis that constructs the actual function.
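
I will not try to reproduce the construction here, but to make the definitions concrete, here is a small Python sketch of my own (the names and the commented example are mine, not Vandehey’s): given a candidate {\pm 1} function and a finite set {P} of primes, it checks the pretender identity up to a bound and reports the largest partial sum seen.

    # Check the "multiplicative pretender" property for a candidate function f
    # taking values +1/-1: f(p) * f(m) == f(p * m) for every prime p in the
    # finite set P and every m with p * m <= bound.

    def is_pretender(f, P, bound):
        for p in P:
            for m in range(1, bound // p + 1):
                if f(p) * f(m) != f(p * m):
                    return False
        return True

    def max_partial_sum(f, bound):
        """Return the largest |f(1) + f(2) + ... + f(x)| over all x <= bound."""
        s, best = 0, 0
        for k in range(1, bound + 1):
            s += f(k)
            best = max(best, abs(s))
        return best

    # Hypothetical usage with some candidate f and, say, P = {2, 3}:
    #   if is_pretender(f, {2, 3}, 10**5):
    #       print(max_partial_sum(f, 10**5))

The checker only verifies; Vandehey’s contribution is the case analysis that actually builds a function passing both tests with partial sums bounded by a constant.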

Open Problems

I like the idea of viewing one function as pretending to be another. That it has already been so powerful in number theory suggests that we might use the notion in complexity theory. Can we find the right definition of pretenders for the objects that we study? How would the pretender notion be used in the study of boolean functions, or in the study of complexity classes? Indeed, what are the right definitions?

Comments
  1. Joe permalink
    October 30, 2013 6:41 pm

    “A definition, as used in mathematics, cannot be wrong. Every definition is legal, is allowed, none can be wrong.”

    Are you sure? What about:

    Let R be the set of all sets that are not members of themselves.

  2. kodlu permalink
    October 30, 2013 9:34 pm

    Richard. Do you mean the usual EDP conjecture is false for pretentious functions?

    • October 31, 2013 9:16 am

      The thing is that if f is not multiplicative, then EDP is *not* equivalent to just bounding partial sums. For general functions, EDP asks if the partial sums along every homogeneous arithmetic progression are bounded.

  3. October 31, 2013 9:16 am

    Dear Dr. Lipton,

    Since you are promoting new (“right”) TCS definitions, would you be slightly starting to believe in Barbosa’s Program at http://arxiv.org/ftp/arxiv/papers/0907/0907.3965.pdf?

  4. John Sidles permalink
    October 31, 2013 11:07 am

    Another wonderfully pretentious Chauvenet Lecture is Saunders Mac Lane’s “Hamiltonian mechanics and geometry” (1970) which begins with the following remark:

    Mathematical ideas do not live fully till they are presented clearly, and we never quite achieve that ultimate clarity. Just as each generation of historians must analyse the past again, so in the exact sciences we must in each period take up the renewed struggle to present as clearly as we can the underlying ideas of mathematics.

    Increasingly many scientists and engineers — 2013’s Nobel chemists for example — are simulating large-scale quantum dynamical systems on varietal state-spaces, which leads us to ask pretention-centric mathematical questions:

    Q  How is it that we so commonly find that small-dimension varietal state-spaces — whose symplectic geometries are generically encumbered with non-smooth defects — can ‘pretend’ to be smooth large-dimension Hilbert spaces?

    More generally, when we compute large-scale quantum mechanical simulations, are we pulling back and blowing down Nature’s nonsingular Hilbert space onto varieties that successfully pretend (both dynamically and thermodynamically) to be Hilbert? Or is the opposite the case, namely, that Nature’s true state-space is varietal, of which Hilbert space is a pushed-forward blowup that credibly pretends to be our reality … as is irresistibly convenient to teach undergraduates?

    Lamentably, the research-level mathematical literature that is associated to these pretention-centric questions is exceedingly daunting to non-specialist readers.

    Hopefully by the end of the 21st century, we will conceive simpler understandings that more closely and naturally approach Saunders Mac Lane’s ideal of ultimate clarity, and that explain why our STEM simulations work better-and-better with every passing year (not that we’re complaining).

    Summary and Conclusion  To paraphrase Walter White: “Simulation is the study of pretending!”

    • October 31, 2013 11:51 am

      Where’s that iconicon when I really need it?

      Many likelihoods informed me of this before,
      which hung so tott’ring in the balance that
      I could neither believe nor misdoubt.

      All’s Well That Ends Well, 1.3.119–121

    • October 31, 2013 6:07 pm

      Yes, your definition of space illustrates a lot about how you pretend to understand nature.

      Butterfield explains Belot’s overall idea, “Where the relationist admits one possible configuration, as (roughly) a specification of all the distances (and thereby angles) between all the parts of matter, the absolutist (or substantivalist) sees an infinity of possibilities: one for each way the relationist’s configuration (a relative configuration) can be embedded in the absolute space. This makes it natural to take the relationist to be envisaging a mechanics which is some sort of “quotient” of the absolutist’s mechanics.

      “But as Belot emphasises, one can instead consider quotienting the absolutist’s statespace — i.e. in a Hamiltonian framework, the phase space—rather than their configuration space. Indeed, this is exactly what one does in symplectic reduction. In particular, the Euclidean group’s action on the absolutist’s configuration space, Q say, can be lifted to give an action on the cotangent bundle T∗Q; which is accordingly called the ‘cotangent lift’. One can then take the quotient, i.e. consider the orbits into which T∗Q is partitioned by the cotangent lift.”

      (The main references are Belot (1999, 2001, 2003: Sections 3.5, 5).)

      This recalls the classic debate between Leibniz and Clarke where this definition matters:

      “Space is not a Substance, but a Property.” Clarke

      “Absolutely speaking, it appears that God can make the material Universe finite in Extension; but the contrary appears more agreeable to his Wisdom.” Leibniz

      “Children are remarkable for their intelligence and ardor, for their curiosity, their intolerance of shams, the clarity and ruthlessness of their vision.” Aldous Huxley

      “There’s only one requirement of any of us, and that is to be courageous. Because courage, as you might know, defines all other human behavior. And, I believe – because I’ve done a little of this myself – pretending to be courageous is just as good as the real thing.” David Letterman

  5. November 1, 2013 10:06 am

    hi, the idea of functions emulating other functions does seem to be quite ubiquitous in math. it would help if you sketched out the definition & related it to complexity theory constructions. lacking that [and not yet revealing the punchline by diving into the papers] my mind roamed & wandered & came up with all kinds of connections.
    couldn’t it be said that big-oh notation [at the heart of complexity theory] is a kind of emulating function for other functions?

    also look at razborovs approximation proof. a major point to make is that complexity theory is significantly concerned with C_n(M) where $M$ is a recursive language eg defined by a turing machine and n is the number of bits in the input, and C_n is the size of the minimal circuit. but notice how similar this is to kolmogorov complexity K(x).

    therefore C_n, while it is well defined, must likely be quite intractable to compute. razborov came up with an approximation function for C_n which is close enough to establish nontrivial exponential lower bounds. more ideas & a research program on that here

    ah, notice also how “natural proofs” seem to fit into the framework of looking at emulating functions of other functions. all complexity-measuring functions seem to be function emulators of C_n.

    realized last night, broadly construed, have also been looking at this concept wrt the collatz conjecture. the idea is to look at the function giving stopping distances which is quite random and apply “small” “local” transformations that seem to decrease its randomness/entropy. this is mostly an empirical program so far but think it could bear fruit some day.

    • November 2, 2013 12:24 pm

      If we’re going to think about approximations then we might as well think about successive orders of approximation and that will eventually lead us to thinking about analytic series analogous to Taylor series.

    • November 7, 2013 5:40 pm

      Here I was thinking of beginning with something simple, like boolean functions, and developing the appropriate form of differential analysis for those. Jumping toward the middle of a long story —

      Taylor Series Expansions of the 16 Boolean Functions on 2 Variables

  6. November 2, 2013 4:09 am

    Reblogged this on citedcorpse and commented:
    a small part of me wishes we could use the word “wanky” instead of pretentious, but alas, my dream cannot be realised

  7. November 4, 2013 11:53 am

    one other thought here. it seems at times many proofs or even large swathes of mathematics can be represented as starting with a question about an important function say f_1 which cannot be directly analyzed, and then looking at incremental alterations to this function f_2, f_3, f_4... which preserve some property in question (somewhat in the way that one makes equivalent algebraic alterations to equations to solve them, preserving equality, but in this broader sense, not restricted merely to equality, which is too limiting). finally one arrives at a final f' which can be analyzed for the property. this is a sort of “design pattern of proofs”. & think there may be some much deeper theory here. a sort of vast algorithmic calculus waiting to be uncovered & formalized, possibly quite powerful. glimpsed in the idea of functions “emulating” other functions in various papers. but, alas exact details are sketchy at this time. =(

