Why Do Some Arguments Seem Wrong?

August 25, 2012


Or: how stupid am I?

Siegbert Tarrasch was a very strong player and teacher of chess in the late 19th century and into the early 20th century. He coined the term chess blindness—the failure of a chess player to make a normally obvious move or to see a normally obvious threat. He also stated the following rule, which is named for him:

Always put the rook behind the pawn {\dots} Except when it is incorrect to do so.

Today I want to talk about a simple mathematical claim that I found to be hard to follow.

Is my trouble due to age? Or a version of chess blindness transferred to math? I am reminded of then-World Champion Vladimir Kramnik’s one-move blunder in 2006 against the chess program Deep Fritz 10 running on an ordinary quad-core PC.

Kramnik played 34…Qe3?? just as Ken was leaving to teach his intro graduate theory class, and when he arrived he put the match broadcast on the classroom Internet console just to be sure that this and 35.Qh7 checkmate had really happened. This still stands as the last time a human player has faced a computer in a match on even terms.

I have been working in the theory salt-mines for many decades now—too long perhaps. But I believe that I still can understand basic arguments, create new ones, and generally understand things. Recently I read an argument in a number theory text that was a “throw-away” easy remark. Just obvious. No proof supplied. None needed. Yet I was puzzled, the remark seemed wrong; it seemed too simple to be true.

The statement and its claimed “obviousness” made me think about all those statements in our papers, our talks, and our lectures—statements that may not be so obvious to all. How many times in the past have I claimed something to another—in talking or in writing—that was not obvious?

Some Background

The statement concerns the Riemann Zeta function and one of its near cousins. Recall—many of you will know this already—that the zeta function {\zeta(s)} is defined for complex {s} with real part greater than {1} to be the value of the series

\displaystyle  \sum_{n=1}^{\infty} \frac{1}{n^{s}}.

This function can be extended to all complex values except for {s=1} via technical means that are not needed here today. See this for more and a pretty picture. Of course the behavior of this function is well known to capture the global structure of the prime numbers, and this makes its behavior very “complex”—so difficult that the question of where its zeroes lie is one of the six outstanding Clay Millennium Prize Problems.
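For real {s>1} the series can be checked numerically against a known value such as Euler's {\zeta(2)=\pi^2/6}. A quick sanity-check sketch in Python (the function name `zeta_partial` is mine):

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the zeta series: sum over n >= 1 of 1/n^s, for real s > 1."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

approx = zeta_partial(2)
exact = math.pi**2 / 6  # Euler: zeta(2) = pi^2/6
print(approx, exact)    # the two values agree to about four decimal places
```

The truncation error for {s=2} is roughly {1/N} for {N} terms, so the partial sum converges slowly but visibly.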

A cousin of the zeta function is a function that was used by Dirichlet to prove his famous arithmetic progression theorem. Let {\chi(n)} be a character (more on this in a moment), then define {L(\chi,s)} to be:

\displaystyle  \sum_{n=1}^{\infty} \frac{\chi(n)}{n^{s}}.

A character {\chi(n)} is an arithmetic function that is both multiplicative and periodic, but that is not important here. The only character we need is the principal one: define {\chi_{q}(n)} to be {1} when {n} and {q} are co-prime and {0} otherwise. For example, {\chi_{6}(5)=1} and {\chi_{6}(4)=0}.
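In code the principal character is a one-liner (a small Python sketch; the name `chi_q` is mine):

```python
from math import gcd

def chi_q(q, n):
    """Principal character mod q: 1 when gcd(n, q) == 1, else 0."""
    return 1 if gcd(n, q) == 1 else 0

print(chi_q(6, 5))  # 1, since 5 and 6 are co-prime
print(chi_q(6, 4))  # 0, since gcd(4, 6) = 2 > 1
```

Note that it is periodic with period {q}: for instance `chi_q(6, 11)` equals `chi_q(6, 5)`.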

The Statement

One comment, which has nothing to do with my confusion: this is a perfect example of math’s often inconsistent notation. Note that the zeta function is just { L(\chi_{1},s) }, yet I believe I have never seen it written in this way. Oh well.

A natural question is: how do we relate { L(\chi_{q},s) } to the original zeta function? It seems plausible that they should be related—the question is just how. Here is the statement when {q} is a prime:

\displaystyle  L(\chi_{q},s) = \zeta(s)\left(1-\frac{1}{q^{s}}\right).

That is it. Pretty simple relationship. No?

This is the statement that puzzled me. The one that I could not believe is true. It just seemed too simple. Is it clear to you? If so I am sorry for wasting your time, but for me it just seemed wrong. Let me try and unravel why I was troubled by it.
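Before unraveling it, the claim can at least be checked numerically. A quick Python sketch, truncating both series at the same point (for a prime {q} and real {s>1}):

```python
from math import gcd

def L_chi(q, s, terms=200_000):
    """Partial sum of L(chi_q, s): sum of 1/n^s over n co-prime to q."""
    return sum(1.0 / n**s for n in range(1, terms + 1) if gcd(n, q) == 1)

def zeta_partial(s, terms=200_000):
    """Partial sum of the zeta series."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

q, s = 5, 2.0
lhs = L_chi(q, s)
rhs = zeta_partial(s) * (1 - 1 / q**s)
print(lhs, rhs)  # the two sides agree up to the truncation error
```

This is only evidence, not a proof, but it rules out the worry that the identity is off by some double-counted term.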

The equation really says this:

\displaystyle  \sum_{(n,q)=1} \frac{1}{n^{s}} = \sum_{n} \frac{1}{n^{s}} - \sum_{n} \frac{1}{(qn)^{s}}.

What confused me was the worry that some double counting was happening—that would be a problem.

The Proof

The proof that the equation above is correct is embarrassingly simple. Define {A} to be the set

\displaystyle  \{ x \mid (x,q) > 1 \text{ and } x \ge 1 \}

and define {B} to be the set

\displaystyle  \{ qy \mid y \ge 1 \}.

The key insight is that these sets are the same: {A = B}. If {x} is in {A}, then {x} has a common factor with the prime {q}, and so {x = qy} for some {y}; this shows that {x} is in {B}. Conversely, if {qy} is in {B}, then {qy} and {q} have a common factor greater than {1}, so {qy} is in {A}.

The equation is:

\displaystyle  \sum_{(n,q)=1} \frac{1}{n^{s}} = \sum_{n} \frac{1}{n^{s}} - \sum_{n} \frac{1}{(qn)^{s}}.

Now we can rewrite the right hand side:

\displaystyle \sum_{(n,q)=1} \frac{1}{n^{s}} + \sum_{(n,q)>1} \frac{1}{n^{s}} - \sum_{n} \frac{1}{(qn)^{s}}.

This follows since either {(n,q)=1} or {(n,q)>1}—pretty deep, I know. Then the last two sums cancel, since {A} is equal to {B}:

\displaystyle  \sum_{(n,q)>1} \frac{1}{n^{s}} - \sum_{n} \frac{1}{(qn)^{s}} = \sum_{n \in A} \frac{1}{n^{s}} - \sum_{n \in B} \frac{1}{n^{s}} = 0.
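The set identity {A=B} driving this cancellation is also easy to confirm by brute force for a prime {q}, up to any finite bound (a small Python check):

```python
from math import gcd

q, N = 7, 1000  # q is prime; compare the two sets up to N

# A: positive integers sharing a factor with q
A = {x for x in range(1, N + 1) if gcd(x, q) > 1}
# B: positive multiples of q (those not exceeding N)
B = {q * y for y in range(1, N // q + 1)}

print(A == B)  # True: for prime q, sharing a factor means being a multiple
```

For a composite {q} the two sets differ—e.g. with {q=6}, the number {4} is in {A} but not in {B}—which is exactly why the identity needs the prime hypothesis.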

The Blind Spot?

Ken thinks I was fooled because one sum used n to index Term(n) while the other used n to index Term(qn). He thinks it is similar to the fallacy

\displaystyle  L^2 =? \{x^2 : x \in L\}

in the formal-languages part of a theory course where squaring involves concatenation, or where L is a set of numbers and it’s just multiplication. His advice to students is to rewrite any set definition of the form

\displaystyle  S = \{E(\dots) : \dots\}

where {E} is a function or compound expression, into the “atomic set” form

\displaystyle  S = \{y : y \text{~can be written as~} \dots \}

This he says gives the reader an extra chance to realize in the case of {L^2} that “…can be written as {y = x^2}…” is wrong and “…can be written as {y = xw} where {x \in L} and {w \in L}” is correct.
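Ken’s point is concrete when {L} is a set of numbers and “squaring” means the set of all pairwise products (a small Python illustration):

```python
L = {1, 2, 3}

# The fallacious reading: square each element individually
wrong = {x * x for x in L}
# The correct reading: all products of two (not necessarily equal) elements
right = {x * w for x in L for w in L}

print(wrong)  # {1, 4, 9}
print(right)  # {1, 2, 3, 4, 6, 9}
```

The atomic-set rewriting makes the extra elements {2, 3, 6}—products of two distinct members—impossible to overlook.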

In my Riemann case, the advice becomes to rewrite the compound use of the index {n} as {E(n) = qn} into the form

\displaystyle  \sum_{n} \frac{1}{(qn)^{s}} = \sum_{m} \left(\frac{1}{m^s} : m \text{~can be written as~} m = qn \text{~for some~} n \ge 1\right).

Granted, the {:} here is non-standard notation, but it yields the same clarification as my use of the sets {A,B} above.

Open Problems

Do you sometimes run into math statements like this? How often does this happen? Can we do a better job at explaining things?

9 Comments
  1. seanpkh permalink
    August 25, 2012 9:01 am

    Great post, but let me just comment that both examples (the chess move and the proof) also seem to be examples of a certain type of tunnel vision – you have a sort of tree of options and you search so far down one initial branch that in the end you forget to even start down the other branches and see if there is something that is already obviously better at the start. In the zeta function identity you posted the “other branch” is instead of evaluating immediately into series, to first divide both sides by (1-1/q^s) and then expand the 1/(1-1/q^s) on the lefthand side into a geometric series, after which the identity really is obvious.

    • August 25, 2012 2:51 pm

      Ok so you expand 1/(1-1/q^s) into the geometric series \sum_{n=0}^{\infty} 1/q^{ns} on the left side. What makes the identity obvious after that?

  2. Neil Slagle permalink
    August 25, 2012 12:30 pm

    I’ve noticed that as the wine glass nears total emptiness, the proportion of obvious statements that become suddenly not obvious approaches one almost surely. The solution… more wine?

  3. pydi permalink
    August 26, 2012 3:22 pm

    It is always nice to discover fallacies. They are opportunities to improve our understanding of the theory of fallacy. Nice read. Thanks.

  4. August 27, 2012 6:13 pm

    It’s a simple exercise to show that for composite q, the (1-1/q) term needs to be transformed into a product of (1-1/p) over all prime factors p of q.

    • anonymous permalink
      August 28, 2012 5:26 am

      Sorry, but I don’t get it, I have
      5/6 = (1-1/6) \neq (1-1/2)*(1-1/3) = (1/2)*(2/3) = 1/3

      • August 28, 2012 1:09 pm

        Right, the only reason this was worth mentioning is precisely because the two quantities *are* different. The original formula only holds when q is prime (as mentioned above). The generalisation to all integers is to multiply zeta by (1-1/p^s) for all prime factors p of q. For instance, when q = 6 and s = 3, you can verify numerically that the LHS gives about 1.0128 , while the RHS of the original formula would give about 1.1965. Applying my correction gives the right result.

  5. August 30, 2012 2:44 am

    If I’m doing a lock on a drive for multi-core safe access, I have a lock bit. When I want the lock, I spin with a LOCK BTS instruction while Yielding if it is not obtained. When done, I unlock with LOCK BTR.

    I have heard of the bakery algorithm and I know the XCHG instruction can be used. I think there’s a CMPXCHG or something like that. I’ve never needed to look into these.

    If I did not spin with Yield, I’d have to do like I do with going-to-sleep-until-message.

