# Descending Proofs Into Algorithms

*A way to make indirect reasoning more palpable*

*[Image: Wikimedia Commons source]*

Nicholas Saunderson was the fourth Lucasian Professor at Cambridge, two after Isaac Newton. He promoted Newton’s *Principia Mathematica* in the Cambridge curriculum but channeled his original work into lecture notes and treatises rather than published papers. After his death, most of his work was collected into one book, *The Elements of Algebra in Ten Books*, whose title recalls Euclid’s *Elements*. It includes what is often credited as the first “extended” version of Euclid’s algorithm.

Today we raise the idea of using algorithms such as this as the basis for proofs.

Saunderson was blind from age one. He built a machine for doing what he called “Palpable Arithmetic” by touch. As described in the same book, it was an enhanced abacus—not a machine for automated calculation of the kind a later Lucasian professor, Charles Babbage, attempted to build.

We take the “palpable” idea metaphorically. Not only beginning students but we ourselves still find proofs by contradiction or “infinite descent” hard to pick up at first reading. We wonder how far mathematics can be developed so that the hard nubs of proofs are sheathed in assertions about the availability and correctness of algorithms. The algorithm’s proof may still involve contradiction, but there’s a difference: You can interact with an algorithm. It is hard to interact with a contradiction.

## An Example

It was known long before Euclid that the square root of 2 is irrational. In the terms Saunderson used, the diagonal of a square is “incommensurable” with its side.

Alexander Bogomolny’s great educational website *Cut the Knot* has an entire section on proofs. Its coverage of the irrationality of $\sqrt{2}$ itemizes twenty-eight proofs. All seem to rely on some type of infinite descent: if there is a rational solution then there is a smaller one and so on. Or they involve a contradiction of a supposition whose introduction seems perfunctory rather than concrete. We gave a proof by induction in a post some years ago, where we also noted a MathOverflow thread and a discussion by Tim Gowers about this example.

We suspect that one reason the proof of this simple fact is considered hard for a newcomer is just that it uses these kinds of descent and suppositions. Certainly the fact itself was considered veiled in antiquity. According to legend the followers of Pythagoras treated it as an official secret and murdered Hippasus of Metapontum for the crime of divulging it. To state it truly without fear today, we still want a clear view of *why* the square root of 2 is irrational.

Our suggestion below is to avoid the descent completely. Of course it is used somewhere, but it is encapsulated in another result. The result is that for any co-prime integers $r$ and $s$ there are integers $x$ and $y$ such that

$$rx + sy = 1.$$

The $x$ and $y$ are given by the extended Euclidean algorithm. Incidentally, this was noted earlier by the French mathematician Claude Bachet de Méziriac—see this review—while Saunderson ascribed the general method to his late colleague Roger Cotes two pages before his chapter “Of Incommensurables” (in Book V) where he laid out full details.

## An Old Attempt

Here is the closest classical proof we could find to our aims. We quote the source verbatim (including “it’s” not “its”) and will reveal it at the end.

**Proposition 15.** If there be any whole number, as $n$, whose square root cannot be expressed by any other whole number; I say then that neither can it be expressed by any fraction whatever.

For if possible, let the square root of $n$ be expressed by a fraction which when reduced to it’s least integral terms is $\frac{a}{b}$, that is, let $\sqrt{n} = \frac{a}{b}$, then we shall have $\frac{n}{1} = \frac{aa}{bb}$; but the fraction $\frac{aa}{bb}$ is in it’s least terms, by the third corollary to the twelfth proposition, because the fraction $\frac{a}{b}$ was so; and the fraction $\frac{n}{1}$ is in it’s least terms, because 1 cannot be further reduced; therefore we have two equal fractions $\frac{aa}{bb}$ and $\frac{n}{1}$ both in their least terms; therefore by the tenth proposition, these two fractions must not only be equal in their values, but in their terms also, that is, $aa$ must be equal to $n$, and $bb$ to 1: but $aa$ cannot be equal to $n$, because $a$ is a whole number by the supposition, and $n$ is supposed to admit of no whole number for its root; therefore the square root of $n$ cannot possibly be expressed by any fraction whatever. Q.E.D.

The cited propositions are that two fractions in lowest whole-number terms must be identical and that if $a$ and $b$ are co-prime to $c$ then so is $ab$. The proof of the latter starts with the for-contradiction words “if this be denied,” so the absence of such language above gets only part credit. This all does not come trippingly off the tongue; rather it sticks trippingly in the throat. Let’s try again.

## A Proof

In fact, we don’t need the concepts of “lowest terms” or co-primality or the full statement of the identity named for Étienne Bézout. It suffices to assert that for any whole numbers $r$ and $s$, there are integers $x$ and $y$ such that the number

$$d = rx + sy$$

divides both $r$ and $s$. This is what the extended Euclidean algorithm gives you.
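A minimal sketch of that subroutine (the function name `extended_gcd` and the sample inputs are our own) is the standard iterative form of the algorithm, which returns $d$ together with the certifying pair $x, y$:

```python
def extended_gcd(r, s):
    """Return (d, x, y) with d = r*x + s*y, where d = gcd(r, s),
    so that d divides both r and s."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while s != 0:
        q, rem = divmod(r, s)      # one Euclidean division step
        r, s = s, rem              # descend to the smaller remainder
        x0, x1 = x1, x0 - q * x1   # maintain r = (orig r)*x0 + (orig s)*y0
        y0, y1 = y1, y0 - q * y1
    return r, x0, y0

d, x, y = extended_gcd(14, 6)
assert d == 14 * x + 6 * y and 14 % d == 0 and 6 % d == 0  # d = 2
```

The assertions check exactly the property the proof below needs: $d = rx + sy$ and $d$ divides both inputs.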

Then for the proof, suppose that $\sqrt{n} = \frac{a}{b}$ for integers $a$ and $b$. We take $r = a$ and $s = b$, and let $x$, $y$ be the resulting integers, so that $d = ax + by$ divides both $a$ and $b$. Now let’s do some simple algebra:

$$d\sqrt{n} = (ax + by)\sqrt{n} = (a\sqrt{n})x + (b\sqrt{n})y.$$

It follows that

$$d\sqrt{n} = nbx + ay,$$

since $b\sqrt{n} = a$ and hence $a\sqrt{n} = (\sqrt{n})^{2}b = nb$. Now divide both sides of this by $d$. We get

$$\sqrt{n} = \frac{nbx + ay}{d}.$$

The conclusion is that $d$ divides $nbx$, because $d$ divides $b$. Thus the question is whether $d$ divides $ay$. But $d$ also divides $a$. So $\sqrt{n}$ is an integer—the same end as the classical proof.

This is a contradiction. But it is a palpable contradiction. For instance, of course we can see that $\sqrt{2}$ isn’t an integer. Thus we claim that the effect of this proof is more concrete.
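For a palpable run of the machinery in the one case where no contradiction can arise, here is a small numeric check (the instance $n = 9$, $a = 6$, $b = 2$, deliberately not in lowest terms, is our own illustration):

```python
# Perfect-square run of the proof's formula: n = 9, written unreduced
# as sqrt(9) = 6/2, so a = 6 and b = 2.
n, a, b = 9, 6, 2
# Extended Euclid on (6, 2) yields x = 0, y = 1, so d = a*x + b*y = 2.
x, y = 0, 1
d = a * x + b * y
assert a % d == 0 and b % d == 0   # d divides both a and b
root = (n * b * x + a * y) // d    # the proof's expression for sqrt(n)
print(root)  # 3 -- an integer, as it must be when n is a perfect square
```

When $n = 2$ the same pipeline would force $\sqrt{2}$ to be an integer, which it palpably is not.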

## Open Problems

Is this a new proof? We doubt it. But the proof is nice in that it avoids any recursion or induction. The essential point—the divisibility of $d$ into $r$ and $s$—is coded into the Euclidean algorithm.

Is ours at least smoother than the classical proof we quoted? The latter is from Saunderson’s book, on pages 304–305, which come soon after his presentation of the algorithm on pages 295–298.

What other proofs can benefit from similar treatment by “reduction to algorithms”?

[fixed missing *n* in last line of proof, some word tweaks]

Do all of the square root proofs require recursion? I don’t see such a requirement in proof #14 (to me, perhaps the clearest one), or proof #20 (more sophisticated, but really the same idea).

Determine the residue (mod 3) of both sides of a^2 = 2*b^2. Assuming lowest terms, they aren’t both 0 (mod 3). Then they cannot be the same.
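That residue check can be tabulated in a few lines (the snippet is our own illustration of the comment’s claim):

```python
# Residues (mod 3): squares are 0 or 1, so 2*b^2 is 0 or 2 (mod 3).
sq = {a * a % 3 for a in range(3)}
dbl = {2 * b * b % 3 for b in range(3)}
print(sq, dbl)  # {0, 1} {0, 2}
# If a/b is in lowest terms, a and b are not both multiples of 3, so
# a^2 = 2*b^2 would need a nonzero common residue -- and there is none.
assert sq & dbl == {0}
```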

That’s similar to the proof we gave back in 2010, but we made more of a meal of it by induction. Also, variation #14′′ of that proof boldfaces the use of “infinite descent.” Our new(?) proof needs no “modulo” or base notation. The issue is what one takes as basic—for instance, whether it matters how we finessed the point of r/s not having to be in lowest terms in our proof.

The argument that for any r, s with gcd d the Euclidean algorithm gives you numbers x, y such that d = rx + sy proceeds by descent on the remainders calculated in the algorithm. In particular, one establishes that the algorithm terminates by arguing that there can’t be any infinite descending sequence of natural numbers.

You acknowledge that there may be some use of contradiction in the algorithm but I really don’t see how this is any better.

Indeed, if you are happy with this kind of reasoning why not skip all the algebra and just use the following algorithm:

Given \sqrt(n) = x_i/y_i: if y_i is 1, terminate. If not, we compute x_{i+1} and y_{i+1} as follows. Search for the least d > 1 that divides both x_i^2 and y_i^2. Let x_{i+1} = x_i/d and y_{i+1} = y_i/d. One can prove by contradiction that such a d must divide both x_i and y_i. Furthermore, as x_{i+1} < x_i, the algorithm must terminate; thus \sqrt(n) = x_i/1.
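For concreteness, that sketch might be rendered as follows (the name `reduce_root` is ours; the search for `d` succeeds, and the loop terminates, only when the given fraction really does reduce to an integer, which is the point at issue):

```python
def reduce_root(x, y):
    """Sketch of the commented algorithm: while y > 1, find the least
    d > 1 dividing both x**2 and y**2 (being least, d is prime, hence
    divides x and y), and cancel it from the fraction x/y."""
    while y != 1:
        d = next(d for d in range(2, x * x + 1)
                 if x * x % d == 0 and y * y % d == 0)
        x, y = x // d, y // d
    return x  # at this point sqrt(n) = x/1

print(reduce_root(6, 2))  # 3, from sqrt(9) = 6/2
```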

I don't really see the advantage here. The fact that the part that proceeds by contradiction is in an algorithm doesn't really help.

Actually I want to say that the situation is even worse than I suggested above. You are actually using a particular characterization of the results of the Euclidean algorithm in the proof. You don’t just need a list of the steps of the Euclidean algorithm and the fact that it terminates but a description of the property the result has, namely that the final remainder is a linear combination of the inputs and divides both of them.

Think about how one would actually prove this. You would say let r_n be the result of the n-th step of the algorithm. One can establish the linear combination by simple induction. However, the most natural way to prove the latter claim is to suppose that i is the greatest such that r_i is not divisible by r_k (where r_k is the result of the algorithm).

But if I’m allowed to use contradiction to get a *characterization* of the output of the algorithm and then use that in my proof I can use the following trivial proof:

Consider the following algorithm.

Given n, search for an integer m < n such that m^2 = n. If such an integer is found, output m/1. If not, output 0. I characterize this algorithm as giving the rational sqrt of n if it exists and 0 otherwise. I can use whatever means I like to prove this description is true and then trivially use this fact to get the result.

Peter

I think the point is this: we use descent, yes, but it is proved in a general manner that has nothing to do with square roots. It seems that this might make things easier to follow.

But that is our opinion.

Prof R.J.Lipton:

What other proofs can benefit from similar treatment by “reduction to algorithms”?

Kamouna:

The answer to this question: “Is paradox recognition paradoxical or not paradoxical?”

Recursion is paradoxical by its nature, or by its definition if you prefer.

To get a job, you need experience; and to get experience you need a job.

That’s recursion, so it must be paradoxical. If you see recursion that is not paradoxical, then you have never understood what recursion is about.

Ask yourself: why has the lambda-calculus recursion operator “Y” been called the paradoxical recursion operator for more than 8 decades?

If a gang occupies your house, you go to the police station to report; the police tells you go to court. Then, you go to court, they tell you:”Go to the police”.

The very concept of an algorithm which is necessarily built around recursion harbours a paradox. How? There exists an algorithm D such that:

1. Alice can invent an algorithm D to solve a problem X.

2. Bob (independently or not) invented the same D to solve the same problem X.

==========================================================

The result: “Alice’s algorithm works if and only if Bob’s algorithm does not work”.

Where is the catch?

Answer the question: “Is paradox recognition paradoxical or not paradoxical?”

Best,

Rafee Kamouna.

If one is looking for a transparent approach that avoids explicit recursion by putting it inside a black box, then I think the standard proof that uses the fundamental theorem of arithmetic is about as clear as one could possibly hope for. I’m referring to the argument that the equation $a^2 = 2b^2$ cannot be solved in integers, because 2 occurs an even number of times in $a^2$ and an odd number of times in $2b^2$.

This seems to me to be much more conceptual than a proof that says, out of the blue, “Now let’s do some simple algebra.”

Of course, in a sense it’s more complicated than the usual proof, because we don’t need anything like the full strength of FTA. Another approach would be to prove as a lemma that every number can be written uniquely in the form $2^k m$ with $m$ odd. The existence is easy, and for uniqueness, if $2^k m = 2^{k'} m'$, then $k = k'$ or we could divide through by the smaller of the two powers and get an even number equal to an odd number. And that implies that $m = m'$. Armed with that lemma, we would have that the $k$ for $a^2$ was even and the $k$ for $2b^2$ was odd.
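The parity point can be made concrete with a short helper (`val2` is our name for the exponent of 2 in the lemma’s $2^k m$ form; the sample values are our own):

```python
def val2(n):
    """The k in the unique factorization n = 2**k * m with m odd (n >= 1)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

a, b = 12, 5
print(val2(a * a), val2(2 * b * b))  # 4 and 1: always even vs. always odd
```

Since val2(a^2) = 2·val2(a) is even while val2(2b^2) = 1 + 2·val2(b) is odd, the two sides of a^2 = 2b^2 can never match.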

Dear Gowers the Great,

It suffices to consider the problem: “Is paradox recognition paradoxical or not paradoxical?” Posing this question will make you the most hated person in history. Considering it, you will get threatening emails straight away, as you may read at my blog: “we’ll come to you to beat you,” etc.

I recently posted this question to cstheory.stackexchange; they immediately put it on hold. I posted it in the undergraduate cs.theory.stackexchange; they did no better.

Clearly, everybody knows that this is the last mathematical question to be asked. Thereafter, all answers are “true” if and only if “false”.

I must greet Prof Lipton for his (many) posts about paradoxes, let alone self-defeating sentences like that of the public in the 30th of June, 2013 second revolution:

“No for insulting the idiot president”

such sentences sound like “music” to Prof Lipton as well as many paradoxes of mathematics. However, when it comes to the Kleene-Rosser paradox:

It’s impossible,

tell the sun to leave the sky,

It’s impossible,

ask a baby not to cry,

Can the ocean,

Keep from rushing to the shore,

It’s impossible, it’s impossible.

(Courtesy of Perry Como)

===========================

Rafee Kamouna.

From someone who doesn’t regularly play with algebra: I found Saunderson’s proof by contradiction easier to follow. And, of course, Gowers’s fundamental-theorem-of-arithmetic argument is something I can remember well enough to repeat at a cocktail party (if anyone in an inebriated audience wants to go further than that, then they can probably do a better job than me in any case).

I’m posting because Kamouna reminded me of a joke I found in an old manuscript by von Neumann’s brother that Sidles loaned me. In it, a German is standing on a corner in Berlin shouting, “The Kaiser is an idiot! The Kaiser is an idiot!” The police come along and arrest him for treason, and he says, “Wait! I didn’t mean our Kaiser. I, I was talking about the Austrian Kaiser.”

They respond, “You can’t fool us, we know who the idiot is.” (ha ha)

No for insulting the idiot Kaiser. That was the paradox. It’s possible in software, impossible in hardware. That’s why computability (software+hardware) is inconsistent.

https://www.academia.edu/28025536/Computability_and_Complexity_Theories_are_Inconsistent

Rafee Kamouna.