Mapping out the landscape of a proof

Kenneth and Laurel Appel, our friend Andrew Appel’s father and sister, passed away in the past two months. Kenneth Appel partnered with Wolfgang Haken and some helpers to prove the Four-Color Map Theorem in 1976. One of those helpers was Laurel, then 13. Laurel became an adjunct associate professor of biology at Wesleyan University, where her husband Michael Weir is a professor, and directed the Ronald McNair Research Scholars program at Wesleyan. Her father was chair of mathematics at the University of New Hampshire, where he hired Yitang Zhang and is remembered for mentoring many junior faculty.

Today we remember both of them, and celebrate the nature of their work on the four-color proof.

The proof was the first truly famous one to employ computer calculations. We do not want to revisit here a long-term debate on the acceptance of computer-assisted proofs. Instead we wish to talk about whether the setting of a proof one is trying to construct helps the process converge toward finding one, or has a “slippery slope” sure to magnify any gap. The recognition of a helpful setting by Appel and Haken was an important human step before starting up the computer, and prefigured that Laurel’s role afterwards would be a happy one.

I (Ken) have known Andrew Appel as a Princeton ’81 classmate since 1977, and Dick has known him as a longtime colleague at Princeton. Our deepest condolences go to the entire family.

## Laurel’s Role

When Dick covered the Four-Color Theorem four years ago, his main point was Appel and Haken’s prior mathematical confidence in the structure of the proof, which went back to 1972. Now we note that this confidence was actually enhanced, not dented, during the discovery of errors after the initial computer runs. Many were in transcribing the computer output into readable form, but some involved the connection between the initial “discharging” phase which generated sets of coloring configurations, and the inputs to a computer program written by John Koch to show the configurations are reducible. As recounted by Donald Mackenzie in his book Mechanizing Proof: Computing, Risk, and Trust, members of both families took part in checking, and:

[P]articularly important was the contribution of Appel’s daughter, Laurel. This collective checking effort was crucial in persuading Haken that their proof was sound enough to “go public.”

Mackenzie’s book then picks up the story by quoting Haken:

She…found, I think, 800 mistakes, which is not many on 700 pages. And 650 she could correct herself. There was just…a wrong number, or so… [O]ver the 4th of July weekend, Ken … re-computed those 50 configurations which really had been missing … and only 12 of them did not work. And then I worked on those for one day, and I found that they could be replaced … by something like 20 others, and only 2 of them did not work, and then again [it took] a day… So I said, ‘Then, here somebody has worked one month full-time and found 800 mistakes. And then we have needed only five days to repair all that. This looks so stable—it is this incredible stability.’

While noting that the published paper in 1976 had not shaken out all such glitches, Andrew described the landscape this way in private correspondence to me:

[T]he basic proof method was very robust: there was a huge “forest” of “geographically good” reducible configurations, and any direction you shoot an arrow you’ll hit a geographically good configuration… If a configuration in the unavoidable set was found to be geographically not-good or even irreducible, the unavoidability algorithm was easily adjusted to replace it with three or four good configurations. That’s what was meant by robustness. So even if there happened to be a minor error and a truly irreducible configuration snuck in, it would be fixable.

## A Useful Contrast

What Appel and Haken did first was find a particularly good implementation of the idea of converting a universal statement into an existential one. The universal statement was:

For all planar graphs ${G}$, ${G}$ is 4-colorable.

The existential statement is:

There exists a finite set ${S}$ of graph configurations that are collectively unavoidable, each of which is reducible.

The definition of “unavoidable” means that every (large enough) ${G}$ includes a member of ${S}$, which sounds universal, but they gave a finite criterion for verifying it. Thus their goal was always well in sight.
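To make the universal statement concrete, here is a toy brute-force check of 4-colorability for tiny graphs. This is our own illustration, not Appel and Haken's method, and the function names are our assumptions; their reducibility computations were vastly more sophisticated than trying all colorings.

```python
from itertools import product

def is_four_colorable(n, edges):
    """Brute force: try every assignment of 4 colors to vertices 0..n-1
    and accept if some assignment gives every edge distinct endpoint colors."""
    return any(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(4), repeat=n)
    )

# K4 is planar and 4-colorable; K5 needs 5 colors, but K5 is not planar,
# so it is no counterexample to the theorem.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
k5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(is_four_colorable(4, k4))  # True
print(is_four_colorable(5, k5))  # False
```

Of course this check takes time ${4^n}$ per graph and says nothing about all planar graphs at once, which is exactly why the finite unavoidable-and-reducible criterion was the breakthrough.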

By contrast, the Goldbach conjecture and the Riemann Hypothesis are known only as universal statements: if either is false, there is a finite counterexample, and high confidence in their falsity would justify searching for one. The Twin Prime conjecture asserts that something happens infinitely often, so its failure would not even be witnessed by a finite counterexample.

For both Twin Prime and Goldbach, there are also “heuristic arguments” that suggest a high likelihood of truth. Namely, suppose every (odd) natural number ${n}$ could randomly declare “I am prime” with probability ${1/\ln n}$, which by the Prime Number Theorem is the density of primes in the region of ${n}$, or with a good enough estimate of this density. Then as ${n_0}$ increases, the probability of there failing to be a number ${p}$ between ${n_0}$ and ${2n_0}$ such that ${p}$ and ${p+2}$ both declared “I am prime,” or of there being an even number in that range that is not the sum of two self-declared primes, tends rapidly to zero. (Jordan Ellenberg’s article in Slate on Zhang’s proof includes the former heuristic.) However, these arguments say nothing about the likelihood of finding a proof.
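The twin-prime heuristic can be simulated directly. The sketch below is our own illustration of the Cramér-style random model just described (the function names are assumptions): it samples a random set of self-declared “primes” and estimates how often the interval ${[n_0, 2n_0)}$ fails to contain a declared twin pair.

```python
import math
import random

def cramer_sample(n_max, seed=0):
    """Cramer-style model: each n in [3, n_max) independently declares
    'I am prime' with probability 1/ln(n), the prime density near n."""
    rng = random.Random(seed)
    return {n for n in range(3, n_max) if rng.random() < 1.0 / math.log(n)}

def has_twin_in_range(declared, n0):
    """Does some p in [n0, 2*n0) have both p and p+2 declared 'prime'?"""
    return any(p in declared and p + 2 in declared for p in range(n0, 2 * n0))

# The fraction of trials in which [n0, 2n0) has no declared twin pair
# shrinks rapidly as n0 grows, matching the heuristic.
trials = 100
for n0 in (50, 500, 5000):
    misses = sum(
        not has_twin_in_range(cramer_sample(2 * n0 + 2, seed=t), n0)
        for t in range(trials)
    )
    print(n0, misses / trials)
```

The expected number of twin pairs in ${[n_0, 2n_0)}$ grows like ${n_0/(\ln n_0)^2}$, so the miss probability decays roughly exponentially in that quantity; and, as said above, none of this brings a proof any closer.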

The brilliance of Appel and Haken was setting up the problem so that a randomized argument would be about finding a proof. Indeed they were able to tune the argument to gauge the amount of computation time needed to find one. And to make checking the proof literally child’s play. The 1989 book by Appel and Haken rounded up the few errors that remained, while Dick’s Tech colleague Robin Thomas and three others found a smaller ${S}$ with easier verification, and Georges Gonthier automated the entire process of generating and checking the proof.

## Open Problems

We have blogged before about the process of “making a heuristic into a theorem,” and about a case of a 300-page proof that foundered because one inequality sign was wrong. Can we recognize general ways to move toward proof landscapes that are “robust” rather than “fragile”?

Again Dick and I were saddened by the news. We note a New Hampshire State House honor for Kenneth earlier this month, and a memorial fund for Laurel at Wesleyan.

## Comments

May 28, 2013 6:23 pm

Here’s what David Ruelle said in 1988:

“Perhaps in a few decades shall we see what nonhuman mathematics looks like. I’m not predicting the imminent arrival of little green men from outer space, but simply the invasion of mathematics by computers. Since the human brain is a sort of natural computer, I see no reason why the artificial variety couldn’t perform better in the specialized task of doing mathematical research. My guess is that, within 50 or 100 years – or it might be 150 – computers will successfully compete with the human brain in doing mathematics, and that their mathematical style will be rather different from ours. Fairly long computational verifications – numerical or combinatorial – won’t bother them at all, and this should lead not just to different sorts of proofs, but more importantly to different sorts of theorems being proved.”

He’s a specialist in statistical mechanics and chaos theory. Today, many logicians are making similar predictions.

May 31, 2013 7:44 pm

OK, but who is going to drive that computer-doing-mathematics?

May 30, 2013 12:00 am

Re Serge and the Ruelle quote: could you please cite that? It was indeed prescient. There have been major advances, but breakthroughs are still rare. Alas, we have cars that can drive themselves [via Google], yet some basic/fundamental principle of ATP (automated theorem proving) seems to be eluding us; it does seem to have some close relationship to AI.
There is much more on Appel and the subject of automated theorem proving at this blog post, “adventures and commotions in automated theorem proving”. There is an NYT obituary. Also, Gowers has recently been looking into and blogging on ATP in a collaboration with a TCSist, and recently posted a remarkable mathematical ATP-related Turing test which a significant number of human judges failed! And there is much interest in ATP expressed via populist measurement of StackExchange and MathOverflow posts, all cited on the blog, with lots of further leads there.

May 30, 2013 8:20 am

Regarding the role of computers in mathematical research, here’s my brief account of the situation.

1) Henri Poincaré discovered Chaos Theory by proving the chaotic behavior of the Solar System and by foreseeing that of the weather. Unfortunately, he was unable to see the fractal “strange” attractors traced out by the chaotic trajectories. Only with suitable computing power could Edward Lorenz, Jacques Laskar and others, including David Ruelle, highlight their existence and non-integer dimension. Of course, the subsequent theorems of Chaos Theory had to be proved manually.

2) Francis Guthrie was the first to have the beautiful insight of the 4-color theorem, but he and his successors were unable to achieve a valid proof. Only with suitable computing power were Kenneth Appel and Wolfgang Haken able to do so. Later, Georges Gonthier restated their proof so as to make it look like a “normal” theorem of mathematics, though provable only with the help of a proof assistant.

3) Daniel Gorenstein and his team were able to state and prove manually the theorems about the classification of finite simple groups. Recently, Georges Gonthier and his team restated their theorems in a proof assistant, so as to increase our confidence in their validity.

Thus, until now, at most a suitable cooperation between minds and machines has been required in order to make some progress. However, I’m of the opinion that, fifty years from now, students will find it difficult to figure out how it was once possible to do mathematics without a computer.
