# Surely You Are Joking?

*A new way to write mathematics*

Vladimir Voevodsky won the Fields Medal in 2002 for his work on the homotopy theory of algebraic varieties. Using his pioneering methods he proved, among many other things, a deep conjecture of John Milnor that had been open for decades.

Today I want to talk about, no, rant about, a recent “breakthrough” that was just announced on the ACM Tech news service.

Every few days the ACM sends out a collection of breaking news that relates in various ways to computing. Some of the stories are interesting, some are surprising, some are relatively mundane. Clearly ACM is under pressure to publish a list of stories every few days. They do this whether the stories are really breakthroughs or not—although the titles are always exciting.

Here is the list that I was looking at the other day.

- Detroit, Embracing New Auto Technologies, Seeks App Builders
- MIT Researchers Can See Through Walls Using “Wi-Vi”
- Siri’s Creators Demonstrate an Assistant That Takes the Initiative
- Feature Stops Apps From Stealing Phone Users’ Passwords
- SDN Awards Seek University, Research Potential
- NSF and Mozilla Announce Breakthrough Applications on a Faster, Smarter Internet of the Future
- Mathematicians Think Like Machines for Perfect Proofs
- Artificial Intelligence in Mental Healthcare
- Visual Language Will Result in Better Services
- The Tao of Facebook: ‘Social Graph’ Takes New Path
- New Study Suggests Voynich Text Is Not a Hoax
- Google Creates Developers Cloud Playground for Code Testing

The stories all sounded interesting—some more than others—but one jumped out and got my immediate attention. Let’s look at the story:

Mathematicians Think Like Machines for Perfect Proofs

I have strong opinions about the words “perfect proofs.” Indeed.

## The Story

Here is the story in detail, quoting from ACM’s digest of the article “Mathematicians Think Like Machines for Perfect Proofs” by Jacob Aron in the June 25th issue of the New Scientist.

A team of mathematicians led by Vladimir Voevodsky with Princeton’s Institute for Advanced Study have devised a new mathematical framework that forces people to think more like machines to check perfect proofs in collaboration with computers. The team’s manual explains the use of type theory as an alternative mathematical basis. Type theory stipulates that all proofs must describe how to mathematically construct the object they concern, which is the opposite of set theory. Once mathematicians have completed this task, their proof would automatically be supported by unshakable computational checks. It is impossible to write an incorrect proof, assuming the underlying code is defect-free, along with the automated proof assistants that verify everything as the mathematician goes along. Voevodsky’s team also says it is far easier to check the code than the entire proof in most instances. Not only does the new framework make proof-checking easier, it also is a move toward computers that one day could execute mathematics by themselves, which might potentially clear a path toward more advanced forms of artificial intelligence (AI). “My expectation is that all these separate, limited AI successes, like driving a car and playing chess, will eventually converge back, and then we’re going to get computers that are really very powerful,” says project collaborator Andrej Bauer.

The article says more:

Take the case of the 500-page proof of the long-standing abc conjecture published by Shinichi Mochizuki of Kyoto University in Japan last year. Nine months on, his colleagues still have no idea whether it is correct.

“That is an excellent example of a proof where a serious mathematician produces something that is very hard to understand,” says Bauer. “If mathematicians always proved things with computers, there would be no question whether there is a mistake.”

Voevodsky and his colleagues have created a new approach, which is explained in their 600-page book.
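A minimal sketch of the proofs-as-constructions idea that the digest describes (my own illustration, not taken from the book): in a type-theoretic proof assistant such as Lean, a proof is a program, and checking the proof is just type-checking that program.

```lean
-- A constructive proof of p ∧ q → q ∧ p.
-- The term itself is the evidence: it shows how to *build*
-- a proof of (q ∧ p) from any given proof of (p ∧ q).
theorem and_swap {p q : Prop} : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

If the term did not actually construct the claimed object, the type checker would reject it; that is the sense in which "it is impossible to write an incorrect proof," modulo the correctness of the checker itself.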

## The Rant

To start, let me say that stipulating what a proof really is, to a Fields Medalist no less, is the exact definition of hubris. How can we even dream of saying something to such an expert on proofs? But art critics are allowed to say things about art, and movie critics about movies; well, you get the point.

I also think that Voevodsky might not even disagree, but whether he does is immaterial. I feel compelled to say something, not about his work, but about the New Scientist article. The article is, in my opinion, at best misleading.

The burning issue in proofs is not in checking them. The main issue is in understanding them. The whole point of a proof is missed by this article. Completely missed. Not even close. Not even in the same universe we live in. Did I say off the mark?

A proof is not like an accounting audit that checks that two tables of numbers add up to the same tally. It is not something that just checks the “correctness” of a statement. A proof must supply insight. It must explain why something is true. It must impart to the reader an understanding of the mathematical objects involved.

Ken has also heard buzz about the book. But he recalls a conversation he had with Robin Milner in 1994. Ken asked how long it would be before mathematical proof systems could automate the kind of correctness proofs for context-free grammars he was teaching in undergraduate intro theory courses. Milner replied to Ken’s surprise,

“About 50 years.”

Ken adds that one measure being discussed about chess cheating is asking players to explain the strategies they had during the games. That is to say, a computer analysis printout might be a “proof” of a move being best, but if you the human cannot explain the thinking, it’s not yours.

## Proving Proofs

We understand proofs—short or long—by many mechanisms. Yes, people do check them. Here are some of the methods that are used:

**Read the proof**: We do read proofs. We do check them. Some proofs, even of open problems, can be checked very quickly. David Barrington’s beautiful result on bounded-width computation comes to mind. I heard about the proof by phone, and was quickly able to see that it was correct.

**Explain the proof**: We often explain the proof to others. This can be in a class, in a seminar, or in writing. The ability to explain the proof is one of the best ways I believe to be really sure that one understands the proof. I cannot count the number of times that I thought I had proved something, only to discover on explaining it to a colleague that my “proof” was wrong.

The world expert on explaining proofs is perhaps Terry Tao. In his blog, and even more in his excellent books, he has the ability to explain the most complex theorems in ways that lay bare the critical insights. His approach is the exact opposite of that of “perfect proofs.” A typical explanation of his might go like this: “The general case is really not much different from the case where …, so we will assume that. Then we note that ….”

Here is an example of this by Voevodsky himself on his own work.

**Find a new proof**: We often find new proofs of a theorem. A new proof can give additional insight into why the result is correct. Note this is not the same as checking the tally of the numbers again; to be “new,” a proof must establish the result in a substantially different way. The famous law of quadratic reciprocity is a perfect example of this. Carl Gauss first proved it, and later published six proofs in all; two more were found in his papers after he died. Today there are over 200 published proofs, according to our friends at Wikipedia.
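Quadratic reciprocity also lends itself to quick numerical spot-checks. A sketch of my own (not from the post), using Euler's criterion to compute the Legendre symbol; the helper name `legendre` is my choice:

```python
# Brute-force check of quadratic reciprocity for small odd primes:
# (p|q)(q|p) = (-1)^(((p-1)/2) * ((q-1)/2))

def legendre(a, p):
    """Legendre symbol (a|p) via Euler's criterion: a^((p-1)/2) mod p."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

primes = [3, 5, 7, 11, 13, 17, 19, 23]
for p in primes:
    for q in primes:
        if p != q:
            sign = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
            assert legendre(p, q) * legendre(q, p) == sign
print("quadratic reciprocity holds for all tested pairs")
```

Of course, no amount of numerical checking is a proof; the point is only that the statement is concrete enough to test.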

**Use the proof**: This means to actually use the theorem proved to do something. A simple example is Fermat’s Little Theorem. Recall this says that

$$a^{p-1} \equiv 1 \pmod{p}$$

for all primes $p$ and all $a$ relatively prime to $p$. This theorem is used millions of times per day by the famous RSA encryption method. If there were some problem with Fermat’s Theorem, companies on the Internet using RSA would get garbled messages. Since they do not, this is nice evidence that it really works.
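A quick sanity check, as a sketch of my own: verify Fermat's little theorem for small primes, then run a toy RSA round-trip that depends on it. The primes 61 and 53 are purely illustrative; real RSA keys are enormously larger.

```python
# Check Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p, gcd(a, p) = 1.
for p in [2, 3, 5, 7, 11, 13]:
    for a in range(1, p):
        assert pow(a, p - 1, p) == 1

# Toy RSA round-trip built on the same arithmetic (illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse (Python 3.8+)
message = 42
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message  # decryption recovers the message
```

If the underlying number theory were wrong, the final assertion would fail, which is exactly the "garbled messages" evidence the post describes.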

**Use the theorem**: We check theorems best, in my opinion, by using them to prove other results. As a theorem gets used more and more it is really being checked. I think that this is one of the best tests that it is correct. Who would doubt the Fundamental Theorem of Arithmetic—the integers have unique factorization—now after countless applications?

**Extend the theorem**: We also check theorems by extending them. Fermat’s Little Theorem extends to

$$a^{\varphi(n)} \equiv 1 \pmod{n}$$

for all $n$ and all $a$ relatively prime to $n$, where $\varphi$ is Leonhard Euler’s totient function. Even this can be further extended to

$$a^{\lambda(n)} \equiv 1 \pmod{n},$$

where $\lambda$ is the Carmichael function—see here. More on Robert Carmichael, the inventor of this function, in a discussion I am working on right now.
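Both extensions are easy to verify numerically. A naive sketch of my own; the helpers `phi` and `carmichael` are brute-force illustrations, not efficient implementations:

```python
# Verify the Euler and Carmichael extensions of Fermat's little theorem
# for all small moduli, with naive (but correct) helper functions.
from math import gcd

def phi(n):
    """Euler's totient: count of 1 <= a <= n with gcd(a, n) == 1."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def carmichael(n):
    """Carmichael's lambda: least m >= 1 with a^m ≡ 1 (mod n) for all a coprime to n."""
    coprimes = [a for a in range(1, n + 1) if gcd(a, n) == 1]
    m = 1
    while any(pow(a, m, n) != 1 for a in coprimes):
        m += 1
    return m

for n in range(2, 50):
    lam = carmichael(n)
    assert phi(n) % lam == 0                 # lambda(n) divides phi(n)
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1    # Euler's extension
            assert pow(a, lam, n) == 1       # Carmichael's extension
```

Note that the check also confirms the strict sense in which Carmichael's function extends Euler's: $\lambda(n)$ always divides $\varphi(n)$.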

**Use the proof methods to prove new theorems**: We also check proofs by using the methods in them to prove new theorems. One of the deepest examples of this might be Andrew Wiles’ 1995 proof of Fermat’s Last Theorem. He proved the so-called *modularity theorem* for semistable elliptic curves. Later, in 2001, the full modularity theorem was proven by Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor. It used and built on the methods found in Wiles’ original proof. See here for details.

## Movie Answers

Here are the movies that were embedded by title or quote into our July 4th post—it is possible there were some we didn’t notice.

- “Liars” quote from The Oxford Murders;
- “Handle the truth” quote from A Few Good Men;
- Nixon;
- 1776;
- Philadelphia;
- Lost in Translation;
- Nobody’s Perfect;
- Double Trouble;
- Triple Trouble;
- The 39 Steps;
- The Right Stuff;
- There’s Something About Mary;
- The Statement;
- Basic;
- Resolved;
- Pi;
- Only the Strong;
- In the Line of Fire;
- Clue;
- Hitch;
- “Que Será, Será” quote from The Man Who Knew Too Much;
- Nothing in Common;
- The New World;
- Rules of Engagement;
- The World is Not Enough;
- Over the Top;
- Easy Living;
- Nowhere to Hide;
- Anything Else;
- The Delta Force;
- The Power of One;
- Big Fish;
- “Need for Speed” quote from Top Gun;
- Speed;
- Rear Window;
- How Stella Got Her Groove Back;
- “Badges” quote from The Treasure of the Sierra Madre (rather than from Blazing Saddles);
- Friday After Next.

## Open Problems

Ken and I hope you liked the movie game.

For proofs a very neat page to read is this. It contains a history of some famous “proofs”: some were incomplete, some were fixed later, and some were just plain incorrect. What does this say about “what is a proof”?

### Trackbacks

- How To Succeed In Proof Business Without Really Trying | Inquiry Into Inquiry
- What type of theory? | Windows Live space
- What Is Best Result Of The Year? | Gödel's Lost Letter and P=NP
- Why Is 290 Special? | Gödel's Lost Letter and P=NP
- Susan Horwitz 1955–2014 | Gödel's Lost Letter and P=NP
- Lint For Math | Gödel's Lost Letter and P=NP

Just by way of keeping my theme in school, let’s title this one —

How to Succeed in Proof Business Without Really Trying

Even at the mailroom entry point of propositional calculus, there is a qualitative difference between *insight proofs* and *routine proofs*. Human beings can do either sort, as a rule, but routinizing insight is notoriously difficult, so the clerical routines have always been the ones that lend themselves to the canonical brands of canned mechanical proofs.

Just by way of a very choice example, consider the Praeclarum Theorema (Splendid Theorem) of Leibniz, as presented in cactus syntax here.

I’ll link to different ways of proving this in the comments that follow.

The proof given via the link above is the sort that a human, some too human, was able to find without much trouble. You can see that it exhibits a capacity for global pattern recognition and analogical pattern matching — manifestly aided by the use of graphical syntax — that distinguishes the human knack for finding proofs. When I first set out trying to develop a Simple Propositional Logic Engine (SPLE) ‘n’ didn’t, those were the aptitudes I naturally sought to emulate. Alas, I lacked the meta-aptitude for all that.

For my next proof of the Praeclarum Theorema I give an example of a routine proof, the sort of proof that a machine with all its blinkers on can be trained to derive simply by following its nose, demanding as little insight as possible and exploiting the barest modicum of tightly reined-in look-ahead.

- Praeclarum Theorema: Proof by Case Analysis-Synthesis Theorem (CAST)

Wow. That’s a _lot_ of work just to replace a couple of 4×4 truth tables.
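For what it's worth, the 4×4 truth tables the comment alludes to really are tiny. A sketch of my own: the Praeclarum Theorema, ((a → b) ∧ (c → d)) → ((a ∧ c) → (b ∧ d)), checked over all sixteen assignments.

```python
# Brute-force truth-table check of Leibniz's Praeclarum Theorema:
# ((a -> b) and (c -> d)) -> ((a and c) -> (b and d))
from itertools import product

def implies(x, y):
    """Material implication: x -> y."""
    return (not x) or y

for a, b, c, d in product([False, True], repeat=4):
    antecedent = implies(a, b) and implies(c, d)
    consequent = implies(a and c, b and d)
    assert implies(antecedent, consequent)
print("Praeclarum Theorema holds for all 16 assignments")
```

This is the routine, no-insight end of the spectrum: exhaustive case analysis that a machine does instantly, as opposed to the graphical proof discussed above.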

Just a quasi-insider who took a class with a prof who spoke with him eye to eye and said Voevodsky was the only guy he knew that had something new and knew something. I also knew of a UCLA PhD graduate who spoke of his prof, who used to be silent on all comments on mathematicians except when it came to Voevodsky, for whom he had more than a few words of praise for his originality.

As I understand it, my Russian prof, and probably Voevodsky as well, believes P=NP. A short certificate is a proof. Maybe the question is hard. However, to them the answer is clear. No wonder words like “A proof must supply insight” have no meaning to the folks who finished Fulton’s Intersection Theory by their teen years.

“……who spoke with him eye to eye and said Voevodsky….” should be “……who spoke with Voevodsky eye to eye and who later said to me that Voevodsky….”.

I just wanted to mention this excellent article about the concept of proofs, with Mochizuki’s work on the ABC conjecture as an example — http://projectwordsworth.com/the-paradox-of-the-proof/

Notices of the AMS recently published Martin Raussen and Christian Skau’s (2012) Interview with John Milnor. Milnor comments as follows on a famous wrong proof: […] Milnor has many more interesting comments on the status and practice of mathematics. By the way, this same issue of Notices of the AMS has a tribute article about Vladimir Arnold.

Conclusion (tentative): Top-rank mathematicians like Milnor and Arnold seemingly don’t need much help (if any) from automated proof checkers. The rest of us might well benefit! :)

I for one think the concept of “proof” while used routinely and mostly correctly is still not fully understood by mathematicians or computer scientists, and that when we do some day, coming in the not-so-distant-future, automated proofs may be much more feasible. right now its a case of the blind men and the elephant, or at least the half-seeing men. there is also some thinking similar to this by gowers in one of his visionary papers. see also adventures & commotions in automated theorem proving

also, you stress the idea that proofs must be comprehensible to humans. but this seems demonstrably incorrect. think of the massive proof of the classification of finite simple groups. or the four color theorem. it appears there are some very complicated proofs that may have different “internal structures” such that one internal structure is more comprehensible to people than others, but the proofs are equivalent. this is similar to the concept of “refactoring” in software engineering where two programs are logically equivalent and accomplish the same functionality but one may be shorter than the other. also, short proofs are not necessarily more comprehensible than longer ones. a picture that seems to be emerging is that computers might find very complex proofs that are provably correct, but then there may be further effort required [a gap] to “translate” or reformulate them into more human-comprehensible format/vocabulary/frameworks.

And then there are those who evangelize their claims even without a proof.

Oh, and stop calling me Shirley …

I disagree with the idea that proofs are only useful as a tool for understanding. Via the Curry-Howard isomorphism, you could equivalently say that type checking a program is only useful for understanding, or perhaps even that unit tests are only useful for understanding. Obviously understanding is key, but in the context of programming, automated proof checking massively accelerates the process of writing correct and therefore useful programs. Once we have practical proof checking for math, I don’t think understanding will be the *only* advantage gained.

In a talk I heard Voevodsky give, he did not argue that your notion of proof as partly a social construct be dispensed with, but rather that papers and theorems should not be accepted without being accompanied by a computer-verifiable proof. He said that he was drawn to this notion because his own arguments were dealing with such complex objects that he realized that the usual mathematical language of set theory (typically formalized via ZFC) was not precise enough for him to feel comfortable in the correctness of his proofs. In part his goal is for his new types-based mathematical language (ideally verifiable in Coq) to replace ZFC. Presumably this would lead to papers that contain human-readable proofs with a somewhat different structure so it could have a benefit even without the formal side of it. The other part is to convince mathematicians to actually expend the energy to (also) produce fully verifiable proofs. This is a pretty tall order and he has a long way to go there.

I was also at the same talk. What Paul said is correct, but he also had another motivation:

to give a foundation for mathematics that was closer to the way mathematicians like himself actually think than standard set theory is. He found type theory helpful in formulating a consistent “categorical” foundation. Now, I think that he needed this type of foundation for his own work, but only a small fraction (say 10%) of mathematicians are working on the kind of abstract level he is at, and would benefit from this new foundation. Of course, this is an empirical question that I’m just guessing at an answer to. He was really trying to convince the other mathematicians in the audience to try it out. But there were some software glitches early on in his presentation which really defeated the appeal of “computer assisted verification” as being a tool for infallible proofs!

The PCP theorem of course enables the opposite of the “understanding” approach–it gives something like a zero-knowledge protocol to show you have a proof of some theorem, without revealing anything about the proof.

Dick, you might like Mohan Ganesalingam’s thesis, about trying to formalize mathematical intuition as opposed to proofs. There’s some info here: http://people.ds.cam.ac.uk/mg262/

None,

thanks…will look at the thesis

dick

I don’t see in the HoTT project or in the NS article anything that is incompatible with the purposes and “ways of checking” proofs that you list. The thesis is not that the purpose of a proof is (merely) as an “accounting audit”. The thesis is that there are benefits to having an accounting audit—a formalized, computer-verified version—of your proofs. Since HoTT is a foundational enterprise, it seems to me that asking for a formalized version of the proofs, now that this is possible, is exactly the right approach. Few, if any, proofs in Principia Mathematica were ever verified by any of the methods you mention, not even reading them! If you are going to propose a new foundation of mathematics, you had better verify your proofs systematically—the track record for foundational systems even turning out to be consistent isn’t very good.

The real benefit is from systems and languages that allow you to see why a given proof is formalized correctly (that is, provide a clear chain of correctness from the author’s human-readable statements down to the type theory, or whatever your checker’s foundation is). The remainder (actually verifying the formalized proof) is very interesting too, but not the essential bit.

The challenge is getting mathematicians to see the benefits in this too. Few if any mathematicians would be interested in doing mathematics if it meant that they had to write out proofs to the level of the PM, except with *even more* detail. However, most can be convinced of the benefits of rigor where this rigor is necessary for tackling the *essential* complexity of the proof. Yet Another Type Theory will not provide you with progress in that area in and of itself; demanding that mathematicians “think more like machines” is dangerously missing the point. The reason we have machines is just so they can relieve us from the tedium of necessary but boring tasks. We need our machines to think more like mathematicians (yet still be machines) so mathematicians can actually use them.

In computing terms, your proof checker is the operating system of your proof workstation. Very necessary on its own, but also useless for getting practical work done on its own. You’re going to need some infrastructure on top of it. Some work is being done in this area, but not enough. I do think Voevodsky’s work is being unfairly sensationalized here, since the article is presenting it as if using type theory as a foundation for automated proof checking (and/or generation) is a radical new invention by Voevodsky et al., when he’s clearly just proposing a new type theory.

That said, statements like “if mathematicians always proved things with computers, there would be no question whether there is a mistake” are obviously and glaringly incorrect. Rather, the mistakes would shift around, as they always do. It is perfectly possible to provide a perfect proof of a very perfect statement of something that’s not at all what we intended to prove after all. It is also possible to provide a perfect proof of something nobody actually understands (let alone if the *proof* is understood). I’m reminded of Knuth’s famous quote: “Beware of bugs in the above code; I have only proved it correct, not tried it.”

As a purely sociological remark, I would say that the way that proof checking is likely to come of age is that computers will have to meet mathematicians 98% of the way there. That is, computers will have to learn to deal with the way mathematicians write their arguments, rather than mathematicians having to learn to write their arguments in a new way. I’m optimistic that this will eventually happen, with Mohan Ganesalingam’s thesis (now a book, incidentally, published by Springer) one of the reasons for my optimism, since it indicates that computers are already close to understanding human mathematical text, provided it is written in a reasonably formal register — but not, crucially, a special-purpose formal language.

A completely separate remark is that I doubt whether Voevodsky and the others who have worked on HoTT would disagree with anything you have written, so I think that, as you suggest, the fault lies with the writer of the New Scientist article.

I should add, just in case I’m misinterpreted, that my comment isn’t intended as a negative remark about HoTT. It’s clear from the excitement that large numbers of mathematicians have about it that it is doing important things, and I’m intrigued enough by this to want to understand better what it is all about. However, I would be sceptical about claims that it will revolutionize mathematical practice amongst all mathematicians, however big an impact it may have on certain areas of mathematics.

Computers don’t “have to” meet mathematicians anywhere just like they didn’t “have to” meet programmers anywhere. If programmers had had to wait for compilers to be able to parse plain English text interspersed with diagrams of boxes and arrows, we wouldn’t have any software today at all.

If it allows mathematicians to do mathematics any better, they should be willing to adapt and use what is available today. Things will definitely get much better over time. But, to insist that 20 years from now, mathematicians will have all their work machine checked, but the work will look almost exactly (98%) the same as today is absurd. Things will have to change a lot.

An important difference between proofs and programs is that there wasn’t a long-established tradition of perfectly satisfactory “human-style” working programs for programs that actually work on a computer to meet. Another way of putting it is that a computer program works if it causes a computer to behave in a certain way, so if you can’t get it to understand your flow chart then that’s tough on you and you’d better learn a programming language. But if a mathematical proof doesn’t work on someone’s proof checker, that’s not tough on you, because it may well convince other mathematicians.

A better analogy might be the effect of developments in analysis and set theory at the end of the 19th and beginning of the 20th century. These had some effect on the way mathematicians presented proofs, but they certainly didn’t cause people to write out completely formalized arguments.

Also, the main point I am making is sociological. It may be that in some sense mathematicians “ought” to invest the time to learn how to write their proofs in a checkable form, even if that’s a very difficult and time-consuming process. But I just don’t think it will happen for the majority of maths unless the process becomes much easier, because the benefits will not be worth it. So while it is hard, it will be done by people in special circumstances, such as Hales needing to convince people of the correctness of his proof of the Kepler conjecture, or Voevodsky needing a foundation that better reflects his way of thinking. For the rest of us, doing more “normal” mathematics, I think that writing machine checkable proofs will make a serious difference only when doing so doesn’t require a big investment of time. Maybe 98% was going too far, but I still think that the meeting point will, as a matter of future historical fact, be close to the human end of the spectrum.

I agree that the framing in the New Scientist article is unfortunate, and that all of the activities you mention are important—the way I would say this is “proofs are for people”. But in fact, the HoTT book is more about “proofs for people” than computer-checked proofs—there are no computer-checked proofs in the book! The book started as a playground where we could experiment with writing and reading homotopy type theory in an informal, ordinary mathematical style, without the use of a proof assistant. Some of the proofs in the book were first constructed with the use of a proof assistant, and the proofs in the book are an “informalization” of them, developed to help people read them. Others were written from scratch in this style. Also, the book contains a development of some basic homotopy theory, and (granted, it’s a bit artificial, because we knew what the answers were going to be in advance), there are examples of the other things you mention in this development (new proofs, proofs with interesting computational content, lemmas, extensions of theorems, new methods, …).

However, even if proofs are for people, I still think computer-checked proofs are important. One reason is that many proofs consist of some fraction of interesting ideas, and some fraction of boring and tedious “details”/“calculations” that are hard for people to read unless you put in a lot of time. As a reader, it’s really nice to be able to *believe* the details/calculations, without having to check them. This also gives more freedom to the person writing the proof, because they can focus on the illustrative special case (as you say above), and skip the details that make it actually work. I think this will be even more important as computer-assisted proving becomes more common: using a proof assistant, you can believe a proof that a computer finds, even if you don’t understand it. At present, I don’t know how to do better than having both informal proofs for people (like you’ll find in the HoTT book) and computer-checked proofs (like you’ll find in the companion code) as two separate artifacts, though I hope that eventually the two will grow closer together.

William Thurston has a nice article on proof and progress in mathematics: http://arxiv.org/abs/math/9404236

(I found out about it through SelectedPapers.net where Terry Tao recommends it.)

My main criticism of the article (disclaimer: I have only read your quotes) would be the restriction to constructed entities—which implies that proofs by contradiction are impossible and puts proofs by induction on thin ice. I have some sympathies for such schools, noting the problems around the common axiom “A or not A = true” vs. decidability; however, following this line strictly would invalidate a very large proportion of what is accepted as proofs by the majority of mathematicians.

I appreciate your points about proofs as more than, well, proofs. However, the name is not an accident, and the aspect of using a proof to ensure knowledge is of very great importance—and I find it hard to fault the proponents for more than being slightly simplistic.

(However, more generally, I would agree whole-heartedly that understanding is more important than mere knowledge.)

I was interviewed by Jacob Aron for the New Scientist article. We spoke on the phone for about 40 minutes. Let me just say it is very difficult to get the ideas across, and I am quite sure it is equally difficult to write an article for the general public which does not sound like garbage to a specialist. I think the article is strange, at the very least, and I cannot decide whether the title is funny or sad. Joel Hamkins’s statement was taken out of context, from what I know.

I wish you had made it a bit clearer whether you were ranting about the article only, or whether you also have an opinion about the book and the HoTT project (in which case I would be very interested to hear what it is).

Your emphasis on proofs providing understanding has been enormously useful in my work teaching beginning college students. Much like any writing, it seems like a monumentally important component; as with a novel or film when plot-style-theme-characters all come together perfectly (a rare and special beast). As a goal it seems unimpeachable.

But surely “a proof must supply insight” is hyperbole or a rhetorical device? Just because Gauss re-proved a theorem with better insight does not make the first a non-proof, no matter how opaque. It seems more like a value-add or a continuum between zero insight and some positive number. Having this as an ironclad requirement would seem likely to block certain research or first-discoveries.

I think it’s what’s referred to as an ideal or regulative principle.

The topic of capturing mathematical proofs in a formal system is nothing new (Leibniz, Frege, Russell, Whitehead). With the advent of the computer that dream looked indeed feasible, but we are still lacking its realization. Is the HoTT approach more promising than its many predecessors? I remember Constable’s Nuprl project, now more than 30 years old, with a similar goal of “implementing math.” Also the various systems for program verification. I think it is fair to say that none of these has yet fulfilled its promises; neither is math implemented nor are programs verified on more than an academic scale. Will we have to wait for another 30 years?

Indeed. I am fairly bemused at why almost no one in this conversation is making connections with earlier attempts at automated theorems or proofs.

Quote from the article, “It could also be the first step towards computers carrying out mathematics by themselves…” But then there’s an italicized note at the very end, “Correction: Since this article was first published on 25 June 2013, it has been updated to reflect the fact that there is at least one computer-verified proof system based on set theory.”

So clearly the author was admittedly and totally unaware of the long prior history of such attempts.

I have never paid much attention to the area, but it is a very long time (in the late 80s?) since I first read about a computer-generated proof. At the time I was a teenager with the math background of my peers and had yet to own a computer… An adult who presumes to write an article on the topic should have more to offer.

(To my recollection, the proof used mirroring to show that the base angles of an isosceles triangle are equal: not ground-breaking news, but still quite an interesting demonstration.)

HoTT is fairly close to NuPRL (both are based on versions of Martin-Löf type theory), but improves on it in several significant ways. For example, it gives a direct syntactic calculus of infinity-groupoids/homotopy types, which is very exciting for certain areas of math. Another is that it allows you to work with structures up to isomorphism, which is closer to informal mathematical practice.
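To give a concrete flavor of the “versions of Martin-Löf type theory” mentioned above, here is a minimal sketch in Lean 4 syntax; it illustrates only the proofs-as-terms idea, not HoTT’s homotopical features, and the names are my own:

```lean
-- In Martin-Löf style type theory, a proof is a program: the theorem
-- is a type, and a term of that type is a proof of it.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

def add : MyNat → MyNat → MyNat
  | n, .zero   => n
  | n, .succ m => .succ (add n m)

-- "add n zero = n" holds by computation alone, so `rfl` proves it:
theorem add_zero (n : MyNat) : add n .zero = n := rfl
```

The checker accepts `rfl` because both sides reduce to the same term, which is exactly the kind of mechanical bookkeeping a proof assistant automates.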

Another virtue of HoTT is that it concretely raises interesting issues relating to decidability.

E.g., suppose a statement (or a class of statements) is proven to be undecidable in HoTT (say P vs. NP, for example; e.g., supposing that HoTT is less powerful than ZFC in regard to statements about computational oracles).

Question: Does a red light from HoTT mean that we should give up seeking a green light from ZFC? If so, then HoTT seems (to me) like a considerable step forward. :)

Many others have formulated excellent comments above, so in some sense what I’m about to say may seem a bit redundant. Nevertheless, leaving the New Scientist article aside, I’d like to direct attention to some statements of yours, such as this one: “The burning issue in proofs is not in checking them. The main issue is in understanding them. The whole point of a proof is missed by this article.” The one indispensable aspect of a proof is its verifiable correctness; otherwise we don’t regard it as a proof at all. It’s a big bonus if the proof is also understandable, because mathematics is not just about knowing what’s true, it’s also about appreciating its beauty and about creating more mathematics.

Then there’s this one: “A proof is not like an accounting audit that checks that two tables of numbers add up to the same tally.” The proof of the Kepler conjecture that Tom Hales has almost constructed is exactly like the audits you decry; probably more than 99.99% of the 20000 hours it takes the computer to verify the (future) proof is spent verifying floating point arithmetic operations that can be done with no trouble by the floating point units of modern CPUs. But such boring proofs are not the only ones susceptible to formalization in the new framework: all proofs of mathematics are. Formalization will preserve and enhance the essential beauty of the proofs mathematicians have constructed in the past, and will also offer a new style with its own new beauty. For an example of the latter I can recommend section 8.1 of the book, with its computation of the homotopy groups of the circle. What you will see there is the way a human informally explains the details of the formal proof that was submitted to the computer.

Similarly, this statement: “As a theorem gets used more and more it is really being checked. I think that this is one of the best tests that it is correct.” Isn’t that just wishful thinking? How likely is it that a contradiction to an incorrect theorem will be found that way? When a contradiction is found, how will we decide which of the many theorems it may depend on is the wrong one? Don’t we also want confidence in the truth of theorems that don’t get applied often, such as the Four Color Theorem, and in the truth of theorems that haven’t been around long enough yet to be applied often, such as the ones in the articles currently being refereed for publication in a journal?

I totally agree. Thanks for writing that.

I second Daniel. Verifiable correctness is a necessary and sufficient condition for something to be a proof. Being “understandable” (whatever that means) is neither necessary nor sufficient.

> “If mathematicians always proved things with computers, there would be no question whether there is a mistake.”

This is the typical opinion of the average journalist whose scientific background is too weak to understand what a proof really is. They suffered through mathematics as pupils all through school, and now they are taking revenge by claiming that mathematicians should either be replaced by computers or learn to think like them.

The quoted assertion is true, but its author isn’t aware that the brain is much better at finding proofs than a computer is. The machine and the brain just help one another. While set theory is a language for the brain, type theory is obviously aimed at computers. Voevodsky’s category primer, written in the language of types, is rather hard to grasp…

The intrusion of computers into mathematics is fairly haphazard. So far they have been required for long calculations (chaos theory, numerical analysis) and long proofs (combinatorics, group theory). Those situations being unpredictable, nobody can tell how mathematicians will use them in the coming decades.

@Serge: that is not a journalist’s statement, but a direct quote of what I said. Do you think that a journalist who has “suffered from mathematics as pupils” would really devote their life to working for a major popular science news source just so they could “take revenge”? This is just silly.

As you note, my statement is true. Above that, everything you say about it came from you and your projecting your opinions onto the article. You speak about computers replacing mathematicians as if either that was the point of the article, or the intent of the HoTT project, where neither could be further from the truth. If you would care to read the article again, you will see that it is about making computers better tools for mathematicians, not about replacing them.

Both you and @rjlipton somehow think that the article, or perhaps even the HoTT project, is about automated theorem proving, discovery of mathematical knowledge, etc. Again, I am unable to find this in the article, which speaks about proof assistants. There is even an explicit quote of mine in the article, which explains how the dream of automated theorem proving failed and was replaced with the idea of interaction between humans and computers. In this interaction the human directs the computer with his ideas and overall understanding of what is going on, while the computer serves by checking the technicalities and boring parts of the proof. So why are you ranting at all (this question is for both you and @rjlipton)? Don’t you see that the article has more in common with your position than with what you are criticizing?

Sorry, really silly indeed. We happen to share the very same views on this matter and I hope it’s because we’re right. :) I may have chosen the wrong target but you’ll admit my comment applied to many journalists who pride themselves on being ignorant of math.

As an aside, there must be a common cause for humans not being able to write programs that simulate natural intelligence, and also not being able to prove the most famous unsettled conjectures in complexity theory. Something to do with circularity…

Usually, whenever a proof has reached the appropriate level of ripeness for automated checking, its truth value has already been settled long before. That’s why I had trouble with the preposition “with” in your statement. Apart from a few celebrated exceptions such as the four-color and sphere-packing theorems, non-trivial things are hardly ever proved *with* a computer. More often than not, the machine steps in when the feast is over.

It’s not the main point of what you are saying, but I think it’s interesting to consider whether your assertion that set theory is a language for the brain is true. I think the answer is a resounding no. For example, if I’m told the famous problem about tiling a grid with two opposite corners removed with dominoes, I will think in terms of sentences like, “Each domino covers two adjacent squares.” I certainly won’t bother to define a domino to be a pair of the form {(i, j), (i, j+1)} or {(i, j), (i+1, j)}. For many problems, that sort of formalism gets in the way of thinking rather than helping it. (On the other hand, the knowledge that that kind of formalism exists has a strong influence in the background — that I don’t deny.)
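The informal parity argument behind that tiling problem can of course be spelled out mechanically. Here is a minimal Python sketch (the function name is mine) that counts the two square colors after removing opposite corners; since every domino covers one square of each color, unequal counts rule out a tiling:

```python
def color_counts(n, removed):
    """Count (white, black) squares of an n x n board after removing cells."""
    cells = {(i, j) for i in range(n) for j in range(n)} - set(removed)
    white = sum((i + j) % 2 == 0 for (i, j) in cells)
    return white, len(cells) - white

# Remove two opposite corners of an 8x8 board; both have the same color.
white, black = color_counts(8, [(0, 0), (7, 7)])
print(white, black)    # 30 32
print(white == black)  # False: no domino tiling can exist
```

The point stands: the one-sentence argument is what a human thinks, while the enumeration is what a machine checks.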

The constructive approach you outline is indeed best suited to these combinatorial problems. I was rather thinking of the notion of an infinite set and the philosophical flavor it had in Cantor’s time. Sets were invented in a topological context and they actually simplified most topological reasoning. Furthermore, I find it hard to think about categories in a constructive way.

@Serge, do you really believe a broad assertion like “the brain is much better at finding proofs than is a computer” or “set theory’s a language for the brain”?

There is such diversity among brains, proofs, computer-aided reasoning, and computer-generated formal proofs that I do not find a meaningful interpretation of such statements. Have you looked at the material in Timothy Gowers’s experiments on mathematical writing? I have used computers in doing proofs and learnt more about proofs than about computers in the process. I have met people who internally think of mathematical objects in terms of topological spaces, categories, statistical mechanics, thermodynamics, etc. For that matter, I do not think of natural, rational, or real numbers in terms of sets, but in various different ways, depending on what I’m doing with them. People have immensely diverse and unique ways of thinking about mathematics, and I don’t think there is a specific language “for the brain”.

I never said there was such a thing as a *specific* language for mathematicians. It was a relative statement – a comparison with type theory. Some ways of thinking are better suited for humans while others are more machine-oriented. The whole history of software development is a quest for the most natural language and the same goes for mathematics.

… and yes, the brain is much better at finding proofs than is a computer. Otherwise, artificial intelligence would no longer be a dream…

The contemporary field that focuses on computer-aided proof construction is not artificial intelligence but various fields at the intersection of logic and computer science.

Researchers in these fields are not making broad statements about brains vs. computers. I find such statements provocative and uninsightful. Current work is attempting to find the right interface for people and computers to interact to do mathematics. There are extremely competent mathematicians who do believe their mathematics benefits from computer-aided development, so they do believe that however good the brain might be, there is something to be gained from a mechanical proof environment.

A clear distinction should be made between doing math and formalizing the math that’s been done. It’s more important than the distinction between checking and understanding – for one implies more or less the other. The formalization feasts are necessary, but they take place after the endeavors of such geniuses as Grigori Perelman and Shinichi Mochizuki who probably never thought in terms of types like Haskell programmers. If the goal of the HoTT enterprise is to increase the number of computer-aided discoveries, then so much the better. But if it’s only to add one more layer of certainty to already-known theorems, then it’s a bit less interesting.

Personally, I think type theory is better for both brains and computers :) Don’t just dismiss type theory as “for computers”. Try thinking in it for a while and see what you think then!

Alright… :)

Speaking of jokes, think of all the yucks we’ll get someday when our autoproofcheckers work as well as our autospellcorrectors do today.

> “We check theorems best, in my opinion, by using them to prove other results. As a theorem gets used more and more it is really being checked. I think that this is one of the best tests that it is correct. Who would doubt the Fundamental Theorem Of Arithmetic—the integers have unique factorization—now after countless applications?”

Be that as it may, a large part of elementary mathematics has already been double-checked automatically. It’s certainly a nice way to learn how to use these new tools.
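As a toy illustration of such double-checking (my own sketch, not what proof assistants actually do), one can mechanically confirm unique factorization for small integers by enumerating every nondecreasing sequence of primes with a given product and checking that exactly one exists:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def factorizations(n, smallest=2):
    """All nondecreasing sequences of primes whose product is n."""
    if n == 1:
        return [()]
    return [(p,) + rest
            for p in range(smallest, n + 1)
            if n % p == 0 and is_prime(p)
            for rest in factorizations(n // p, p)]

print(factorizations(360))  # [(2, 2, 2, 3, 3, 5)]
# Unique factorization, verified exhaustively below 500:
print(all(len(factorizations(n)) == 1 for n in range(2, 500)))  # True
```

Of course this only checks instances; the Fundamental Theorem itself, for all integers, is exactly the kind of statement the new proof assistants certify once and for all.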

> “Take the case of the 500-page proof of the long-standing abc conjecture published by Shinichi Mochizuki of Kyoto University in Japan last year. Nine months on, his colleagues still have no idea whether it is correct.”

Unfortunately, this kind of proof is unlikely to get automatically checked anytime soon – because of its complexity.

Let’s leave the conclusion to Edward Gibbon – “the power of instruction is seldom of much efficacy, except in those happy dispositions where it is almost superfluous”.

How hard is it to understand one’s own proof after some reasonable lapse of time, say 6 months?

In regard to Māris Ozols’ link to William Thurston’s “On Proof and Progress in Mathematics”, the following point is especially relevant to HoTT:

To the degree (at present unknown) that present-day mathematicians seek futilely for proofs of undecidable statements, if HoTT axioms can illuminate the decidable/undecidable boundary more clearly than (for example) the ZFC axioms, then “Good on yah, HoTT!”

Also worth reading is Bill Thurston’s highly rated MathOverflow answer to “What’s a mathematician to do?”, from which the following is excerpted.

Summary: Formalizing enterprises like HoTT almost certainly will fill key roles in 21st-century mathematics … but (for the reasons that Bill Thurston lucidly sets forth) formalizing roles cannot be the humanizing roles.

I couldn’t agree more!

I’ve started to browse the HoTT book and I like it very much – so far as my weak background allows. I particularly appreciate the recently found connection between categories and type systems, and also the revisiting of the first two constructions of the reals – namely Dedekind cuts and Cantor’s Cauchy sequences – thereby proved non-equivalent unless one accepts the axiom of choice! But let’s not forget that Dedekind himself was still unaware of sets when he invented his construction, and that Cantor never knew of such a thing as an axiom of set theory. As Donald Knuth puts it so wisely, “premature optimization is the root of all evil”…

Vladimir Voevodsky seems to be a true descendant of L. E. J. Brouwer, who also made great advances in algebraic topology by classical methods before becoming a constructivist leader. See also the most enlightening Wikipedia article about Errett Bishop – father of constructive analysis – who believed that “very possibly, classical mathematics will cease to exist as an independent discipline” (1970). I much prefer that other quote of his – “every theorem proved with idealistic methods presents a challenge: to find a constructive version, and to give it a constructive proof” (1967).

The discussion of sets versus types is mostly sterile, but the fact that a technological object has led to some new and influential foundations of mathematics is quite meaningful – and somehow disturbing. Those – like myself – who’re fond of the eternity of mathematical truths as compared with our ephemeral computer novelties might be tempted to change their mind…

… before they realize type theory’s just another wonderful computer-aided discovery! :)

I come to the discussion a bit late, but perhaps we should do mathematics the same way we develop programs. First, we discuss in informal terms what the algorithm and the data structure will be; this is typically human-to-human communication. Then we program it out fully formally; this is human-to-computer communication. Ideally we should do the same when establishing mathematical theorems. For now, we find it sufficient to write down the informal argument (formalized only enough to convince our colleagues); the second phase, the writing of the program, is left out. In this way wrong proofs and subtle glitches are too easily missed.
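That two-phase workflow can be miniaturized. As an illustrative sketch in Lean 4 syntax (the definitions and names are my own, chosen for the example), phase one is the informal comment and phase two is the formal term the computer checks:

```lean
-- Phase 1 (human-to-human): the sum of two even numbers is even,
-- because 2a + 2b = 2(a + b).

def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- Phase 2 (human-to-computer): the same argument, fully formal.
theorem even_add {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) := by
  obtain ⟨a, ha⟩ := hm
  obtain ⟨b, hb⟩ := hn
  exact ⟨a + b, by omega⟩   -- the arithmetic 2a + 2b = 2(a + b)
```

The informal sentence survives as the comment and the outline of the proof; only the routine arithmetic is delegated to the machine.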

Jan, thank you very much. We owe something of an apology to you and Andrej Bauer and some others. We have intended to follow this up, but various events starting in July including developments in chess cheating have had a buzzsaw effect. (Comment by Ken.)