Navigating Cities and Understanding Proofs
How proofs are like cities
Pierre L’Enfant was not a theorist, but a designer, and in particular the designer of the physical layout of Washington D.C.
Today I want to talk about why some proofs are hard to understand, not hard to discover.
What do proofs have to do with cities? At first glance they seem to have little in common, but I believe that understanding proofs and navigating cities have a great deal in common. I think that the reason some proofs can be hard to follow is related to why some cities are hard to navigate.
Washington, as in the US capital, has always been difficult for me to get around, with due respect to L’Enfant. I sometimes feel lost in the capital. Its main streets run in a diagonal pattern which is beautiful, but does not help me form a good mental spatial map. I also almost never drive a car around the city: I walk, take the Metro, or take a taxi. The last option lets someone else do the navigating for me, which means that I can close my eyes and still get where I want to go. This is very pleasant and easy on me, but it does not teach me the ins and outs of the streets of D.C.
I think reading a proof is like trying to navigate around a city. The layout is critical if we are to succeed in understanding the proof. We also need to actively interact with the proof. If you just sit back in a “taxi” and are driven around the proof, say at a talk or lecture, then you are much less likely to really understand it.
Let’s turn to some connections between understanding cities and understanding proofs. Perhaps these comments will help us both to write better proofs and to read them better.
Cities And Proofs
Standard Layout: For all its size and immense density of people and vehicles and buildings, Manhattan is relatively easy to navigate. The streets, to a first approximation, are laid out on a rectangular grid: so 41st Street is north of 33rd. The avenues run north and south and are numbered from east to west—okay not exactly but close—for example, Sixth Avenue is also the Avenue of the Americas and Ninth becomes Columbus Avenue. So figuring out a rough location is pretty straightforward, especially if you stay away from the lower part of Manhattan.
What does this mean for a proof? We should, as much as possible, lay out our proofs in a standard “grid” manner. Definitions, lemmas, and proofs should follow as standard a layout as possible. Sometimes this is hard or even impossible, but proofs with “diagonal” structure are much harder to understand and should be avoided.
Stated another way, with a mixed metaphor: traffic flow in a proof should be topologically sorted. You might have to refer back to check the source of the next logical step, how Theorem 5 depends on Lemmas 2 and 3, but you should never have to cycle around, where to prove Lemma 3 you need to understand part of the proof of Theorem 5. Another way to say this: the understanding of a lemma should never need to be “deferred” to a later result. When such deferral happens, you have what Jacques Derrida called différance, which might be fine in literary theory, but shouldn’t be in math. This last reference is due to Ken, I defer to him on this, and simply add: keep the structure as simple as possible.
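The “topologically sorted” rule can even be checked mechanically. Here is a minimal Python sketch—the lemma names and the dependency graph are made up for illustration—that uses the standard library’s graphlib to order results so every dependency is read first, and flags circular reasoning:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical dependency graph for a paper: each result maps to the
# earlier results its proof relies on (names are illustrative only).
deps = {
    "Lemma 2": set(),
    "Lemma 3": {"Lemma 2"},
    "Theorem 5": {"Lemma 2", "Lemma 3"},
}

def reading_order(graph):
    """Return the results in an order a reader can follow front to back,
    raising CycleError if the proof structure is circular."""
    return list(TopologicalSorter(graph).static_order())

print(reading_order(deps))  # ['Lemma 2', 'Lemma 3', 'Theorem 5']

# A circular structure, where Lemma 3 defers to Theorem 5 and vice
# versa, is exactly the "deferral" warned against above:
circular = {"Lemma 3": {"Theorem 5"}, "Theorem 5": {"Lemma 3"}}
try:
    reading_order(circular)
except CycleError:
    print("circular reasoning detected")
```

If the dependency graph of lemmas and theorems admits a topological order, a reader can traverse the paper front to back without ever being asked to take a later result on faith.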
Good Guide Books: Any city is easier to navigate if there are well-written guidebooks: ones that explain how to get around and how to locate major landmarks.
What does this mean for a proof? We should supply a guidebook with our proofs. If a proof is very short or simple, then a guide may be unnecessary. However, for a proof of any complexity, having a good overview that explains where you are, where you are going, and how it all fits together is invaluable. Terence Tao is a master at this: even some of his deepest theorems have an overview that at least allows you to get your bearings. Here is an overview from one of his papers.
There are three major ingredients. The first is Szemerédi’s theorem, which asserts that any subset of the integers of positive density contains progressions of arbitrary length. The second, which is the main new ingredient of this paper, is a certain transference principle. This allows us to deduce from Szemerédi’s theorem that any subset of a sufficiently pseudorandom set (or measure) of positive relative density contains progressions of arbitrary length. The third ingredient is a recent result of Goldston and Yildirim, which we reproduce here. Using this, one may place (a large fraction of) the primes inside a pseudorandom set of “almost primes” (or more precisely, a pseudorandom measure concentrated on almost primes) with positive relative density.
Districts: Many cities, even huge ones, have districts or neighborhoods. This modular structure is immensely useful in navigating around. Even Manhattan, for example, has neighborhoods: TriBeCa, West Village, Washington Heights to name just three out of dozens.
What does this mean for a proof? We should structure our proofs to have a neighborhood structure. Proofs that are modular can often be read much more easily. The modules can be understood separately, and that helps the reader greatly. Often this modular structure is based on the use of powerful abstraction. Used properly, a proof at a high level of abstraction can be quite clean, simple, and very understandable. Used poorly, the abstraction may hide the key issues that are being argued and make the proof very hard to understand.
The next two are things to be avoided.
Many side streets: Cities with lots of dead-end streets, with numbering schemes that jump in unexpected ways, and with naming schemes that are strange can be very hard to navigate.
What does this mean for a proof? We should avoid proofs with many side trips and many parts that are unneeded. I believe that side streets in proofs correspond to case analyses. The more cases, the more complex a proof usually is. The danger is that we leave a case out, or assert that case (iii) follows just like case (ii) when it does not. Lots of cases are a symptom of complexity, and complexity is usually an enemy of understanding.
Cases cannot always be avoided. In some areas of mathematics, for example finite group theory, special cases abound. One is quite likely to see a theorem like this:
Theorem: Every group of even order that does not have H as a subgroup and has no elements of order n, where n satisfies a certain side condition, must satisfy property P.
Crazy Structure: Some cities have endearing, but nutty, structures. Pittsburgh, where CMU is located, is famous for having the property: you may be able to see where you want to get to, but there is no apparent way to get there. The many hills, rivers, and bridges make this happen. Of course, there is a way, the way is just hard to discover, and may involve heading initially away from your destination. It can be very confusing and makes for difficult navigation.
In Cleveland there was a road that self-intersects—crosses itself—at least when I was an undergraduate there. Really. Many cities also have roads that change names for no obvious reason, or become one-way at certain times of the day. All of this makes getting around a challenge. Atlanta, where I live, has many roads named Peachtree X, where X is a modifier like: way, street, place, and so on.
What does this mean for a proof? We should be careful to avoid crazy structure. We should use logical names and keep notation from changing for no reason. We must avoid circular reasoning; even reasoning that is not circular but merely appears to be will be quite hard to follow.
A classic example of this is inductive arguments, which are essential to many areas of mathematics. But an inductive argument can be tricky. There is always the danger that the argument is circular or has some other defect. Be very careful writing them and reading them: be sure the base case is handled properly and that you understand what is being “inducted” on. Some proofs, especially advanced ones, do the induction on a complex measure. Be sure to follow these carefully.
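A well-structured induction makes all of this explicit up front. Here is a schematic LaTeX sketch, where the measure, the claim, and the details are all placeholders:

```latex
% Schematic well-founded induction on a measure \mu (illustrative only).
\begin{proof}
We argue by strong induction on the measure $\mu(x) \in \mathbb{N}$.

\textbf{Base case.} If $\mu(x) = 0$, then $P(x)$ holds because \dots

\textbf{Inductive step.} Suppose $P(y)$ holds for every $y$ with
$\mu(y) < \mu(x)$. The argument below invokes this hypothesis only on
objects of strictly smaller measure, so the reasoning is not circular:
\dots Hence $P(x)$ holds.
\end{proof}
```

Stating the measure and the induction hypothesis explicitly, as above, lets the reader check at a glance that the argument never cycles back on itself.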
If you are writing or reading a proof, look for the patterns above to help you understand the parts that make the proof hard. Avoid taking proof “taxis” if you really want to understand a proof.
An open problem: classic proof theory measures complexity by size: longer is harder. But this seems a bit naive. Perhaps there is a more reasonable measure of proof complexity. The same applies to much of complexity theory in general: a bigger circuit is more complex, a longer computation is more complex, and so on. Is this the best we can do?