# The Four Color Theorem

*Are there human checkable proofs of the Four Color Theorem?*

Kenneth Appel and Wolfgang Haken made the Four Color Conjecture into the Four Color Theorem (4CT) in 1976. Their proof, as you probably know, relies essentially on computer aided checking, which makes it not checkable by a human. Today I want to talk about their proof, the nature of computer aided proofs, and new alternative approaches to the 4CT. I also want to draw a parallel between their great achievement and some of our open problems.

I was at Yale University, in the Computer Science Department, when they discovered their proof. The Yale mathematics department quickly invited Haken to give a talk on their proof. As you might expect the room was filled, and Haken gave a wonderful talk: on the history of the famous problem, on previous approaches, and on their proof.

I remember two things vividly about his visit, even though it was over thirty years ago. During the talk, he was asked what type of computer they used, since the proof required extensive computation. Today no one would ask this question, since the computations could be done easily on a laptop. However, by 1970’s standards, the computation was huge: over 1200 hours were needed on a “super-computer”. Haken answered that he did not know what type of computer they used, but it was “blue.” That got a good laugh; in those days blue meant an IBM mainframe.

The other recollection is what Haken said to me during a private conversation I had with him, before his talk. I quickly realized that Haken, who previously had been a Siemens engineer, had a very pragmatic approach to the four color problem. To him it was more like an engineering project than a delicate mathematical proof. One of the central issues was discovering whether certain graphs were reducible, which I will define later on. The proof required that a specific set of about two thousand small planar graphs were reducible.

What impressed me was that Haken had heuristic rules. He could look at a small graph and almost always see instantly whether or not the graph was reducible: even though the computer took a long time to actually decide reducibility. He also said that if one of the graphs later turned out to not be reducible, then he could easily find another one that would work in its place. Somehow he had “over-engineered” the proof, as you would a bridge for safety.

**The Computer Assisted Proofs of 4CT**

Subsequently, Neil Robertson, Daniel Sanders, Paul Seymour and Robin Thomas created a new proof based on the same basic approach as Appel and Haken, but they simplified some technical parts. There is even some belief that perhaps the “new” proof is the first complete one. The Appel and Haken proof had a number of details that were found to be wrong and had to be fixed. They also used some technically tricky arguments that the new proof avoided.

I tend to believe that Appel and Haken have a correct proof, but I base this not on any first hand knowledge. I know Appel’s son well; he has been a professor at Princeton, and he has told me some details of how he helped his dad and Haken with the programming. Many times Haken has stressed that he and Appel do not deserve full credit for solving the four color problem. “We did the last 20 percent,” he says. “We were the fifth generation of people working on the problem, so four-fifths had been done by others.” Someone like that deserves to be right.

In any case both proofs rely on an extensive computer computation, and they both have roughly the same overall structure. As Haken says, they were just following the path that previous researchers had travelled.

The proofs of the 4CT are both inductive. Assume by way of contradiction that the 4CT is false, and let G be a planar graph that is a smallest-size counter-example. Let U be a finite set of planar graphs; this set of graphs is cleverly chosen to have two key properties.

First, U is *unavoidable*. This means that every planar graph contains one of the graphs in U as a subgraph. In both proofs the size of this set is large, but not too large. For example, in the original proof the size was about 1900; in the newer proof the size is less than half that. The proof that U is unavoidable is based on a clever argument that uses Euler’s famous relationship

V - E + F = 2

for planar graphs, where E is the number of edges, V is the number of vertices, and F is the number of faces. The way that U is proved to be unavoidable is based on a discharging argument that was created by Heinrich Heesch. Roughly, “charges” are assigned to the vertices of a planar graph. The charges are then moved around according to certain local rules; total charge is preserved by all such moves. The key is that the rules eventually yield a negative charge, as long as no subgraph of G is in U. But the total charge can be shown to be positive initially by Euler’s formula, so this is impossible.
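As a sanity check on the initial-charge claim, here is a small sketch (my own illustration; the actual discharging rules are far more involved). For a planar triangulation, assigning each vertex v the charge 6 - deg(v) makes the total charge exactly 12, and in particular positive: Euler's formula V - E + F = 2 together with 2E = 3F for triangulations gives 6V - 2E = 12.

```python
def total_charge(edges):
    """Sum of (6 - deg(v)) over all vertices of the graph."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(6 - d for d in deg.values())

# K4, the smallest planar triangulation
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

# the octahedron: poles 0 and 5, equatorial cycle 1-2-3-4
octahedron = (
    [(0, i) for i in (1, 2, 3, 4)]
    + [(5, i) for i in (1, 2, 3, 4)]
    + [(1, 2), (2, 3), (3, 4), (4, 1)]
)

print(total_charge(k4))          # 12
print(total_charge(octahedron))  # 12
```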

Now for the computer aided part. The graphs in the unavoidable set U are proved to be *reducible*. Roughly this means that they cannot be subgraphs of any minimal counter-example to the 4CT. This is where the computer checking occurs. The computer needs to do an exponential calculation over all possible boundary colorings to show that these graphs are reducible. Note, it is not the number of graphs that is the problem: checking that even one is reducible seems to be an impossible task for a human.
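To get a feel for why the check is exponential, here is a toy count (my own illustration, not the real reducibility test): even the number of proper 4-colorings of a boundary ring C_r alone grows like 3^r, and a reducibility check has to reason about all of them.

```python
# Count proper 4-colorings of the cycle C_r by brute force.  The known
# chromatic-polynomial value for a cycle is 3**r + 3 for even r and
# 3**r - 3 for odd r (with 4 colors), so the count grows exponentially.
from itertools import product

def proper_ring_colorings(r, q=4):
    """Brute-force count of proper q-colorings of the cycle C_r (r >= 3)."""
    count = 0
    for c in product(range(q), repeat=r):
        if all(c[i] != c[(i + 1) % r] for i in range(r)):
            count += 1
    return count

for r in (4, 5, 6):
    print(r, proper_ring_colorings(r))
# r = 6 already gives 732 proper colorings out of 4**6 = 4096 assignments
```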

This implies the 4CT, since G is seen not to be a minimal counter-example graph.

**Conventional Wisdom was Wrong–Again**

One of the great achievements, I believe, of Appel and Haken was not the proof of the 4CT itself. It was believing that the structure I just outlined would work. The general structure of unavoidable graphs and reducibility had already been used before them. The previous proofs were all of the following type: if a planar graph has fewer than N vertices, then it is four colorable. Researchers believed that N could be made larger and larger with greater work, but the conventional wisdom was that one could never prove that N = ∞, i.e. use the method to completely solve the four color conjecture. Note, some believed that the method could not work in principle, while others believed that it could work in principle, but could not be done in practice. However, Appel and Haken “really” believed and pushed on, where others had stopped.

Everyone before them had used sets U that came with both good and bad news. The bad news was that the set was not unavoidable: it was only true that a graph could avoid U by being larger than some bound, say N vertices. The good news was that the graphs in U were small enough that their reducibility could be checked by hand.

The brilliance, I believe, of Appel and Haken was to *believe* that there would be a set of unavoidable graphs with two properties: first, that the set would be finite and unavoidable for all planar graphs, not just small ones; second, that the graphs in the set could be checked for reducibility by the computers that were available to them at the time. The search to test reducibility grows roughly exponentially in the size of the boundary of the graphs. Haken told me that he had heuristic arguments showing that he could keep the graphs small enough that the computer calculation would work, yet large enough that the set would be unavoidable.

The conventional wisdom at the time was mixed on whether this approach could ever prove the 4CT for all planar graphs, but Appel and Haken believed in the method. It sounds almost silly, but sometimes problems get solved when someone believes that a method will work, where others believed it would not. This has happened over and over in mathematics; for example, the same phenomenon occurred with Hilbert’s Tenth Problem on Diophantine equations. More on that in the future.

I think this is a lesson for us. Is there some method that we have tossed aside, yet it can be made to work to solve P=NP?

**Are They Proofs?**

I was at a meeting at the Royal Society in 2004 on the question: what is the nature of proof? Rich DeMillo and I are (in)famous for writing a paper on the role of proof in the correctness of computer programs; I will discuss our paper another time. They invited Rich, but since he was unavailable I got to go and give a talk at the conference. In any event, I was honored to be at a Royal Society meeting. One talk was given by Robert MacPherson on another proof that is based on large computer computations and has no known human checkable proof. To quote MacPherson:

In 1609, Kepler made a beautiful conjecture about spheres in space. It was one of the oldest unsolved problems in mathematics. In 1998, Tom Hales produced a brilliant computer-assisted proof of the Kepler conjecture. By now, the theoretical part of Hales’ proof has been refereed as usual mathematical papers, but the parts involving the computer have resisted all efforts at checking by humans. Should we think of the Kepler conjecture proved?

As you might expect there was quite a lively discussion about what proofs are and whether a computer assisted proof is a proof. No consensus was reached by the group, but it is interesting that most (if not all) of the heavy hitters, Michael Aschbacher, Michael Atiyah, Paul Cohen, Jean-Pierre Serre, argued against computer aided proofs. Their main point, stated strongly by Cohen, was that the role of proof is not to check that something is true. Rather, the role of proof is to give insight into the problem at hand. A humanly understandable proof can lead to additional insights, and can open up new avenues of discovery. That is missing, in their opinion, in any proof that is not understandable.

I will say more about this in the future, for I think it is a fascinating issue. One that is at the core of much of what we do in theory.

**Human Proofs of 4CT?**

As you might expect there continues to be considerable interest in finding a proof of the 4CT that does not use any computer search. Even recently, new restatements of the theorem have been discovered.

Yuri Matiyasevich, of Hilbert’s Tenth Problem fame, has proved that the 4CT is equivalent to the statement that a particular Diophantine equation has no solution in natural numbers. He has also shown that the 4CT can be proved by proving certain statements about a probability space on the triangulations of a sphere. The statements concern the positive correlation among pairs of events.

In the same spirit, I wish to report a new restatement of the 4CT. To my knowledge, this is the first restatement as an “Erdős-type” *approximate* combinatorial problem. This is joint work with three of my graduate students: Atish Das Sarma, Amita Gajewar, and Danupon Nanongkai of Georgia Tech.

**Our Approach**

Peter Tait proved in 1880 that the 4CT is equivalent to showing that certain planar graphs have edge 3-colorings. More precisely, Tait proved the following theorem:

Theorem: Let G be a cubic two-edge connected plane graph. Then G has an edge 3-coloring if and only if G has a face 4-coloring.

Tait’s proof of his theorem is constructive. In modern terms, he actually proves that each implication can be carried out in linear time. Thus, if a graph has an edge 3-coloring, then the face 4-coloring can be constructed in linear time.
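Tait's correspondence can be made concrete on a small example. Here is my own toy check (not code from the post): the 3-cube is a two-edge connected cubic planar graph with a Hamiltonian tour, and coloring the tour edges alternately red/green and the remaining edges blue yields a proper edge 3-coloring.

```python
# Vertices of the 3-cube are 3-bit integers; edges join vertices
# differing in exactly one bit.
edges = [(u, v) for u in range(8) for v in range(u + 1, 8)
         if bin(u ^ v).count("1") == 1]

tour = [0, 1, 3, 2, 6, 7, 5, 4]  # a Hamiltonian cycle (Gray-code order)

color = {}
for i in range(len(tour)):
    e = frozenset((tour[i], tour[(i + 1) % len(tour)]))
    color[e] = "red" if i % 2 == 0 else "green"
for u, v in edges:
    color.setdefault(frozenset((u, v)), "blue")  # non-tour edges

# verify: the three edges at every vertex get three distinct colors
for w in range(8):
    incident = [color[frozenset((u, v))] for u, v in edges if w in (u, v)]
    assert sorted(incident) == ["blue", "green", "red"]
print("proper edge 3-coloring")
```

The even length of the tour is what makes the red/green alternation close up consistently, which is exactly the point made below about cubic graphs having an even number of vertices.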

Tait used his theorem to claim a “proof” of the 4CT. He assumed that every two-edge connected cubic planar graph has a Hamiltonian tour. This assumption is unfortunately false, as shown years later, in 1946, by William Tutte: there are such planar graphs without any Hamiltonian tours. Note, the existence of such a tour easily proves that the graph is edge 3-colorable. The proof of this is simple: color the edges of the tour alternately red/green, and color all other edges blue. It is easy to see that this is a valid edge 3-coloring of the graph. Note, the reason this works is that a cubic graph must have an even number of vertices. Our approach to the 4CT is this:

Theorem: If every two-edge connected, cubic, planar graph can be edge 3-colored with o(n) errors, then the Four Color Theorem is true.

By a 3-coloring with o(n) errors we mean an assignment of 3 colors to the edges of the graph so that at most o(n) vertices have two edges of the same color, where n is the number of vertices. Thus, the theorem says that we can strengthen Tait’s theorem to allow o(n) errors.

The hope, that we have made no progress on yet, is that there is a clever combinatorial argument that colorings with few errors always exist. We think that allowing errors in the coloring could allow powerful tools from modern combinatorics to be applied to the 4CT. We will see.

In our theorem we regard the four color theorem as a conjecture (otherwise, the theorem statement is trivial). One way of looking at our theorem is that we ask for a proof of the fact that every two-edge connected cubic planar graph G, on n vertices, can be decomposed into a union of disjoint cycles covering all vertices, with at most o(n) odd length cycles. If such a decomposition exists, we can color the edges on every cycle with *red* and *green* alternately, and color all other edges with *blue*. This way, there will be at most o(n) errors. In fact, we do not even need a cycle cover; it is sufficient to have a cover of paths and cycles with o(n) paths and odd cycles. This is a weaker requirement than Tait’s, as he needed a Hamiltonian tour.
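The bookkeeping behind "at most o(n) errors" can be sketched directly (my own illustration): coloring a single cycle's edges alternately red/green closes up with no clash on an even cycle, and produces exactly one clashing vertex on an odd cycle. So a cycle cover with few odd cycles gives a coloring with few errors once the remaining edges are blue.

```python
def error_vertices(cycle):
    """Color one cycle's edges alternately red/green and return the
    vertices where two incident cycle edges get the same color."""
    m = len(cycle)
    color = {}
    for i in range(m):
        e = frozenset((cycle[i], cycle[(i + 1) % m]))
        color[e] = "red" if i % 2 == 0 else "green"
    errors = []
    for i in range(m):
        prev_e = frozenset((cycle[i - 1], cycle[i]))
        next_e = frozenset((cycle[i], cycle[(i + 1) % m]))
        if color[prev_e] == color[next_e]:
            errors.append(cycle[i])
    return errors

print(error_vertices([0, 1, 2, 3]))     # even cycle: no errors -> []
print(error_vertices([0, 1, 2, 3, 4]))  # odd cycle: one clash -> [0]
```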

For the full details of the proof see our paper. However, the basic idea is based on an amplification “trick”. Suppose that G is a two-edge connected, cubic, planar graph that cannot be 3-edge colored. We will take k copies of G and glue them together to form a new graph H.

The gluing together yields a two-edge connected, cubic, planar graph. By assumption H has an approximate 3-edge coloring with at most o(kn) errors. For large enough k, we get that one of the copies of G is colored with no errors. However, there is a small problem: the subgraph that is a copy of G has been modified; it has extra vertices and edges, which were added so it could be connected to the other copies of G. Thus, the last step is to show that this can be overcome. We finally get a 3-edge coloring of G, which is a contradiction.
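The pigeonhole step in the amplification can be written out as follows; the notation is mine, and the paper may set it up differently.

```latex
% H consists of k modified copies of G, so it has m \approx kn vertices.
% By assumption H has an edge 3-coloring with at most f(m) = o(m) errors, so
\#\{\text{error vertices in } H\} \;\le\; f(kn),
\qquad
\min_{\text{copies of } G} \#\{\text{error vertices in that copy}\}
\;\le\; \frac{f(kn)}{k} \;=\; n \cdot \frac{f(kn)}{kn}.
% Since f(m)/m \to 0, choosing k large enough makes the right-hand side
% less than 1, so some copy of G contains no error vertex at all.
```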

**Open Problems**

There are several open questions. The main one is to find a non-computer based proof of the 4CT. Either use Matiyasevich’s approach, or our approach, or one of your own; but make it humanly checkable.

Another question is to make the known computer proofs human proofs. The issue is the reducibility step needs to be made humanly checkable. A way to do this, at least in principle, is to show that checking whether a graph is reducible or not is in NP. Then, we could use the computer to generate a short proof that we could check by hand.

A very enjoyable post. :)

Given that you are working on a “short” proof, I take it you believe that Ashay Dharwadker’s (http://www.geocities.com/dharwadker/) proof is wrong? I haven’t had the time to work through it in its entirety, but it seems like an interesting approach.

I have to disagree with Cohen’s claim about computer-aided proofs, for two reasons.

First, the structure of the proof provides lots of insight into the problem (in fact, I would claim that often the entire insight comes from the structure of the proof). Which graphs exactly are in the unavoidable set is probably not useful outside of the proof, and nor are the details as to why they are reducible.

Even if we assume that a computer-aided proof contains no “insight value”, it still has value. Many other problems can be understood in terms of the 4CT. As a result, we gain insight into these other problems by being able to use the 4CT as a tool. This is no different than using a program library without looking at the source code; as long as the API is well-documented, everything is happy.

Perhaps what we need are two terms for the two types of proofs. I nominate “Explanatory Proofs” and “Toolbox Proofs”.

Thanks for this great comment.

1. I do not know what the status of that “proof” is. If it is correct, it would end the search for a “human” proof. But I am doubtful, since the same author has made many other claims that seem suspect. So probably not correct.

2. I agree with your view that there are perhaps two types of proofs. How about “explanatory proof” and “black-box proof”?

I will post more in the future on this issue. Apparently we are starting to see more computer type proofs and the issue will not go away.

Finally, I hear on and off rumors that some serious group theorists do not believe the Simple group classification is completely solved. I cannot confirm this, however.

His list of “accomplishments” seems to have grown enormously since I first came across his alleged proof of the 4CT. Sorry for the false lead.

Interestingly, I was unable to find a specific pointer to where the proof fails.

I read the proof and it seems to be correct. On his web page, the author also gives polynomial time algorithms for the Hamiltonian path, max-clique, vertex coloring, and graph isomorphism problems, which are also correct.

I gave an algorithmic proof of the four color theorem in 2004. All my papers are on arXiv. Since your paper is on Tait colorings of bridgeless cubic planar graphs, I would suggest my paper:

Spiral Chains: The Proofs of Tait’s and Tutte’s Three-Edge-Coloring Conjectures,

( http://arxiv.org/abs/math/0507127 )

For some other illustrations:

1. http://www.flickr.com/photos/49058045@N00/3293135616/

(for double-spiral chain resolutions of the counter-examples to Kempe’s proof )

2. http://www.flickr.com/photos/49058045@N00/2372086534/

(for the illustration of spiral chain coloring algorithm)

I have e-mailed my papers separately.

Note. Dharwadker’s proof of the 4CT may be correct, but I wonder why there is no coloring algorithm at all in his proof.

FYI, the December 2008 issue of the Notices of the AMS was entirely devoted to the topic of computer-aided proofs: http://www.ams.org/notices/200811/.

As for the two types of proof, I agree. However, “black-box proof” might not be a great term, as it conflicts with proofs in TCS that use only black-box techniques. There may also be a third type of proof: for example, the Robertson-Seymour Structure Theorem may be an explanatory proof, but it is so long that most of the time the result is only ever used as a black box. Another example is the proof that all manifolds are triangulable: it’s a highly technical proof, it probably qualifies as “explanatory,” but most of the time it only ever gets used as a black box.

A similar but slightly different distinction applies to lemmas: some lemmas get used over and over again, and hence seem to have really general power. For example, Bézout’s lemma, Urysohn’s lemma, Dehn’s lemma, Fatou’s lemma, Gauss’s lemma, Nakayama’s lemma, Poincaré’s lemma, Riesz’s lemma, and Zorn’s lemma (list taken from the wikipedia page for lemma: http://en.wikipedia.org/wiki/Lemma_(mathematics)). Other lemmas are just a way to break a long proof up into manageable chunks.

Yes, I am aware of this issue on computer proofs.

I think the best way to be sure that a proof is correct is to see it used, generalized, and re-proved. We are all sure about the Fundamental Theorem of Algebra: one reason is that it has been used countless times; another is that it has dozens of independent proofs. I think these are the ways that proofs get believed.

Szemerédi’s theorem on arithmetic progressions is a perfect example. Few read the original very complex proof. But it has lemmas in it that are useful in their own right, and it now has several completely different proofs. I would say that we are pretty confident that the theorem is correct.

“Perhaps what we need are two terms for the two types of proofs.”

I don’t think there are in fact two types of proofs; instead, there’s a whole continuum. At one extreme are incredibly short, simple, illuminating proofs; at the other are complicated, mysterious, non-conceptual proofs (whether carried out by humans or computers). Most proofs lie somewhere in between.

“Finally, I hear on and off rumors that some serious group theorists do not believe the Simple group classification is completely solved. I cannot confirm this, however.”

One serious issue was the “quasithin” case. In the early 80’s, Mason claimed to have done it, but he never completed and published his manuscript. Recently (2004?) Aschbacher and Smith published a new proof for that case. This was a tricky issue, since it’s reasonable to give someone the benefit of the doubt for a while, and it’s not clear when to give up on the previously claimed proof and declare that the problem is open again. Plus everybody was quite confident that this case could be done without requiring huge new breakthroughs, and indeed it could be, but it’s important to get a detailed proof published. Aschbacher and Smith deserve a lot of credit for taking on this lengthy task (it’s an 800-page proof), rather than simply declaring it was someone else’s problem.

I am not aware of any other such gaps. (I don’t work in that area, but I know enough people who do that I probably would have heard of other gaps.) There’s always some grumbling about the possibility of mistakes in the proof. It’s so long that it’s inconceivable that there are no errors, and there are probably a few real blunders. It’s hard to imagine that any finite simple groups have actually been missed: the only remotely plausible case is another sporadic group, and the reasonable approaches were investigated so thoroughly that I’m sure there are no more. However, of course I’m not nearly as certain as I am regarding, say, the classification of finite fields.

I think this is part of the discomfort with some computer-assisted proofs. It’s really hard to check the details of a complicated proof perfectly. Most of the confidence in a proof’s correctness doesn’t come from utterly exhaustive and perfect checking, but rather from achieving a sufficiently rich conceptual understanding that you see how everything has to fit together and work. This gives conceptual proofs a profound robustness, where minor errors and oversights can easily be corrected.

Many computer proofs (and some human proofs) don’t seem to have the same robustness. Instead, a tiny error in one part of the proof could in principle doom the entire thing. I think this is part of what bothers people. Some authors (like Hales) are incredibly careful about doing everything perfectly, but it’s really hard to do huge computer calculations with no bugs whatsoever in your code. Most people, like Appel and Haken, have not been able to maintain the same standards as Hales. In their case, the structure of their proof was actually pretty robust, so there’s great confidence in the result despite the minor weaknesses in their execution. (And the second-generation proof seems to have taken care of everything.) However, it’s a genuinely tricky issue.

I think it’s perfectly reasonable for some proofs, both human and computer-assisted, to end up in a sort of limbo, where nobody can pinpoint a fatal flaw but the research community just doesn’t have confidence in the result. It’s unfortunate when this happens, but ultimately it’s unavoidable. Machine-checkable formal proofs could address this issue, but they won’t address the issue of human understanding, and with current technology they are sufficiently difficult that it would be unreasonable to demand that every proof be formalized. Maybe in the future, though…

Regarding rumors about gaps in the proof of the Classification, see what Smith and Janko have to say about the subject. (Executive summary: There is no gap.)

Regarding the controversy over computer proofs, my view is that the controversy is overblown. Machine-checkable formal proofs are good for verifying *correctness* (and, for complicated proofs, are better at this job than “ordinary” human proofs are). They’re typically not so good for producing *understanding*. But there’s no reason to argue about anything; for any result of interest, we should just keep working on producing new proofs until we’re satisfied with both correctness and understanding. Any proof that produces progress on either front should be welcomed.

Do you know about Gonthier’s proof of 4CT, and the so-called de Bruijn criterion?

Basically, people were uncomfortable with Appel and Haken’s proof, because it involved zillions of lines of computer code, and in principle there could have been a bug anyplace in the program that made the whole proof wrong. We know how hard it is to find bugs in programs, so checking the proof (by inspecting the huge program for errors, leaving aside the over-engineering statement) is quite hard.

The de Bruijn criterion basically says a proof is reliable if it can be checked by a very tiny program, tiny enough that a person can study it and be confident in it. So I could write a small program that multiplies multi-word integers, and if I can convince you that the program (maybe a few dozen lines of code) is correct, then I can use it to show that some 500-digit number is composite (by exhibiting factors and using the program to multiply them together), even though a human couldn’t do the multiplication by hand.
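The multiplication example above can be made concrete; this is my own toy version of the idea, not code from the comment. The "trusted core" is a few auditable lines; given a claimed factorization of a large number, it only has to multiply and compare, never to find the factors itself.

```python
def verifies_composite(n, p, q):
    """Tiny trusted checker: accept n as composite
    exactly when p * q == n with 1 < p, q < n."""
    return 1 < p < n and 1 < q < n and p * q == n

# an arbitrary ~500-digit product; the two factors act as the certificate
p = 10**250 + 9
q = 10**250 + 57
n = p * q

print(verifies_composite(n, p, q))  # True: n is certified composite
```

The point is that our confidence rests entirely on the five-line checker, not on however the certificate (p, q) was produced.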

Anyway, Gonthier did a formalized proof of 4CT in a general purpose proof assistant called Coq, where the central checking module of Coq was supposedly small enough to be human checkable. (There was a lot of syntactic sugar surrounding that central module.) Coq uses constructive type theory, but an analogy might be having a complete derivation in a Hilbert-style proof calculus, a big chain of formulas and a simple inference rule, that could be checked with a small program.

Of course the huge chain of formulas (like a binary computer program) might not give much insight, but one can have pretty good certainty that the proof is correct.

Maybe in the dystopian future when mathematics is controlled by big evil corporations, they’ll apply “digital restrictions management” to proofs, using the PCP theorem…

By the way, I liked your Bob Solovay stories on another post. Bob gave me a good explanation of why he felt the Appel-Haken 4CT proof was convincing. He said there are two kinds of proofs in mathematics. Say you want to prove “x is greater than 3” for some interesting x. In the first kind of proof, you show that x=3.000000001 and therefore x>3. If you are even slightly off, the whole proof collapses and the result may be wrong. In the second kind of proof, you show that x=2**10000 and therefore x>3. For this kind of proof, you can be way off and the result is still pretty solid. He said the Appel-Haken proof is the second type of proof. I never studied it myself though.

I have considered the four color problem for several years. I think the theorem may also be proven by specifying an algorithm that creates a 4-coloring for any conceivable “map”. I propose such an algorithm on my website. There I describe the movement of a “fux” along the edges of a cubic graph. After some starting conditions are set, these movements are determined and lead to a four color solution.

Grateful for comments!

It may be used to find all 4-colorings of a map. For a possible proof of the 4CT directly on (normal) maps, I would suggest “A Victorian Age Proof of the Four Color Theorem” ( http://arxiv.org/abs/0903.4108 ) in which I have given an algorithmic proof of the following theorem without use of known proofs e.g., A&H and RSST proofs of the four color theorem:

Theorem: Every planar graph can be decomposed into the edge disjoint union of two bipartite graphs.

The proof is given in three steps. The first two steps are the maximal mono-chromatic and then the maximal dichromatic coloring of the faces (of a normal map), in such a way that the resulting uncolored (white) regions of the incomplete two-colored map induce no odd cycles, so that in the (final) third step a four coloring of the map is obtained almost trivially.

Finally, in a recent lengthy discussion with Tommy Jensen (see http://tech.groups.yahoo.com/group/Graphs/message/229 ) we agreed that the definition of the spiral ordering of the faces (used in Steps 1 and 2) needs a more precise explanation, and that the importance of some points in the maximal mono-chromatic coloring must be emphasized.

My paper “A Victorian Age Proof of the Four Color Theorem” has been submitted to the Royal Society Proceedings (A), and after a very detailed “peer review” I will soon re-submit a revised version in which I have fixed certain bad examples for my spiral ordering and coloring algorithm, in accordance with the referee’s suggestions.

Heawood’s vertex character function in its dual form:

Given a triangulated planar graph, give each triangular face a triangle number of +1 or -1, and consider all the combinations. For each vertex, take the sum mod 3 of the numbers on the adjacent triangular faces; you get a vertex number of 0, 1, or 2. The graph is 4-colorable if all vertices can have 0 as their vertex number.
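The criterion as stated can be sanity-checked by brute force on a small triangulation; this is my own illustration, not from the comment. The octahedron is certainly 4-colorable, so an assignment of +1/-1 to its eight faces making every vertex sum 0 mod 3 should exist.

```python
# Brute-force search over all +1/-1 assignments to the triangular faces
# of the octahedron, looking for one where every vertex's sum of incident
# face numbers is 0 mod 3.
from itertools import product

# octahedron: poles 0 and 5, equatorial cycle 1-2-3-4; eight triangular faces
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
         (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]

def good_assignment_exists(faces, n_vertices=6):
    for signs in product((1, -1), repeat=len(faces)):
        sums = [0] * n_vertices
        for face, s in zip(faces, signs):
            for v in face:
                sums[v] += s
        if all(s % 3 == 0 for s in sums):
            return True
    return False

print(good_assignment_exists(faces))  # True, consistent with 4-colorability
```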

A more general conjecture is the following:

Given a triangulated planar polygon and all combinations of triangle numbers for the internal triangular faces, then:

a) The internal vertices have ALL the combinations of vertex numbers.

b) For each of these combinations any vertex on the boundary has at least TWO DIFFERENT vertex numbers.

I have not seen this conjecture elsewhere, and it does not follow from the proof of the 4CT, since it is more general. Perhaps it can inspire another approach.

It’s easy to prove part b for a triangulated polygon with 0 internal vertices. We can then, step by step, enclose a boundary vertex with an added triangle carrying one of its two possible triangle numbers. Then each newly enclosed vertex also has three different vertex numbers for each combination of the vertex numbers of the other internal vertices. At the end we have a triangulated planar graph where all internal vertices can have a vertex number equal to 0. It’s easy to prove that the three boundary vertices can then also have a vertex number equal to 0. Proving part b after each step is the hard part.

Patrick Labarque

I believe I can simplify the proof of Theorem 3.1 in your paper (FourCT.pdf, linked to above) by removing the need for the “mirror trick” at the end. Specifically, referring to graph H in Figure 3 in the paper (which is edge-3-colored with no errors), the pair of edges a-u1 and b-u4 together cut the graph. It is easily shown that in any 3-edge-colored cubic graph, the “sum” of the colors on any set of edges which cut the graph is 0, where the 3 colors correspond to the 3 nonzero elements of (Z_2)^2 (i.e. to the 2-tuples of bits 01, 10, 11, using elementwise sum mod 2). (Proof: it’s true of the 3 edges around one vertex, and then by induction, true of the set of edges which separate any subset of the vertices of the graph from the rest. My guess is this is “well known”, since this color-sum operation is closely related to a way of implementing Tait’s correspondence to face 4-colorings, but I don’t know.) When just two edges cut the graph, this means their colors are equal, so the 4 vertices above the specified two edges in H can be discarded and these two edges glued together to reconstruct the original graph G, regardless of how the dangling edges of H were colored.
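The color-sum fact in this comment is easy to check concretely; this is my own example. Encode the three edge colors as the nonzero elements 1, 2, 3 of (Z_2)^2, so that the "sum" is bitwise XOR. At each vertex of a 3-edge-colored cubic graph the incident colors are 1, 2, 3 with 1 ^ 2 ^ 3 == 0, and summing over any edge cut then also gives 0, since edges inside one side are counted twice.

```python
# Verify the XOR-over-cuts fact on the 3-cube with a Tait-style coloring.
from functools import reduce

edges = [(u, v) for u in range(8) for v in range(u + 1, 8)
         if bin(u ^ v).count("1") == 1]

# a proper edge 3-coloring built from a Hamiltonian (Gray-code) tour:
# tour edges alternate colors 1 and 2, the rest get color 3
tour = [0, 1, 3, 2, 6, 7, 5, 4]
color = {}
for i in range(8):
    color[frozenset((tour[i], tour[(i + 1) % 8]))] = 1 if i % 2 == 0 else 2
for u, v in edges:
    color.setdefault(frozenset((u, v)), 3)

# XOR of the three incident colors at every vertex is 0
for w in range(8):
    at_w = [color[frozenset(e)] for e in edges if w in e]
    assert reduce(lambda a, b: a ^ b, at_w) == 0

# XOR over the cut separating {0,1,2,3} from {4,5,6,7} is also 0
cut = [e for e in edges if (e[0] < 4) != (e[1] < 4)]
print(reduce(lambda a, b: a ^ b, [color[frozenset(e)] for e in cut]))  # 0
```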

Hi,

I would like to discuss with someone a finding that I’ve made, that is related to the four color theorem. If true, it may help to find a “pencil and paper proof” of the four color theorem.

I’ve already verified it multiple times, but that does not count. I’m a computer analyst, and in computer programming it is believed that it is almost impossible to find errors when you are asked to test your own software; this, I think, is no different for ideas. So please help me to verify my findings!

I’ve found out that all maps, no matter the complexity, can be topologically deformed into “circular or rectangular maps”, which are more manageable forms of maps. I also found that, for the scope of the four color problem, all maps can be simplified by removing all faces with 2, 3, and also 4 edges. The removal of faces with 2 and 3 edges was already described in the immense book “What is Mathematics?” of Courant & Robbins. After this simplification, one of Euler’s identities takes a simpler form.

My blog, dedicated to the 4CT, is at: http://4coloring.wordpress.com.

I also wrote a program to generate such maps … and color them automatically.

Thanks in advance!

Ciao from Mario Stefanutti

Every map can be deformed into circular OR rectangular maps!

Can you do it for the map shown at http://www.flickr.com/photos/49058045@N00/3637649311/

or

I’ve modified the web page (T2) to better express the proof.

Yes, I think your map (really nice) can be transformed into a circular (or rectangular) map. I’ll be working on it. I’ve never done it with so many faces before; let’s see what happens. Bear in mind that I have to do it manually.

The good thing is that, IF the theorem is true, we can focus only on simplified maps (circular or rectangular, it’s the same) without having to consider all the possible shapes that nature can invent.

In this case, what about your method of Spiral Chains? Would it be easier to apply to circular maps? Or is it the same?

By the way: this is an amazing website!!!

Circular and rectangular maps certainly are cubic planar maps (normal maps), and if every cubic planar map can be represented by a circular and/or rectangular map, then that may add a property usable in a non-computer proof. In the case of spiral chains and spiral-chain coloring (an algorithmic proof), it would add further visualization for the 4CT (see:

http://neu-tr.academia.edu/IbrahimCahit/Papers/423919/VISUALIZATION_OF_THE_FOUR_COLOR_THEOREM )

Some more examples for spiral chain coloring of the circular and rectangular maps are:

Link (Corrected)

http://neu-tr.academia.edu/IbrahimCahit/Papers/423920/VISUALIZATION_OF_THE_FOUR_COLOR_THEOREM

Here is just an initial result of converting Appel and Haken’s hardest-case map into a rectangular map: Conversion of famous maps

I preferred to explain the steps in greater detail instead of just giving the final result.

I’ll update the page when I have the map fully converted and colored using the program here: https://sourceforge.net/projects/maps-coloring/

Is the proof that I have in T2 not clear enough? Or wrong?

I think your construction works for all cubic planar maps, since “bending” of the edges is allowed. But without edge bending, if the maximum vertex degree is four, then not all plane graphs can be embedded on the grid points of a rectangular drawing. In [1] we gave necessary and sufficient conditions, which extend Thomassen’s characterization of rectangular drawings of plane graphs with maximum vertex degree four. A drawing algorithm is also proposed.

[1] I. Cahit and H. Kheddouci, “Embedding Plane Graphs in the Grids”, Ninth International Conference on Information Visualisation (IV’05), London, England, July 2005. (http://www.computer.org/portal/web/csdl/doi/10.1109/IV.2005.45)

The approach I’m using is to work directly and exclusively with maps instead of their graph representation. Even though the two representations are equivalent, and graph theory has many mathematical tools that can be brought to bear on the problem, the visual approach is different, and I find it more intuitive with maps than with graphs … at least for me.

So what you said about the “construction that works for all cubic planar maps” is perfect for my case. I am focusing only on regular maps (three boundaries meeting at each vertex, no enclaves or exclaves, etc.), because, for example, maps that have vertices where more than three edges meet can easily be simplified without affecting the search for a proof.

Just for fun, I’m converting some famous maps (an easy one for now, taken from Wikipedia) into circular and rectangular maps. Here is the updated page: http://4coloring.wordpress.com/t2/conversion-of-famous-maps/

I’ve also updated the software (http://sourceforge.net/projects/maps-coloring/) with these new functionalities:

Possibility to color the map by clicking on faces

I hope the two theorems described on my site can be used as a starting point for finding an easy proof of the 4CT, for example by applying old and new methods of investigation to the problem. I am also working on it … so far with no luck!

I think the two theorems (T1 and T2 on the website) are quite general as far as the four color theorem is concerned.

http://4coloring.wordpress.com/

Bye, Mario Stefanutti

The use of T1 and T2 to transform any cubic planar map into a circular or rectangular map can further give a natural visual abstraction for the use of spiral chains in the short algorithmic proof of the 4CT.

Do you like this conversion of Tutte’s map?

http://4coloring.wordpress.com/2011/02/24/tuttes-map-conversion-1-of-2/

If you showed only the rectangular map then, I think, no one would guess that the map is actually Tutte’s map. My second observation is that the non-Hamiltonicity of your rectangular map is also not as apparent as in the original drawing of Tutte’s map, with its three visible sectors.

Yes, you are right. They look so different in shape that it is hard to see that they are actually the same map. But the theorem in T2 is pretty easy to follow and, I think, removes all doubts. To verify manually that they are the same map, just select any face and follow its border clockwise (or counterclockwise): all neighbours appear in the same order, and all other topological attributes (those relevant to the coloring problem) are exactly the same in the two representations of the same map.
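This manual check (pick a face, walk its border, compare the cyclic order of neighbours) can be sketched in code. The rotation-system representation and the helper names below are my own assumptions, not Mario’s program:

```python
def same_cyclic_order(a, b):
    """True when list b is a rotation of list a (same cyclic order)."""
    return len(a) == len(b) and any(
        (a + a)[i:i + len(b)] == b for i in range(max(len(a), 1))
    )

def same_map(map1, map2):
    """Compare two maps given as {face: neighbours listed clockwise}."""
    return set(map1) == set(map2) and all(
        same_cyclic_order(map1[f], map2[f]) for f in map1
    )

# The same 4-face map drawn two ways: each neighbour list is rotated.
m1 = {"A": ["B", "C", "D"], "B": ["C", "D", "A"],
      "C": ["D", "A", "B"], "D": ["A", "B", "C"]}
m2 = {"A": ["D", "B", "C"], "B": ["A", "C", "D"],
      "C": ["B", "D", "A"], "D": ["C", "A", "B"]}
assert same_map(m1, m2)

# Reversing one face's border (opposite orientation) breaks the match.
m3 = dict(m1, A=["D", "C", "B"])
assert not same_map(m1, m3)
print("rotation check agrees with the manual procedure")
```

Note that this only compares cyclic adjacency orders; it is a necessary condition for two drawings to represent the same map, which is what the manual walk checks as well.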

About the second observation: that may be the case for Tutte’s map, but in general maps can be shaped in infinitely many ways, and in many of these the Hamiltonicity of the related graph may be difficult to identify. The “regularization” that I found offers the possibility of treating all maps the same way. Hopefully this representation (rectangular or circular) of all regular maps can help in finding other routes (perhaps not yet identified, or old ones reworked) to a really easy, human-checkable proof of the problem.

Bye, Mario Stefanutti

http://4coloring.wordpress.com

I made a presentation on Prezi: Four color theorem news

And I uploaded a video to YouTube: Maps coloring for the four color theorem

Let me know what you think.

Check for updates on: http://4coloring.wordpress.com

The previous YouTube video was low quality. Here is the new one.

Bye, Mario

http://4coloring.wordpress.com

Here is a video that shows the nature of rectangular and circular maps.

All maps relevant to the four color theorem (“regular maps”, i.e. planar graphs without loops) can be topologically transformed into rectangular or circular maps.