Fathering Randomized Fault Tolerance
Plus visiting Michael Rabin and talking about Gödel’s Theorems
Michael Ben-Or and Michael Rabin have won the 2015 Dijkstra Prize for Distributed Computing. The citation says,
In [two] seminal papers, published in close succession in 1983, Michael Ben-Or and Michael O. Rabin started the field of fault-tolerant randomized distributed algorithms.
Today Ken and I wish to congratulate both Michaels for the well deserved recognition for their brilliant work.
Ben-Or’s paper is titled, “Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols.” Rabin’s paper, “Randomized Byzantine Generals,” brought randomness to a distributed consensus problem whose catchy name had been coined in 1982. The award committee said:
Ben-Or and Rabin were the first to use randomness to solve a problem, consensus in an asynchronous distributed system subject to failures, which had provably no deterministic solution. In other words, they were addressing a computability question and not a complexity one, and the answer was far from obvious.
Their work has continued to have a huge impact on the whole area of distributed computing. The fact that a negative result, an impossibility result, could be circumvented by using randomness was quite surprising back then. It changed the face of distributed algorithms by showing that a problem with no deterministic solution might still have a solution provided randomness is allowed. Besides being a beautiful result, it has great practical importance, since randomness can be used in “real” algorithms.
We applaud the prize committee—Paul Spirakis, James Aspnes, Pierre Fraigniaud, Rachid Guerraoui, Nancy Lynch, and Yoram Moses—for making such a thoughtful choice, which shows that even “old” results can be recognized. It took years for Ben-Or and Rabin to be honored: we are glad they did not have to wait for the next power of two.
A Visit to Rabin
While we are thrilled to see Rabin honored for his work on distributed computing, we are even more excited to report that he is doing well. This year he had a serious health issue that required a major operation. The operation went well, but unfortunately during and after it Michael had serious complications.
I, Dick, just visited him at his home in Cambridge. I am happy to report that Michael seems to be on his way to a full recovery. He is as sharp as ever, and looks like he will be his normal self in the near future. This is great news.
We wish Michael and his wonderful wife Ruth and their daughters the best. His elder daughter, Tal Rabin, also works in cryptography; I heard her give a talk four years ago on their father-daughter research. Michael plans to be part of several upcoming events—a good sign that he is on the mend. Again our best wishes to Michael and his family.
Incompleteness
Most of my conversation with Michael was about friends, gossip, and funny stories. As always it was fascinating to hear Michael tell stories—he is such a great story teller. As I left, after over two hours together, I talked to him about research—if only for a few minutes.
He surprised me by averring that he had been thinking anew about the famous Incompleteness Theorems of Kurt Gödel. Recall the first of these theorems implies that in any sufficiently powerful consistent theory, there exist true sentences that are unprovable, and the second says the consistency of the theory is one of these sentences. There are by now many proofs of these great results: some short and clever, some longer and more natural, some via unusual connections with other parts of mathematics.
What could be new here? Michael pointed to recent proofs he really liked. These include an AMS Notices article by Shira Kritchman and Ran Raz. The thrust is that one can view the Second Incompleteness Theorem through the lens of Kolmogorov complexity as a logical version of the famous Unexpected Examination paradox. As expressed by Timothy Chow, this paradox goes as follows:
A teacher announces in class that an examination will be held on some day during the following week, and moreover that the examination will be a surprise. The students argue that a surprise exam cannot occur. For suppose the exam were on the last day of the week. Then on the previous night, the students would be able to predict that the exam would occur on the following day, and the exam would not be a surprise. So it is impossible for a surprise exam to occur on the last day. But then a surprise exam cannot occur on the penultimate day, either, for in that case the students, knowing that the last day is an impossible day for a surprise exam, would be able to predict on the night before the exam that the exam would occur on the following day. Similarly, the students argue that a surprise exam cannot occur on any other day of the week either. Confident in this conclusion, they are of course totally surprised when the exam occurs (on Wednesday, say). The announcement is vindicated after all. Where did the students’ reasoning go wrong?
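The students' backward-elimination reasoning is mechanical enough to mimic in a few lines of code. This is only a toy sketch of the argument in the quoted paradox; the day names and the Wednesday exam are illustrative:

```python
# Toy model of the students' backward induction: the latest remaining
# candidate day could always be predicted the night before (no later
# candidates exist), so it cannot host a surprise exam and is removed.

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]

candidates = list(days)
while candidates:
    eliminated = candidates.pop()   # rule out the last remaining day
    print(f"Students rule out {eliminated}")

print("Students' conclusion: no surprise exam is possible.")
# ...and yet an exam on Wednesday would surprise them completely.
```

The loop empties the candidate set entirely, which is exactly the students' "airtight" conclusion that the announcement then refutes.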
Michael said something about getting sharper bounds on the Kolmogorov complexity constant involved in their article. There wasn’t time to go into details, so we had to leave the discussion “incomplete.” So I asked Ken to try to help reconstruct what Michael was seeing and trying to do.
Gödel’s Theorems
I, Ken, usually give only a quick taste of Gödel’s theorems in one lecture in Buffalo’s introductory graduate theory course. Let $N(e)$ be the predicate that the Turing machine with numerical code $e$ never halts when run on empty input. Let $F$ be a strong effective formal system such as Peano arithmetic or set theory. Then I show (or give as homework) the following two observations:
- The set $P$ of $e$ such that $F$ proves $N(e)$ is computably enumerable.
- But the set $T$ of $e$ such that $N(e)$ is true is not c.e. So $P \neq T$.
Now if $P$ is not included in $T$, there is an $e$ such that $F$ proves $N(e)$ but $N(e)$ is false. $N(e)$ being false means that in the real world the machine does halt on input the empty string. Hence there is some finite number $t$ such that the decidable predicate $H(e,t)$ (saying the machine halts within $t$ steps) is true. By the strength assumption on $F$, it proves all true and all false cases of $H$, so $F$ proves $H(e,t)$; but since $H(e,t)$ implies $\neg N(e)$, and $F$ proves this implication, $F$ also proves $\neg N(e)$. This makes $F$ inconsistent.
Thus if $F$ is consistent, then $P$ is included in $T$, and properly so by the c.e./not-c.e. reasoning. Taking any $e \in T \setminus P$ gives a true statement $N(e)$ that $F$ cannot prove.
Gödel’s diagonal method shows how to construct such an $e$, and Gödel’s definition of incompleteness also requires showing that $F$ cannot prove $\neg N(e)$ either. This is more subtle: proving $\neg N(e)$ when $N(e)$ is true is not a violation of consistency as with proving $N(e)$ when $N(e)$ is false. Note that, for reasons we put in Gödel’s own voice at the end of our second interview with him, it is possible to have a model of $F$ with a non-standard integer in which the statements $\exists t\, H(e,t)$ and $\neg H(e,t)$ for all standard $t$ all hold. This is why Gödel originally used a stronger condition he called $\omega$-consistency, which rules out proving $\exists t\, H(e,t)$ together with all the statements $\neg H(e,t)$. (As Wikipedia notes, since $H$ is a halting predicate the restricted case called $\Sigma_1$-soundness is enough.) It took Barkley Rosser in 1936 to make this too work with just consistency as the assumption.
But if all we care about is having a true statement that $F$ cannot prove, the c.e./not-c.e. argument appealing to consistency is enough. Then comes the “meta-argument” that if $F$ could prove its own consistency, then because $F$ can prove the c.e./not-c.e. part of the argument, $F$ would prove $\exists e\,[N(e) \wedge \neg\mathrm{Prov}_F(\ulcorner N(e) \urcorner)]$. As Kritchman and Raz observe in related instances, this does not alone yield a concrete $e$ such that $F$ proves $N(e) \wedge \neg\mathrm{Prov}_F(\ulcorner N(e) \urcorner)$, which is the real contradiction needed to deduce the second theorem. Still, I think the above is a reasonable “hand-wave” to convey the import of the second theorem with minimal logical apparatus.
The question becomes, How concrete can we make this? Can we push the indeterminate quantity $t$ into the background and quantify ideas of logical strength and complexity in terms of a parameter $L$ that we can bound more meaningfully? Dick and I believe this objective is what attracted Rabin to the Kritchman-Raz article.
Concrete Gödel
Kritchman and Raz obtain the second argument without hand-wave and with minimal “meta” by focusing on the Kolmogorov complexity of a binary string $x$:

$$K(x) = \min\{|p| : p \text{ outputs } x \text{ on empty tape}\}.$$

Here $|p|$ means the string length of $p$—that is, $K(x)$ is the length of the shortest program producing $x$ from empty tape. Now let us imagine a function $C$—“$C$” for Gregory Chaitin—that takes as parameters a description $d_F$ of $F$ and a number $L$ and outputs a program $e$ that does the following on empty tape:
Search through all proofs in $F$ of statements of the form “$K(x) > L$”, and as soon as one is found, output $x$.
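In programming terms this is a simple proof-search loop. The sketch below is a runnable caricature, assuming nothing about $F$ itself: `enumerate_proofs` and the mocked theorem stream are invented stand-ins for a real proof enumerator, and statements are plain strings.

```python
import re

def chaitin_search(enumerate_proofs, L):
    """Return the first x for which a proof of 'K(x) > L' appears.

    enumerate_proofs yields (statement, proof) pairs in some effective
    order; a real implementation would enumerate all proofs of F.
    """
    pattern = re.compile(r"^K\((?P<x>[01]+)\) > (?P<bound>\d+)$")
    for statement, proof in enumerate_proofs():
        m = pattern.match(statement)
        if m and int(m.group("bound")) == L:
            return m.group("x")   # output x and halt

# Mocked proof stream standing in for F's theorems (illustration only).
def mock_proofs():
    yield ("1 + 1 = 2", "...")
    yield ("K(0110) > 7", "...")   # pretend F proves this
    yield ("K(1010) > 7", "...")

print(chaitin_search(mock_proofs, 7))   # finds 0110 first
```

The program’s own description is just this fixed loop plus a description of $F$ and the numeral for $L$, which is where a length bound of the form $|d_F| + c + \log_2 L$ comes from.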
Then $|e| \leq |d_F| + c + \log_2 L$, where $c$ is a constant independent of $L$. Whenever $L$ exceeds a constant $L_0$ satisfying $L_0 = |d_F| + c + \log_2 L_0$—expressible in closed form via the Lambert $W$-function—we have $|e| < L$.
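To see that such a threshold exists, note that $a + \log_2 L$ (with $a = |d_F| + c$) grows far more slowly than $L$, so the two graphs cross once; a short bisection locates the crossing numerically. The value $a = 1000$ below is a made-up constant purely for illustration:

```python
import math

def threshold(a):
    """Find L0 >= 1 with L0 = a + log2(L0) by bisection.

    For all L > L0 we then have a + log2(L) < L, i.e. |e| < L.
    """
    lo, hi = 1.0, 2.0
    while hi - a - math.log2(hi) < 0:   # expand until f(hi) >= 0
        hi *= 2
    for _ in range(100):                # bisect f(L) = L - a - log2(L)
        mid = (lo + hi) / 2
        if mid - a - math.log2(mid) < 0:
            lo = mid
        else:
            hi = mid
    return hi

a = 1000                      # pretend |d_F| + c = 1000
L0 = threshold(a)
print(L0)                     # roughly a + log2(a)
assert a + math.log2(2 * L0) < 2 * L0   # any L > L0 works
```

The bisection is just a stand-in for the exact Lambert-$W$ expression; the point is only that $L_0$ is barely larger than $|d_F| + c$ itself.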
Thus for $L > L_0$ there are no proofs in $F$ of statements of the form “$K(x) > L$”, for any $x$—else by running $e$ until it finds such a proof and outputs $x$, we prove $K(x) \leq |e| < L$ and so expose the inconsistency of $F$. Fix any such $L$; then we need only find an $x$ such that “$K(x) > L$” is true to prove the first theorem concretely. Most important, by simple counting that is provable in $F$, such an $x$ must exist among the finite set of binary strings of length $L+1$.
Kritchman and Raz conclude their argument by letting $\phi_i$ be the statement “at least $i$ strings $x$ of length $L+1$ have $K(x) > L$” for $i = 1, \dots, M+1$, where $M = 2^{L+1}$ (so $\phi_{M+1}$ is false by counting). Letting $m$ be the true number of such strings, there is exactly one $i$—namely $i = m$—for which $\phi_i \wedge \neg\phi_{i+1}$ is true, but $F$ cannot even prove this conjunction: By the truth of $\phi_m$ there are exactly $m$ strings with $K(x) > L$, so the other $M - m$ strings of length $L+1$ have $K(x) \leq L$, and $F$ can guess and verify them all by running their length-$\leq L$ programs to completion. If $F$ proves $\phi_m \wedge \neg\phi_{m+1}$, then combined with these verifications it deduces that all of the $m$ remaining strings have $K(x) > L$, which is impossible by the choice of $L$. Likewise, $F$ cannot prove $\phi_{m+1}$, since then together with the verifications it proves that the $m$ remaining strings include $m+1$ with $K(x) > L$—an outright counting contradiction.
We start a ball rolling, however, by observing that via the counting argument, $F$ does prove $\phi_1$. So either $\phi_2$ is true or $F$ is inconsistent: were $\phi_2$ false, $F$ could verify the $M - 1$ compressible strings (where $M = 2^{L+1}$) and, using $\phi_1$, prove “$K(x) > L$” for the one string left over. This turns around to say that if $F$ proves its own consistency, then $F$ proves that $\neg\phi_2$ is false—which is like the “surprise exam” not being possible on the last day. But since $F$ now proves $\phi_2$, the same reasoning applies one level up: either $\phi_3$ is true or $F$ is inconsistent. This turns around to make $F$ deduce that its ability to prove its consistency implies the ability to prove $\phi_3$. This rips right on through to make $F$ prove $\phi_{M+1}$, which however it can do only if $F$ really is inconsistent. Thus $F$ cannot prove its own consistency—unless it is inconsistent—which is Gödel’s second theorem.
The article by Kritchman and Raz has the full formal proof-predicate detail. There is a “$t$” lurking about—it’s the number of steps the programs outputting the $x$’s need to take—but the structure ensures that the eventuality of a non-standard length-$t$ computation outputting an $x$ never arises. The finiteness of $L$ and of the string count $2^{L+1}$ drives the improvement.
Along with Michael we wonder, what can be done further with this? Can we turn the underlying computability questions into complexity ones? The natural place to start is, how low can $L_0$ be? If $F$ is Peano arithmetic or set theory, can we take $L_0$ to be small? This seemed to be what Michael was saying. It depends on $|d_F|$. And there is another thought here: we don’t need the length of $d_F$ to be bounded, but rather its own Kolmogorov complexity, $K(d_F)$. Can we upper-bound this—for set theory or Peano—by setting up some kind of self-referential loop?
Open Problems
The main open problem is how fast Michael will be back in form. We hope that the answer to this open problem is simple: very soon. Congratulations again to him and Michael Ben-Or on the prize, and Happy Father’s Day to all.
[fixed inequality after Lambert W]
That reminds me of the proof that there can be no uninteresting numbers.
I’ve been dreaming of a proof that, if a polynomial algorithm for a hard problem were found, then it would imply the inconsistency of Peano arithmetic with a nonzero probability. The reason is that such an algorithm would look too much like the proof that it can’t exist. I don’t know if that makes sense at all, but I find it interesting to use the probability of a contradiction in PA as a new kind of computational resource. After all, solving a hard problem can drive you crazy sometimes…
If further work establishes the bound on L more completely, I would love to hear about it!
Shouldn’t it rather be {|e| = |d_F| + c + \log_2 L > L_0} i.e. the inverse inequality, whenever {L} exceeds {L_0}
Thanks. In fact the subscript 0 was the error—the point is to have just “L” on both sides.
Congratulations, to Michael and Michael!