# Turing’s Tiger Birthday Party

*The Princeton Connection*

Robert Sedgewick and Jon Edwards were the coordinators of the just-held Princeton Turing Centennial Celebration honoring the one-hundredth birthday of Alan Turing. The three-day conference blended historical and current material, and it attracted hundreds of registrants who filled Princeton University’s largest lecture hall. Their team also put together extensive webpages, including an overview that lists the sponsors who made registration free of charge, and a page with further information on Turing and Princeton.

Today Ken and I wish to talk about this terrific event, and give some flavor of how wonderful it was.

Bob, as Sedgewick is usually known, was the first chair of the Computer Science Department at Princeton, and is well known for his brilliant work in the analysis of algorithms, data structures, and many other areas of computing. His textbooks are classics, and it is not surprising that his introductory class at Princeton is one of the best classes there—it probably is the best intro class in the world.

Jon Edwards recently retired from being Coordinator of Institutional Communications and Outreach for Princeton’s Office of Information Technology, and served in other computing-related capacities. He previously worked as an editor for *Byte* Magazine. Now he is a full-time chess teacher and a mentor of talented teenagers looking toward college—he is a master player and won the 1997 US Championship in correspondence chess.

## Why Princeton?

Alan Turing earned his Ph.D. at Princeton in 1938 under the great Alonzo Church. This alone gives Princeton a unique claim to Turing. But there are many other connections between Turing and Princeton. Two of the other great “fathers of computation,” John von Neumann and Kurt Gödel, were also at Princeton and promoted his transfer in 1936 from Cambridge University.

Turing had found himself “scooped” by Church in regard to proving the undecidability of the decision problem—called in German the *Entscheidungsproblem*—of whether a statement in first-order logic is a theorem. Church’s proof used his *lambda calculus* as the system of computation, but the proof technique of diagonalization was the same as Turing had employed with his machines. It was not the first time Turing was scooped: in 1934 he proved what is now called the Central Limit Theorem, not realizing that the Finnish mathematician Jarl Lindeberg had done so in 1922.

Still, Church and others not only encouraged him to publish his paper on what we now call Turing Machines, but also to cross the “Pond” to share his innovative ideas and his analytical prowess. Thus Turing became a Princeton Tiger, *38. It is fitting that in his centenary year, the first hard evidence supporting his theory of how tigers get their stripes was published.

## In McCosh Hall

The meetings were held in McCosh 50. I (Dick) taught freshman Pascal in CS101 there years ago with Andrea LaPaugh, while Ken remembers taking Econ 102: Microeconomics there. This is the same hall where Andrew Wiles gave his “general” talk to a standing-room audience on his famous solution of Fermat’s Last Theorem, after he had repaired it.

The room, which holds almost five hundred, felt nearly filled for most of the conference, which is really a tribute to the terrific set of speakers that Bob and Jon had invited. The question periods were fun, with many interesting points raised.

Shirley Tilghman, the president of Princeton, gave a welcome address after lunch on the first day. She hailed Turing and the Princeton cohort for effecting the transition from “numbers that mean things” to “numbers that *do* things.” She also raised the issue of how horribly Turing was treated by the very nation that he served so brilliantly during World War II.

## The After-Dinner Speakers

There were two very different speakers for the two evening sessions: Eric Schmidt of Google and Andrew Appel of Princeton. Both are graduates of Princeton, both are wonderfully compelling speakers, and both had interesting things to say.

Eric spoke mostly about how coming advances in connectivity will change the world, yielding an almost futuristic world where cars drive themselves and people have all information at their disposal. One of the best questions came from an undergraduate who told Eric that earlier in the year Princeton was completely off the grid for a whole day: no Internet, no Google, no email, nothing. She said she was at a loss for what to do that day. Eric and the rest of us laughed, and he went on to explain how he had tried to force his engineers at Google to stay offline one hour per week. He said he mostly failed.

Andrew, a 1981 classmate of Ken’s, spoke about the creation of the conducive environment that brought Turing to Princeton. Much of the vision came from Oswald Veblen, who advised Church, chaired the Mathematics Department, helped secure funding and the university’s recognition of mathematics as a pure-research subject, and co-founded the Institute for Advanced Study, serving as its first full professor in 1932. Veblen formed a core from Solomon Lefschetz, Church, Gödel, von Neumann, and “some random physicists,” as Andrew captioned a photo of Albert Einstein. He showed that Veblen, Lefschetz, and Church alone have over 9,000 doctoral descendants, including numerous Turing Award winners and others who permeate computing. Thus he ended by saying Princeton built the first great *Computer Science* department.

## The Speakers—Thursday

There were eighteen regular talks in total, each an hour long. Each was interesting, supplied different insights into Turing’s thinking, and made different connections to his impact. Slides and even videos of some of the talks will be online sometime in June—check the main page for details—but for now we would like to reduce each hour’s talk to a few sentences. Hopefully this extreme compression is not too lossy and will help you get a feel for some of the excitement we felt in being there.

Dana Scott spoke on *Lambda Calculus, Then and Now*. His first slides referred to the “Old” and “New” λ-calculus, saying that it “has been recycled every decade,” and his talk showed how it related to combinators and other formal systems that gave rise to programming languages. Two notable statements toward the end: “When I first heard of combinators, I had nightmares,” and “John McCarthy claimed that he *wasn’t* influenced by the lambda calculus in creating Lisp, but it is hard for me to believe that.”
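
To give a flavor of how the lambda calculus gave rise to programming languages, here is a sketch of Church numerals rendered in Python—the names `ZERO`, `SUCC`, `PLUS`, `TIMES`, and `to_int` are our own labels for Church’s encodings:

```python
# A Church numeral n is the higher-order function that applies f to x, n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))      # one more application of f
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
TIMES = lambda m: lambda n: lambda f: m(n(f))        # apply "n applications of f" m times

def to_int(n):
    # Decode a Church numeral by applying "add one" to 0.
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = PLUS(SUCC(ZERO))(TWO)
```

Here `to_int(TIMES(TWO)(THREE))` evaluates to 6, purely by function application—the only machinery Church’s calculus has.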

Philip Wadler spoke on *Church’s Coincidences*. Philip is an advocate for the lambda calculus—actually more than an advocate. He gave a beautiful talk on the lambda calculus and ended by ripping open his vest to reveal a Superman-like T-shirt with the lambda symbol on it. He told a story that once, when Church was asked in a letter why he had chosen the symbol, he replied on a postcard that it was

“Eeny, meeny, miny, moe.”

He explained the coincidence of multiple equivalent models of computation arising in the mid-1930s as no surprise, but he conveyed amazement that Gerhard Gentzen’s intuitionistic natural deduction and Church’s simply typed lambda calculus turn out to be isomorphic.

Les Valiant spoke on *Computer Science as a Natural Science*. Les is working on a mathematical model of evolution, and most of his comments were on the model. His fundamental question is: how can we get three billion bases right in our human DNA in about three billion years? This is very much an open problem, one that perhaps we, as computer scientists, can help solve.

Andy Yao spoke on *Quantum Computing: A Great Science in the Making*. I, Dick, recall that Andy was in the past a skeptic about quantum computing. Now he is completely sure that we will have quantum computers “sooner than you think.” He gave a variety of reasons why he believes this, the main one being successes in quantum communication that he thinks will help with quantum computation, along with particular advances in ion-trap technology.

Robert Kahn spoke on *A Systems Approach to Managing Distributed Information*. Robert pointed out that the Internet is not a network. He views it as an abstraction, as a kind of data structure, and this view was quite interesting.

Dick Karp spoke on *Theory of Computation as an Enabling Tool for the Sciences*. Dick gave a series of examples from all parts of science where computing either has had or will likely have a major impact. Some of the new examples, like astronomy, were quite neat. He opined that the Web may supplant the Turing Machine as the “cardinal model” for theory of computing.

## Friday

Martin Davis spoke on *Universality is Ubiquitous*. He emphasized the importance of the philosophical content of Turing’s famous 1936 paper, noting for instance that Emil Post had a system a decade earlier that we now know is equivalent, but he did not have the philosophical awareness of universality by which to advance it. He noted that universality requires only a few simple ingredients, quoting Turing himself on its being “essential…that the memory…be infinite.” He showed how this played out immediately in practice—whereas some early computers were control- and instruction-heavy, Turing’s plan for the ACE machine at the National Physical Laboratory maximized the availability of memory. He then explained how current deep thinkers can be understood in relation to two planks of the artificial intelligence hypothesis:

- The human mind is a function of the brain.
- The brain is a computer.

Some believe both, others only one or neither, but the universal ability of the human mind must be acknowledged by all.
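
Davis’s point about how few ingredients universality needs is easy to appreciate from how little code a Turing-machine simulator takes. Below is a minimal sketch; the sample machine, which increments a binary number, is our own illustration, not one from the talk:

```python
def run_tm(tape, state, pos, delta, blank="_", max_steps=10_000):
    # tape: initial string of symbols; delta: (state, symbol) -> (write, move, state).
    # The machine halts when no transition is defined for the current pair.
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        sym = cells.get(pos, blank)
        if (state, sym) not in delta:
            break
        write, move, state = delta[(state, sym)]
        cells[pos] = write
        pos += -1 if move == "L" else 1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Increment a binary number: start on the rightmost bit and propagate the carry left.
INC = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
}
```

For example, `run_tm("1011", "carry", 3, INC)` returns `"1100"` (11 + 1 = 12). A finite transition table plus an unbounded tape is the whole recipe.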

James Murray spoke on *Mathematical Biology, Past, Present and Future: from animal coat patterns to brain tumours to saving marriages*. James gave a beautiful series of examples where really simple mathematical ideas yield important insights into problems in biology. From explaining how spots form on animals to modeling brain tumors, the models were elegant yet extremely powerful. Starting with the first, he showed how this line of work flows from Turing’s study of how patterns can form in very simple chemical systems.
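
The mechanism Turing proposed—two diffusing chemicals whose interaction destabilizes the uniform state—can be sketched in a few lines. Below is a toy one-dimensional Gray–Scott reaction-diffusion simulation, a standard Turing-type system; the parameter values are common illustrative choices of ours, not ones from the talk:

```python
def gray_scott_1d(n=100, steps=500, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    # Explicit-Euler Gray-Scott model on a ring of n cells:
    # u is the fed substrate, v the activator (u + 2v -> 3v; v decays at rate f + k).
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 3, n // 2 + 3):   # seed a small disturbance
        u[i], v[i] = 0.5, 0.5
    for _ in range(steps):
        lap_u = [u[i - 1] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]
        lap_v = [v[i - 1] - 2 * v[i] + v[(i + 1) % n] for i in range(n)]
        uvv = [u[i] * v[i] * v[i] for i in range(n)]
        u = [u[i] + Du * lap_u[i] - uvv[i] + f * (1 - u[i]) for i in range(n)]
        v = [v[i] + Dv * lap_v[i] + uvv[i] - (f + k) * v[i] for i in range(n)]
    return u, v
```

Plotting `v` for suitable parameters shows the seeded blip organizing into persistent structure—spots and stripes in two dimensions—which is exactly the Turing instability Murray builds on.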

Barbara Liskov spoke on *Programming the Turing Machine*. Barbara spoke on the history of programming languages, with special attention to ideas that are essential to modern programming. The crux is how one organizes programs in the first place, and this separates programming *methodology* from particular programming *languages*. The critical property is *modularity*, which she explained as the soul of her language CLU from the 1970’s. This was a great talk for those who are not experts on the nuances of the theory of languages.

Tom Mitchell spoke on *Never-Ending Language Learning*. Tom spoke about a computer system that is now just about two years old, which he calls NELL—you can probably guess why this is the name. The system poses about one hundred thousand queries per day, over one per second, to Google. These queries help it learn the meanings of words and phrases. The results seem amazing to us, and a bit scary.

Andrew Odlyzko spoke on *Turing and the Riemann Zeta Function*. Andrew explained how Turing checked the Riemann Hypothesis numerically using a new method. The method, with some modern twists, is still the basis for checking whether the zeta function’s zeroes all do lie on the critical line: those values with real part 1/2. He noted that Turing himself became more skeptical of the Riemann Hypothesis as he went on.

Ron Rivest spoke on *The Growth of Cryptography*. An earlier version of this lecture is viewable here. He traced two strands of computer science to the codebreaking work by Turing and others during World War II. Turing’s conceptualization of the German Enigma Machine influenced the first generation of computers. When Turing was posted to Washington, D.C., in 1943, he met Claude Shannon, and fusing their approaches led to Shannon’s formalization of information theory. These strands then flow together into modern cryptography, including public-key, and Ron spoke of many new applications.

## Saturday

David Harel spoke on *Standing on the Shoulders of a Giant*. David was awarded an honorary doctorate by the Technische Universiteit Eindhoven last month, on which we congratulate him, and his talk from then is online. He proposes to test whether knowing the complete genome of an organism—in this case the nematode Caenorhabditis elegans—suffices to simulate its development well enough that the properties of the model cannot be told apart from those of the real organism. He presented this as an updated version of the Turing Test.

Avi Wigderson spoke on *The Hardness of Proving Computational Hardness*. Avi gave a wonderful talk on the general theory of computational complexity. He avoided many technical terms and issues, but still got his point across. Only at the end did he touch on our barriers to proving lower bounds, but it was a great summary of what we know and do not know about complexity theory.

Shafi Goldwasser spoke on *Pseudo-Deterministic Algorithms*. Shafi presented a relatively new model of computation that she calls “Bellagio algorithms.” Every deterministic algorithm computes a function: given the same input you get the same value. Her new algorithms must be functions in this sense, but can use randomness inside to make the computation go faster. A great definition—the polynomial-time version might have been named following the nomenclature of Ken’s colleague Alan Selman—but how did we not see its relevance years ago? A non-polynomial example: the best known algorithms for factoring employ randomness, but always output the unique prime factorization of the given number. She then explained the state of what is known about such algorithms.
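
The factoring example can be made concrete. The sketch below—our illustration, using Pollard’s rho method with random restarts—flips coins internally, yet its output, the sorted prime factorization, is the same on every run:

```python
import random
from math import gcd

def is_prime(n):
    # Miller-Rabin with fixed bases; deterministic for all n below about 3*10**23.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    # Randomized: each attempt walks a different pseudo-random sequence mod n.
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(0, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:
            return d   # a nontrivial factor

def factor(n):
    # The coins vary from run to run, but the output is always the unique
    # sorted prime factorization of n -- a "Bellagio" function in Shafi's sense.
    if n == 1:
        return []
    for p in (2, 3, 5):   # strip tiny factors, where rho can stall
        if n % p == 0:
            return sorted([p] + factor(n // p))
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factor(d) + factor(n // d))
```

Run `factor(10403)` as many times as you like: the internal walk differs each time, the answer `[101, 103]` never does.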

Bob Tarjan spoke on *Search Tree Mysteries*. Bob is the world expert on balanced binary trees. He gave a neat introduction to this area and reported on new types of balancing methods. The main lesson is that there is still plenty to do in this classic area.

Dick Lipton spoke on *What Would Turing Be Doing Today?* Dick classified Turing as more of a “bird” than a “frog” in Freeman Dyson’s lexicon, with far sight of future problems. One is that a Turing would develop a theory of when computing only what one *needs* to compute is easier than computing all that one *can* compute. He also showed a movie of the Aarhus Lego Turing Machine in action.

Christos Papadimitriou spoke on *Origin of Computable Numbers*. Christos gave the last talk, and linked Darwin to Turing in a very clever way. He then moved on to talk about sex. Yes, sex. He has worked on why sex is an evolutionary advantage when at first glance it appears to have many drawbacks—each mating, for example, passes on only one half of the genetic material of each parent. The answer appears to be that sex promotes what he called *mixability*—the ability of alleles to perform well with a broad spectrum of other alleles.

We congratulate Christos and co-author Elias Koutsoupias on sharing the just-announced 2012 Gödel Prize for their joint paper (older 1999 version) on “Worst-case equilibria” in computational game theory, together with Tim Roughgarden and Éva Tardos (paper) and Noam Nisan and Amir Ronen (paper). These three papers did the most to build what is today a flourishing area.

## Open Problems

Shafi’s talk raised a number of intriguing open problems. The main one that is easy to state is: given an x, is there a way to find a prime in the interval [x, 2x] so that the algorithm is functional in her sense?
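
The brute-force route—scan upward from x with a randomized Miller-Rabin test—gives a near-example of what she means. The coin flips stay inside, and barring the test’s tiny error probability the output depends only on x. This is a sketch of ours, not from the talk:

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin with random bases: the randomness is purely internal.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def least_prime_at_least(x):
    # "Bellagio"-style up to error 4**-rounds: the answer depends only on x,
    # not on the coins used inside the primality test.
    n = x
    while not is_probable_prime(n):
        n += 1
    return n
```

Under Cramér’s conjecture the scan checks only O(log² x) candidates, so the whole procedure is polynomial time—but it is functional only up to the small chance that Miller-Rabin accepts a composite, which is why a rigorous construction remains open.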

The other question is what we should call these algorithms. I do not see that her name, “Bellagio algorithms,” is the best. The name should be a city, not a hotel in a city. Sanjeev Arora suggested “Kremlin algorithms”—also not quite right in my opinion. I suggest Washington: the place is clearly random inside, but the outcome from our capital is always the same—deterministic deadlock.

(The speaker photos in this post were taken by Ken Regan.)


For finding a prime in [x,2x], Cramér’s conjecture (http://en.wikipedia.org/wiki/Cram%C3%A9r%27s_conjecture) would imply that it suffices to check x, x+1, x+2, etc.

Nice example if the conjecture is true—and even as an exponential-time process (with a gap bound of Õ(√x) provable if the Riemann Hypothesis is true) it illustrates the “Bellagio” definition.

It’s especially interesting because its running time and output are still somewhat “random” in the sense that the primes appear to be distributed as if they were random. I.e., it’s actually a randomized algorithm, but it’s pulling the random bits straight out of logic.

The heuristic that the primes are random except when they’re obviously not is nice, but you are substituting the power of the randomized computing model for a very strong conjecture. The beauty of randomized primality testing is that it works regardless of how random the primes actually look. There is something unsatisfying about a trivial algorithm that relies on a conjecture significantly stronger than the GRH. If we do not assume Cramer’s conjecture or some strong derandomization result (but say allow ourselves to use GRH), finding primes deterministically seems to be a very hard problem. Given that, a polytime Bellagio algorithm will be a great illustration of the power of the model!


Church also gave an alternative story for the origin of lambda, see pg.7 of http://www.users.waitrose.com/~hindley/HistlamReport.pdf

Bellagio _is_ a city: Bellagio is a comune (municipality) in the Province of Como in the Italian region Lombardy, located on Lake Como.

It gives the name to the hotel in Las Vegas.

Richard Feynman earned his Ph.D. at Princeton in 1942 under John Wheeler.

Alan Turing earned his Ph.D. at Princeton in 1938 under Alonzo Church.

Did they ever meet?

I suspect not—if they had met, I imagine this article would have said so.

Thanks for the reference, Ken. I had a lot of fun reading it. Looks like you are right and Turing and Feynman missed each other by a hair.
