There are many kinds of surprises in mathematics and theory—here are some examples.

Gil Kalai is one of the great combinatorialists in the world, who has proved terrific results in many aspects of mathematics: from geometry, to set systems, to voting systems, to quantum computation, and on. He also writes one of the most interesting blogs in all of mathematics—Combinatorics and more; I strongly recommend it to you.

Today I want to talk about surprises in mathematics. I claim that there are often surprises in our field—if there were none it would be pretty boring. The field is exciting precisely because there are surprises: guesses that are wrong, proofs that eventually fall apart, and in general enough entropy to keep things interesting.

The geometer Karol Borsuk asked in 1932: Can every convex body in ${\mathbb{R}^{d}}$ be cut into ${d + 1}$ pieces of smaller diameter? This became the Borsuk Conjecture, which was proved in low dimensions and for special classes of convex bodies—for example, Hugo Hadwiger proved in 1946 that it was true for all smooth convex bodies. The intuition seemed to be that the result was going to eventually be proved; it was just a matter of time.

The shock, I think more than a surprise, came when Jeff Kahn and Kalai proved in 1993 that for ${d}$ large enough the answer was not ${d+1}$. On the contrary, they showed that for ${d}$ large enough, we need at least

$\displaystyle c^{\sqrt d}$

pieces. Notice that this is a lower bound, and does not imply that ${c^{\sqrt d}}$ pieces are enough. Here ${c>1}$ is a fixed constant. I think it is fair to say that ${c^{\sqrt d}}$ is really different from ${d+1}$—no? Their paper is short and brilliant: you definitely should take a look at it.
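To get a feel for how different ${c^{\sqrt d}}$ is from ${d+1}$, here is a quick numerical sketch in Python (the constant ${c = 1.2}$ is illustrative only, not the constant from their paper):

```python
import math

def borsuk_lower_bound(d, c=1.2):
    """Illustrative Kahn-Kalai-style lower bound of the form c**sqrt(d).
    The constant c = 1.2 is hypothetical, chosen only to show the growth."""
    return c ** math.sqrt(d)

# Compare d + 1 with c**sqrt(d) as the dimension grows.
for d in (4, 100, 10_000, 1_000_000):
    print(d, d + 1, borsuk_lower_bound(d))
```

Already at ${d = 10^{6}}$ the exponential bound exceeds ${d+1}$ by more than seventy orders of magnitude.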

Surprises In Mathematics

There are many additional surprises in mathematics. I have listed a few that, I hope, you will find interesting. I tried to break them into “types.”

I also want to thank Subrahmanyam Kalyanasundaram for his tremendous help with this post.

We Guessed The Wrong Way

Sometimes in mathematics and theory, the conventional wisdom is wrong—we believe that X is true, but it is false. The evidence for X could have been wrong for many reasons: it was based on small or special cases, on false analogies, on a view that the world would be simpler if X were true, or on our lack of imagination.

The “we” is used to denote the community of all who work on mathematics and theory: it includes you and me. So this comment, and all later ones, are addressed to me as well as everyone else. I have guessed wrong many times, which has led me to work extremely hard on trying to prove a false statement. Perhaps everyone else is a better guesser than I am, but I think we all have our limits.

Here are two examples of this:

${\bullet }$Nondeterministic space closed under complement: Neil Immerman and Robert Szelepcsényi independently solved the famous LBA problem. They used a very clever counting method to prove that ${\mathsf{NLOG}}$ was closed under complement. This was, I believe, a pretty good example of a surprise. As I discussed earlier, most had guessed that ${\mathsf{NLOG}}$ would not be closed under complement. A great result.
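Their counting method can be rendered as a toy deterministic sketch. Done deterministically it misses the whole point—reachability is then trivial—but it shows the structure of inductive counting: compute the number of reachable vertices layer by layer, and use the final count to certify non-membership (the graph and function names here are mine, purely for illustration):

```python
def reachable_counts(adj, s):
    """Inductive counting: counts[i] = number of vertices within i steps
    of s.  In the real NLOG proof each membership test is re-certified
    nondeterministically using the previous count; here we simply
    recompute the reachable set deterministically."""
    n = len(adj)
    R = {s}
    counts = [len(R)]
    for _ in range(n - 1):
        R = R | {w for v in R for w in adj[v]}
        counts.append(len(R))
    return counts, R

# Once the count stops growing, R provably contains ALL reachable
# vertices, so "t is unreachable" is certified by t's absence from R.
adj = {0: [1], 1: [2], 2: [], 3: [0]}
counts, R = reachable_counts(adj, 0)
print(counts, 3 in R)
```

The punchline of the actual proof is that this certificate of non-reachability can be produced in nondeterministic logspace, which is what puts the complement of reachability into ${\mathsf{NLOG}}$.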

${\bullet }$A Vision Conjecture: In the current issue of the Bulletin of the American Mathematical Society, a conjecture due to Béla Julesz is discussed. He was an expert in human visual perception, and is famous for a conjecture about perception. After many human experiments, over the years, he concluded that if two images had the same first and second order statistics, then a person would find them indistinguishable.

Left is random and right is not: both have same statistics to third order

David Freedman and Persi Diaconis found a simple rule that generates pictures with the same first, second, and even third order statistics as a random picture. Yet, their pictures are easily distinguished by a person from a random picture. I quote Persi: When we told Julesz, he had a wonderful reaction. “Thank you. For twenty years Bell Labs has paid me to study the Julesz conjecture. Now they will pay me for another twenty years to understand why it is wrong.”

We Accepted a False Proof

Sometimes in mathematics and theory, a proof is given that is false—just wrong. We all are human, we all make mistakes; sometimes those mistakes can be believed for long periods of time.

Here are three examples of this:

${\bullet}$Four-Color Theorem: The Four-Color Theorem (4CT) dates back to 1852, when it was first proposed as a conjecture. Francis Guthrie was trying to color the map of counties in England and observed that four colors were enough. Consequently, he proposed the 4CT. In 1879, Alfred Kempe provided a “proof” for the 4CT. A year later, Peter Tait proposed another proof. Interestingly, both proofs stood for 11 years before they were shown to be wrong. Percy Heawood found the flaw in Kempe’s proof in 1890, and Julius Petersen showed that Tait’s proof was wrong a year later.

However, Kempe’s and Tait’s proofs, or attempts at a proof, were not fully futile. For instance, Heawood noticed that Kempe’s proof can be adapted into a correct proof of a “Five-Color Theorem”. There were several attempts at proving the 4CT before it was eventually proved in 1976. See this article by Robin Thomas for a historical perspective of the problem.

${\bullet }$Hilbert’s ${16^{th}}$ Problem: I will not state what Hilbert’s ${16^{th}}$ problem is—it will not affect this example. The key is that in 1923 Henri Dulac proved a pretty result about the number of limit cycles that a certain family of differential equations could have: he showed that the number is always finite. While the ${16^{th}}$ was not his goal, this result helped solve at least part of the Hilbert problem. Dulac’s result was considered a great theorem.

Almost sixty years later, in 1981, Yulij Ilyashenko found a fatal bug in Dulac’s proof. Seven years later he, and independently Jean Écalle, Jacques Martinet, Robert Moussu, and Jean-Pierre Ramis, found correct proofs showing that Dulac’s theorem was true, even if his proof was not.

${\bullet }$The Burnside Conjecture: William Burnside was a famous group theorist who proved many wonderful theorems. Perhaps his most famous is the ${p^{a}q^{b}}$ theorem that played a major role in starting the quest to classify all finite simple groups:

Theorem: A finite group of order ${p^{a}q^{b}}$ where ${p,q}$ are primes is solvable.

He also made a number of very influential conjectures. Perhaps the most famous was made in 1902, and became known as the Burnside Conjecture: If a group is finitely generated and periodic, then it is finite. Periodic means simply that for any element ${x}$ in the group, ${x^{n(x)} = 1}$ for some ${n(x)}$. This was eventually shown to be false in 1964.

However, a natural question arose immediately: if ${n(x)}$ is the same for all elements of the group, must the group then be finite? In order to attack this new problem, group theorists broke it up into many: ${B(m,n)}$ is the class of groups that are generated by ${m}$ elements in which every element satisfies ${x^{n} = 1}$. Sergei Adian and Pyotr Novikov proved in 1968, by a long and complex combinatorial argument, that ${B(m, n)}$ is infinite for odd ${n \ge 4381}$.
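As a tiny illustration of the uniform-exponent condition (of the easy finite side, not the hard infinite cases, which need huge odd ${n}$): every element of the symmetric group ${S_{3}}$, a group generated by two elements, satisfies ${x^{6} = 1}$. A sketch, representing permutations as tuples:

```python
from itertools import permutations

def compose(p, q):
    # Permutation composition: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    """Smallest k >= 1 with p**k = identity (every finite group is periodic)."""
    identity = tuple(range(len(p)))
    x, k = p, 1
    while x != identity:
        x, k = compose(p, x), k + 1
    return k

# S3 is generated by two elements, and every element order divides 6,
# so x**6 = 1 uniformly -- the condition defining bounded exponent.
orders = [order(p) for p in permutations(range(3))]
print(sorted(orders))
```

For small exponents the groups ${B(m,n)}$ really are finite (for example, exponent 2 forces the group to be abelian); the surprise was how badly this fails for large odd exponents.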

Another group theorist, John Britton, claimed an alternative proof in 1970. Unfortunately, Adian later discovered that Britton’s proof was wrong.

I once looked at Britton’s proof. It is a whole monograph of about ${300}$ pages, with many, many technical lemmas. Britton needed many constants in his proof, so rather than writing statements like “let ${v > 191\cdot j^{2}}$,” he would write “let ${v > c_{11}\cdot j^{2}}$,” where ${c_{11}}$ was a constant to be determined later on. Then, in the appendix he collected all the inequalities that he needed his constants to satisfy, and all he had to do was show that the inequalities were consistent. He did this: he went through the cumbersome task of checking their consistency, coming up with constants that satisfied all of them simultaneously.

The bug that Adian discovered was that Britton had made a simple mistake and written down one inequality incorrectly. Unfortunately, the resulting system was inconsistent, and the proof was unsalvageable.

We Misjudged a Problem

Sometimes in mathematics and theory, a problem is believed to be very hard. The problem is believed to be so hard that either no one works seriously on the problem, or people try to create whole new theories for attacking the problem. Then, an elementary proof is given—one that uses no new technology, one that could have been found many years before.

Here are two examples of this:

${\bullet }$Van der Waerden Conjecture: Van der Waerden made a conjecture about the permanent in 1926. He conjectured that

$\displaystyle \mathsf{perm}(A) \ge n!(\frac{1}{n})^{n}$

for any doubly stochastic matrix ${A}$. Further, equality holds only for the matrix that has all entries equal. Recall a doubly stochastic matrix is a non-negative matrix whose row and column sums all equal ${1}$.

This seems like a straightforward inequality. Somehow it stood for decades until it was solved independently by Georgy Egorychev and Dmitry Falikman, in 1979 and 1980. The surprise here is that this “notorious” problem was solved by fairly simple proofs. They were awarded the prestigious Fulkerson Prize for their work, even though the proofs were simple—their result was a breakthrough in the understanding of the permanent.
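For small ${n}$ the inequality is easy to check by brute force; here is a sketch computing the permanent straight from its definition and verifying the equality case at the all-equal matrix:

```python
from itertools import permutations
from math import factorial

def perm(A):
    """Permanent from its definition: sum over all permutations sigma
    of the product A[i][sigma(i)].  Exponential time, fine for small n."""
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        p = 1.0
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

n = 4
J = [[1.0 / n] * n for _ in range(n)]   # the all-equal doubly stochastic matrix
bound = factorial(n) / n ** n           # van der Waerden bound n!/n^n
print(perm(J), bound)                   # these should agree: the equality case
```

Any other doubly stochastic matrix, such as the identity (permanent ${1}$), sits strictly above the bound.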

${\bullet }$Approximation for Planar TSP: The classic problem of finding a shortest tour through points in the plane (Euclidean TSP) is well known to be NP-hard. What Sanjeev Arora did in 1996 was to find an approximation algorithm that had eluded everyone for years. His algorithm’s running time was near-linear and its approximation ratio ${1+\epsilon}$ for any given ${\epsilon > 0}$. Joseph Mitchell found a similar result at almost the same time—I think there should be a whole post on why problems stay open for years, and then are solved by two researchers independently. Another day.

László Lovász told me that these algorithms could have been discovered years before—he thought one of the “amazing” things about them was precisely that they did not use any new tools. They are very clever, and both use important insight(s) into the structure of an optimal tour. Yet, he said, they could have been found before, way before.

We Misjudge an Approach to a Problem

Sometimes in mathematics and theory, someone outlines an approach to solve an open problem, which the rest of the community does not believe is viable. This mistake can lead to an interesting surprise when their method does actually work.

Here are two examples of this:

${\bullet }$Hilbert’s ${10^{th}}$: The famous Tenth asks for a decision procedure that can tell whether or not a Diophantine equation has an integer solution. Such an equation is of the form:

$\displaystyle F(x_{1},\dots,x_{n}) = 0$

where ${F}$ is an integer polynomial. The combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam, and Julia Robinson yielded a proof in 1970 that this problem is undecidable. The last step was taken by Matiyasevich.
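One half of the picture is easy: the problem is semi-decidable, since we can search ever-larger boxes of integer tuples and will find a solution if one exists. Undecidability says precisely that no computable bound on this search exists. A sketch (the function and the Pell-style example are mine, purely for illustration):

```python
from itertools import product

def search_solution(F, nvars, max_bound):
    """Semi-decision sketch for F(x1, ..., xn) = 0 over the integers:
    try every tuple in the box [-b, b]^n for b = 0, 1, 2, ...
    Finds a solution if one lies within max_bound; the undecidability
    result means no computable max_bound works in general."""
    for b in range(max_bound + 1):
        for xs in product(range(-b, b + 1), repeat=nvars):
            if F(*xs) == 0:
                return xs
    return None

# A Pell-style equation x^2 - 2y^2 = 1 has integer solutions...
sol = search_solution(lambda x, y: x * x - 2 * y * y - 1, 2, 3)
print(sol)
# ...while x^2 + 1 = 0 has none, so the search simply gives up:
print(search_solution(lambda x: x * x + 1, 1, 10))
```

The asymmetry on display—solvable instances eventually announce themselves, unsolvable ones never do—is exactly the asymmetry of the Halting problem.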

This result would probably have surprised the great Hilbert—he likely thought that there was such a procedure. So this result could be in the “we guessed wrong section,” but I included it here for a reason.

Julia Robinson showed in 1950 that proving that a certain number theory predicate could be encoded into a Diophantine equation would suffice to solve the entire problem. What Matiyasevich did was to show that this could be done. Before he did this the eminent logician Georg Kreisel had remarked: “Well, that’s not the way it’s gonna go.” (The quote of Kreisel is part of a film about Hilbert’s Tenth.)

Kreisel’s insight was that finding such a predicate would not only prove that the Tenth was undecidable, but would also prove that it was exactly the same as the Halting problem. He thought that perhaps this was too much to expect. He was wrong.

${\bullet }$Four-Color Theorem (Again): The 4CT was proven by Kenneth Appel and Wolfgang Haken in 1976. The proof is famous, since it not only solved a long-standing conjecture dating from 1852, but made extensive use of computation.

They point out that their result simply carries out an attack on the problem that was already known; the method is essentially due to George Birkhoff. Others had used the method to get partial results of the form: the four color theorem is true for all planar maps of size ${\le X}$. In 1922 Philip Franklin proved it for 25 regions, and later Walter Stromquist for 52—this is not an exhaustive list of all the partial results.

Haken once told me that one reason he felt that they succeeded was that they believed that the current attack could be made to work. That there was no need for an entire new approach. Rather hard work and some computer magic could resolve the problem. Perhaps the surprise is that the attack actually worked.

We Never Thought About That Problem

Sometimes in mathematics and theory, a result is found that is entirely new. Well, perhaps not completely new, but not from a well studied area. The result can be very surprising precisely because no one had previously dreamed that a result of this type could even be proved.

Here are two examples of this:

${\bullet }$Collapse Of The Polynomial Time Hierarchy: Dick Karp and I proved that if SAT has polynomial size circuits, then the polynomial time hierarchy collapses to the second level. The proof is really simple—yet it is one of those basic results about computation. It’s one of my own results, so there is the risk that my opinion is biased, but I think most would agree with this assessment.

I always felt that one of the key reasons we found this result is simple: we were probably the first who ever thought about the problem. Previously, all circuit results went the other way: if there was a good algorithm, then there was a good circuit. Indeed, in general having a linear size circuit says nothing about the algorithmic complexity of a problem: a problem can have a linear size circuit for all ${n}$ and still be undecidable—for example, any undecidable set of unary strings has trivial circuits, since for each input length the answer is a single fixed bit.

One of the things that I always tell my students is this: beware of simple counter-examples. Often there can be a gem hiding inside a trivial counter-example.

${\bullet }$Voting Theorems: Kenneth Arrow is a Nobel Prize winner in economics. One of his great results is the proof that any reasonable voting system that tries to select among ${3}$ or more outcomes is flawed. The proof of this result is not too hard—see this—but I believe the brilliant insight was the idea that one could prove anything at all about voting systems. That seems to me to be a great surprise.
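Arrow’s theorem concerns all ranked voting rules, but the underlying obstruction is already visible in Condorcet’s classic three-voter cycle, where pairwise majorities are collectively irrational. A small sketch using the standard textbook profile:

```python
# Each voter ranks candidates best-to-worst (the classic Condorcet profile).
profile = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(profile, x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(ballot.index(x) < ballot.index(y) for ballot in profile)
    return wins > len(profile) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A,
# so "let the majority decide" yields no coherent group ranking.
print(majority_prefers(profile, "A", "B"),
      majority_prefers(profile, "B", "C"),
      majority_prefers(profile, "C", "A"))
```

Arrow’s achievement was to turn this kind of local pathology into a precise impossibility theorem covering every rule satisfying a few mild axioms.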

We Never Connected Those Areas

Sometimes in mathematics and theory, the connection of two disparate areas can be quite surprising.

Here are two examples of this:

${\bullet }$Prime Numbers and Complex Analysis: Johann Dirichlet’s theorem is one of the first uses of complex analysis to prove a theorem about primes. The theorem states that every arithmetic progression

$\displaystyle a, a+b, a+2b, a+3b, \dots$

contains an infinite number of primes, as long as ${a}$ and ${b}$ are coprime. Dirichlet introduced certain complex functions that he cleverly connected to the number of primes in a given progression. Then, he proved his theorem by showing that his functions were non-zero at ${1}$, which yields infinitely many primes in the progression.
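The theorem asserts infinitude, which no computation can verify, but it is easy to watch the primes appear in a given progression; a small sketch with trial-division primality (fine at this scale), for the progression ${3, 7, 11, 15, \dots}$:

```python
def is_prime(n):
    """Trial division -- adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def primes_in_progression(a, b, count):
    """First `count` primes among a, a+b, a+2b, ...  Dirichlet's theorem
    guarantees this loop terminates whenever gcd(a, b) = 1."""
    out, n = [], a
    while len(out) < count:
        if is_prime(n):
            out.append(n)
        n += b
    return out

print(primes_in_progression(3, 4, 5))  # primes congruent to 3 mod 4
```

Note that the coprimality condition is essential: the progression ${2, 6, 10, \dots}$ contains only the single prime ${2}$.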

${\bullet }$Distributed Computing and Topology: I will end with a connection that I know little about, but still feel that it is safe to say that it is surprising. That is the connection between distributed computing and algebraic topology. Maurice Herlihy and Sergio Rajsbaum have a primer that should help explain the connection better than I can.

Open Problems

What is your favorite surprise? What is your favorite type of surprise? Are there other types of surprises? I would love to hear your thoughts.

Finally, could there be major surprises in complexity theory in the future?

1. September 27, 2009 4:46 pm

Gödel’s Theorem is an example of both “we guessed the wrong way”—most people thought that (first-order) arithmetic is complete, since Dedekind showed that (second-order) arithmetic is categorical—and of “we accepted a false proof”—at the time most people familiar with Ackermann’s work thought that his consistency proof would work out. The Church-Turing Theorem about the undecidability of logic is another example. It seems only von Neumann had doubts about the decidability of logic (or the incompleteness of arithmetic and about Ackermann’s consistency proof, for that matter).

September 27, 2009 5:59 pm

Absolutely correct. Hilbert’s whole program was destroyed by this one theorem. A great example.

September 27, 2009 7:48 pm

At the “Barriers in Complexity” workshop one month ago, a distinguished panel all agreed that Barrington’s theorem was the most surprising result in complexity.

September 27, 2009 7:50 pm

At the Barriers workshop in Princeton last month, the panel on Boolean Complexity was asked about the most surprising results in the field. Several panelists agreed on one you did not mention: the surprise of Barrington’s Theorem that constant-width (width 5) branching programs of polynomial size can compute all of NC^1. NL=coNL was also discussed as a major surprise, though a lot of the surprise happened several months earlier than the papers of Immerman and Szelepcsényi, with the result of Jenner, Kirsig, and Lange showing a collapse of the nondeterministic space hierarchy at the second level. Both of these are of the guessing wrong variety.

Another surprise can come when we interpret our negative results about techniques to apply more broadly than they really do. This led to a surprise that the panel did not mention: Valiant pointed out that monotone and non-monotone circuit complexity of the n+1 “slice functions” into which one can decompose any function using only O(n^2) overhead are the same. The conclusion seemed to be that this meant that monotone and non-monotone circuit complexity would be essentially the same except for this overhead. Razborov’s lower bounds for monotone circuits were a surprise because they could be shown at all and because they didn’t also yield general circuit lower bounds. (They applied to functions, like clique and matching, with small minterms and big maxterms, that are the antithesis of slice functions. The matching example proved that the conceptual error was fundamental.) In your taxonomy this is either misjudging a problem or an approach, depending on whether the problem is monotone or non-monotone circuit lower bounds.

All three of these surprises are from the mid-1980s. I can’t think of many questions since then where the conventional wisdom was proved wrong. Many of the more recent major results, such as PRIMES is in P and SL=L, have only confirmed that wisdom. The PCP Theorem in 1992 was a surprise of a very different sort. Five years earlier it would have been hardly conceivable, let alone something to guess about.

September 27, 2009 9:13 pm

Paul, there is an even tighter connection between monotone and non-monotone that I pointed out earlier. But your point is well made.

I also agree with Barrington. I definitely guessed wrong. We worked hard on lower bounds, and never thought for a minute about an upper bound.

Here is an earlier, better result: see the monotone section.

• September 22, 2013 7:10 am

“Valiant pointed out that monotone and non-monotone circuit complexity of the n+1 “slice functions” into which one can decompose any function using only O(n^2) overhead are the same. The conclusion seemed to be that this meant that monotone and non-monotone circuit complexity would be essentially the same except for this overhead.”

I know that proving exp lower bound for monotone circuits for slice functions means P is not NP. Does the writer also say that poly upper bound for monotone circuits for slice functions means P=NP?

September 22, 2013 2:16 pm

J,

We can do better on relating monotone and non-monotone. See the old post here at . There we show the overhead is almost nothing. So non-linear for monotone would be enough for the same on non-monotone.

Dick

• September 22, 2013 4:08 pm

I think you missed the link. Also regarding “……so non-linear for monotone would be enough for same on non-monotone.” But we already have some bounds by Razborov. I think I am missing the point. The missing link if provided would help.

• September 24, 2013 5:30 pm

the importance of monotone/boolean slice functions seems not to be widely appreciated. imho in the taxonomy there is both a misjudging of the problem and a misjudging of an approach here. or rather, a lack of judging the approach. the problem is P=?NP and suspect it does not require as advanced math as is currently being explored (eg GCT etc), believe monotone circuit/extremal set theory will eventually prove sufficient. the misjudging of the approach is on slice functions. despite their central importance there has been very little research on the subject since their inception.

it is not so much that they were judged, and found “probably insufficient”, it seems more that they were never given much attention at all. my personal idea to explain this is that Razborov seemed never to do any research on slice functions, and he seemed to give up on monotone functions as being sufficient to prove P vs NP (or lose interest), and being basically an “alpha male” of the area, his lead (or in a sense, abandoning it) was followed. more on the idea of further exploration of slice functions here, a NP vs P/Poly attack outline… have proposed it as a polymath project… hope to hear from anyone on the subj…

• September 24, 2013 9:23 pm

Lokam has worked on slice functions. All these problems are equally hard or easy. There is no evidence either way. If P is not NP, probably GCT is insufficient. Why go to P vs. NP? Just look at integer multiplication. Try showing integer multiplication takes n^{1+eps}, or just show an n^3 lower bound for the permanent versus determinant problem.

5. September 27, 2009 8:10 pm

In 1946, John von Neumann wrote a long letter to Norbert Wiener, analyzing the fundamental physical limits to microscopy (this was back in the days when mathematicians were hardware builders too). It was obvious to von Neumann and Wiener … and everybody else of their generation … that higher-resolution microscopy required shorter-wavelength (hence higher-energy) probes. Not until the 1990s was it appreciated that this approach runs into intractable problems relating to radiation damage attendant to those same short wavelengths.

Fortunately, in 1973 the “flaming arrow” of magnetic resonance imaging arrived … and it came as a *big* surprise to many researchers … “Are they allowed to do that?”

Von Neumann and Wiener would have grasped its guiding principle, within seconds, from a one-sentence (and relatively non-technical) explanation: “By exploiting nuclear-spin magnetic resonance, radio-frequency photons having wavelengths of tens of meters can encode information about structure on length-scales of millimeters.”

The point is, it took twenty-seven years to conceive that one-sentence insight.

Why so long? Arguably it was because the foundations of microscopy were poorly understood. So even though plenty of smart people like von Neumann and Wiener (and Richard Feynman too) worked on atomic-resolution microscopy—so that there was no shortage of creativity or technical ability among people thinking about the problem—their generation needed a better understanding of the mathematical and physical foundations of microscopy (in quantum information theory, basically); it was this understanding that required several decades to evolve.

Conversely, once the key insights were in place, the discipline of magnetic resonance research expanded explosively, creating what Dirac defined as a Golden Era, namely, an era in which “ordinary people can make extraordinary contributions.”

Assuming that history repeats itself (not exactly, but more-or-less), then perhaps we may expect that the most surprising aspect of the (presumed) eventual resolution of P=NP will not be the surprising ingenuity of its proof, but rather, a surprising insight into the guiding principles of the problem … insights that help in evading technical barriers that now seem intractable.

If we are *very* lucky, this might conceivably be a one-sentence, non-technical insight.

And if we are *extraordinarily* lucky, these insights might initiate a Dirac-style Golden Era in complexity theory. Meaning, job creation!

After all, it has only been thirty-seven years since Karp’s article. History suggests that’s not much time.

—————-

@inproceedings{vonNeumann:46, Author = {J. Von Neumann}, Booktitle = {Proceedings of the Norbert Wiener Centenary Congress, 1994}, Editor = {V. Mandrekar and P. R. Masani}, Pages = {506–512}, Publisher = {American Mathematical Society, Providence, {RI}}, Series = {Proceedings of Symposia in Applied Mathematics}, Title = {Letter to {N}orbert {W}iener from {J}ohn von {N}eumann}, Volume = 52, Year = 1997}

• September 28, 2009 12:56 am

Meaning, job creation!

Or the very opposite…

• September 28, 2009 7:39 am

Kevembuangga, that passage you quoted (from Stephen Cook’s essay The P versus NP problem) asserts “[If P=NP then] the problem of finding intelligible proofs would be reduced to finding a recognition algorithm for intelligible proofs. Similar remarks apply to diverse creative human endeavors, such as designing airplane wings, creating physical theories, or even composing music.”

Can we not reasonably ask ourselves whether that day has already arrived? Nowadays, few aeronautical engineers would design a wing without the help of a CFD code, few condensed matter physicists would design an experiment without the help of an in-depth quantum simulation, and few composers (anymore) choose to work without the help of a synthesizer.

As for once-human disciplines like grandmaster chess, it has become infeasible to play at the highest levels without computer help … by far the strongest chess-playing entities on the planet are human-machine centaurs (as they are called).

Metaphorically speaking, huge chunks of the once-mysterious “forest of NP” have fallen in recent decades to the algorithmic “chainsaws of P”. Furthermore, the rate of P-resource clear-cutting of the problem-solving landscape—and the nature-observing landscape too—is accelerating, with no obvious limits presently in view.

It’s exciting and challenging to imagine what the resulting clear-cut cognitive landscape is going to look like. Deciding what place human creativity and initiative will have in this landscape is a big challenge, and a big responsibility too.

Perhaps mathematicians, scientists, and engineers are already, to a pretty considerable extent, in the same situation as King Arthur, who in Monty Python and the Holy Grail receives from the naughty French knights the thought-provoking message “Grail? We’ve already got one, you see? It’s very nice!”

If so, then perhaps a Dirac-style Golden Era is beginning?

6. September 28, 2009 2:02 am

Luca Trevisan’s paper on Extractors and Pseudorandom generators was definitely a very interesting (and surprising) connection in pseudorandomness.

7. September 28, 2009 4:55 am

My comment is related to the complexity of the coloring algorithms used in the proofs of the 4CT and planar graphs. Since the proof of the theorem, efficient algorithms have been found for 4-coloring maps requiring only O(n^2) time, where n is the number of vertices. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic time algorithm, improving on a quartic algorithm based on Appel and Haken’s proof (Robertson et al. 1996). I have proposed a four coloring algorithm for maximal planar graphs requiring O(n) time (see “A Unified Spiral Chain Coloring Algorithm for Planar Graphs”, 2007, http://arxiv.org/abs/0710.2066 ). Furthermore, no efficient algorithm is known for three-colorability of planar graphs, but when a planar graph has no triangle we know that its chromatic number is at most 3, and an O(n) coloring algorithm is known. Similarly, when a planar graph is odd-cycle-free it can be efficiently two-colored. Then my question is this:
What makes the three-colorability of planar graphs inefficient?

September 28, 2009 12:33 pm

John Sidles: The problems in NP that have been felled by the “chainsaw of P” as you call it are interesting but I do not think they indicate that we are on the way to proving P=NP. Both linear programming and primality testing were mentioned in Ladner’s paper in which he proved that if P != NP then there are problems of intermediate complexity (that is, problems that are in NP, not in P, but not NP-complete either). So these specific problems have been targeted almost since the beginning. Moreover, there are still at least two major problems in NP that are not known to be in P: factoring and graph isomorphism. Although graph isomorphism seems to be doable efficiently in practice (viz. McKay’s “nauty” program), lots of people (literally millions) are betting seriously (literally billions of dollars) that factoring is not.

Furthermore, none of the above problems, nor the problems recently shown to be in P by Valiant’s surprising holographic algorithms, seem to have the “universal” flavor that NP-complete problems do.

On the other hand, if someone proved P=NP intersect coNP, *that* would be a serious step towards P=NP. Regardless of which way the conventional wisdom says these problems will go, I think even more-ingrained conventional wisdom thinks that we’re a long way from resolving any of them. (Although, in the spirit of this blog: maybe the more ingrained the conventional wisdom is, the more reason you have to doubt it?)

10. September 28, 2009 1:16 pm

I’ve maintained for a long time now that there’s a good chance P = NP, for a variety of reasons. Good to hear that there’s someone out there who thinks the same way I do.

http://markpneyer.com/images/NewPEqNPPlate.jpg

September 28, 2009 1:25 pm

Wow. Great picture.

11. September 28, 2009 1:19 pm

something under “we never connected those areas”: Andrew Wiles proving FLT by applying Gerhard Frey’s suggestion that the corresponding FLT elliptical curve would violate the Taniyama-Shimura conjecture which stated that every elliptic curve was modular and can be associated with a unique modular form. Who would have thought that the proof for FLT was a bridge between elliptical curves and modular forms? It wouldn’t have obviously fit in the margin as Fermat originally thought.

September 28, 2009 1:30 pm

Yes the Wiles proof was possible because of this connection. I also think that the connection served two purposes. First, Wiles could bring new tools to bear on the problem other than algebraic number theory, which while powerful had not been able to solve the whole problem.

Second, the connection showed that if FLT even had one failure, then a major conjecture of number theory would be wrong. I think, as a non-expert, that people felt that perhaps FLT could have isolated solutions. However, once it was realized that even one solution would destroy the beautiful conjecture of Taniyama-Shimura, then I think Wiles was more sure that FLT must be true.

If we could show that P=NP would destroy some other well-believed conjecture, that would be interesting evidence to me.

12. September 28, 2009 2:52 pm

Just a heads up… the ‘skimlinks’ javascript is breaking all the links on the page (open in new tab doesn’t work). Extremely annoying.

September 29, 2009 7:35 am

Sorry. I have no control over WordPress.com. Does this happen elsewhere?

September 29, 2009 12:58 am

Dear Prof. Lipton, not sure if you’d remember me.

I follow each of your blog posts; they have refreshing content, both in terms of potential future ideas and glimpses into fascinating history.

A few more surprises came to my mind.

Aperiodic tiling (for example, Penrose tiling)

The Michelson–Morley experiment, which established the non-existence of the aether; and then the understanding that light has the same speed in all inertial frames.

Perhaps the biggest theory to date in science is Quantum theory itself.

September 29, 2009 12:59 am

I meant – perhaps the biggest surprise in science was the quantum theory itself.

14. September 29, 2009 9:59 am

Arnab’s suggestion of aperiodic tiling is indeed a wonderful example of an “Are they allowed to do that?” surprise — definitely the physics community was astounded by x-ray diffraction patterns having five-fold symmetry.

What were some early mathematical surprises? The discovery of irrational numbers by the Pythagoreans came as a huge surprise. Gauss’s Theorema Egregium (the discovery that geometry can be defined intrinsically) was unlooked-for. More recently, KAM theory arguably qualifies.

These surprises were of the class “Wow!” or even “WTF!?” … rather than “conventional opinion is mistaken.”

In physics, the previously mentioned Michelson–Morley experiment and the discovery of parity violation both fall into the “WTF!? Oh Wow!” class.

In biology it’s harder to find examples … except for the “Oh Wow!” impact of Darwin’s theory of evolution itself. In biomedicine, the idea that many disorders (prion diseases for sure, Alzheimer’s?, osteoarthritis?) may be disorders of molecular conformation (as contrasted with a genetic or microbial origin) may prove to be an “Oh Wow!” surprise in coming years.

The history of engineering supplies dozens of “Oh Wow!” surprises. For example, control theory itself came as an “Oh Wow!” surprise … so much so that Norbert Wiener wrote a novel about it (The Tempter). An ongoing surprise in electrical engineering—sustained over many decades—is the amazingly durable power of Moore scaling: “As we make computer circuits smaller, they become faster, cheaper, more reliable, and use less power.”

What are the possibilities of an “Oh Wow!” surprise in complexity theory? There are plenty, no doubt … but if we could foresee them, then they wouldn’t be surprises. New ways of classifying problem complexity, perhaps?

Even if P=NP is not solved for a long time to come, the steady expansion of the class of problems that are either provably in P, or heuristically in P, is already an “Oh Wow!” phenomenon having no obvious end in sight.

• September 29, 2009 2:17 pm

Here is my favourite “Oh Wow” surprise in complexity theory (of course in case of P=NP). The famous Levinthal’s paradox asks: “How can a folding protein choose so quickly, among so many possible foldings, the one with minimum energy?” Hence the algorithm used by nature is in P. On the other hand, Crescenzi et al. and, independently, Berger and Leighton have shown that the protein folding problem, even in the two-dimensional H-P (hydrophobic-hydrophilic) model, is NP-complete.
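The exhaustive-search side of Levinthal’s argument is easy to see concretely. Below is a toy sketch (my own illustration, not from the comment; the sequence and the energy convention are arbitrary choices) that enumerates every self-avoiding fold of a short chain in the 2D H-P lattice model and finds the minimum-energy conformation; the conformation count already runs into the thousands at ten residues, and it grows exponentially from there.

```python
# Toy 2D H-P lattice model: a protein is a string over {H, P}; a fold is a
# self-avoiding walk on the square lattice; energy = -(number of H-H contacts
# between residues that are lattice-adjacent but not chain-adjacent).
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def folds(n):
    """Enumerate all self-avoiding walks with n sites, starting at the origin."""
    def extend(walk, occupied):
        if len(walk) == n:
            yield list(walk)
            return
        x, y = walk[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in occupied:
                walk.append(nxt)
                occupied.add(nxt)
                yield from extend(walk, occupied)
                walk.pop()
                occupied.remove(nxt)
    yield from extend([(0, 0)], {(0, 0)})

def energy(seq, walk):
    """Negated count of H-H contacts between non-consecutive residues."""
    pos = {p: i for i, p in enumerate(walk)}
    contacts = 0
    for i, p in enumerate(walk):
        if seq[i] != 'H':
            continue
        for dx, dy in MOVES:
            j = pos.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and seq[j] == 'H':
                contacts += 1  # counted once, from the smaller index
    return -contacts

seq = "HPHPPHHPHH"  # a hypothetical short H-P sequence
all_folds = list(folds(len(seq)))
best = min(energy(seq, w) for w in all_folds)
print(len(all_folds), best)  # thousands of conformations for just 10 residues
```

Exhaustive enumeration like this is hopeless at real protein lengths, which is exactly the tension the paradox points at: nature folds in milliseconds, yet the worst case of the model is NP-complete.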

15. September 29, 2009 11:42 am

We can summarize the preceding post, and provide one answer to Dick’s question “Are there other types of surprises?”, by saying: there are two kinds of surprises in mathematics (and in science and engineering too). The first kind is “Hmmm … who would have thought that would turn out to be the right answer?”, and the second kind is “Oh wow … who would have thought that would turn out to be the right question?”

Most of Dick’s examples seem to belong in the first category of mathematical surprises, but his example of Arrow’s voting theorems falls definitely in the second.

How hard is it to construct a list of surprising mathematical predictions? Constructing a list of (possibly wrong) predictions regarding the first kind of surprise is trivially in complexity class P … simply flip a coin for every conjecture in the literature! Conceiving even an incomplete prediction as to the second kind of mathematical surprise is very much harder … seemingly there is no way to construct such predictions … unless P=NP of course.

16. September 29, 2009 3:59 pm

Another surprise:
Shannon’s theory – who would have thought that a random code would do the job?
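As a hedged illustration of why randomness suffices here (a toy Monte Carlo sketch of my own; the blocklength, rate, and noise level are arbitrary choices): pick a codebook completely at random, decode by minimum Hamming distance, and the block error rate on a binary symmetric channel is already small, provided the rate is comfortably below capacity.

```python
import random

random.seed(1)

def bsc(word, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in word]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Random code: 2^k codewords of length n, chosen uniformly at random.
k, n, p = 4, 24, 0.05  # rate 1/6, well below capacity 1 - H(0.05) ~ 0.71
codebook = [[random.randint(0, 1) for _ in range(n)] for _ in range(2 ** k)]

def decode(received):
    """Minimum-distance decoding (maximum-likelihood for p < 1/2)."""
    return min(range(len(codebook)), key=lambda i: hamming(codebook[i], received))

trials, errors = 2000, 0
for _ in range(trials):
    msg = random.randrange(2 ** k)
    if decode(bsc(codebook[msg], p)) != msg:
        errors += 1
print(errors / trials)  # block error rate stays small despite the random code
```

The point is Shannon’s: the code has no structure whatsoever, yet decoding still works, because independently chosen codewords are far apart in Hamming distance with high probability.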

• November 27, 2012 11:35 pm

Only “a man of infinite courage”, as Hamming so memorably put it in his lecture!

September 29, 2009 4:43 pm

In your “We Misjudged a Problem” section, you could perhaps also include the recent solution of the finite field Kakeya problem, surveyed here: http://www.eccc.uni-trier.de/report/2009/077/ . The problem had been open for a while, but in the end was solved using only the quite well-known “polynomial method”.

September 30, 2009 6:20 am

Taking into account that the most (only?) interesting systems in nature (atoms, cells, brains, societies) are parallel systems, and that computer engineering trends point to parallelism (could it be otherwise?), I’ve recently been surprised, first, by the high complexity of parallelism (in a wide sense that includes concurrency theory and distributed/parallel computing), which ranges from NC to undecidable (see, for instance, these two papers about the complexity of one of the prominent concurrency formalisms, Petri nets: “Decidability Issues for Petri Nets” by J. Esparza and M. Nielsen, and “Decidability and Complexity of Petri Net Problems – An Introduction” by J. Esparza), and second, by the lack of results on what the consequences of P=NP would be for these parallel issues.

Is this question so obvious that nobody finds it worth writing about?

Little can be found on the web about this, and after a quick search of my mind I could only come up with the following conclusions, which I do not know to be right or wrong:
– the P versus NC question is independent of the P versus NP question unless the isomorphism conjecture is true;
– even if P=NP and P=NC, the complexity of parallel problems in the worst case will remain high, since many parallelism problems are beyond NP (PSPACE-complete, with no need to appeal to undecidability).

September 30, 2009 8:13 am

Hi Everyone,

My Contribution:

SIMPLE MAX CUT for the COMPLEMENT of BIPARTITE GRAPHS is NP-COMPLETE

Zelah

20. September 30, 2009 9:04 am

Another major mathematical surprise was Gurvits’s proof that deciding whether quantum entanglement is present is NP-hard (see arXiv:quant-ph/0603199v7 for a review)

Prior to this proof, many (most?) physicists had conceived the quantum entanglement of two systems as being a shared physical resource for which (plausibly) an efficient-to-compute, physically motivated entanglement measure would someday be discovered.

Gurvits’s proof eliminated this hope … which was as upsetting for some lines of QIT research as Gödel’s theorem was to logicians.

This is perhaps a case in which altering the starting question—from “What is the physical nature of entanglement?” to “What is the complexity of computing a witness to entanglement?”—was essential to proving a surprising result.

21. October 1, 2009 8:37 am

Cahit posts: “The famous Levinthal’s Paradox states that ‘How can a folding protein choose so quickly among so many possible foldings the one with minimum energy?’”

Recent advances in synthetic biology suggest that Levinthal’s Paradox may belong to the class of “paradoxes and mysteries that, surprisingly, turned out not to be particularly paradoxical or mysterious.”

In practice, if we directly simulate classical protein dynamics (using PTIME programs like Rosetta and Anton), then we discover that some peptide chains fold quickly to a minimum-energy configuration … and others don’t. And this reflects physiological reality; some proteins have (relatively) fixed structures … and others don’t.

In particular, the Shaw group’s millisecond-long simulations with Anton (which they showed at FOMMMS) reveal a world of conformational molecular biology that is immensely more dynamical than x-ray crystallographic studies have led folks to expect.

Thus, our intuitions that Nature (somehow) requires more-than-P resources to physically fold proteins, and that synthetic biologists (somehow) will require more-than-P resources to simulate protein folding, and even our intuitions that large-scale biological assemblies generically have a fixed structure, all seem to be misguided.

Moreover, a similar lack of paradox and mystery is evident in the quantum domain also, in the sense that there is emerging evidence that condensed matter quantum systems that are at or near thermal equilibrium (a class that includes proteins) can be accurately simulated using PTIME resources; the key mathematical point is that thermal processes act to quench the high-order quantum correlations that otherwise would be exponentially hard to simulate.

The result is that, at synthetic biology conferences and seminars (the ones that I attend anyway), it is simply taken for granted—by graduate students especially!— that of course the classical and quantum aspects of molecular biology can be simulated with PTIME resources.

And what of Levinthal’s Paradox? I have never heard it mentioned in a synthetic biology seminar.

This illustrates a major source of scientific surprise: the human love of paradox and mystery can be so strong, as to lead us to perceive paradox and mystery even when they are not present, and when this is pointed out, we feel surprised.

22. October 1, 2009 8:41 am

Dear Dick, I would like to mention some surprises in linear programming that are maybe related to the discussion: (1) Dantzig’s discovery of the simplex algorithm and its superb performance; (2) the Klee–Minty cube; (3) Khachiyan’s proof that LP is in P; (4) the G-L-S discoveries that the ellipsoid method puts many, many problems in P; (5) Karmarkar’s algorithm and the superb performance of interior point methods.

• October 2, 2009 7:42 pm

All these five developments in linear programming carried some surprises (by the way, G-L-S stands for Grötschel, Lovász, and Schrijver), and it can be interesting to analyze them compared to earlier common wisdom (and common sense). Let me give my impression regarding (5), which I remember when it happened.

After Khachiyan’s brilliant proof that LP is in P, there was a sort of tentative common wisdom that there is a trade-off between LP algorithms that are good in theory (the ellipsoid method) and those that are good in practice (the simplex algorithm). People were careful to explain that while Khachiyan’s result was important theoretically, it had no bearing whatsoever on practical linear programming. Karmarkar’s algorithm, which was different from both of these other algorithms, which was in P, and which was practically good, perhaps even better than the simplex algorithm, was truly amazing. (And it led to a major shift in theoretical and practical LP.)

23. October 1, 2009 1:25 pm

On Hilbert’s 10th problem: The MRDP (=DPRM) theorem, to the effect that any computable function can be defined by a polynomial equation (which implies that H10 is unsolvable), certainly did surprise the experts. What made people like Kreisel so sure that it couldn’t be true was not the much weaker statement that H10 is of the same degree of unsolvability as the halting problem, but rather the number-theoretic consequence of MRDP that any set of natural numbers definable by a polynomial equation is definable by one with a fixed number of unknowns (actually 9) and a fixed degree. Also, the result you attribute to Julia Robinson in 1950 is actually from a paper by her together with Hilary Putnam and me in 1962. This paper did make use of earlier work of Julia’s, as well as a result from my 1950 dissertation.

24. November 19, 2009 12:17 am

June 30, 2010 5:56 pm

Hello.

What if, in a general theory-of-everything kind of way, some singular human consciousness had used simple Yes (I should feel guilty) or No (I do not feel guilty) answers to answer every moral question that he could remember ever occurring in his life? In this way he could have become completely pure. He would have truly asked himself all the questions that could be asked of him. He would have emotionally evolved back to when he was a child.

What if he then assigned an ‘It doesn’t matter as life is only made to make mistakes’ label on every decision that he had analysed.

This would not make him God or the devil, but just very still and very exhausted. Anybody can do this, but just for the purpose of this experiment let’s say I have done this. Which I have.

There are no fears in me, and if I died today I could deal with that, because who can know me better than myself? Neither God nor the Devil. I am consciously judging myself on ‘their’ level.

To make this work, despite my many faults, take ME as the ONLY universal constant. In this sense I have killed God and the Devil external to myself. The only place that they could exist is if I chose to believe they are internal.

This is obviously more a matter for a shrink than a mathematician, but that would only be the case depending on what you believed the base operating system of the universe to be: math, physics, morals, or some other concept.

As long as I agreed to do NOTHING, to never move or think a thought, humanity would have something to peg all understanding on. Each person could send a moral choice and I would simply send back only one philosophy: ‘LIFE IS ONLY FOR MAKING MISTAKES’.

People, for the purpose of this experiment, could suspend their belief system, knowing they could go back to it at any time. It would give them an opportunity to unburden themselves to someone pure. A new Pandora’s box. Once everyone had finished I would simply format my drive and always leave adequate space for each individual to send any more thoughts that they thought were impure. People would then eventually end up with clear minds and have to be judged only on their physical efforts, either good or bad. It would get rid of a lot of maybes, which would speed lives along.

If we then assume that there is a finite amount (or that, at some point in the future, given a lot of hard work, there will be a finite amount) of topics that can be conceived of, then we can realise that there will come a time when we, as a people, will NOT have to work again.

Once we reach that point we will only have the option of doing the things we love or doing the things we hate as society will be completely catered for in every possible scenario. People will find their true path in life which will make them infinitely more happy, forever.

In this system there is no point in accounting for time in any context.

If we consider this to be The Grand Unified Theory then we can set the parameters as we please.

This will be our common goals for humanity. As before everyone would have their say. This could be a computer database that was completely updated in any given moment when a unique individual thought of a new idea / direction that may or may not have been thought before.

All that would be required is that every person on the planet have a mobile phone or access to the web and a self organising weighting algorithm biased on an initial set of parameters that someone has set to push the operation in a random direction.

As I’m speaking first I say we concentrate on GRAINE.

Genetics – Robotics – Artificial Intelligence – Nanotechnology and Zero Point Energy.

I have chosen these as I think the subjects will cross breed information(word of mouth first) at the present day optimum rate to get us to our finishing point, complete and finished mastery of all possible information.

Surely mastery of information in this way will lead us to the bottom of a fractal??? What if one end of the universe’s fractal was me??? What could we start to build with that concept???

As parameters, we can assume that we live only in 3 dimensions. We can place this box around The Milky Way galaxy as this gives us plenty of scope for all kinds of discoveries.

In this new system we can make time obsolete, as it only makes us contemplate things that cannot be solved because, up to now, no one has been willing to stand around for long enough. It has been holding us back.

All watches should be banned so that we find a natural rhythm with each other, those in our immediate area and then further afield.

An old clock watch in this system should only be used when you think of an event that you need to go to. It is a compass, a modern-day direction of thought.

A digital clock can be adapted to let you know where you are in relation to this new milky way boxed system.(x,y,z).

With these two types of clocks used in combination we can safely start taking small steps around the box by using the digital to see exactly where you are and then using the old watch to direct you as your thoughts come to you.

We can even build and assign as many atomic clocks as are needed to individually track stars, each and every star in the Milky Way galaxy.

I suppose a good idea would be to assume that I was inside the centre of the super-massive black hole at the centre of the galaxy. That would stop us from being so Earth-centric.

We could even go as far as to say that we are each an individual star and that we project all our energies onto the super-massive black hole at the centre of the galaxy.

You can assume that I have stabilized the black hole into a non-rotating perfect cube: 6 outputs/faces in which we all see ‘the universe and earth, friends’ etc. This acts like a black hole mirror finish. Once you look, it is very hard to look away.

The 6 face cube should make the maths easier to run as you could construct the inner of the black hole with solid beams of condensed light(1 unit long) arranged into right angled triangles with set squares in for strength.

Some people would naturally say that the universe is essentially unknowable, as the more things we ‘discover’, the more things there are to combine with and therefore talk about. This is extremely fractal in both directions. There can be no true answers because there is no grounding point: nothing for the people who are interested to start building their own personal concepts, theories and designs on.

Is it possible that, with just one man becoming conscious of a highly improbable possibility, all of the universe’s fractals might collapse into one wave function that would answer all of humanity’s questions in a day? Could this be possible?

Answers to why we are here? What the point of life really is et al?

Is it possible that the insignificant possibility could become an escalating one that would at some point reach 100%???

Could it be that at that point the math would figure itself out so naturally that we would barely need to think about it? We would instantly understand quantum theory and all.

Can anybody run the numbers on that probability?

26. March 19, 2011 9:17 pm

I’m not sure what other people think about this one, but to me Maier’s theorem on primes in short intervals was, and still is, kind of surprising.