Math Is Like The Stock Market

August 18, 2011


It goes up, it goes down, it does not grow monotonically

Louis de Branges is an analyst, not a stock trader, but an expert in the theory of real- and complex-valued functions. He is best known for his proof of a long-standing open problem from complex analysis.

Today Ken and I want to talk not about any recent proof claim, but about the nature of them in general.

The growth of mathematical knowledge is not monotone. Sometimes mathematicians make errors and they are unfixable; sometimes they make errors and they are fixable; sometimes they make errors and they lead to whole new vistas. The growth and accumulation of knowledge is complex and chaotic—this has been the case in the past, and continues to be the case today. I cannot imagine it will not continue to be the case in the future.

One of the reasons for creating GLL is to examine and expose the human side of mathematics. It is done by people, by people who make errors, by people who discover errors, by people who fix errors, and by people who try their best to advance the field. I hope that GLL and other forums are places where we can discuss ideas in a positive manner. I do not think that attacks on researchers are appropriate, nor that they advance the free exchange of ideas.

I have already written a number of times on this very issue, but perhaps I can give a few more examples to make the point again. I love mathematics, I love to see new results, new insights, new attempts to solve hard problems. I hope you do too.

I think that failure must be an option (with apologies to Gene Kranz) if the field is to continue to grow and be as exciting as it has been for thousands of years. As an aside, Kranz apparently never actually said the exact words “Failure is not an option.” Recall that Kranz was the lead flight director in Houston for Apollo 13, the moon mission that suffered an explosion and almost lost the three astronauts who were aboard. One of the flight controllers who worked with Kranz on Apollo 13 was once asked:

“Weren’t there times when everybody, or at least a few people, just panicked?” His answer was, “No, when bad things happened, we just calmly laid out all the options, and failure was not one of them.”

Ken remembers his father telling him of IBM or AT&T having a program colloquially called the “Terrible Twenty”: they would fund 20 researchers freely with the attitude that if just one hit a breakthrough they would break even, and if two did, or three others at least produced workable products, they would show a nice profit. The 16 or 18 or 19 who failed were a necessary part of this equation, for if the company knew in advance what would hit big, they wouldn’t need the program. We cannot trace the name “Terrible Twenty” on Google, but the story shows those researchers were paid great respect.

Gauss

Even Carl Gauss could make claims that were backed only by partial proofs. The fundamental theorem of algebra is:

Theorem: Any non-constant polynomial over the complex numbers has at least one complex root.

Gauss is usually credited with the first correct proof in his 1799 dissertation: “New Proof of the Theorem That Every Algebraic Rational Integral Function in One Variable Can be Resolved into Real Factors of the First or the Second Degree.” While his claimed proof was much closer to a rigorous one than earlier attempts, it assumed some properties of curves that he did not prove.

Cauchy

Augustin-Louis Cauchy made a number of “errors.” He seemed to confuse various types of continuity and made other similar claims that were not correct. This should take nothing away from his sweeping impact on mathematics, but it is a reminder of how knowledge is created in mathematics. One interesting claim that he made is featured in a recent lovely article by Marek Jarnicki and Peter Pflug: “Directional Regularity vs. Joint Regularity.”

Cauchy claimed, in his 1821 book Cours d’analyse, that if {f(x,y)} is continuous in each variable, then it is continuous in both. That is, if {f(x,b)} and {f(a,y)} are continuous for any fixed {a} and {b}, then {f(x,y)} is continuous as a function of two variables. This is false. As Jarnicki and Pflug say, “every student nowadays knows this statement is not correct.” It took almost fifty years for counterexamples to be discovered.

The best example, perhaps, is due to Giuseppe Peano:
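
One standard version of such a counterexample (the exact formula Peano gave may differ; we state this one for concreteness) is

\displaystyle  f(x,y) = \frac{xy}{x^{2}+y^{2}} \ \ \mathrm{for}\ (x,y) \neq (0,0), \qquad f(0,0) = 0.

For each fixed {b} the section {x \mapsto f(x,b)} is continuous, and likewise {y \mapsto f(a,y)} for each fixed {a}; yet along the line {y = x} the value is {1/2} away from the origin, so {f} is not continuous at {(0,0)}.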

de Branges

The famous Bieberbach Conjecture was solved by de Branges in 1984. The class {\cal S} consists of the functions

\displaystyle  f(z) = z + \sum_{n=2}^{\infty} c_{n} z^{n}

that are regular and one-to-one inside the unit disk, {|z| < 1}. Ludwig Bieberbach conjectured in 1916 that

\displaystyle  |c_{n}| \le n, \; n\ge 2,

for these functions. The conjecture was striking: the global property of one-to-one somehow controlled the size of all the coefficients of the functions.

It took almost 70 years for de Branges to solve the full conjecture. Previously, the conjecture had been proved only for small values of {n}: the values proved, in order, were

\displaystyle  3, 4, 6, 8, 5.

Note, this is not a typo: the value {5} came only in 1972, even though {6} was proved in 1969. Even results for one value of {n} often had quite long and difficult proofs, and it must have seemed highly unlikely that a uniform proof handling all values of {n} at once was near. If de Branges had proved the conjecture for all {n \le 100}, that alone would probably have been a major achievement. But he proved it for all {n \ge 2} at once.

His proof was initially rejected by the mathematical community. Few mathematicians—none?—would even look at his claimed proof, which initially was long and actually part of a whole book he was writing—385 pages. Though he was a professional mathematician, a full professor at Purdue, he had claimed proofs before of famous conjectures, and all previous ones had turned out to be wrong. His track record was not promising.

He eventually visited the Steklov Institute of Mathematics in Leningrad for an extended period of time. A seminar there began to look at his manuscript and the proof in detail. The expected outcome was a fatal error: they expected to discover that the proof was useless and uncorrectable.

Wrong. Gradually it became clear that while there were mistakes, many claims were essentially correct. Often it was easy to substitute a correct argument for de Branges’ original one. Over time the “fog” began to lift, and the experts at Steklov began to see that de Branges’ ideas could be made into a valid proof. The Bieberbach Conjecture was true: de Branges had succeeded where all before him had failed. An immense achievement.

There is a beautiful article that details the “Last 100 days of the Bieberbach Conjecture.”

Since de Branges’ great achievement he has continued to work and to claim proofs of some other great open problems. For example, for a time he claimed a proof of the Riemann Hypothesis, based on a novel approach through the theory of spaces of entire functions. That proof is wrong, but some interesting mathematics did come out of the failed attempt: there is now a class of special entire functions called de Branges functions, and there is continuing work looking into their properties.

Wiles

Andrew Wiles’ fix (with help from Richard Taylor) to his proof of Fermat’s Last Theorem is a case of an earlier failed proof attempt actually proving vital. As he described it on the 1997 NOVA special “The Proof”:

In September [1994], I decided to go back and look one more time at the original structure of Flach and Kolyvagin to try and pinpoint exactly why it wasn’t working, try and formulate it precisely. One can never really do that in mathematics, but I just wanted to set my mind to rest that it really couldn’t be made to work. And I was sitting here at this desk. It was a Monday morning, September 19, and I was trying, convincing myself that it didn’t work, just seeing exactly what the problem was, when suddenly, totally unexpectedly, I had this incredible revelation. I realized what was holding me up was exactly what would resolve the problem I had had in my Iwasawa theory attempt three years earlier, was—It was the most—the most important moment of my working life. It was so indescribably beautiful; it was so simple and so elegant, and I just stared in disbelief for twenty minutes.

Then, during the day, I walked around the department. I’d keep coming back to my desk and looking to see if it was still there. It was still there. Almost what seemed to be stopping the method of Flach and Kolyvagin was exactly what would make horizontal Iwasawa theory [succeed]. My original approach to the problem from three years before would make exactly that work. So, out of the ashes seemed to rise the true answer to the problem. So, the first night, I went back and slept on it. I checked through it again the next morning, and by eleven o’clock, I was satisfied and I went down and told my wife, “I’ve got it. I think I’ve got it. I’ve found it.” And it was so unexpected, I think she thought I was talking about a children’s toy or something and said, “Got what?” And I said, “I’ve fixed my proof. I’ve got it.”

Cryptography

The area of cryptography is filled with many claims and errors. However, some of the mistakes have led to great new ideas. I believe that one of the issues in this area is how unintuitive the area is. Many of the simple rules that apply elsewhere in the design of algorithms fail completely when designing crypto protocols.

A simple example is the use of known algorithms as subroutines. In the design of data structures or algorithms, there is no problem with using another algorithm as a black box, one that can be called over and over. This fails completely in the crypto area: a major line of research is when and how known protocols can be used to build new ones. This is still an active research direction.
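
To make the contrast concrete, here is a minimal toy sketch in Python, purely for illustration and not any specific protocol from the literature: a one-time pad is perfectly secure for a single message, yet calling the same subroutine twice with the same key leaks the XOR of the two plaintexts.

import os

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # One-time pad: XOR the message with a key of the same length.
    assert len(key) == len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

m1 = b"attack at dawn!"
m2 = b"retreat at nine"
key = os.urandom(len(m1))    # perfectly fine for ONE message

c1 = otp_encrypt(key, m1)
c2 = otp_encrypt(key, m2)    # naive "black-box reuse" of the same subroutine

# An eavesdropper who sees both ciphertexts learns m1 XOR m2,
# even though each ciphertext alone reveals nothing about its message.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))

The point is not this particular toy, but that security properties, unlike correctness properties of ordinary subroutines, need not survive composition.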

Complexity Theory

Perhaps because the basic computational model is non-interactive and well understood, there have been fewer subtle errors in computational complexity. Rather than enumerate them, we’ll focus on one we think is important, involving probabilistic and oracle machines, which are less well understood (the “we” here is Ken and I).

Does {\mathsf{BPP}} have a time hierarchy? Same question for the quantum analogue, {\mathsf{BQP}}.

What we really mean is that we can define {\mathsf{BPTIME}[t(n)]} and {\mathsf{BQTIME}[t(n)]} for reasonable time functions {t(n)}. Then, given a condition for {t_1(n)} to be significantly but not terribly less than {t_2(n)}, such as {t_1(n)\log t_1(n) = o(t_2(n))}, which yields the Deterministic Time Hierarchy Theorem for Turing machines, can we show:

\displaystyle  \mathsf{BPTIME}[t_1(n)] \neq \mathsf{BPTIME}[t_2(n)] \quad\mbox{and/or}\quad \mathsf{BQTIME}[t_1(n)] \neq \mathsf{BQTIME}[t_2(n)]?
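
For concreteness, to pick one instance of our own, take {t_1(n) = n} and {t_2(n) = n^2}. Then {t_1(n)\log t_1(n) = o(t_2(n))}, so the deterministic hierarchy theorem gives {\mathsf{DTIME}[n] \subsetneq \mathsf{DTIME}[n^2]}; whether the analogous separation {\mathsf{BPTIME}[n] \neq \mathsf{BPTIME}[n^2]} holds is exactly the kind of question at issue here.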

Lance Fortnow and Mike Sipser thought they had given surprising negative evidence by constructing an oracle relative to which {\mathsf{BPP}} would equal {\mathsf{BPTIME}[O(n)]}, i.e., bounded-error probabilistic linear time. Between linear and polynomial time there would then be no hierarchy provable with standard “relativizable” techniques. However, problems in their argument led to a retraction: even with truth-table use of the oracle there were “subtle problems in the way we combine machines.” Note this is like the issue with cryptography and algorithms above.

I, Dick, thought about this problem a while ago, and thought I had a proof that there was a hierarchy. After sleeping on it, I discovered the fatal flaw; some problems are very slippery.

Nevertheless, Fortnow and Sipser’s work led to new challenges and some new results:

  • Christer Berg and Johan Håstad wrote a paper explaining the issues clearly.
  • Jin-Yi Cai, Ajay Nerurkar, and D. Sivakumar proved a nice hierarchy for probabilistic polynomial time above polynomial, a Buffalo product.
  • Per Ken’s memory, Rutger Verbeek thought he had solved Lance and Mike’s problem, but the complicated proof again had flaws. However, this led him and Robert Rettinger to a valid solution to the truth-table case, which is enough to indicate that getting a true hierarchy will be hard.
  • Lance and Rahul Santhanam, however, proved a tight hierarchy when the probabilistic machines have one or more bits of nonuniform advice, or when they can fail on a small fraction of instances. Dieter van Melkebeek and Konstantin Pervyshev extended the former result to almost any “semantic” machine model, so it holds for {\mathsf{BQTIME}} with advice as well.
  • Recently there has been interest in whether {\mathsf{BPP}} hierarchies are tied up with de-randomization issues, such as indicated by this StackExchange item with a nice answer by Ryan Williams.

Open Problems

In summary: mathematics does not move forward in a perfect monotone fashion. Mathematicians make mistakes; sometimes they are fixed and sometimes not. But one thing is certain: unless people try to prove great results, the great problems will never be resolved. I discussed this before in comparing math to mountain climbing.

22 Comments
  1. Anonymous
    August 18, 2011 1:21 pm

    I have a theory: Most researchers who are sloppy in their write-ups and are error-prone, are mostly cases of undiagnosed Adult Attention Deficit Disorder. Albert Einstein (who made FAR more errors than most would guess from the popular image of him) and Stan Ulam are great candidates.

    Maybe with the right neurotropic compounds (ritalin?) the sciences will have fewer “downs”.

  2. Javaid Aslam
    August 18, 2011 1:24 pm

    Dear Dick,
    I love your quote: “failure must be an option”.

  3. Colin Reid
    August 18, 2011 4:59 pm

    There have certainly been times earlier in our history where knowledge has suffered a significant reverse, such as when a great mathematician died (without properly explaining her/his work to others), an advanced culture collapsed or degenerated, or a library full of one-of-a-kind manuscripts was destroyed. But in the modern era, one would hope that while there can be isolated mistakes, human knowledge of mathematics as a whole never experiences ‘crashes’ like the stock market!

    Are there any cases of large-scale failures in recent years, where a whole fallacious theory has arisen (not just the odd false claim) and sucked in many mathematicians for years at a time?

    • August 24, 2011 7:37 pm

      Hello Colin. I don’t know how familiar you are with the history of recursion (i.e., computation) theory. Basically, the Entscheidungsproblem (Decision Problem; see http://en.wikipedia.org/wiki/Entscheidungsproblem ) was proven to be undecidable, which, coupled with Gödel’s incompleteness theorems, put the gravestone on an intense thirty-year effort to completely formalize the axioms of mathematics and thereby allow automated proofs.
      It is definitely worth studying how surprising those results were back then and how they affected mathematics. It is also my personal belief that versions of these problems with resource restrictions can help towards solving P?NP, although I haven’t researched this further and I don’t consider myself experienced enough to even make a well-defined conjecture on this.

  4. Chris Greinerson
    August 18, 2011 6:00 pm

    A fascinating post, Dick. Truly fascinating. Thanks.

    Examples, examples, examples – I simply cannot stress this enough. Work through plenty of examples before AND after you write a proof. You might very well discover a counter-example after you have completed your so-called “nail on the head” proof.

    Your proof is so “obvious” that there is NO need to work through examples? WRONG! Countless “proofs” have been disproven by counterexamples AFTER publication.

    Happens even to established researchers. Actually this happens more with the established guys than their younger colleagues, since some reviewers just give the proof a “quick reading” or a “quick glance” if the author is well known – then after a few years, a graduate student will pick up the paper and come up with a neat counterexample!

  5. August 19, 2011 7:46 am

    This is a bit off topic, but I want to pick up on one non-central point you made. Although Peano’s example may be the most economical, I would hesitate to call it the best. The reason is that it misleads generations of mathematics students into thinking that the way to find such examples is to come up with formulae that miraculously work.

    An example I therefore prefer is this. Define f(x,y) to be 1 if x=y and x and y are positive and -1 if x=y and x and y are negative. Also define f to be zero along the two axes. The points where f has not yet been defined belong to four regions of the plane. In each region it is easy to fill in the values continuously. For example, take the two bounding lines and for each r interpolate linearly between the two points at distance r from the origin.

    The idea underlying this construction is the same as the idea underlying Peano’s, but here it is clearer that most of the details are inessential. And it’s also clear how we think of the example: first we force a discontinuity somewhere, and then we do our best to define the rest of the function as continuously as possible.

    Something that mathematics lacks is a carefully worked out formal way of explaining why counterexamples have to exist without having to make a lot of arbitrary choices. (For example, if we want to construct some functions that converge pointwise to zero but do not converge uniformly, a quick picture on the blackboard would convince somebody even if they then found it a difficult and time-consuming process to convert that picture into a formula for the nth function.)

    • August 25, 2011 8:50 pm

      I like this comment very much; and I like the idea of developing better tools to prove the existence of counterexamples. Sometimes topology can be used to prove the existence of counterexamples; suppose we have a family F of functions; if they all had some property P one could use this property to construct a map from F to some other family G; but perhaps the existence of this map is obstructed for robust (i.e. topological) reasons. Therefore some element of F does not satisfy P.

      A (not perfect) “example” sort of in this spirit is as follows. When studying codimension 1 foliations of 3-manifolds, it would be very convenient to be able to control the geometry of the leaves in terms of the geometry of the ambient manifold. So one would like to be able to “flow” the foliation (e.g. by some parabolic PDE) to make the leaves more regular; some evidence that one might be able to do this is that one can certainly flow (at least for short time) a single surface by mean curvature. However such a flow would necessarily immediately make things real analytic, and one knows for topological reasons (e.g. by Haefliger) that some 3-manifolds (for example, S^3) do not have real analytic codimension 1 foliations; hence no such flow can exist, and candidate flows must fail for some reason or other.

  6. August 19, 2011 8:13 am

    “Math Is Like The Stock Market”. One way to confirm this statement is to look at the “Reads history for your arXiv-paper from the ADS Databases”, e.g., http://adsabs.harvard.edu/cgi-bin/nph-ref_history?refs=AR&bibcode=2005math……7127C

  7. Jim Blair
    August 19, 2011 8:20 am

    Following up on Gowers’s last post.

    According to Wikipedia:

    “Peano’s original formulation of the axioms used 1 instead of 0 as the “first” natural number.”

  8. August 19, 2011 9:28 am

    Imre Lakatos (http://rjlipton.wordpress.com/2011/06/06/types-of-papers/) noted that not every error is a fatal error. He also showed that mathematical work is not monotone: progress is possible only via proofs and refutations… But very often we see another approach: only one error is sufficient to reject a paper, even if there are useful ideas in the paper. A reader reads a paper up to the first error and stops reading as soon as he finds the error. So nobody will learn about the useful ideas that may be described after the error.

    • anon
      August 21, 2011 4:20 pm

      “A reader reads a paper up to the first error and stops reading as soon as he finds the error.”

      And this is true for many established readers/researchers.

  9. anon
    August 19, 2011 12:29 pm

    “terrible twenty” is exactly how angels, VCs, and VC firms invest money in start-ups.

  10. August 20, 2011 12:24 pm

    Coincidentally, there’s a thread on this topic at cstheory.SE: http://cstheory.stackexchange.com/questions/7753/long-lasting-errors-in-computer-science

  11. fan
    August 21, 2011 6:57 am

    Reminds me of this Onion article (contains 1 swear word).

  12. Anonymous
    August 22, 2011 7:53 am

    Dear Professor Lipton,

    I briefly make some points on the following:

    “I hope that GLL and other forums are places where we can discuss ideas in a positive manner. I do not think that attacks on researchers are appropriate, nor that they advance the free exchange of ideas.”

    In mathematics, it is generally agreed that scientific truth carries considerably more weight than personal opinions. However, when I interact with some other disciplines, for example philosophy and computer science, this becomes an issue.

    It is irritating to the extent that pointing out a serious error is perceived as offensive and politically incorrect, while on the other hand people almost never admit an error in public.

    I do not understand how exactly there is such a dramatic cultural difference between mathematics and other fields, but one reason seems obvious: in mathematics, if you make a serious error you just can’t get away with it, while in philosophy it is not that hard to bluff your way through, simply because nothing is definite in philosophy, and similarly in engineering you can usually hack around to make things “almost work”.

    For D and the like, I do have sympathy for them in that they are brave to work on difficult problems, but not for being reluctant to admit a failure.

  13. October 5, 2011 6:15 pm

    Behind the expensive suits and the talking heads and the constant reassurances there is a simple truth. The Stock Market is and has always been a sucker’s game!

    Consider the following:

    1. The S&P 500 is a big barrel of money. When the money gets to a certain level, the Psychopaths drain it off. In order to get it to a certain level, you have to buy the B.S. and turn your life savings over to them.

    2. Only a few insiders actually buy and sell stocks. These institutional investors and market makers bring the market down by selling stock into the market.

    3. The S&P 500 represents 75 cents of every dollar invested in the stock market. That means 500 company stocks to sell when they want the market to crash. Not that hard to accomplish when you think about it.

    4. The other branch of the Satanic Psychopaths catch all that falling money by buying Put Options against the S&P. As the S&P goes down, they are capturing all of that lost wealth.

    5. My Clients have made anywhere from 20-50% returns over the last week by also shorting the stock market. One client made $2400 profit from a $3500 investment. Can your stock broker do that? If so, why aren’t they doing it? Instead, they are standing idly by while you lose money!

Trackbacks

  1. Weekly Picks « Mathblogging.org — the Blog
  2. Can We Prove Better Independence Theorems? | Gödel's Lost Letter and P=NP
  3. Graduate Student Traps | Gödel's Lost Letter and P=NP
  4. Microsoft Closes SVC | Gödel's Lost Letter and P=NP
  5. Missing Mate in Ten | Gödel's Lost Letter and P=NP
