
How Joe Traub Beat the Street

August 31, 2015


An insight into the computation of financial information

[Photo of Joseph Traub: Columbia memorial source]

Joseph Traub passed away just a week ago, on August 24th. He is best known for his computer science leadership positions at CMU, Columbia, CSTB, the Journal of Complexity—they all start with “C.” CSTB is the Computer Science and Telecommunications Board of the National Academies of Sciences, Engineering, and Medicine. At each of these he was the head, and for all except Carnegie-Mellon he was the first head—the founder.

Today Ken and I wish to highlight one technical result by Traub and his co-workers that you may not know about.

I knew Joe for many years, and he will be missed. He had a sharp mind and a correspondingly sharp wit. There was a slight twinkle in his eye that would have called to mind a leprechaun had he been Irish rather than German—from a family that left after the Nazis seized his father’s bank in 1938, when he was six. He was a delight to interact with on almost any topic.

Joe created an entire area of theory: information-based complexity theory. To quote its website:

Information-based complexity (IBC) is the branch of computational complexity that studies problems for which the information is partial, contaminated, and priced. [Functions over the unit cube] must be replaced by … finite sets (by, for example, evaluating the functions at a finite number of points). Therefore, we have only partial information about the functions. Furthermore, the function values may be contaminated by round-off error. Finally, evaluating the functions can be expensive, and so computing these values has a price.

Traub was helped greatly by a brilliant colleague, Henryk Woźniakowski, who developed key ideas alongside him in the 1970s that were distilled in their 1980 monograph, A General Theory of Optimal Algorithms. A shorter source is Joe’s seminal paper, “An Introduction to Information-Based Complexity.” A statement on Woźniakowski’s Columbia webpage shows his sustaining interest in IBC:

I am mostly interested in computational complexity of continuous problems. For most continuous problems, we have only partial information. For problems defined on spaces of functions, this partial information is usually given by a finite number of function values at some points which we can choose. One of the central issues is to determine how many pieces of information, or function values, are needed to solve the computational problem to within a prescribed accuracy.

IBC Under Attack

I must admit that years ago I opened my Bulletin of the AMS, in 1992 to be exact, and was shocked to see an article titled: “Some Basic Information On Information-Based Complexity Theory” written by the eminent numerical analyst Beresford Parlett. The abstract got right to the point:

…Why then do most numerical analysts turn a cold shoulder to IBCT? Close analysis of two representative papers reveals a mixture of nice new observations, error bounds repackaged in new language, misdirected examples, and misleading theorems.

What is interesting about this paper is that the attack on IBCT (Parlett’s name for IBC) is on the idiosyncrasies of its framework. The claim is not that there are errors—technical errors—in the IBC papers, but rather that the model, the framework, is uninteresting and misleading. You should take a look at Parlett’s very readable paper to decide for yourself.

Traub of course disagreed and had a chance to fire back an answer. I believe that Parlett’s attack actually had little effect on IBC, except to make it better known in the mathematical community.

What was the tiff about? We can try to boil it down even more than Parlett’s article does.

Integrals and Algorithms

Integrals are fundamental to most of mathematics. And in many practical cases the integrals cannot be exactly computed. This leads to a huge area of research on how to approximate an integral within some {\epsilon>0} error. Even simple ones like

\displaystyle  \int_{a}^{b} f(x) dx

can be challenging for nasty functions {f(x)}. Worse still are multi-dimensional integrals

\displaystyle  \int \int \cdots \int_{S} f(x_{1},\dots,x_{d}) dx_{1}dx_{2}\dots dx_{d},

where {S} is a {d}-dimensional region. Speaking very roughly, let’s say {S = [0,1]^d} and what matters to {f(x)} is whether components {x_i} are “low” (toward {0}) or “high” (toward {1}) in each dimension. If the gradient of {f} in each dimension {i} behaves differently for each setting of the other components being “low” or “high,” then we might need {2^d} samples just to get a basic outline of the function’s behavior. The IBC website fingers this exponential issue as the “curse of dimensionality.”
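
To gauge the size of that number for the kind of dimension that shows up below (this is just back-of-the-envelope arithmetic on our part):

\displaystyle  2^{360} = \left(2^{10}\right)^{36} = \left(1.024 \times 10^{3}\right)^{36} \approx 2.3 \times 10^{108},

so even the crude low/high outline would demand vastly more function evaluations than could ever be carried out.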

Now as ordinary complexity theorists, our first instinct would be to define properties intrinsic to the function {f} and try to prove they cause high complexity for any algorithm. Making a continuous analogy to concepts in discrete Boolean complexity, drawing on papers like this by Noam Nisan and Mario Szegedy, we would try to tailor an effective measure of “sensitivity.” We would talk about functions {f} that resemble the {n}-ary parity function in respect of sensitivity but don’t have a simple known integral. Notions of {f} being “isotropic” could cut both ways—they could make the sensitivity pervasive but could enable a good global estimate of the integral.
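
For intuition, here is a toy example of our own (not one from the IBC literature): a continuous cousin of parity is

\displaystyle  f(x_{1},\dots,x_{d}) = \prod_{i=1}^{d} \cos(\pi x_{i}),

which changes sign whenever any single coordinate crosses {1/2}, yet its integral over the unit cube is exactly {0}. Pervasive sensitivity by itself need not make the integral hard to pin down, which is why the interesting hard cases must also lack a simple known integral.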

IBC, however, focuses on properties of algorithms and restrictions on the kind of inputs they are given. Parlett’s general objection is that doing so begs the question of a proper complexity theory and reverts to the standard—and hallowed enough—domain of ordinary numerical analysis of algorithms.

Ken and I find ourselves in the middle. Pure complexity theory as described by Parlett has run into barriers even more since his article. We wrote a pair of posts on bounds against algorithms that are “progressive.” The IBC website talks about settings for algorithms. This seems to us a fair compromise notion but it could use more definition.

One classic setting for approximating these high-dimensional integrals is the Monte-Carlo method. The common feature of this setting is that one samples points {x=x_{1}\dots x_{d}} uniformly at random from {S} and takes the average of the values of {f(x)}. For example, take the dimension {d} to be {360 = 30 \times 12}: the number of months in a 30-year collateralized mortgage obligation (CMO). In the worst case over standard smoothness classes, guaranteeing error {\epsilon=10^{-2}} deterministically requires on the order of {\epsilon^{-d} = 10^{720}} evaluations of the function—clearly a completely hopeless task. Monte Carlo escapes the dimension: its expected error after {n} samples scales like {n^{-1/2}} regardless of {d}. But that convergence is slow: each further digit of accuracy costs a hundredfold more samples. Financial firms on Wall Street that needed to estimate fair market values for CMOs were using Monte Carlo methods as best they could.
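
To make the sampling picture concrete, here is a minimal Monte-Carlo sketch in Python. It is our own toy illustration, not code from any of the papers: the integrand is an invented stand-in whose exact integral over the unit cube is {1}, so the printed error exhibits the slow {n^{-1/2}}-type decay directly.

import numpy as np

def mc_estimate(f, d, n, rng):
    """Plain Monte Carlo: average f over n uniform random points in [0,1]^d."""
    x = rng.random((n, d))
    return f(x).mean()

# Toy CMO-flavored integrand: coordinate j matters less and less as j grows,
# and its exact integral over the unit cube is 1, so the error is easy to read off.
def f(x):
    j = np.arange(1, x.shape[1] + 1)
    return np.prod(1.0 + (x - 0.5) / j, axis=1)

rng = np.random.default_rng(0)
d = 360                                   # months in a 30-year mortgage
for n in (10**2, 10**3, 10**4):
    err = abs(mc_estimate(f, d, n, rng) - 1.0)
    print(f"n = {n:>6}   |error| = {err:.5f}")   # shrinks roughly like n**-0.5

The {1/j} factors on the coordinates are there only to keep the variance bounded in dimension {360}; they also foreshadow the weighted-spaces idea discussed below.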

IBC Strikes Back

Enter Traub with his Columbia IBC research group in 1994. They—his student Spassimir Paskov in particular—discovered and verified that a non-random method would beat Monte-Carlo by many orders of magnitude. This was based on an old idea that uses low-discrepancy sequences. These are sequences {X} of points inside a region {S} that have good coverage of the space. For the unit cube {S} as above, one may specify a class {{\cal B}} of natural measurable subsets of {S}; then low discrepancy means that for every {B \in {\cal B}} the proportion of points in {X} that belong to {B} is close to the measure of {B}—and moreover this holds for every sufficiently long initial segment of {X}.
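
For a feel of what such a sequence looks like, here is a small sketch, again our own illustration rather than anything from the Columbia group: a Halton sequence, one of the oldest low-discrepancy constructions, builds coordinate {j} from the van der Corput radical inverse in the {j}-th prime base. Feeding it the same kind of smooth toy integrand as before typically gives a much smaller error than plain random sampling with the same number of points.

import numpy as np

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]     # one base per dimension (enough for d <= 8 here)

def radical_inverse(i, base):
    """Van der Corput: mirror the base-b digits of i across the radix point."""
    inv, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n, d):
    """First n points of the d-dimensional Halton sequence in [0,1)^d."""
    return np.array([[radical_inverse(i + 1, PRIMES[j]) for j in range(d)]
                     for i in range(n)])

# Same style of toy integrand as before; its exact integral is 1.
def f(x):
    j = np.arange(1, x.shape[1] + 1)
    return np.prod(1.0 + (x - 0.5) / j, axis=1)

d, n = 8, 4096
rng = np.random.default_rng(0)
print("QMC (Halton) error:", abs(f(halton(n, d)).mean() - 1.0))
print("MC  (random) error:", abs(f(rng.random((n, d))).mean() - 1.0))

Practical QMC implementations generally use more carefully constructed families, such as Sobol' sequences, since naive Halton points degrade as the dimension grows into the hundreds.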

In the worst case the resulting Quasi-Monte Carlo (QMC) methods would perform much worse than Monte-Carlo methods. However, the surprise was that for actual financial calculations they worked well—really well.

As with any change in basic methodology, the work and claims of Traub’s group were met with great initial skepticism by the financial firms. However, the performance was and remains so good that many financial calculations still use QMC. This raised a further beautiful problem that loops back to the initial mission of IBC:

Why do the discrepancy-based methods perform so well?

That is, what properties of the particular high-dimensional integrals that occur with CMOs and other applications make non-random methods work so well?

There are some ideas about why this is so for the actual financial calculations, but the basic question remains open. Wikipedia’s IBC article picks up the story in a way that connects to Parlett’s point, though neither he nor his article is mentioned:

These results are empirical; where does computational complexity come in? QMC is not a panacea for all high-dimensional integrals. What is special about financial derivatives?

It continues:

Here’s a possible explanation. The 360 dimensions in the CMO represent monthly future times. Due to the discounted value of money, variables representing times far in the future are less important than the variables representing nearby times. Thus the integrals are non-isotropic.

Working with Ian Sloan, Woźniakowski introduced a notion of weighted spaces that leverages the above observation. Their 1998 paper laid down conditions on the integrals and showed that under them the problem becomes tractable in the QMC setting, often with worst-case guarantees where Monte Carlo gives only average-case ones. How far this positive result can be extended—for which other classes of integrals does QMC beat MC—is a concrete form of the wonderful problem.
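
In rough symbols, and only as our paraphrase of the idea rather than the precise statement of their theorem: attach a weight {\gamma_j > 0} to coordinate {j} measuring how strongly the integrand is allowed to depend on it. Sloan and Woźniakowski showed that when the weights decay fast enough, in the simplest form when

\displaystyle  \sum_{j=1}^{\infty} \gamma_{j} < \infty,

the number of QMC points needed to reach error {\epsilon} can be bounded polynomially in {1/\epsilon} and independently of the dimension {d}, so the curse of dimensionality vanishes for such weighted classes. Discounting makes the later months of a CMO act like coordinates with small weights.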

Open Problems

Our condolences to the Traub family and to our colleagues at Columbia and Santa Fe. The Santa Fe Institute’s memorial includes a nice statement by our friend Joshua Grochow relating to workshops led by Joe and a joint paper on limits to knowledge in science.

Cancellation is a Pain

August 27, 2015


How to avoid the pain of estimating tough sums

[Photo of Andrew Granville: cricketing source]

Andrew Granville is a number theorist who has written—besides his own terrific research—some beautiful expository papers, especially on analytic number theory.

Today Ken and I wish to talk about his survey paper earlier this year on the size of gaps between consecutive primes.
Read more…

Cryptography And Quicksand

August 14, 2015


A basic question about cryptography that we pretend is not there.


Victor Shoup is one of the top experts in cryptography. He is well known for many things, including a soon-to-be-released book with Dan Boneh on, what else, cryptography, and his implementations of many of the basic functions of cryptography.

Today I want to talk about my recent visit to the Simons Institute in Berkeley where I heard Victor give a special lecture.
Read more…

Changing Definitions

August 10, 2015


How important is “backward compatibility” in math and CS?

[Photo of Henri Lebesgue: David Darling source]

Henri Lebesgue came the closest of anyone we know to changing the value of a mathematical quantity. Of course he did not do this—it was not like defining π to be 3. What he did was change the accepted definition of integral so that the integral from {a} to {b} of the characteristic function of the rational numbers became a definite {0}. It remains {0} even when integrating over all of {\mathbb{R}}.
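
The one-line reason, for readers who want it: the rationals form a countable set and hence have Lebesgue measure zero, so

\displaystyle  \int_{a}^{b} \chi_{\mathbb{Q}}(x)\, dx = \mu\left(\mathbb{Q} \cap [a,b]\right) = 0,

while the Riemann integral of {\chi_{\mathbb{Q}}} does not exist at all, since every upper sum is {b-a} and every lower sum is {0}.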

Today we talk about changing definitions in mathematics and computer programming and ask when it is important to give up continuity with past practice. Read more…

Four Weddings And A Puzzle

August 2, 2015


An unusual voting problem?

“Four Weddings” is a reality-based TV show that appears in America on the cable channel TLC. Yes, a TV show: not a researcher, not someone who has recently solved a long-standing open problem. Just a TV show.

Today I want to discuss a curious math puzzle that underlies this show.

The show raises an interesting puzzle about voting schemes:
Read more…

Playing Chess With the Devil

July 28, 2015


A new kind of ‘liar’ puzzle using Freestyle chess

[Oil portrait of Raymond Smullyan: by permission of Vierrae (Katerina Suvorova), source]

Raymond Smullyan is probably the world’s greatest expert on the logic of lying and the logic of chess. He is still writing books well into his 10th decade. Last year he published a new textbook, A Beginner’s Guide to Mathematical Logic, and in 2013 a new puzzle book named for Kurt Gödel. His 1979 book The Chess Mysteries of Sherlock Holmes introduced retrograde analysis—taking back moves from positions to tell how they could possibly have arisen—to a wide public.

Today Ken and I wish to talk about whether we can ever play perfect chess—or at least better chess than any one chess program—by combining output from multiple programs that sometimes might “lie.” Read more…

Alberto Apostolico, 1948–2015

July 22, 2015


Our condolences on the loss of a colleague

[Photo of Alberto Apostolico: cropped from TCS journal source]

Alberto Apostolico was a Professor in the Georgia Tech College of Computing. He passed away on Monday after a long battle with cancer.

Today Ken and I offer our condolences to his family and friends, and our appreciation for his beautiful work.
Read more…
