# Empirical Humility

*How to keep the brain from running faster than light?*

Saul Perlmutter, Brian Schmidt, and Adam Riess won the Nobel Prize in Physics last week for discovering that the expansion of the Universe is accelerating. Perlmutter led one team, and Schmidt and Riess the other; in 1998 the two teams measured mutually corroborating small positive values for the cosmological constant, called $\Lambda$. That is, they measured $\Lambda > 0$ if General Relativity is correct, while it is not even clear that $\Lambda$ should be constant. What they did measure directly were the light curves and spectra of Type Ia supernovae, which showed greater recession velocity for their distance than many astronomers had expected.

Today Dick and I want to talk about reverses in scientific expectations, and what they say about the kind of world we live in.

One consequence of $\Lambda > 0$ is that while things gravitate together, space gravitates apart. Ever since the expansion of space between galaxies was discovered by Edwin Hubble in 1929, the Big Bang Theory in various updates seemed to provide enough explanation without such extra help—while measurements available before 1998 were saying $\Lambda = 0$ to dozens of decimal places in the natural units of physics. Indeed, as Brian Greene frankly states on page 130 of his book *The Hidden Reality*:

> When these astronomers began their work, neither group was focused on measuring the cosmological constant. Instead, the teams had set their sights on measuring … the rate at which the expansion of space is **slowing**. (our bold)

This CNN story noted that “their discovery did not fit any existing theory, so it had mind-blowing implications…”

Although the measured value is tiny in natural units, there are other senses in which it is large. It implies that about 73% of the mass-energy in the Universe is contributed by “empty space.” Again quoting CNN:

> This stuff wasn’t predicted by any physics theory … yet there is more of it than anything else in the universe.

That isn’t quite true. There were some predictions of $\Lambda > 0$, but they came from Chicken Little-type reasoning that the sky isn’t falling—which in relativity means that spacetime is *flat*—and that this was somehow good for the existence of people. Plus it fixed a problem of the Universe appearing to be younger than its oldest stars. But they were up against **fundamental** reasons for $\Lambda = 0$.

## Einstein and Lambda

Albert Einstein introduced $\Lambda$ into the field equations of general relativity for two fundamental reasons. One is that the equations allowed it to be there. For a very rough analogy, the integral of a function like $2x$ is not just $x^2$ but rather $x^2 + C$, where $C$ can be any constant. It may be simplest to think that $C$ would *cancel* to zero, but it could be anything. Usually one wishes to evaluate a definite integral, and then $C$ will cancel itself out—it doesn’t affect anything. The cosmological $\Lambda$ doesn’t affect other parts of relativity theory, so it could come and go as it pleases, which was a reason for thinking it would go and stay gone.
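To make the analogy concrete, here is the definite-integral case in which the constant $C$ cancels itself out, using the same $2x$ example:

```latex
\int_a^b 2x\,dx \;=\; \bigl(b^2 + C\bigr) - \bigl(a^2 + C\bigr) \;=\; b^2 - a^2 .
```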

The second is that Einstein before 1929 believed in a steady-state cosmos, and hoped that $\Lambda$ would balance the attractive effect of gravity just-so. The fundamental Copernican principle is to expect nothing special about the world or our place in it, and anything other than a static cosmos seemed too “special.” As Greene relates on pages 19–20, Einstein himself faulted the Big Bang’s original proposer, priest and physicist Georges Lemaître, for

> …blindly following the mathematics and practicing the “abominable physics” of accepting an obviously absurd conclusion.

After Hubble’s proof of the expansion predicted by Lemaître and Alexander Friedmann and some others, Einstein termed his introduction of $\Lambda$ his “greatest blunder.” Lemaître himself became something of a computer scientist late in life, and it would have been nice for him to know this bit of self-reference: the statement renouncing $\Lambda$ may itself have been Einstein’s greatest blunder. But Einstein had lots of company. Again Greene, page 128:

> …decades of previous observations and theoretical deductions [had] convinced the vast majority of researchers that the cosmological constant was 0…

Here is one possible such deduction: Fundamental reasoning in quantum field theory projects a value for the total absolute energy of fluctuations in space that is over $10^{120}$ times the value implied by any reasonable possibility for a non-zero $\Lambda$. Way more than a googol bigger, that is. The only *clean* way out might seem for it to be a case like

$$1 - 1 + 2 - 2 + 4 - 4 + 8 - 8 + \cdots = 0,$$

where the terms diverge in absolute value but cancel when arranged just-so. If things don’t cancel cleanly, what fundamental reason can there be for such a huge cancellation taking place and yet leaving a residue after the 120th decimal place?

## A Neutrino Eats, Shoots, and Arrives?

Physicists have also been abuzz since Sept. 22 over the claim of the OPERA experiment to have detected neutrinos moving faster than the speed of light, with **six-sigma** confidence based on their understanding of their measurement apparatus. Note that the humanly significant part is not the claimed result itself, but the claim of six-sigma confidence, meaning about a one-in-a-billion chance that the result is a mere statistical fluctuation.
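As a sanity check on the “one-in-a-billion” figure, the one-sided tail probability of a standard normal distribution at six sigma can be computed directly from the standard library. This is a generic statistical sketch, not anything from the OPERA analysis itself:

```python
from math import erfc, sqrt

def gaussian_tail(z):
    """One-sided tail probability P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# A six-sigma deviation corresponds to roughly a one-in-a-billion chance
# of arising from statistical fluctuation alone.
p = gaussian_tail(6)
print(f"P(Z > 6) = {p:.3e}")   # about 9.9e-10, i.e. ~1 in a billion
```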

They are not the first to measure such a result. The MINOS experiment in 2007 got a faster-than-light reading, but with only 1.8-sigma confidence. By social convention one needs at least a 2-sigma deviation to claim something systematic. Thus the MINOS result was pronounced “statistically consistent with neutrinos traveling at lightspeed.” But the OPERA claim is deemed statistically inconsistent, so either it must go or Einstein must yield.

What would displace Einstein is not something going faster than light, but doing so **and** interacting with our slower-than-light world. Attempting to *cross* the speed of light expends unbounded energy, and since one cannot actually expend infinite energy, it is deemed impossible. Another reason is that a super-$c$ traveler can return to its own past. This has prompted a new twist on a generic joke: “‘We don’t serve neutrinos,’ says the bartender. Three neutrinos enter a bar….”
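The unbounded-energy point can be illustrated numerically: the energy of a massive particle scales with the Lorentz factor $\gamma = 1/\sqrt{1 - v^2/c^2}$, which blows up as $v \to c$. A minimal sketch:

```python
from math import sqrt

def lorentz_gamma(beta):
    """Lorentz factor for speed v = beta * c (requires beta < 1)."""
    return 1.0 / sqrt(1.0 - beta * beta)

# The energy of a massive particle scales with gamma, which diverges as v -> c,
# so crossing the light barrier would require unbounded energy:
for beta in (0.9, 0.99, 0.999, 0.999999):
    print(f"v = {beta}c  ->  gamma = {lorentz_gamma(beta):10.1f}")
```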

## Skepticism and Humility

Not surprisingly, the OPERA claim has met with skepticism from many—including myself and friends. For instance, Lisa Randall doubts it even though her own work on extra dimensions provides a way it could nudge but not overturn Einstein’s theory. Now a paper by Andrew Cohen and Sheldon Glashow, the latter himself a Nobel laureate, gives evidence that faster-than-light neutrinos would have to “flame out” before they ever reached their claimed destination. This is explained and discussed here. Others have critiqued possible flaws in the OPERA team’s measuring apparatus or its controlling for external factors.

Such skepticism is directed toward others, and is vital to peer review. But self-skepticism of one’s own projects is even more vital to the *doing* of science. When trained on oneself it is a form of humility. Of course in natural-world science, the results of experiments enforce an **empirical humility** on theorists. But what happens in economics and social sciences in one-off situations, or in mathematics and theory—or in all sciences when indications are desired before all results are in?

Having chosen the title of this post, I web-searched it and found a thematic tweet by Hans Gerwitz of frog, who kindly gave me the source that had prompted it. This is about applying “science” to the ongoing economic crisis, and we note especially Paul Romer’s riff on “complexity science.”

In cases like the economy where controlled experiments are impossible, and even in mathematics and theory, what we are left to confront is a “calculus of expectations.” What kind of figurative manifold underlies such a calculus? Is it smooth as we might expect, or bumpy? We have posted several times about bumps even in the controlled environment of pure math.

## My Own Sigma Case

I’ve had a case of partially-reversed expectations myself this calendar year. Today and tomorrow mark the five-year anniversary of the end of the world chess championship match between Vladimir Kramnik and Veselin Topalov, and of the 2006 Buffalo October Storm, which knocked out power across the whole metro area.

The match was rocked by the Topalov team’s allegation that Kramnik was cheating by getting moves from a strong computer chess program during the games. That the program’s next version, on ordinary PC hardware, shockingly beat Kramnik himself two months later shows why such cheating is, alas, a big deal. Subsequent allegations, including ones against Topalov himself and at other top events, have turned on the same question:

> How much agreement with a strong program in choice of moves is “normal” for a (human) player of a given strength?

I have been developing a probabilistic model of move-choice, not inherently limited to chess, by which to answer this question and evaluate statistical claims of cheating. After tens of millions of pages of data, at the rate of typically 6–8 hours per processor core just to analyze one game, my model has been maturing this year, as described in this AAAI 2011 paper and new submission. Before this, however, I got involved in another case that has gone to various courts.

As with OPERA, the nub is claimed confidence intervals—how many sigmas? My model estimates a probability for each possible move in any chess position, depending on fitted parameters representing the player’s strength and deep computer analysis of *all* possible choices. Over a suite of positions this yields a projected mean number of agreements with any sequence of chosen moves, such as the ones favored by a given program. On the assumption that the probabilities for different positions are independent, this automatically also yields a projected standard deviation simply from the theory of multinomial Bernoulli trials. This assumption is not perfect since players have *plans* over several consecutive moves, but this is a sparse “nearest-neighbor” kind of dependence, which results I know in combinatorics and complexity lead me to regard as innocuous. The question, however, is:

> Does the distribution of actual performances by non-cheating players fall according to these projections?
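The projection step described above can be sketched in a few lines. The function name and the per-position probabilities below are purely illustrative, not taken from the actual model: the projected mean is the sum of the match probabilities, and the sigma follows from independent Bernoulli trials.

```python
from math import sqrt

def project_agreement(match_probs, actual_matches):
    """Project the mean and sigma of the number of agreements with an
    engine's choices, treating positions as independent Bernoulli trials,
    then z-score the actual number of matches.  match_probs[i] is the
    modeled probability that the player picks the engine's move in
    position i."""
    mu = sum(match_probs)
    sigma = sqrt(sum(p * (1.0 - p) for p in match_probs))
    z = (actual_matches - mu) / sigma
    return mu, sigma, z

# Hypothetical probabilities for a 40-move game: 20 easier positions
# where the player matches 55% of the time, then 20 harder ones at 40%.
probs = [0.55] * 20 + [0.40] * 20
mu, sigma, z = project_agreement(probs, actual_matches=31)
print(f"projected mean = {mu:.1f}, sigma = {sigma:.2f}, z = {z:+.2f}")
```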

For various reasons besides the early stage of my work, the needed distributional testing was not done by the time the court case first needed answers. Exigencies of my “regular” research and this blog, and personal factors such as my chief student assistant being on leave home in Asia and later the passing of my father, pushed the tests into summer. Hence in March I wrote out long reasons why I expected any error to be “conservative.” These included the natural tendency of my fitting method to project higher when $s$ is the sequence of the computer’s first-choice moves, which guarded against false positives for this application.

Finally by July we settled the test-set generation procedure. Runs of 10,000 trials for each of various chess Elo rating levels, from long scripts produced by my student and e-mailed from Bangladesh, gave many results like this:

| Z-score | -4.x | -3.x | -2.x | -1.x | -0.x | +0.x | +1.x | +2.x | +3.x | +4.x |
|---------|------|------|------|------|------|------|------|------|------|------|
| Targets | 0.3  | 13   | 214  | 1359 | 3413 | 3413 | 1359 | 214  | 13   | 0.3  |
| Actual  | 1    | 29   | 330  | 1585 | 3350 | 3105 | 1304 | 278  | 18   | 0    |
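The “Targets” row follows directly from the standard normal distribution: with 10,000 trials, the expected count in each one-sigma bucket is 10,000 times the Gaussian probability mass in that bucket. A quick check using only the standard library:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2)))

N = 10_000
# Expected counts per one-sigma bucket (the distribution is symmetric,
# so computing the positive side suffices):
for k in range(5):
    expected = N * (Phi(k + 1) - Phi(k))
    print(f"+{k}.x bucket: {expected:7.1f}")   # 3413.4, 1359.1, 214.0, 13.2, 0.3
```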

That is, my predicted negative-bias and fail-safe effects showed up in all categories *except the 2-sigma-and-above ones that mattered*. Frequencies there were higher than my fundamental reasoning had projected. The results weren’t terrible, and may simply mean that my model faces an exchange rate of about 2.3 projected sigmas to buy 2.0 “real” ones. But they are not what I expected.

An alternative estimator specifically targeting the first-move sequences eliminates the skew and reveals that the empirical data are simply a little “flatter” than my projections expect. Thus my model may be capturing (only) about 85% of the phenomenon, not perfect but good enough for now. And once the 1.15 factor is paid, the results bolster confidence in the model overall.

## A Mathematical “Surprise Constant”?

In Math/CS theory it is debatable whether implementations of algorithms and protocols, observations of nature, and/or heuristic computing experiments contribute true empirical tests for our results. In theory we have full control over the validity of our proved theorems, and supposedly we have full information about what we’re doing. Nevertheless, our field seems to have a nontrivial Surprise Constant, call it $S$. We have $S > 0$. How large is $S$?

That comes because theory, like science, is a human endeavor. Thus Einstein’s words to Lemaître about “blindly following the math” have double irony for us, since the math is what we follow. Our beliefs about unknown complexity facts must be guided by fundamental reasoning somewhere. But awareness of $S$ may keep our brains from running ahead too fast. On fundamental grounds, $S$ also denotes a constant rate of caution on answers that are demanded for timely policy reasons before the results are truly ready.

## Open Problems

Do the discrepancy between theory and reality on $\Lambda$, and the super-$c$ numbers obtained on neutrinos, tell us more about the world or about ourselves?

Do mathematical proofs have their own landscape whose inherent real bumpiness dominates the $S$ we perceive? Is this objective or subjective?


Ken,

In your submission you write

> José “Raoul” Capablanca (not a typo).

I cannot find any other spelling of Raúl as Raoul; are you sure that is not a typo?

The rating of 2936 at NY is impressive.

Alex

Once you have a program that detects cheating, isn’t the next step to write a program that analyzes a game-so-far and outputs the best move that doesn’t arouse suspicion?

Alex—I can’t find my old copy of Capa’s *Last Lectures* to verify, but Amazon’s versions all have “Raoul”. I’m pretty sure it’s a decades-old memory! [Added later: I agree Raúl is correct—and I am still amazed by the “2936” which “not a typo” referred to.]

Gary: choosing the first move that doesn’t arouse suspicion would look suspicious, no? Berry, berry suspeecious, I theenk… In seriousness, I’m working on making the Average Error (AE) statistic into a test; this requires “taming the tail” but is less susceptible to small changes and will catch cases of “dealing seconds” or thirds…

Just noticed in a book that 1.15 (closer to 1.151) is the conversion factor between nautical speeds in *knots* and miles-per-hour. So I can say my stats are in naval units.

How about assuming conjectures? They seem to be in the same realm as assuming sigma > 0. That is where we could run off in the wrong direction, and a lot of theory seems to be developed based on a host of conjectures.

So when will we know if the OPERA experiment is analogous to the Michelson-Morley experiment or the cold fusion proclamation?

Gender,

No matter what happens, this won’t be analogous to the cold fusion proclamations. This is a large number of careful experimentalists who have carefully gone over their results and then issued a paper saying “hey, something weird is going on here.” That’s very different from what happened with cold fusion, where no one was invited to look at it beforehand, and Pons and Fleischmann immediately started talking about all the wonderful applications it would have. The OPERA people have been much more restrained and much more careful. They’ve followed the right process here.

The OPERA team is redoing their math after someone pointed out that the atomic clock they likely transported between the endpoints of the neutrino path to synchronize their local clocks was not in an inertial frame, and they should have included some corrections for General Relativity in their calculations.

I seriously doubt neutrinos are going faster than the speed of light.

If neutrinos are the lightest possible particles, then the so-called “speed of light” should be renamed “speed of neutrinos”.

Thanks, fnord—is this the most particular version of what you are referring to?

Faster-than-Light Neutrino Puzzle Claimed Solved by Special Relativity (the paper’s abstract does say SR).

Note by the way that Sheldon Glashow’s paper only seems to show that neutrinos can’t be tachyonic themselves. This was sort of already known since the SN 1987A neutrinos arrived when they should have, a few hours before the light arrived. (This doesn’t involve any FTL. The neutrinos from a supernova are made in the core at the beginning of the supernova and then immediately exit. The light from a supernova needs to take the slow way out, so it gets a few hours behind.) If neutrinos traveled at the speed that a naive reading of the OPERA data suggests, then the SN 1987A neutrinos should have arrived years before.
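A back-of-envelope version of this argument, using assumed round numbers (OPERA’s reported fractional speed excess of roughly $2.5 \times 10^{-5}$, and a distance of about 168,000 light-years to SN 1987A; both figures are approximations, not from the comment above):

```python
# Back-of-envelope check of the SN 1987A argument.  Assumed numbers:
# OPERA's reported fractional excess (v - c)/c ~ 2.5e-5, and a distance
# to SN 1987A of roughly 168,000 light-years.
fractional_excess = 2.5e-5
distance_ly = 168_000

# Light takes distance_ly years to arrive; a neutrino moving faster by
# this fraction builds up a head start that grows linearly with distance.
lead_years = distance_ly * fractional_excess
print(f"neutrino lead over light: about {lead_years:.1f} years")
```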

Glashow’s point doesn’t rule out other possibilities. For example, there could be an intermediate tachyonic particle that is produced and then decays quickly into the neutrino. If so, it doesn’t get to add more than a tiny bit to the effective speed of the neutrinos.

But even given that, it still seems likely that the OPERA stuff is some sort of subtle mistake. The next possibility is interesting new physics that doesn’t actually let you move things faster than light.

(Disclaimer: Am a number theorist, not a physicist.)

I don’t see why this is all so surprising.

c is a theoretical constant; the speed of light is something we’ve measured.

Clearly there was always a chance that photons have a mass, however small it might be, and hence travel at only 99.999% of c.

It kind of messes up the meter and things, but once physicists have grown accustomed to the idea, they should quickly be able to fix any problems.

I agree with you. If the results of this experiment are confirmed, they will say more about the mass of the photon than about the validity of special relativity.

c being the maximum speed at which all energy, matter, and information can travel, the notion of information has close ties to that of mass. IMHO, many problems of complexity theory are as much questions of physics as of mathematics. In TCS the usual separation between these two sciences isn’t as relevant as usual. The logical independence and the physical impossibility of separating some complexity classes are maybe one and the same phenomenon.

The speed of light is a universal constant known to 9 decimal places which occurs in a plethora of physics formulae, many of which have absolutely nothing to do with photon propagation, and all of which give correct answers.

So there’s really no chance we’ve measured the speed of light incorrectly, due to some previously unnoticed rest mass of the photon.

I’m confident that once the OPERA people introduce the correct relativistic corrections into their distance and time measurements, their neutrinos will again be acting like neutrinos should act in the Standard Model.

I think you misunderstood Thomas’s comment. I’m no physicist, but I don’t think there is anything in relativity theory that speaks about light. Sure there is some universal constant c, which is the maximum speed attainable, but it could be the speed of light or the speed of bananas, relativity itself has no say in the matter. So the point is: can it just be that the speed of photons is actually *less* than c?

Expertise generally is a good thing, yet it commonly happens that expert practitioners become cognitively bound to their expertise.

In medicine for example, skilled cardiac surgeons, radiologists, and chemotherapists all have a hard time accepting that skilled intervention may not yield improved long-term outcomes.

In mathematics, proof specialists (like Hilbert) found it difficult to accept that some statements are undecidable.

In geometry, 19th century sailors found it easy to conceive of non-Euclidean geometry in the practical context of navigation and surveying, and yet “Boeotian” academic geometers struggled to find a mathematically natural abstraction of these ideas.

Similarly, we can take the general thrust of Ken’s post to be that this struggle continues today. What 21st century Boeotian ideas are holding us back? Everyone has their own favorite possibilities.

For example, in practical engineering computations, no-one uses the full complexity class P … we deploy a restricted subset of algorithms whose run-times are (we hope) *provably* in P. Similarly, in practical dynamical simulations, no-one uses the full, flat Hilbert space H … we use only restricted (usually non-flat) submanifolds of H.

Researchers (in many disciplines) are struggling to conceive of mathematically natural abstractions of these crude restrictions and approximations, with a view toward elevating them from *ad hoc* Boeotian assumptions to illuminating Platonic postulates.

The new *Theoretical Physics StackExchange* now joins *MathOverflow* and *TCS StackExchange* as terrifically fun places to watch these creative struggles (and even better, to participate in them).

Best … century … for research … *EVER*!

Very well analyzed, John.

It puts lots of things in proper perspective.