How to keep the brain from running faster than light?
Saul Perlmutter, Brian Schmidt, and Adam Riess won the Nobel Prize in Physics last week, for discovering that the expansion of the Universe is accelerating. Perlmutter led one team, and Schmidt and Riess led another; in 1998 the two teams measured mutually corroborating small positive values for the cosmological constant, called $\Lambda$. That is, they measured $\Lambda > 0$ if General Relativity is correct, while it is not even clear that $\Lambda$ should be constant. What they did measure directly were the light curves and spectra of Type Ia supernovae, which showed them to be fainter, hence more distant, for their recession velocity than many astronomers had expected.
Today Dick and I want to talk about reverses in scientific expectations, and what they say about the kind of world we live in.
One consequence of $\Lambda > 0$ is that while things gravitate together, space gravitates apart. Ever since the expansion of space between galaxies was discovered by Edwin Hubble in 1929, the Big Bang Theory in its various updates seemed to provide enough explanation without such extra help—while measurements available before 1998 were saying $\Lambda = 0$ to dozens of decimal places in the natural units of physics. Indeed, as Brian Greene frankly states on page 130 of his book The Hidden Reality,
When these astronomers began their work, neither group was focused on measuring the cosmological constant. Instead, the teams had set their sights on measuring … the rate at which the expansion of space is slowing. (our bold)
This CNN story noted that “their discovery did not fit any existing theory, so it had mind-blowing implications…”
Although the measured value is tiny in natural units, there are other senses in which it is large. It implies that about 73% of the mass-energy in the Universe is contributed by “empty space.” Again quoting CNN:
This stuff wasn’t predicted by any physics theory … yet there is more of it than anything else in the universe.
That isn’t quite true. There were some predictions of $\Lambda > 0$, but they came from Chicken Little-type reasoning that the sky isn’t falling—which in relativity means that spacetime is flat—and that this was somehow good for the existence of people. Plus it fixed a problem of the Universe appearing to be younger than its oldest stars. But they were up against fundamental reasons for $\Lambda = 0$.
Albert Einstein introduced $\Lambda$ into the field equations of general relativity for two fundamental reasons. One is that the equations allowed it to be there. For a very rough analogy, the integral of a function like $2x$ is not just $x^2$ but rather $x^2 + C$, where $C$ can be any constant. It may be simplest to think that $C$ would be zero, but it could be anything. Usually one wishes to evaluate a definite integral, and then $C$ will cancel itself out—it doesn’t affect anything. The cosmological $\Lambda$ doesn’t affect other parts of relativity theory, so it could come-and-go as it pleases, which was a reason for thinking it would go and stay gone.
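To spell out the cancellation in the analogy (our worked illustration, using the same antiderivative):

$$\int_a^b 2x\,dx \;=\; \bigl[x^2 + C\bigr]_a^b \;=\; (b^2 + C) - (a^2 + C) \;=\; b^2 - a^2,$$

so whatever value $C$ takes, it drops out of any definite answer.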
The second is that Einstein before 1929 believed in a static cosmos, and hoped that $\Lambda$ would balance the attractive effect of gravity just-so. The fundamental Copernican principle is to expect nothing special about the world or our place in it, and anything other than a static cosmos seemed too “special.” As Greene relates on pages 19–20, Einstein himself faulted the Big Bang’s original proposer, priest and physicist Georges Lemaître, for
…blindly following the mathematics and practicing the “abominable physics” of accepting an obviously absurd conclusion.
After Hubble’s proof of the expansion predicted by Lemaître and Alexander Friedmann and some others, Einstein termed his introduction of $\Lambda$ his “greatest blunder.” Lemaître himself became something of a computer scientist late in life, and it would have been nice for him to know this bit of self-reference: the statement renouncing $\Lambda$ may itself have been Einstein’s greatest blunder. But Einstein had lots of company. Again Greene, page 128:
…decades of previous observations and theoretical deductions [had] convinced the vast majority of researchers that the cosmological constant was 0…
Here is one possible such deduction: Fundamental reasoning in quantum field theory projects a value for the total absolute energy of fluctuations in space that is over $10^{120}$ times the value implied by any reasonable possibility for a non-zero $\Lambda$. Way more than a googol bigger, that is. The only clean way out might seem for it to be a case like

$$(1 - 1) + (2 - 2) + (3 - 3) + \cdots \;=\; 0,$$

where the terms diverge in absolute value but cancel when arranged just-so. If things don’t cancel cleanly, what fundamental reason can there be for such a huge cancellation taking place and yet leaving a residue more than a hundred decimal places down?
Physicists have also been abuzz since Sept. 22 over the claim of the OPERA experiment to have detected neutrinos moving faster than the speed of light, with six-sigma confidence based on their understanding of their measurement apparatus. Note, the humanly significant part is not the claimed result itself, but their claim to have six-sigma confidence, meaning about a one-in-a-billion chance that a deviation this large would arise from statistical fluctuation alone.
They are not the first to measure such a result. The MINOS experiment in 2007 got a faster-than-light reading, but with only 1.8-sigma confidence. By social convention one needs at least a 2-sigma deviation to claim something systematic. Thus the MINOS result was pronounced “statistically consistent with neutrinos traveling at lightspeed.” But the OPERA claim is deemed statistically inconsistent, so either it must go or Einstein must yield.
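For those who like to see the sigma-to-probability conversion spelled out, here is a small sketch (our illustration; it assumes Gaussian statistics and quotes one-sided tail probabilities, which is roughly how such confidence figures are usually read):

```python
# Convert sigma levels to Gaussian tail probabilities (one-sided).
# Illustrative only: real experiments must also account for systematic errors.
from scipy.stats import norm

for sigmas in (1.8, 2.0, 6.0):
    p = norm.sf(sigmas)  # probability of a fluctuation at least this many sigmas out
    print(f"{sigmas:4.1f} sigma -> about 1 chance in {1/p:,.0f}")
```

Six sigma comes out to about one chance in a billion, while 1.8 sigma is only about one in 28, which is why the MINOS reading was shrugged off.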
What would displace Einstein is not something going faster than light, but doing so and interacting in our slower-than-light world. Accelerating a massive object up to the speed of light would require unbounded energy, and since one cannot actually expend infinite energy, crossing it is deemed impossible. Another reason is that a super-$c$ traveler can return to its own past. This has prompted a new twist on a generic joke: “‘We don’t serve neutrinos,’ says the bartender. Three neutrinos enter a bar….”
Not surprisingly, the OPERA claim has met skepticism from many—including myself and friends. For instance, Lisa Randall doubts it even though her own work on extra dimensions provides a way it could nudge but not overturn Einstein’s theory. Now a paper by Andrew Cohen and Sheldon Glashow, the latter himself a Nobel laureate, gives evidence that faster-than-light neutrinos would have to “flame out” before they ever reached their claimed destination. This is explained and discussed here. Others have critiqued possible flaws in the OPERA team’s measuring apparatus or in its controls for external factors.
Such skepticism is directed toward others, and is vital to peer review. But self-skepticism of one’s own projects is even more vital to the doing of science. When trained on oneself it is a form of humility. Of course in natural-world science, the results of experiments enforce an empirical humility on theorists. But what happens in economics and social sciences in one-off situations, or in mathematics and theory—or in all sciences when indications are desired before all results are in?
Having chosen the title of this post, I web-searched it and found a thematic tweet by Hans Gerwitz of frog, who kindly gave me the source that had prompted it. This is about applying “science” to the ongoing economic crisis, and we note especially Paul Romer’s riff on “complexity science.”
In cases like the economy where controlled experiments are impossible, and even in mathematics and theory, what we are left to confront is a “calculus of expectations.” What kind of figurative manifold underlies such a calculus? Is it smooth as we might expect, or bumpy? We have posted several times about bumps even in the controlled environment of pure math.
I’ve had a case of partially-reversed expectations myself this calendar year. Today and tomorrow mark the five-year anniversary of the end of the world chess championship match between Vladimir Kramnik and Veselin Topalov, and of the 2006 Buffalo October Storm which knocked out power across the whole metro area.
The match was rocked by the Topalov team’s allegation that Kramnik was cheating by getting moves from a strong computer chess program during the games. That the program’s next version, running on ordinary PC hardware, beat Kramnik himself two months later, shockingly so, shows why such cheating is alas a big deal. Subsequent allegations, including ones against Topalov himself and at other top events, have turned on the same question:
How much agreement with a strong program in choice of moves is “normal” for a (human) player of a given strength?
I have been developing a probabilistic model of move-choice, not inherently limited to chess, by which to answer this question and evaluate statistical claims of cheating. After tens of millions of pages of data, at the rate of typically 6–8 hours per processor core just to analyze one game, my model has been maturing this year, as described in this AAAI 2011 paper and new submission. Before this, however, I got involved in another case that has gone to various courts.
As with OPERA, the nub is claimed confidence intervals—how many sigmas? My model estimates a probability for each possible move in any chess position, depending on fitted parameters representing the player’s strength and on deep computer analysis of all possible choices. Over a suite of positions this yields a projected mean number of agreements with any sequence of chosen moves, such as the ones favored by a given program. On the assumption that the probabilities for different positions are independent, this automatically also yields a projected standard deviation, simply from the theory of multinomial Bernoulli trials; a sketch of the projection appears after the question below. This assumption is not perfect since players have plans over several consecutive moves, but this is a sparse “nearest-neighbor” kind of dependence, which results I know from combinatorics and complexity lead me to regard as innocuous. The question, however, is:
Does the distribution of actual performances by non-cheating players fall according to these projections?
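To make the projection concrete, here is a minimal sketch under the independence assumption above (our illustration only; in the real model the per-position probabilities come from the fitted parameters, which are not shown):

```python
# Minimal sketch of the projection described above, assuming independence
# across positions. Each p[i] is the modeled probability that the player's
# move at position i matches the engine's first choice; matches[i] says
# whether it actually did. (Illustrative values only.)
from math import sqrt

def projected_z(p, matches):
    mean = sum(p)                               # projected number of agreements
    sd = sqrt(sum(q * (1 - q) for q in p))      # Bernoulli-trials standard deviation
    actual = sum(matches)
    return actual, mean, sd, (actual - mean) / sd

# Hypothetical example: 8 positions, player matched the engine 7 times.
p = [0.55, 0.40, 0.70, 0.35, 0.60, 0.50, 0.45, 0.65]
matches = [1, 1, 1, 0, 1, 1, 1, 1]
actual, mean, sd, z = projected_z(p, matches)
print(f"actual={actual}, projected mean={mean:.2f}, sigma={sd:.2f}, z={z:.2f}")
```

The distributional testing asks whether z-scores computed this way for non-cheating players really fall as the projection says.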
For various reasons besides the early stage of my work, the needed distributional testing was not done by the time the court case first needed answers. Exigencies of my “regular” research and this blog, and personal factors such as my chief student assistant being on leave at home in Asia and later the passing of my father, would push the tests into summer. Hence in March I wrote out long reasons why I expected any error to be “conservative.” These included the natural tendency of my fitting method to project a higher agreement mean when the tested sequence is the computer’s first-choice moves, which guarded against false positives for this application.
Finally by July we settled the test-set generation procedure. Runs of 10,000 trials for each of various chess Elo rating levels, from long scripts produced by my student and e-mailed from Bangladesh, gave many results like this:
That is, my predicted negative-bias and fail-safe effects showed up in all categories except the 2-sigma-and-above ones that mattered. Frequencies there were higher than my fundamental reasoning had projected. The results weren’t terrible, and may simply mean that my model faces an exchange rate of about 2.3 projected sigmas to buy 2.0 “real” ones. But they are not what I expected.
An alternative estimator specifically targeting the first-move sequences eliminates the skew and reveals that the empirical data are simply a little “flatter” than my projections expect. Thus my model may be capturing (only) about 85% of the phenomenon, not perfect but good enough for now. And once the 1.15 factor is paid, the results bolster confidence in the model overall.
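One way to connect those two figures (our gloss on the numbers, not a derivation from the model):

$$\frac{2.3}{2.0} = 1.15, \qquad \frac{2.0}{2.3} \approx 0.87,$$

so the projected sigmas run about 15% hot, and discounting them by that factor is roughly what “capturing about 85%” means here.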
In Math/CS theory it is debatable whether implementations of algorithms and protocols, observations of nature, and/or heuristic computing experiments contribute true empirical tests for our results. In theory we have full control over the validity of our proved theorems, and supposedly we have full information about what we’re doing. Nevertheless, our field seems to have a nontrivial Surprise Constant, call it $S$. We have $S > 0$. How large is $S$?
That comes because theory, like science, is a human endeavor. Thus Einstein’s words to Lemaître about “blindly following the math” have double irony for us, since the math is what we follow. Our beliefs about unknown complexity facts must be guided by fundamental reasoning somewhere. But awareness of $S$ may keep our brains from running ahead too fast. On fundamental grounds, $S$ also denotes a constant rate of caution on answers that are demanded for timely policy reasons before the results are truly ready.
Do the discrepancy between theory and reality on $\Lambda$, and the super-$c$ numbers obtained on neutrinos, tell us more about the world or about ourselves?