Can Nature play games against us?

Peter Higgs once recorded an autobiographical lecture with the immortal title “My Life as a Boson.” This refers to the Higgs boson, which was named for him even though we don’t know whether it exists. We may, however, learn what is formally called evidence of its existence later today (Tuesday) from officials at CERN. This depends on assessment of confidence intervals projected by two teams of experimenters, and how the data from CERN’s Large Hadron Collider (LHC) measures up to them.

Today I (Ken) wish to talk about statistics and social convention in “hard science” such as particle physics. Dick and I are curious whether the assumptions behind the confidence intervals can be violated on both sides: by humans owing to unexpected selection bias, and by Nature possibly acting like a cheating prover in an interactive protocol.

Higgs is a Professor Emeritus of the University of Edinburgh in Scotland. He was actually beaten into print in 1964 on the Higgs mechanism by Robert Brout and François Englert. Higgs had tried to publish, but his paper had been rejected as “of no obvious relevance to physics.” He was, however, the first to point out that the prevailing theory of the mechanism required the existence of a new elementary particle. Note that there are other theories of the Higgs mechanism that lack or conceal the Higgs boson, as well as Higgsless models, including the one in this paper.

The Higgs is massive compared to other elementary particles, but ironically may not be massive enough. At about 125 giga-electron-volts (GeV/${c^2}$ with ${c = 1}$ in natural units) it would be a featherweight in boxing weight class stated in pounds. If the top quark weighs in over 175, which used to be heavyweight but is now cruiserweight, the vacuum with its dark energy could well be only meta-stable. This means it could quantum-tunnel to a lower energy configuration, which would knock out our cosmos at the speed of light. Forget the Mayan calendar for 2012 or the latest doomsday preacher—the real rapture may devolve upon Geneva during today’s press conference.

## The G-d Particle

Higgs himself believes neither the particle nor the mechanism should carry his sole name, and was happy that he, Brout, Englert, and the three authors of another 1964 paper (Gerald Guralnik, Carl Hagen, and Tom Kibble) were all awarded the 2010 J.J. Sakurai Prize for this work. He may have gotten his wish, as the popular name “The God Particle” has stuck to the boson. This is the title of a 1993 book by Nobel prize-winning physicist Leon Lederman and science writer Dick Teresi.

According to Higgs, Lederman had wanted to title the book The G*d*mm Particle to emphasize how elusive the boson was. His publisher declined to have a swear word in the title, but thought it fine to use just “God.” However, they could have settled on the Orthodox Jewish practice of writing “G-d” to avoid situations where the fully-written name might be erased or discarded. The title The G-d Particle could then be read with Lederman’s original meaning or not. Higgs is said to join many scientists regretting the “God Particle” name, more from concern over hype than irreverence.

The Higgs mechanism explains how certain elementary particles acquire mass, via symmetry-breaking. It is not clear whether this determines mass for all particles. By wave-particle duality, the boson is a ripple in the Higgs field. If mass is good, then the boson could be called the “good particle” or the godfather of mass. Finding it—in the form hinted by the present experiments—would complete the set of building blocks for the Standard Model (SM) of particle physics.

There is much less doubt that the Higgs field exists and is good, indeed providential.

## Confidence in Discovery

What strikes us is that when and whether the SM Higgs boson is declared discovered depends on a social convention on assessed confidence intervals. Readings that are three standard deviations (${3\sigma}$) away from limits required by the “null hypothesis” of no Higgs presence give 99.7% confidence, but may only be called “evidence.” It takes a ${5\sigma}$ event to claim discovery. We have blogged about the sigmas before in connection with the ${6\sigma}$ claim for faster-than-light neutrinos.
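The sigma-to-confidence translation is just the tail area of the normal distribution. Here is a minimal sketch using only the Python standard library; note that particle physicists typically quote one-sided tails, half the two-sided values shown here:

```python
from math import erfc, sqrt

def two_sided_p(z):
    """Chance of a fluctuation at least z standard deviations from
    the mean, in either direction, under a Gaussian null hypothesis."""
    return erfc(z / sqrt(2))

# 3 sigma: p ~ 2.7e-03, i.e. 99.73% confidence ("evidence")
# 5 sigma: p ~ 5.7e-07 ("discovery")
for z in (3, 5):
    p = two_sided_p(z)
    print(f"{z} sigma: p = {p:.1e}, confidence = {100 * (1 - p):.4f}%")
```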

Rumor has it that the ATLAS team will claim a ${3.5\sigma}$ deviation, and the CMS team will claim about ${2.5\sigma}$. These would aggregate to about ${4.3\sigma}$ if the results are independent. Whether independence holds between them is being argued, but we have a more basic question first: Where did the value(s) of “${\sigma}$” come from?
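The aggregate figure matches the back-of-envelope rule of adding independent z-scores in quadrature. This is only a sketch of the arithmetic; the experiments' actual combination works with their full likelihood functions:

```python
from math import hypot

atlas, cms = 3.5, 2.5         # rumored deviations, in sigmas
combined = hypot(atlas, cms)  # sqrt(3.5**2 + 2.5**2), valid if independent
print(f"combined: {combined:.1f} sigma")  # -> combined: 4.3 sigma
```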

When an experiment is repeatable, we can soon pin down the standard deviation and interpret accordingly. A first-time, one-off, or rare event, however, heightens the inconvenient question discussed here by physicist and blogger Sean Carroll:

What are the error bars on your error bars?

I’ve confronted this issue in my chess research, as also described in the post mentioned above. My probabilistic model of move choice projects confidence intervals by representing legal moves as multinomial Bernoulli trials, after fitting playing-skill parameters to training data. Besides the Bernoulli error, there is modeling error from imperfections in the computer chess analysis used as data, and from not-quite-true assumptions such as all move decisions being independent.

When applied to players accused of cheating by consulting computer programs during games, my model attempts to make judgments of the form, “for Player ${X}$ to have played 70% of the computer’s moves when his expectation was 61% over 200 relevant turns in his nine games is a ${3\sigma}$ deviation, hence 740-1 odds against the null hypothesis of no cheating.” In my case I can test the Bernoulli-projected ${\sigma}$ by generating (say) 10,000 random nine-game performances out of the 400-odd games (which make 800 game-plays) in the training data for a nearby skill level, and comparing how many actual agreement percentages with the computer fall within ${c\sigma}$ of their projections, using ${z}$-intervals from the normal distribution. These tests reveal that the projected ${\sigma}$ given by my current model needs to be divided by about ${1.15}$.
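The resampling check can be sketched in a few lines, using the illustrative numbers above (61% expectation over 200 relevant turns, 10,000 synthetic performances). Since these synthetic moves really are independent Bernoulli trials, the projected and empirical sigmas agree here; on the real training data they differ by the factor of about 1.15, because the independence assumption is imperfect:

```python
import random

random.seed(2011)

p, n_moves = 0.61, 200                      # expectation and relevant turns
projected = (p * (1 - p) / n_moves) ** 0.5  # Bernoulli-projected sigma

# Generate 10,000 synthetic nine-game performances and measure the
# empirical spread of the computer-agreement percentage.
trials = 10_000
perf = [sum(random.random() < p for _ in range(n_moves)) / n_moves
        for _ in range(trials)]
mean = sum(perf) / trials
empirical = (sum((x - mean) ** 2 for x in perf) / trials) ** 0.5

print(f"projected sigma {projected:.4f}, empirical sigma {empirical:.4f}")
```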

This would still allow claiming ${2.61\sigma}$ of “actual” deviation, for 220-1 odds. However, it can be argued that random subsets of nine games by various players are different in kind from nine games by the same player, which is why I am expanding the data-taking of performances by players. Supplementing this error analysis by other statistical methods of gauging confidence may be needed; note that this paper by Louis Lyons unabashedly presents the panoply of methods applicable to particle physics.

## The Littlewood and Look-Elsewhere Problems

Two other problems being discussed have analogies in the chess work. One is that if I run my analysis on a tournament with over 220 players, I am likely to find a performance that my model would judge to be a 220-1 outlier. This is an example of Littlewood’s Law, named for John E. Littlewood who famously collaborated with Godfrey H. Hardy. Hence the situation requires other distinguishing features to support an accusation of cheating, such as plausible physical evidence or observation of wrongdoing. To be sure, the experimentalists have accounted for these factors.
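The arithmetic behind this instance of Littlewood's Law is elementary. Treating the 220 performances as independent, an outlier somewhere in the field is better than even money:

```python
p_outlier = 1 / 221   # "220-1 against" expressed as a probability
n_players = 220
p_at_least_one = 1 - (1 - p_outlier) ** n_players
print(f"{p_at_least_one:.2f}")  # -> 0.63: more likely than not
```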

The related look-elsewhere problem, however, is being said to reduce the ATLAS deviation from ${3.5\sigma}$ to an effective ${2.2\sigma}$. This also has an analogy in my chess work. Suppose I have two 9-game tournaments with much the same roster of 200 players. Between the tournaments there are 10 ways I can take 9 consecutive games to observe. This is not the same as having 2,000 independent trials to range over, but it greatly improves the odds of 220-1 events or even 740-1 events being merely expected by normal chance.
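A small simulation makes the inflation concrete. As a toy model (for illustration only; real performances are not i.i.d. Gaussians), take each game's quality as an independent standard normal score, so a nine-game stretch has z-score equal to its sum divided by 3, and compare how often one fixed stretch exceeds the ${2.61\sigma}$ threshold against how often the best of the ten overlapping stretches does:

```python
import random

random.seed(2011)

THRESHOLD = 2.61   # the ~220-1 one-sided level from above
trials = 20_000
fixed_hits = best_hits = 0

for _ in range(trials):
    games = [random.gauss(0, 1) for _ in range(18)]  # two 9-game events
    # z-score of each of the 10 stretches of 9 consecutive games
    zs = [sum(games[i:i + 9]) / 3 for i in range(10)]
    fixed_hits += zs[0] > THRESHOLD
    best_hits += max(zs) > THRESHOLD

# Cherry-picking the best stretch exceeds the threshold noticeably
# more often than looking at one pre-specified stretch.
print(f"fixed stretch: {fixed_hits / trials:.4f}, "
      f"best of ten: {best_hits / trials:.4f}")
```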

In the Higgs case the spread effect comes from sliding scales of experimental parameters that dovetail with trying to pinpoint the mass of the particle. Here is an example by physicist and blogger Matt Strassler.

Even with these problems seemingly tamed, there are still many cases where reported ${3\sigma}$ phenomena “disappear” on further probing. This should happen only ${0.3\%}$ of the time, but seems to happen more often. The chess-playing quantum physicist Tommaso Dorigo detailed one recent important such case on his blog here. The explanation is that this was “really” only a ${1\sigma}$ deviation, which happens by chance 32% of the time, or 16% in a one-sided case.

The high rate of disappearing significance overall is still puzzling. Its extent in human sciences was detailed exactly one year ago in a disturbing article by Jonah Lehrer for The New Yorker. If 250 researchers try the same experiment, one would expect 2 or so of them to get ${2.5\sigma}$ deviations (in either direction). The world will then see 2 or so published papers from them, but nary a peep from the 248 who failed and gave up quickly and forgot about it. Thus a significant result may appear independently confirmed when it was actually just by chance, and those failing to reproduce it will then peep up loudly. The effect is equally pernicious with 250 different experiments, especially given a fair chance of a lower-confidence positive from a test deemed related enough to corroborate the original.
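The expected number of such flukes is a one-line tail computation; by the normal tail the count comes out to roughly 1.6 per direction, about 3 in all, which is the handful of spuriously "significant" published papers:

```python
from math import erfc, sqrt

researchers = 250
p_one_sided = erfc(2.5 / sqrt(2)) / 2  # ~0.62% beyond +2.5 sigma
expected_per_direction = researchers * p_one_sided
print(f"{expected_per_direction:.1f} per direction, "
      f"{2 * expected_per_direction:.1f} in either direction")
```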

## Does Nature Play Games?

In physics, however, high-energy experiments are limited, and the problem of vanishing significance makes us consider stranger possibilities. We will not be the first: a cosmic conspiracy hypothesis involving the LHC and the Higgs itself was featured in the New York Times in October 2009.

Our thought merely asks whether situations of the kind introduced by Christos Papadimitriou in the paper “Games Against Nature” are “real.” This famous paper is considered one of the forerunners of interactive protocols.

In one of several problems treated in the paper, Papadimitriou varied a standard model of random network faults by allowing the probability ${p(e,v)}$ of failure in a graph edge ${e}$ to depend also on the current vertex ${v}$ of a “Runner” on the graph. He showed that determining whether Runner has a 50% chance of reaching a goal node ${t}$ when the graph is allowed to conspire against him in this way is complete for ${\mathsf{PSPACE}}$. We ask simply,

Can Nature do this?

That is, can Nature alter probabilities ${p}$ of events ${e}$ after seeing information ${v}$ that is not directly local to ${e}$ but involves some goal ${t}$? Is the computational power needed to do this available? Note that the word “after” is suspect, but the time-symmetry of many processes involving particles also enhances the plausibility of the question.

We invite those who know more about the pertinent aspects of physics and information theory to comment. For support at least by allusion, however, we note an article in the Nov. 16, 2011 issue of Nature by Gilles Brassard on quantum protocols for attempting to prove spatial position that fail under unexpected attacks. We say unexpected because a paper last year had seemed to prove their security, but this was broken as we noted here. The paper surveyed by Brassard proves instead that in some general settings no secure protocols can exist; a followup paper is here.

Admittedly with little more than surmise, we are prompted to ask: can nature deliver unexpected attacks on the protocols involved in experiments? Can it induce us to accept false conclusions with high probability, in the manner of a “cheating prover”? Does it have the computational capability to do so, in real time?

## Open Problems

To state the more positive side of our question, is there a general computational way to “extract independence” from experimental data to maximize confidence in the results?

Does the Higgs boson exist? We note a poll mentioned here of several leading physicists before this month’s rumors, with widely varying and amusing responses.

Update: The results announced at CERN were fairly close to rumors and expectations; for summaries and reactions with varying degrees of intensity see Tommaso Dorigo, Philip Gibbs, Matt Strassler, Quantum Diaries, Peter Woit, Luboš Motl. MSNBC Cosmic Log has a nice simple summary of facts and statistical issues. We also thank commenter Surya for this note, which was our first alert about the Higgs news.

Update 6 July 2012: The Higgs inched across the Discovery line in time for the 4th of July. John Huth wrote a humorous post for the blog “Quantum Diaries” channeling Mark Twain. This snippet expresses a point of my original post:

[The CERN presenter], who seemed quite earnest, moved to what I gathered was the ‘punch-line,’ as I could sense the rapt attention of the pilgrims in the room. When he combined the results of two kinds of fireworks, lo-and-behold!, a magic barrier was crossed.

Now, dear reader, you don’t need to know my opinion of statistics, but I will tell you that there is something called a ‘five-sigma’ effect. This manifestation of statistics is deemed by the cognoscenti to be a ‘discovery.’ My guide seemed to be glued to the screen, hanging on every word this gentleman uttered. When I inquired about the meaning of this ‘five-sigma’ miracle, he told me that if it was 4.9, it didn’t count. I was rather amazed that such a small fraction divides a miracle from a non-miracle, but he said it was thus.

Humour aside, it’s convincing to me; hail to the two teams and everyone who built the collider! Meanwhile I described a concrete situation to which my concerns about probabilities can be transferred here.

[Updated links post-press-conference, expanded sentence in intro on Higgsless models, July 6 update.]

30 Comments
1. December 13, 2011 9:29 am

As Press et al. put it in their estimable Numerical Recipes:

One would expect a measurement to be off by ${\pm 20\sigma}$ only one time out of ${2\times10^{88}}$. We all know that “glitches” are much more likely than that!

Whenever we make predictions based on models that can be falsified by what Gödel’s Lost Letter and P=NP calls “burning arrows,” our confidence in those predictions tends to be too great in proportion to our expertise in that model; this effect belongs to a broad class of ubiquitous cognitive phenomena that psychologists call Dunning-Kruger Effects.

For example, in the 1700s, what were the rational odds that Euclid’s Parallel Postulate would turn out to be false? In the 1800s, what were the rational odds that mathematics would turn out to be undecidable? In the early 1900s, what were the rational odds that the state-space of Nature would turn out to be non-Newtonian? In the latter 1990s, what were the rational odds that the financial system would suffer an instability-driven meltdown? Nowadays, what are the rational odds that (e.g.) the state-space of Nature is non-Hilbert, or that oracle-defined complexity classes are non-natural?

In all of these cases, leading experts grossly underestimated the probability of burning-arrow model innovation, and in some cases leading experts even vehemently denied that burning-arrow model revisions were logically conceivable.

Thus one more humbling lesson of history is that Dunning-Kruger effects bar us all — experts especially! — from giving reliable quantitative estimates for the probability of burning-arrow revisions of even our most cherished models.

So is there a burning arrow that will falsify the physics of the Standard Model? Plenty of physicists hope so! What is the probability that this arrow will be found? Almost certainly human intelligence, no matter how expert, cannot reliably estimate that probability.

2. December 13, 2011 9:54 am

Ken asks: Can Nature induce us to accept false conclusions with high probability, in the manner of a “cheating prover”?

As a followup, historical examples of the false induction that Ken requests include:

Example 1: the Darwinian evolution of fitness tempts us to embrace the postulate of intelligent design.

Example 2: Cubic crystals exhibit isotropic thermodynamic properties even though the underlying atomic lattice is not isotropic.

Example 3: Particle detectors mimic coarse-grained quantum jumps via fine-grained unitary evolution (that is, “there are no jumps in Nature”).

An interesting question (that I cannot answer) is, are there any mathematical examples of Nature inducing us to accept false postulates? E.g.:

Example 4: Nature endows humans with an ability to prove theorems sufficient to induce the dual illusions that (a) mathematics is decidable and (b) theorem-proving is in P.

Even so gifted and expert a mathematician as David Hilbert (and many others) embraced the illusions of #4 … and even today, the enduring power of these illusions remains to be understood.

3. December 13, 2011 7:22 pm

Hi,

I am a big fan of your blog and have learned many interesting things from it. A short comment that may be of historical interest: there is a controversy regarding priority for the Higgs boson. The mechanism that gives rise to the Higgs particle was also proposed by P.W. Anderson in 1962. There is a nice discussion of this on Peter Woit’s blog:

http://www.math.columbia.edu/~woit/wordpress/?p=3282

• December 13, 2011 7:48 pm

Thank you for pointing that out. I was aware of it, and I believe Anderson taught my first term of freshman physics at Princeton in 1977—I know P.J. Peebles taught my second term and I wrote a term paper on monopoles. I chose to skip it because I felt I’d need to mention differences such as Woit points out and why the Sakurai Prize left Anderson out. I think my sentence saying what Higgs was first in is correctly constructed—I could have added “relativistic” somewhere—and I did emphasize that Higgs feels he should not have sole credit for the mechanism (either). I economized the intro for counterpoint to the recent priority case with another Edinburgh person over matrix multiplication.

4. December 14, 2011 8:19 pm

To state the obvious, if Nature turned out to behave like a “cheating prover”, that would be a vastly more important discovery than the Higgs boson itself! The whole fact that’s made physics possible since Galileo is that the universe appears to obey laws that are homogeneous across time and space, and that one can learn about via independently-repeated experiments. So suspecting a breakdown of those facts in the latest collider experiment, with not even a hint of motivation (so far, everything is perfectly consistent with a ~125GeV Higgs being there!), is sort of like telling someone: “Aha, you say you probably left your keys at the office, but what if you really can’t find them because they were stolen by trans-dimensional cyborgs?” I.e., it’s a logical possibility, but a ludicrously-unhelpful one, the sort of thing Sheldon from The Big Bang Theory might say.

• December 14, 2011 10:36 pm

Hi, Scott. Among things on my mind when I wrote this last section was your followup comment here to your NY Times blog item (still latest on your blog as I write):

“if both quantum mechanics and the prevailing conjectures in complexity theory are valid, then the physical universe can’t be feasibly simulated by a [classical] computer…”

—going on to argue that the classical cellular automaton model of Wolfram and Fredkin et al. under-powers the Universe. Your position seems to be that the Universe is computationally more powerful, but not in ways that can actually do boshaft (malicious) computation. I might agree, and hope to, but click my link to see what Gian-Carlo Rota and Fabrizio Palombi go on to say about “duplicitous behavior” and the natural sciences just below.

My motivation comes not from the Higgs case in isolation but from the two paragraphs just above that section, about “disappearing significance” cases in physics (see e.g. this comment by Matt Strassler), and then in the human sciences where there’s an easier explanation. The nub is my attempt to expand a definition from Papadimitriou’s paper, trying to approach by simple means the way “emergent probability” is addressed for instance here, here, here (conservatively?), and by Bryce DeWitt here (pages).

• December 15, 2011 12:38 am

I should add: Rota and Palombi are not talking about Nature “cheating” but about whether Nature “reduces” in the way they find paradoxical for math. Underlying their book and some of my other sources is philosophical phenomenology.

• December 15, 2011 2:57 am

Yeah, as you point out, I think Rota and Palombi were talking about something completely different in that passage (which was lovely and interesting, though!).

To clarify, I enjoyed your discussion of some of the subtle philosophical questions involved with when the collider physicists are allowed to claim a “discovery.” However, I was worried that you downplayed a simple but important point: that within a few years, CERN hopes to rack up enough sigmas that all these philosophical questions will basically be moot! And I see no reasons from science or history to suspect they won’t succeed.

To put it more broadly: contrary to what’s often claimed, I don’t think skepticism is a universal good! To be productive, skepticism about an experiment should start with prosaic possibilities specific to that experiment—not speculations that could just as well overthrow millions of other experiments, like Nature “cheating” by solving a PSPACE-complete problem…

• December 15, 2011 3:26 am

What I want to know is why is a respected physicist and blogger like Strassler writing things like this—?

“…Most experimentalists I am talking to start to answer me by telling me a story about a 3 sigma result they know about that went away after more data was collected. Explain to me, please, how it could be, statistically, that the four ZZ events at CDF reported this summer at 325-327 GeV were a fluke. And yet it appears they were.”

This was after I wrote the post, though I had seen other reference to the CDF events. Why weren’t more sigmas racked up on it? I’ll take the “over” on a Higgs at 124–126, but I wonder if anyone has done a study on the frequency of upholding 3-sigma events that people thought were important.

For a maybe-useful aspect, does the “Zoo” have a class C that quantifies the power needed to “cheat” a substantial protocol in some sense, maybe with evidence of C being disjoint from BQP? And just to state what I believe rather than speculate here, it goes (only) as far as suspecting that prior entanglements exist that affect probabilities on larger scales than commonly thought.

• December 15, 2011 6:44 am

Ken asks: “I wonder if anyone has done a study on the frequency of upholding 3-sigma events that people thought were important.”

Ken, two such studies (texts freely available on PubMed) are Emerson et al. “Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial” (PMID:21098355) and Lynch et al. “Commercially funded and United States-based research is more likely to be published; good-quality studies with negative outcomes are not” (PMID:17473138).

My esteemed Seattle medical colleague Seth Leopold played a leading role in both studies, and in many enjoyable conversations Seth taught me much about the extraordinary difficulties that are encountered in systematically investigating the cognitive biases associated to statistical likelihood estimation in the context of peer review.

A pure physics study that reaches similar conclusions to Leopold’s (regrettably it is paywalled) is Gillies’ “The Newtonian gravitational constant: recent measurements and related studies” (1998); Gillies’ Fig. 1 in particular presents multiple measurements of Newton’s ${G}$ that differ by 5-sigma and more; this study too provides concrete evidence that even highly skilled research teams commonly fare poorly in assessing confidence levels.

Despite these well-documented difficulties, almost everyone (including me) embraces the reasonable opinions that: (1) further observations likely will confirm a Higgs boson at 125 GeV — but maybe not! — and (2) almost certainly the Standard Model that predicted this boson is incomplete and will require major modifications in coming decades.

This balance between confidence and humility is important to the vitality of science. As Freeman Dyson has put it: “If science ceases to be a rebellion against authority, then it does not deserve the talents of our brightest children.” And yet the continuing creative rebellion that is science must not devolve into anarchy, but rather must grapple effectively with the “hunger, poverty, desperation, and chaos” that remain too widespread in the world.

• December 16, 2011 11:12 am

John, I love those quotes, thank you for them!

• December 14, 2011 11:10 pm

Scott, with a view toward cultivating holiday-season cheer here on Gödel’s Lost Letter and P=NP, please let me recount for your consideration the following analysis of Newcomb’s Paradox (as set forth in your lecture that your post linked-to) in light of the principle of Nature’s Benign Mendacity:

Smiling blithely, you walk into the room in which the Predictor has placed two boxes. As the proctors look on, you open both boxes, and to the proctors’ amazement, both boxes are filled with money.

The proctors ask you to explain, and you say: “The Predictor and I both are rational and benign beings, and therefore I foresaw that the Predictor would do precisely what I would have done myself, namely, fill both boxes with money (of course fully appreciating that this action would constitute a Lipton-Regan ‘burning arrow.’)”

The proctors press for a deeper explanation, and you say: “The Predictor too necessarily has free will — without it she cannot reliably predict my actions — and of course we both preferred your surprise to my disappointment. Truth is a trickster, you know!”

The proctors press for a still-deeper explanation, and smiling you say: “Look inside the box! Perhaps the Predictor has left a cheerful holiday message there for you!”

5. December 14, 2011 9:13 pm

Ken asks: Can Nature induce us to accept false conclusions with high probability, in the manner of a “nurturing parent”?

Here I have altered “cheating prover” to “nurturing parent” because Ken’s question becomes much more interesting, without alteration of its overall thrust, when we ascribe benign motives to Nature’s penchant for mendacity.

We discern in the past history of mathematics and science many instances of Nature’s benign mendacity; examples follow.

Nature seemingly is translationally invariant in space and time; to shield our growing understanding from various pathologies associated to spatial and temporal infinity she benignly induces general relativity.

Nature seemingly is relativistically (boost) invariant; to shield our growing understanding from various pathologies associated to infinite energy she benignly induces event horizons.

Nature seemingly is infinitely divisible; to shield our growing understanding from various pathologies associated to the collapse of matter she benignly induces (nonrelativistic) quantum mechanics.

Nature seemingly is infinitely causally separable and localizable; to shield our growing understanding from various pathologies associated to … uhhh … exponential dimensions and multi-universes of Hilbert space (?) … uhhh … she benignly induces … uhhh … well that bit’s not clear at present, eh?

But history gives us ample reason to hope and expect that Nature has some grand-yet-benign revelation prepared for us, of great mathematical subtlety and beauty, associated to the intersection of quantum mechanics and general relativity, that will help us grow out of our present too-naive understanding of locality, separability, and causality.

With luck, observing the Higgs boson carefully, and thinking deeply about what we are seeing, will help us appreciate the benign surprise(s) that Nature has prepared for us. Because beyond doubt, we do live in a wonderful universe.

And on that optimistic note, best wishes for a Happy Holiday Season are extended to all readers of Gödel’s Lost Letter and P=NP!, and thanks particularly to Dick and Ken for another great year of posts!

• December 16, 2011 11:18 am

I suppose Nature also might be shielding others from us (perhaps gently or not so gently later on), i.e., one of several flavors of response to Fermi’s Paradox.

• December 16, 2011 12:43 pm

Delta, perhaps you will enjoy too Carl Sagan’s high-technology extension of the Judaic concept of gematria, from Sagan’s novel Contact (1985):

“Whoever makes the universe hides messages in transcendental numbers so they’ll be read 15 billion years later when intelligent life finally evolves.”

It’s fair to say that most mathematicians and scientists (and me too) think Sagan’s notion is mathematically nonsensical — an interesting idea pushed too far.

On the other hand, the publishing house Simon & Schuster gave Sagan a \$2 million advance based on his narrative idea, so maybe on some level Sagan’s hypergematria is not entirely crazy, eh?

Perhaps one broad lesson is that engineering, science, and mathematics all gain greatly when mathematical ideals of naturality are formalized and fused with humanistic ideals of narrative. This process of formalization and fusion is ongoing, progressive and irreversible, and now seems to be accelerating in the early 21st century. Good.

Indeed, it seems (to me) that there is scarcely any other avenue than a Sagan-style naturality-narrative fusion by which the STEM enterprise can “make sense.”

So, more Carl Sagans please!

• December 16, 2011 1:11 pm

John, I don’t know if Sagan would have liked to say he was extending “gematria”. However, I do allow that analogy when speaking of algorithmic probability, insofar as it is defined in terms of some textual representation of an object. I could add to what I said above about how much of this speculation I’ve actually adopted by saying I believe algorithmic probability plays a role in physical processes—and perhaps detectably in a digital domain such as the LHC chamber and data recording. There may, however, be obstacles to detecting it, by extension from this paper by Ulvi Yurtsever.

• December 16, 2011 6:23 pm

Ken, I for one had not previously encountered algorithmic (Solomonoff) probability, and so I am both grateful for and interested in the references you supplied. Perhaps (hopefully) in 2012 Gödel’s Lost Letter and P=NP will discuss these interesting ideas further.

• December 16, 2011 6:35 pm

Indeed!–the intent to do so was a motive for this post. I may even have a concrete connection to (faults in) Zobrist key schemes for hash tables. For now let me link just this nice primer on the concept.

• December 17, 2011 1:14 pm

Great, glad this was brought up — look forward to seeing more.

6. December 17, 2011 4:10 pm

There were some posts on my blog related to some interesting statistical questions raised here. Can we expect 95% of statistically-based results with significance level 95% to be correct? Will it help to raise the required significance level to 99.5%? How should we add up the p-values of two independent experiments? Should we believe equally two results, both with ${4\sigma}$ significance, where in one the effect is large and in the other the effect is small?