Predictions and Principles
With a year-end thank-you to Ken
Ken Regan is the co-editor of GLL and one of the internationally ranked chess players in the US. He has won the city championship of Buffalo every time he has played. He is also a great friend and a great researcher. His specialty is, of course, computational complexity.
Today I want to thank Ken personally for joining the GLL team and making this year one of our best.
Ken brings a unique perspective to everything that we do here. He is a wonderful writer—he has it in his DNA—his late dad was a professional writer his whole life. And his wife, Debbie, is an English teacher. Beyond that he is schooled in language: he knows English and its crazy rules better then—oops than—I will ever, and is fluent to various degrees in more languages than I can recall.
If the pieces that we try to write on so many various topics are halfway readable it is all Ken’s doing.
Cheating and Predictions
One of Ken’s many projects outside of GLL is helping track down cases of players alleged to have cheated at chess. The basic point is that a standard laptop running a standard chess machine plays better than anyone in the world. So if a player can get access to a chess program during the match, they have a huge advantage.
Some players get caught “red-handed” using quite obvious methods of relaying moves from someone who has the laptop—one was even caught looking at his game on a smartphone. These cases are not very interesting from a technical point of view. The really interesting ones, the ones that Ken works on, are based at least partially on statistical scoring methods that Ken has developed and is still improving.
The simple idea is to see how often a player makes the same move as the machine. For the moment let’s assume that the machine is always making the best move. The critical question is this:
If the player agrees with the machine too often, does that mean that cheating is happening?
Well, the answer is yes and no. The problem is many-fold, but here is one very interesting issue. Suppose that the player agrees with the machine in many positions where moves are forced or almost forced. Even I—a very weak player—would recapture a piece or move my king out of check. Of course if there are choices of how to recapture or how to move the king, then there is some information available. Sometimes the wrong recapture or king move could be a disaster. But if there is only one, or even if there are several but they are all about equal, you can see how the scoring issue becomes difficult. How to quantify it is Ken’s basic idea.
Ken’s model falls broadly into the category of predictive analytics. This doesn’t mean that Ken tries to predict individual chess moves. Instead his model aims to calculate probabilities for the moves—based on the skill of the player making them—that enable predicting means and variances for large enough aggregates of moves. The rate of agreements with a computer is one stat he can predict—for a non-cheating player. The rate of blunders is another. Predicting the rate of blunders, depending on the player and the positions faced on the board, is like how insurance and planning agencies use predictive analytics for risk assessment. He has posted a two-page non-technical description and a six-page overview.
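To make the aggregate idea concrete, here is a minimal sketch—my own illustration, not Ken’s actual model—of how per-move match probabilities for an honest player of a given skill yield a predicted mean and variance, and hence a z-score, for the count of agreements with the engine. All the probabilities below are invented for illustration.

```python
import math

def agreement_z_score(match_probs, matches):
    """match_probs: probability p_i that an honest player matches the engine
    on move i; matches: 1 if the player actually matched, else 0."""
    mu = sum(match_probs)                        # expected number of matches
    var = sum(p * (1 - p) for p in match_probs)  # variance of the match count
    observed = sum(matches)
    return (observed - mu) / math.sqrt(var)

# Toy game: two near-forced recaptures (p ~ 0.95) and four genuine choices.
# Matching the near-forced moves contributes almost nothing to the z-score,
# since their variance terms p*(1-p) are tiny.
probs = [0.95, 0.95, 0.40, 0.35, 0.30, 0.45]
moves = [1, 1, 1, 1, 1, 1]   # player matched the engine on every move
z = agreement_z_score(probs, moves)
```

A z-score near 0 is consistent with honest play at that skill level; a large positive z flags agreement well beyond what the model predicts.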
A Recent Case
Ken has just been dealing yesterday and today with a case he was informed about on New Year’s Eve while on vacation. A player rated about 150 points lower than Ken beat four grandmasters rated 150 or more points higher than Ken. After his third such win he was searched to the skin, on suspicion that he had implanted chips, but no physical evidence was found.
Wow. Now that is getting pretty cool. More on this in the future, but I cannot resist thinking: would I have a chip implanted if it helped me? Ken recalls that on his transatlantic flight to attend the 1994 IFIP conference in Hamburg, he was seated next to a Columbia University professor who was experimenting with embedded sending of images to the brains of blind people. Here is an article from last April on the state of this art.
The kind of chip I have in mind—and would love to have in my mind—would help with mathematics. Not only numerical and symbolic calculation, but also inference. We could even go further, as Stephen Wolfram did in progressing from his Mathematica program to his Wolfram Alpha inference engine. We can already put Wolfram Alpha on a chip; can we embed it? And if it were fair to use for math, could we use it for chess? These are interesting questions to think about, but perhaps best left for later.
Prediction Principles
Wolfram recently gave an interview on using his inference methods to give “A New Kind of Prediction.” The main idea is that rather than trying to write down the single best formal equations and then solving and extrapolating from the data you have in hand, ask instead the question,
[What is] the space with all possible models that you can imagine using?
He begins with what one can learn from models expressed by the shortest and simplest computer programs, not necessarily the cellular automata featured in his book A New Kind of Science, but basic like that. I’ve talked with Ken about possible ways to simplify his chess model, but he already thinks it is the simplest one that captures his basic idea.
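As a toy illustration of searching a space of simplest models—my own sketch, not Wolfram’s actual method—one can enumerate all 256 elementary cellular-automaton rules and keep whichever best reproduces an observed row of data. The data here is invented purely for illustration.

```python
def step(cells, rule):
    """One step of an elementary CA; cells is a tuple of 0/1, with fixed
    0-valued boundary cells."""
    padded = (0,) + cells + (0,)
    return tuple((rule >> (padded[i-1]*4 + padded[i]*2 + padded[i+1])) & 1
                 for i in range(1, len(padded) - 1))

def best_rule(start, target, steps=1):
    """Search the whole model space: return a rule number whose evolution
    of `start` agrees with `target` in the most positions."""
    def score(rule):
        cells = start
        for _ in range(steps):
            cells = step(cells, rule)
        return sum(a == b for a, b in zip(cells, target))
    return max(range(256), key=score)

start  = (0, 0, 0, 1, 0, 0, 0)
target = (0, 0, 1, 1, 1, 0, 0)   # a single cell spreading to its neighbors
rule = best_rule(start, target)
```

The point is not the particular rule found—several rules fit this tiny data set equally well—but that the entire space of models is small enough to examine exhaustively.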
That said, we have a simple model for the predictions made at GLL: “De Summo Capite,” which Ken’s daughter gives as Latin for “Off the Top of the Head.” Here’s how we did last year, followed by our predictions for 2013.
Review of 2012 Predictions
Can we take credit for predicting that the world wouldn’t end, based on the tenor of last year’s intro? Well, we wouldn’t have been around to pay up if we’d lost.
 No circuit lower bound of or better will be proved for SAT. Of course.
 A Computer Scientist will win a Nobel Prize. Closer, but still a miss. Serge Haroche and David Wineland won in physics for experiments that include stepping stones toward quantum computers. Alvin Roth and Lloyd Shapley won in economics for stable-allocation theory; Shapley has a Mathematics affiliation but not CS.
 Someone other than a Williams will release a headline-making result in November. Since our headlines don’t count, and the max-flow algorithm really dates to August, a miss.
 A new set-theoretical relation will be found between a class in the general range of or and one just above in the polynomial hierarchy. Apparently not. Closest we find is this paper by Mohammad Mahmoody and David Xiao, “Languages with Efficient Zero-Knowledge PCPs are in ,” while the simplest new complexity-class relation seems to be this paper by Tsuyoshi Ito and Thomas Vidick showing that .
 Tangible new evidence against the Unique Games Conjecture will be posted (i.e., published at least on the arXiv) by Dec. 21, 2012. We claim yes, for this paper with many authors including Aram Harrow.
 Tangible new evidence supporting the Unique Games Conjecture will be posted by Dec. 21, 2012. Again we claim yes, for this FOCS 2012 paper—oh but cheating, it was on the ArXiv in late 2011. Our basis for both claims is this Sept. 2012 survey by Boaz Barak, especially on pages 4–5.
 Tangible new evidence against the Exponential Time Hypothesis will be posted (i.e., published at least on the arXiv) by Dec. 21, 2012. Unclear? We do not know how to interpret this SODA 2013 paper by Marek Cygan, Marcin Pilipczuk, and Michal Pilipczuk.
 Tangible new evidence supporting the Exponential Time Hypothesis will be posted by Dec. 21, 2012. A miss?
For a general science bent, we offered:
 The Higgs boson will close the year with confidence above from a single team, but strictly less than from any one team. Too conservative. The July 4th announcement had 5.0 and 4.9 sigma from the respective teams, and stock in the Higgs closed the year at .
 An Earth-sized planet will be detected orbiting within the habitable zone of its single star. (Currently “Earth-sized” and “habitable zone” have been achieved by different exoplanets.) Not yet. We quote this December 19, 2012 press release by the University of Hertfordshire:
One of the planets lies in the habitable zone of the star and has a mass around five times that of Earth, making it the smallest planet found to be orbiting in the habitable zone of any Sunlike star.
Five times the Earth is too big. We could of course always claim Earth itself as an instance. Calling the last a miss, and letting partial credit on 7. offset 1. being a “gimme,” we say 3-for-10.
Predictions For 2013
Well we like to have the gimme, so we begin 2013 with the same first two, but expand to ten others. We decided not to repeat the unique-games and exponential-time ones.
 No circuit lower bound of or better will be proved for SAT. Actually let’s raise it to , in the spirit of problems found on the Putnam Exam: You can guess which year this was from.
 A Computer Scientist will win a Nobel Prize.
 An important advance will be made on understanding barriers either to proving or proving the opposite, .
 A new simple group will be discovered. By the classification theorem, such a group should be infinite, but will not be.
 Another gap will be found in Shinichi Mochizuki’s claimed proof of the ABC Conjecture. (Note that Mochizuki released updates to his papers on Dec. 16th, including acknowledgments to Vesselin Dimitrov and Akshay Venkatesh who had found a gap in two of them.)
 The ABC claimed proof will be found to be correct.
 Experiments scaling up boson sampling from to permanents will be published by year’s end.
 A new conditional relation will be found tying the difficulty of scaling up boson sampling to that of scaling up full universal quantum computation.
 A best seller will be published by a noted blogger on the question.
 Work will begin on a major motion picture whose plot is based on finding a polynomial-time algorithm for factoring. It will audition Angelina Jolie as the brilliant mathematician who solves the problem and Tom Hanks as the villain.
 Noted progress toward solving another Clay Problem will be released on the ArXiv this year.
 A planet no more than twice Earth’s size will be detected orbiting within the habitable zone of its single star.
Open Problems
So how do you think we did with our predictions? Also make some of your own for the record—you probably will do better than we do.
No. 9: Are you planning one, to make it rather self-fulfilling?😉
No. 12: Ambitious, but I hope for that one, too!
Sadly, I’m not a mathematician, so I fail to understand most of your Blog’s topics and cannot comment on those predictions.😉
No, but they are semi-cheating. There is a book (http://www.amazon.co.uk/TheGoldenTicketSearchImpossible/dp/0691156492) by Fortnow coming out soon; he’s a noted blogger and it’s likely that the book will sell a lot.
The book has a US version too—somehow compressed to 188 pages, same ISBN.
12 and 3 sound the same and make us wonder if you have any scoops
I mean 11 and 3
11 and 3 are complementary. For instance, the TSP paper we covered can be an example of 3, but it really doesn’t make progress toward resolving P vs. NP either way. I’m not even sure people would regard it as substantial evidence for P != NP.
Yes, we have Lance in mind as a possibility for 9, but saying “best seller” still makes it a nontrivial prediction—it would have to hit the mark of other popular books on Math and CS to win out. Say Charles Seife’s book Zero, or John Derbyshire’s Prime Obsession, as benchmarks. Our wording also allows a more general tech blogger to expound on P and NP.
Pretty exciting predictions! As a veteran skeptic-skeptic, I am skeptical of #8. I think the (theoretical) prospects of scaling up full universal QC are fine, and that making boson sampling fault tolerant will simply be a separate question.
I am happy to give a conjecture about the complexity of boson sampling (with or without noise), though, for which I don’t think I’m going out on much of a limb.
My conjecture is that:
BPP < BPP + NoisyBosonSampling < BPP + BosonSampling < BQP
where "<" means strict inclusion, BosonSampling is what's described in the Aaronson-Arkhipov paper and NoisyBosonSampling means the same with an arbitrarily small constant noise rate at each beamsplitter.
Note that this is a somewhat finer-grained version of your conjecture. If my conjecture holds then NoisyBosonSampling can be "scaled up" in the sense that it requires super-polynomial classical resources to simulate, but simulating full BosonSampling with it would in turn require super-polynomial overhead.
As a refinement of my conjecture, I further conjecture that no part of my first conjecture will be proved or disproved in 2013. I'm pretty confident about this last conjecture.
Aram, I would be very interested in the role of verification and validation in your conjecture.
———————–
Broad Objective Increase the practical applicability of complexity theory.
———————–
Strategy (per Hartmanis) Reduce the formal role of (nonexistent) oracles in complexity classification, by increasing the formal role of verification and validation requirements (V&V, per e.g. Boehm, “Verifying and validating software requirements and design specifications,” IEEE Software, 1984).
———————–
Question What resources are required to strongly/weakly verify the dataset of a BosonSampling experiment (noisy or otherwise), where “verify” is understood two ways:
Strong verification A dataset of unbounded length, input to a decision TM of unbounded duration, is indistinguishable from a BosonSampling process.
Bounded verification A dataset of bounded length, input to a decision TM provably in P, is indistinguishable from a BosonSampling process.
———————–
Discussion One of Juris Hartmanis’ points (as I understand them) is that bounded verification (and computational simulations whose output is boundedly verifiable) suffices for practical complexity-theoretic purposes.
One of many great virtues of BosonSampling is that it lends itself outstandingly well to theorem-driven explorations of this V&V Complexity Zoo.
And who knows? The algorithmic animals of the V&V Complexity Zoo might prove to be far more open to view (and therefore, their cages easier to label!) than the still-mysterious animals of the Oracle Complexity Zoo.
To make the above point metaphorically, citizens who visit the V&V Complexity Zoo find that all of the algorithmic animals are completely visible, and the natural history of every algorithmic species in every cage is explained by rational curators who are eager (but boring) lecturers.
Whereas citizens who visit the Oracular Complexity Zoo find cages filled with fog, such that even the boundaries between cages cannot be discerned, and the natural history of the algorithmic animals within each cage is known only to oracular curators who cannot verifiably account (even in principle) for their knowledge. Ouch!
For purposes of learning about algorithmic animals, most citizens will prefer to visit the V&V Zoo. And yet, almost surely, the Oracular Complexity Zoo will always contain algorithmic species that the V&V Zoo cannot (yet? or ever?) display; thus *both* zoos are well worth supporting.
One of the most enjoyable aspects of the BosonSampling problem is that, during 2013, *both* zoos will be developing high-visibility BosonSampling exhibits … it would be terrific if this could be done as a coordinated effort!
———
Prediction Much public interest in 2013 regarding the new BosonSampling exhibits in both the V&V Complexity Zoo and the Oracular Complexity Zoo!
John: I’m not sure I have any special insights here. I’m also not sure how much complexity theory resembles traditional (natural) sciences with their theoretical and experimental sides. I mean, there is a role for experiment, but it’s pretty clearly limited. And the BosonSampling experiments are nice, but I don’t think they tell us much (except for refuting some kinds of skepticism that I think were not very believable in the first place, and skeptics could always say that their arguments hold in the large n limit anyway). One could say something similar about factoring experiments. They tell us a lot about the engineering challenges, and confirm that the systems do work in the way our theory predicts, but are generally not probing new physics, and couldn’t even in principle examine new complexity theory.
Aram, although pointbypoint I largely agree, my global conclusion is the opposite!
Aram, It seems that it is not the case that BosonSampling <= BQP. Maybe you mean QuantumSampling rather than BQP. See the comments in http://cstheory.stackexchange.com/questions/14804/thecozyneighborhoodsofpandofnphard
Gil: Yes, thanks, good point.
I guess BPP/BQP should be replaced by their sampling versions throughout what I wrote. Feels like I’m quickly regressing to a physicist’s approach to complexity theory.
I too am a huge fan of Ken Regan as a broad-spectrum researcher and outstandingly graceful writer. Thank you, Ken Regan! (and in particular, as a fan of computer chess, please let me selfishly hope for more of Ken’s outstandingly lucid chess-centric GLL essays in 2013).
—————–
Prediction The NIH, NSF, and DOE will increase the portion of their research budget that is allocated to small-business enterprises from (the present) 5%, first to 10% in 2013, and to 15% by 2016.
Theoretical Rationale Far more family-supporting jobs for young STEM researchers, and family-supporting jobs for young citizens in general, of the type described by (for example) Richard Hamming’s essay “You and Your Research,” will be created by this research funding reallocation than by any other strategy.
Political Rationale Congress will direct it because citizens want jobs.
Several small planets in the habitable zone including ones almost exactly the same size as Earth will be announced at the 221st AAS meeting which begins tomorrow Jan 6.
This prediction is based upon reading this link:
http://phl.upr.edu/pressreleases/mygoditsfullofplanetstheyshouldhavesentapoet
Ok, so as it turned out, 4 candidates less than twice the size of Earth in the HZ were announced. Three of them were around stars that are much cooler than the Sun, but one of them, at 1.5 times the size of Earth (radius not mass), was around a solar-like star with an orbital period of 242 days. However candidates require additional follow-up observations and analyses before they can be confirmed as planets, so we’re not there yet.
A video of the presentation is at http://www.ustream.tv/recorded/28311736
Some further details of latest Kepler news at http://kepler.nasa.gov/news/nasakeplernews/index.cfm?FuseAction=ShowNews&NewsID=244
Let me add to this post’s opening that most of the posts in my quantum debate with Aram were authored by Ken, who edited what we wrote, added his own angle, and, at times, understood Aram and me better than we did ourselves. Thanks, Ken!
Gil,
Thanks for pointing this out…dick
As a regular reader of your blog I’m aware that your given name is Richard, and I presume that Dick is a nickname. Had I not this knowledge, the above way of signing your post might appear somewhat uncongenial.
marbo,
Yes that does cause some issues.
Richard (Dick)
Wish 7 was already fulfilled last year… to some extent. See http://arxiv.org/abs/1212.2622
Prediction #2: “A Computer Scientist will win a Nobel Prize.”
Is there really a Nobel Prize in CS?
Programs do share a definition with Poems, namely “Just The Words That Do,” so it could be in Literature. But, not excluding the possibility of a Peace Prize, it would most likely be lumped under Economics, the way most mathematical contributions are recognized.
I wonder which discovery would most deserve a Peace Prize: that secure cryptography is possible, or that it’s impossible? Maybe the first option, after all…
Operating on the theory that Knowledge Is Virtue, anything that makes us genuinely smarter should make us generally better people.
Previous prediction #7 is a miss, as far as I know.
In particular, the cited paper does nothing to contradict the ETH, but shows that a certain known algorithm is “ETH-optimal”.
A new finite simple group? Surely you jest?
Yeah, we made it a jest, adding that clause last-minute to the original version. Or maybe not a jest…
Ken, is this a hint that a GLL post is in the offing, regarding the recent formal proof-check (in Coq) of the Feit-Thompson Theorem, aka the Odd-Order Theorem? For us verification and validation (V&V) fans, that would be terrific!
It is immensely reassuring, regarding the integrity of modern mathematics, that very long and very hard proofs that previously were verified by experts relying crucially on their personal intuition and experience are now being objectively verified by machine … with no big problems being uncovered (so far, anyway).
Beyond verification, no machine so far could have proved this theorem from scratch. And yet, it’s still not known whether such a machine is feasible. The status of full Artificial Intelligence is like that of P=NP: impossible to prove, impossible to disprove. It’s also reminiscent of the quantum debate…
My preceding comment wasn’t meant in the least to dismiss the breakthrough that Georges Gonthier and his team have recently made. I took advantage of their revolutionary achievement for mentioning Artificial Intelligence, as its historical goal of writing a fully intelligent program seems to be connected to the P=NP problem and even to the quantum computer debate. All three questions are expressions of some deep limitations of computer science which probably have roots in the fact that our brains are computers too.
Scientific method must observe its limitation — it is the limitation that makes the method.
Science, mathematics, and computer science each accepts the limitation inherent in a particular relation between the infinite and the finite — what can be known reproducibly, provably, and effectively in turn. Or is that saying the same thing three different ways?
Computer science accepts the limitation of effective procedure—of what can be done in a finite number of well-defined steps—this is the limitation that gives the discipline of programming its character.
The limitation of effective procedure seems to entail side-effects.
1) Being intelligent prevents you from defining what intelligence is.
2) Finding the fastest algorithm for a hard problem prevents you from proving its correctness.
3) Being made yourself of quantum particles prevents you from building a quantum computer.
Not sure I’d bet on any of those — but I do tend to believe that inquiry into inquiry is capable of clarifying its own nature, and that seems to mean that intelligence can clarify its own character.
For the chess thing, could Bayes Theorem be applied?
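It can, at least in a toy way. Here is a hedged sketch—not the model the post describes—of Bayes’ theorem turning an engine-agreement count into a posterior probability of cheating, under invented binomial likelihoods. The hard part in practice is justifying the prior and the per-hypothesis match rates.

```python
from math import comb

def posterior_cheating(matches, n, p_honest, p_cheat, prior=0.01):
    """P(cheating | data): binomial likelihood under each hypothesis,
    combined with a prior probability of cheating via Bayes' theorem."""
    like_h = comb(n, matches) * p_honest**matches * (1 - p_honest)**(n - matches)
    like_c = comb(n, matches) * p_cheat**matches * (1 - p_cheat)**(n - matches)
    num = like_c * prior
    return num / (num + like_h * (1 - prior))

# 45 engine matches out of 50 moves; suppose honest players of this strength
# match ~55% of the time and cheaters ~90% (all numbers invented).
post = posterior_cheating(45, 50, 0.55, 0.90)
```

This ignores everything that makes the real problem hard—forced moves, varying position difficulty, multiple near-equal engine choices—which is exactly what a serious model has to handle.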