
Predictions and Principles

January 4, 2013


With a year-end thank-you to Ken


Ken Regan is the co-editor of GLL and is one of the internationally-ranked chess players in the US. He has won the city championship of Buffalo every time he has played. He is also a great friend and great researcher. His specialty is of course computational complexity.

Today I want to thank Ken personally for joining the GLL team and making this year one of our best.

Ken brings a unique perspective to everything that we do here. He is a wonderful writer—he has it in his DNA—his late dad was a professional writer his whole life. And his wife, Debbie, is an English teacher. Beyond that he is schooled in language: he knows English and its crazy rules better then—oops, than—I ever will, and is fluent to various degrees in more languages than I can recall.

If the pieces that we try to write on so many varied topics are half-way readable, it is all Ken's doing.

Cheating and Predictions

One of Ken's many projects outside of GLL is helping track down cases of players alleged to have cheated at chess. The basic point is that a standard laptop running a standard chess program plays better than anyone in the world. So if a player can get access to a chess program during a game, they have a huge advantage.

Some players get caught “red-handed” using quite obvious methods of relaying moves from someone who has the laptop—one was even caught looking at his game on a smartphone. These cases are not very interesting from a technical point of view. The really interesting ones, the ones that Ken works on, are based at least partially on statistical scoring methods that Ken has developed and is still improving.

The simple idea is to see how often a player makes the same move as the machine. For the moment let’s assume that the machine is always making the best move. The critical question is this:

If the player agrees with the machine too often, does that mean that cheating is happening?

Well, the answer is yes and no. The problem is many-fold, but here is one very interesting issue: what if the player agrees with the machine mostly in positions where the moves are forced or almost forced? Even I—a very weak player—would recapture a piece or move my king out of check. Of course, if there are choices of how to recapture or how to move the king, then there is some information available. Sometimes the wrong recapture or king move could be a disaster. But if there is only one move, or even if there are several that are all about equal, you can see how the scoring becomes difficult. How to quantify this is Ken's basic idea.

Ken’s model falls broadly into the category of predictive analytics. This doesn’t mean that Ken tries to predict individual chess moves. Instead his model aims to calculate probabilities for the moves—based on the skill of the player making them—that enable predicting means and variances for large enough aggregates of moves. The rate of agreements with a computer is one stat he can predict—for a non-cheating player. The rate of blunders is another. Predicting the rate of blunders, depending on the player and the positions faced on the board, is like how insurance and planning agencies use predictive analytics for risk assessment. He has posted a two-page non-technical description and a six-page overview.
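
As a small illustration of the aggregation step (emphatically not Ken's actual model), one can treat each move as an independent yes/no trial whose probability of matching the engine depends on the player and the position. The predicted number of agreements is then the sum of those probabilities, the variance is the sum of p(1-p), and an observed count can be turned into a z-score. The per-move probabilities below are made up; producing realistic ones from the player's rating and the engine's evaluations is the hard part that Ken's model supplies.

```python
import math

def agreement_z_score(match_probs, observed_matches):
    """Given per-move probabilities that an honest player of a given skill
    would pick the engine's move, predict the mean and standard deviation of
    the total number of agreements, and return the z-score of the observed
    count.  Moves are treated as independent Bernoulli trials."""
    mean = sum(match_probs)
    variance = sum(p * (1 - p) for p in match_probs)
    sigma = math.sqrt(variance)
    z = (observed_matches - mean) / sigma
    return mean, sigma, z

# Hypothetical numbers: forced or near-forced moves get probabilities near 1,
# genuinely open positions get much lower ones.
probs = [0.95, 0.90, 0.35, 0.40, 0.85, 0.30, 0.45, 0.92, 0.38, 0.50]
mean, sigma, z = agreement_z_score(probs, observed_matches=9)
print(f"expected {mean:.1f} +/- {sigma:.1f} agreements, observed 9, z = {z:.2f}")
```

Ten moves prove nothing either way, of course; such a test only gains statistical power over hundreds of moves, and it handles the forced-move issue above by assigning those moves probabilities close to 1, so they add little to the variance.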

A Recent Case

Ken has spent yesterday and today dealing with a case he was informed about on New Year's Eve while on vacation. A player rated about 150 points lower than Ken beat four grandmasters rated 150 or more points higher than Ken. After his third such win he was searched to the skin, on suspicion that he had implanted chips, but no physical evidence was found.

Wow. Now that is getting pretty cool. More on this in the future, but I cannot resist wondering: would I have a chip implanted if it helped me? Ken recalls that on his transatlantic flight to attend the 1994 IFIP conference in Hamburg, he was seated next to a Columbia University professor who was experimenting with embedded devices for sending images to the brains of blind people. Here is an article from last April on the state of this art.

The kind of chip I have in mind—and would love to have in my mind—would help with mathematics. Not only numerical and symbolic calculation, but also inference. We could even go further, as Stephen Wolfram did in progressing from his Mathematica program to his Wolfram Alpha inference engine. We can already put Wolfram Alpha on a chip; can we embed it? And if it were fair to use for math, could we use it for chess? These are interesting questions to think about, but perhaps best left for later.

Prediction Principles

Wolfram recently gave an interview on using his inference methods to give “A New Kind of Prediction.” The main idea is that rather than trying to write down the single best formal equations and then solving them and extrapolating from the data you have in hand, one should instead ask the question,

[What is] the space with all possible models that you can imagine using?

He begins with what one can learn from models expressed by the shortest and simplest computer programs, not necessarily the cellular automata featured in his book A New Kind of Science, but programs that basic. I’ve talked with Ken about possible ways to simplify his chess model, but he already thinks it is the simplest one that captures his basic idea.
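
To give a concrete, if toy-sized, flavor of searching "the space with all possible models" (this is only a sketch in the spirit of the idea, not Wolfram's actual system), one can enumerate all 256 elementary cellular-automaton rules, among the simplest programs there are, and keep whichever rule best reproduces an observed bit sequence. The "observed" data below is itself generated by rule 110, standing in for an unknown process.

```python
def step(row, rule):
    """One step of an elementary cellular automaton (with wraparound)."""
    n = len(row)
    return [(rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

def center_trace(rule, width=61, steps=25):
    """Trace of the center cell over time, starting from a single 1."""
    row = [0] * width
    row[width // 2] = 1
    trace = []
    for _ in range(steps):
        trace.append(row[width // 2])
        row = step(row, rule)
    return trace

# "Observed" data, pretending we do not know where it came from.
data = center_trace(110)

# Exhaustively search the space of the 256 simplest models for the best fit.
best = max(range(256),
           key=lambda r: sum(a == b for a, b in zip(center_trace(r), data)))
print("best-fitting rule:", best)
```

Real data would presumably call for richer program spaces and some penalty on model complexity, but the enumerate-and-score loop is the essence of the approach.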

That said, we have a simple model for the predictions made at GLL: “De Summo Capite,” which Ken’s daughter gives as Latin for “Off the Top of the Head.” Here’s how we did last year, followed by our predictions for 2013.

Review of 2012 Predictions

Can we take credit for predicting that the world wouldn’t end, based on the tenor of last year’s intro? Well, we wouldn’t have been around to pay up if we’d lost.

  1. No circuit lower bound of { 1000n} or better will be proved for SAT. Of course.

  2. A Computer Scientist will win a Nobel Prize. Closer, but still a miss. Serge Haroche and David Wineland won in physics for experiments that include stepping stones toward quantum computers. Alvin Roth and Lloyd Shapley won in economics for stable-allocation theory; Shapley has a Mathematics affiliation but not CS.

  3. Someone other than a Williams will release a headline-making result in November. Since our headlines don’t count, and the {O(mn)}-time max-flow algorithm really dates to August, a miss.

  4. A new set-theoretical relation will be found between a class in the general range of {\mathsf{BQP}} or {\mathsf{SZK}} and one just above {\mathsf{NP}} in the polynomial hierarchy. Apparently not. Closest we find is this paper by Mohammad Mahmoody and David Xiao, “Languages with Efficient Zero-Knowledge PCPs are in {\mathsf{SZK}},” while the simplest new complexity-class relation seems to be this paper by Tsuyoshi Ito and Thomas Vidick showing that {\mathsf{MIP^* = NEXP}}.

  5. Tangible new evidence against the Unique Games Conjecture will be posted (i.e., published at least on the arXiv) by Dec. 21, 2012. We claim yes, for this paper with many authors including Aram Harrow.

  6. Tangible new evidence supporting the Unique Games Conjecture will be posted by Dec. 21, 2012. Again we claim yes, for this FOCS 2012 paper—oh but cheating, it was on the ArXiv in late 2011. Our basis for both claims is this Sept. 2012 survey by Boaz Barak, especially on pages 4–5.

  7. Tangible new evidence against the Exponential Time Hypothesis will be posted (i.e., published at least on the arXiv) by Dec. 21, 2012. Unclear? We do not know how to interpret this SODA 2013 paper by Marek Cygan, Marcin Pilipczuk, and Michal Pilipczuk.

  8. Tangible new evidence supporting the Exponential Time Hypothesis will be posted by Dec. 21, 2012. A miss?

    For a general science bent, we offered:

  9. The Higgs boson will close the year with confidence above {4\sigma} from a single team, but strictly less than {5\sigma} from any one team. Too conservative. The July 4th announcement had 5.0 and 4.9 sigma from the respective teams, and stock in the Higgs closed the year at {7\sigma}.

  10. An Earth-sized planet will be detected orbiting within the habitable zone of its single star. (Currently “Earth-sized” and “habitable zone” have been achieved by different exo-planets.) Not yet. We quote this December 19, 2012 press release by the University of Hertfordshire:

One of the planets lies in the habitable zone of the star and has a mass around five times that of Earth, making it the smallest planet found to be orbiting in the habitable zone of any Sun-like star.

Five times the Earth is too big. We could of course always claim Earth itself as an instance. Calling the last a miss, and letting partial credit on 7 offset 1 being a “gimme,” we say 3-for-10.

Predictions For 2013

Well we like to have the gimme, so we begin 2013 with the same first two, but expand to ten others. We decided not to repeat the unique-games and exponential-time ones.

  1. No circuit lower bound of { 1000n} or better will be proved for SAT. Actually let’s raise it to {2013n}, in the spirit of problems found on the Putnam Exam: You can guess which year this was from.

    [Image: Putnam exam problem (exam2010)]

  2. A Computer Scientist will win a Nobel Prize.

  3. An important advance will be made on understanding barriers either to proving {\mathsf{P = NP}} or proving the opposite, {\mathsf{P \neq NP}}.

  4. A new simple group will be discovered. By the classification theorem, such a group should be infinite, but will not be.
  5. Another gap will be found in Shinichi Mochizuki’s claimed proof of the ABC Conjecture. (Note that Mochizuki released updates to his papers on Dec. 16th, including acknowledgments to Vesselin Dimitrov and Akshay Venkatesh who had found a gap in two of them.)
  6. The ABC claimed proof will be found to be correct.

  7. Experiments scaling up boson sampling from {3 \times 3} to {4 \times 4} permanents will be published by year’s end. (A short sketch of the permanent computation behind such experiments appears right after this list.)

  8. A new conditional relation will be found tying the difficulty of scaling up boson sampling to that of scaling up full universal quantum computation.

  9. A best seller will be published by a noted blogger on the {\mathsf{P = NP}} question.
  10. Work will begin on a major motion picture whose plot is based on finding a polynomial time algorithm for factoring. It will audition Angelina Jolie as the brilliant mathematician who solves the problem and Tom Hanks as the villain.
  11. Noted progress toward solving another Clay Problem will be released on the ArXiv this year.
  12. A planet no more than twice Earth’s size will be detected orbiting within the habitable zone of its single star.
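
An aside on what prediction 7 involves, as promised in the list above: in the Aaronson-Arkhipov framework, the probability of each boson-sampling outcome is proportional to the squared absolute value of the permanent of a small submatrix of the interferometer's unitary. Exact classical evaluation of a permanent by Ryser's formula already takes time exponential in n, which is one reason larger instances are expected to be hard to simulate classically. Here is a minimal sketch with illustrative numbers, not an actual unitary:

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula;
    time exponential in n (roughly 2^n * n^2 as coded here)."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Illustrative 3x3 complex matrix; in a real experiment the submatrix would be
# cut out of the interferometer's unitary according to the input/output modes.
A = [[0.5 + 0.1j, 0.3 - 0.2j, 0.2 + 0.4j],
     [0.1 + 0.3j, 0.6 + 0.0j, 0.2 - 0.1j],
     [0.4 - 0.2j, 0.1 + 0.2j, 0.5 + 0.3j]]
print("|Per(A)|^2 =", abs(permanent(A)) ** 2)
```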

Open Problems

So how do you think we did with our predictions? Also make some of your own for the record—you probably will do better than we do.

39 Comments
  1. holysunlight permalink
    January 4, 2013 8:23 pm

    No. 9: Are you planning one, to make it rather self-fulfilling? ;)
    No. 12: Ambitious, but I hope for that one, too!

    Sadly, I’m not a mathematician, so I fail to understand most of your Blog’s topics and cannot comment on those predictions. ;)

  2. January 5, 2013 1:48 am

    12 and 3 sound the same and make us wonder if you have any scoops

  3. January 5, 2013 1:48 am

    I mean 11 and 3

    • January 5, 2013 10:17 pm

      11 and 3 are complementary. For instance, the TSP paper we covered can be an example of 3, but it really doesn’t make progress toward resolving P vs. NP either way. I’m not even sure people would regard it as substantial evidence for P != NP.

      Yes, we have Lance in mind as a possibility for 9, but saying “best seller” still makes it a nontrivial prediction—it would have to hit the mark of other popular books on Math and CS to win out. Say Charles Seife’s book Zero, or John Derbyshire’s Prime Obsession, as benchmarks. Our wording also allows a more-general tech blogger to expound on P and NP.

  4. January 5, 2013 2:39 am

    Pretty exciting predictions! As a veteran skeptic-skeptic, I am skeptical of #8. I think the (theoretical) prospects of scaling up full universal QC are fine, and that making boson sampling fault tolerant will simply be a separate question.

    I am happy to give a conjecture about the complexity of boson sampling (with or without noise), though, for which I don’t think I’m going out on much of a limb.

    My conjecture is that:
    BPP < BPP + NoisyBosonSampling < BPP + BosonSampling < BQP
    where "<" means strict inclusion, BosonSampling is what's described in the Aaronson-Arkhipov paper and NoisyBosonSampling means the same with an arbitrarily small constant noise rate at each beam-splitter.

    Note that this is a somewhat finer-grained version of your conjecture. If my conjecture holds then NoisyBosonSampling can be "scaled up" in that it requires superpolynomial classical resources to simulate, but it would in turn require superpolynomial overhead to simulate full BosonSampling.

    As a refinement of my conjecture, I further conjecture that no part of my first conjecture will be proved or disproved in 2013. I'm pretty confident about this last conjecture.

    • John Sidles permalink
      January 5, 2013 9:32 am

      Aram, I would be very interested in the role of verification and validation in your conjecture.

      ———————–
      Broad Objective  Increase the practical applicability of complexity theory.

      ———————–
      Strategy (per Hartmanis)  Reduce the formal role of (non-existent) oracles in complexity classification, by increasing the formal role of verification and validation requirements (V&V per e.g. Boehm, “Verifying and validating software requirements and design specifications,” IEEE Software, 1984).

      ———————–
      Question  What resources are required to strongly/weakly verify the data-set of a BosonSampling experiment (noisy or otherwise), where “verify” is understood two ways:

      Strong verification  A data-set of unbounded length, input to a decision TM of unbounded duration, is indistinguishable from a BosonSampling process.

      Bounded verification  A data-set of bounded length, input to a decision TM provably in P, is indistinguishable from a BosonSampling process.

      ———————–
      Discussion  One of Juris Hartmanis’ points (as I understand them) is that bounded verification (and computational simulations whose output is boundedly verifiable) suffices for practical complexity-theoretic purposes.

      One of many great virtues of BosonSampling is that it lends itself outstandingly well to theorem-driven explorations of this V&V Complexity Zoo.

      And who knows? The algorithmic animals of the V&V Complexity Zoo might prove to be far more open to view (and therefore, their cages easier to label!) than the still-mysterious animals of the Oracle Complexity Zoo.

      • John Sidles permalink
        January 5, 2013 12:40 pm

        To make the above point metaphorically, citizens who visit the V&V Complexity Zoo find that all of the algorithmic animals are completely visible, and the natural history of every algorithmic species in every cage is explained by rational curators who are eager (but boring) lecturers.

        Whereas citizens who visit the Oracular Complexity Zoo find cages filled with fog, such that even the boundaries between cages cannot be discerned, and the natural history of the algorithmic animals within each cage is known only to oracular curators who cannot verifiably account (even in principle) for their knowledge. Ouch!

        For purposes of learning about algorithmic animals, most citizens will prefer to visit the V&V Zoo. And yet, almost surely, the Oracular Complexity Zoo will always contain algorithmic species that the V&V Zoo cannot (yet? or ever?) display; thus *both* zoos are well-worth supporting.

        One of the most enjoyable aspects of the BosonSampling problem is that, during 2013, *both* zoos will be developing high-visibility BosonSampling exhibits … it would be terrific if this could be done as a coordinated effort!

        ———
        Prediction  Much public interest in 2013 regarding the new BosonSampling exhibits in both the V&V Complexity Zoo and the Oracular Complexity Zoo! :)

      • January 6, 2013 1:31 pm

        John: I’m not sure I have any special insights here. I’m also not sure how much complexity theory resembles traditional (natural) sciences with their theoretical and experimental sides. I mean, there is a role for experiment, but it’s pretty clearly limited. And the BosonSampling experiments are nice, but I don’t think they tell us much (except for refuting some kinds of skepticism that I think were not very believable in the first place, and skeptics could always say that their arguments hold in the large n limit anyway). One could say something similar about factoring experiments. They tell us a lot about the engineering challenges, and confirm that the systems do work in the way our theory predicts, but are generally not probing new physics, and couldn’t even in principle examine new complexity theory.

      • John Sidles permalink
        January 6, 2013 5:13 pm

        Aram, although point-by-point I largely agree, my global conclusion is the opposite! :)

    • January 5, 2013 10:52 am

      Aram, It seems that it is not the case that BosonSampling <= BQP. Maybe you mean QuantumSampling rather than BQP. See the comments in http://cstheory.stackexchange.com/questions/14804/the-cozy-neighborhoods-of-p-and-of-np-hard

      • January 6, 2013 1:24 pm

        Gil: Yes, thanks, good point.
        I guess BPP/BQP should be replaced by their sampling versions throughout what I wrote. Feels like I’m quickly regressing to a physicist’s approach to complexity theory. :)

  5. John Sidles permalink
    January 5, 2013 8:47 am

    I too am a huge fan of Ken Regan as a broad-spectrum researcher and outstandingly graceful writer. Thank you, Ken Regan! (and in particular, as a fan of computer chess, please let me selfishly hope for more of Ken’s outstandingly lucid chess-centric GLL essays in 2013).

    —————–

    Prediction  The NIH, NSF, and DOE will increase the portion of their research budget that is allocated to small-business enterprises from (the present) 5%, first to 10% in 2013, and to 15% by 2016.

    Theoretical Rationale  Far more family-supporting jobs for young STEM researchers, and family-supporting jobs for young citizens in general, of the type described by (for example) Richard Hamming’s essay You and your research, will be created by this research funding reallocation than by any other strategy.

    Political Rationale  Congress will direct it because citizens want jobs.

  6. iengiregibiegeinkw permalink
    January 5, 2013 10:11 am

    Several small planets in the habitable zone including ones almost exactly the same size as Earth will be announced at the 221st AAS meeting which begins tomorrow Jan 6.

    This prediction is based upon reading this link:
    http://phl.upr.edu/press-releases/mygoditsfullofplanetstheyshouldhavesentapoet

    • etgbghjgwrekghb permalink
      January 7, 2013 6:30 pm

      Ok, so as it turned out, 4 candidates less than twice the size of Earth in the HZ were announced. Three of them were around stars that are much cooler than the Sun, but one of them, at 1.5 times the size of Earth (radius, not mass), was around a solar-like star with an orbital period of 242 days. However, the candidates require additional follow-up observations and analyses before they can be confirmed as planets, so we’re not there yet.

      A video of the presentation is at http://www.ustream.tv/recorded/28311736

      Some further details of latest Kepler news at http://kepler.nasa.gov/news/nasakeplernews/index.cfm?FuseAction=ShowNews&NewsID=244

  7. January 5, 2013 11:47 am

    Let me add to this post’s opening that most of the posts in my quantum debate with Aram were authored by Ken, who edited what we wrote, added his own angle, and, at times, understood Aram and me better than we did ourselves. Thanks, Ken!

    • rjlipton permalink*
      January 6, 2013 10:05 am

      Gil,

      Thanks for pointing this out…dick

      • marbo permalink
        January 8, 2013 7:21 am

        As a regular reader of your blog I’m aware that your given name is Richard, and I presume that Dick is a nickname. Had I not this knowledge the above way of signing your post might appear somewhat uncongenial :)

      • rjlipton permalink*
        January 9, 2013 9:30 am

        marbo,

        Yes that does cause some issues.

        Richard (Dick)

  8. January 5, 2013 4:31 pm

    Wish 7 was already fulfilled last year… to some extent. See http://arxiv.org/abs/1212.2622

  9. Javaid Aslam permalink
    January 5, 2013 4:35 pm

    Prediction #2: “A Computer Scientist will win a Nobel Prize.”
    Is there really a Nobel Prize in CS?

    • January 5, 2013 7:38 pm

      Programs do share a definition with Poems, namely, Just The Words That Do, so it could be in Literature. But, not excluding the possibility of a Peace Prize, it would most likely be lumped under Economics, the way most mathematical contributions are recognized.

      • Serge permalink
        January 7, 2013 12:12 pm

        I wonder which discovery would most deserve a Peace Prize: that secure cryptography is possible, or that it’s impossible? Maybe the first option, after all…

      • January 7, 2013 12:30 pm

        Operating on the theory that Knowledge Is Virtue, anything that makes us genuinely smarter should make us generally better people.

  10. Magnus W permalink
    January 6, 2013 11:48 am

    Previous prediction #7 is a miss, as far as I know.

    In particular, the cited paper does nothing to contradict the ETH, but shows that a certain known algorithm is “ETH-optimal”.

  11. January 6, 2013 6:09 pm

    A new finite simple group? Surely you jest?

    • January 7, 2013 1:23 am

      Yeah, we made it a jest, adding that clause last-minute to the original version :-). Or maybe not a jest…

      • John Sidles permalink
        January 7, 2013 5:07 am

        Ken, is this a hint that a GLL post is in the offing, regarding the recent formal proof-check (in Coq) of the Feit-Thompson Theorem, aka the Odd-Order Theorem? For us verification and validation (V&V) fans, that would be terrific! :)

        It is immensely reassuring regarding the integrity of modern mathematics, that very long/very hard proofs, that previously were verified by experts crucially upon their personal intuition-and-experience, now are being objectively verified by machine … with no big problems being uncovered (so far, anyway).

      • Serge permalink
        January 7, 2013 8:14 pm

        Beyond verification, no machine so far could have proved this theorem from scratch. And yet, it’s still not known whether such a machine is feasible. The status of full Artificial Intelligence is like that of P=NP: impossible to prove, impossible to disprove. It’s also reminiscent of the quantum debate…

  12. Serge permalink
    January 12, 2013 7:41 am

    My preceding comment wasn’t meant in the least to dismiss the breakthrough that Georges Gonthier and his team have recently made. I took advantage of their revolutionary achievement to mention Artificial Intelligence, as its historical goal of writing a fully intelligent program seems to be connected to the P=NP problem and even to the quantum computer debate. All three questions are expressions of some deep limitations of computer science which probably have roots in the fact that our brains are computers too.

    • January 13, 2013 12:30 am

      Scientific method must observe its limitation — it is the limitation that makes the method.

      Science, mathematics, and computer science each accepts the limitation inherent in a particular relation between the infinite and the finite — what can be known reproducibly, provably, and effectively in turn. Or is that saying the same thing three different ways?

      Computer science accepts the limitation of effective procedure — of what can be done in a finite number of well-defined steps — this is the limitation that gives the discipline of programming its character.

      • Serge permalink
        January 13, 2013 2:24 pm

        The limitation of effective procedure seems to entail side-effects.
        1) Being intelligent prevents you from defining what intelligence is.
        2) Finding the fastest algorithm for a hard problem prevents you from proving its correctness.
        3) Being made yourself of quantum particles prevents you from building a quantum computer.

      • January 13, 2013 3:00 pm

        Not sure I’d bet on any of those — but I do tend to believe that inquiry into inquiry is capable of clarifying its own nature, and that seems to mean that intelligence can clarify its own character.

  13. January 16, 2013 1:04 pm

    For the chess thing, could Bayes Theorem be applied?

Trackbacks

  1. Game On, Garth « Pink Iguana
  2. The Crown Game Affair « Gödel’s Lost Letter and P=NP
  3. Finite simple groups | Peter Cameron's Blog
  4. Teaching Helps Research | Gödel's Lost Letter and P=NP
