
Predictions: Past And Future

January 2, 2011


Predictions: the 2010 results and the new ones for 2011

Michel de Nostredame, better known as Nostradamus, is famous for making about a thousand predictions. Some claim that he got about half of them right; so he also got about half wrong.

Today I want to look over last year’s predictions and also make predictions for the new year, 2011.

In 1555 Nostradamus published the first of ten books; each book has 100 predictions. They were written in verse, in a form called the quatrain. His brilliant insight was to make predictions that were vague, open to many interpretations, and often about the far future. These are great ideas if you wish to make predictions, but not to make incorrect ones.

I am unafraid and will make my predictions about the next year. Even better, I will go over the list of predictions from last year and see how I did.

I have often been on committees, of various kinds, to help make a ten-year strategic plan. These committees are usually fun, and they allow you to think about the future, about the directions of the field, essentially making predictions. One thing that I have learned is that the report from the previous committee, which was written ten years earlier, is never available. I have asked many times to see how well the previous committee did, to help gauge how we might do. The answer is always a polite no—or a we will get the report to you soon, but we never actually get it. Oh well.

My 2010 Predictions

Here are the predictions from last year, together with the results.

  1. A problem on Stephen Smale’s list of open problems will be solved. Yes—essentially: See this paper.
  2. No circuit lower bound of {1000n} or better will be proved for SAT. Yes—this was an easy prediction.
  3. A quantum computer will solve a non-trivial problem. No.
  4. A Computer Scientist will win a Nobel Prize. No.
  5. The Goldbach Conjecture will finally be solved—negatively. The number

    \displaystyle  31^{119^{25253}} - 519^{17^{98373}} + 1731^{{636627}^{9111}} - 19

    will be shown not to be the sum of two primes. No—but could be true.

  6. I will have a paper rejected by a conference. Yes and yes.
  7. Spam will grow to be 100.001 per cent of all e-mail. The 0.001 is due to sampling error. Yes—essentially.
  8. Someone will prove that {\mathsf{NTIME}(2^n)} is not contained in {\mathsf{ACC}}. Yes.
  9. Someone will discover a polynomial time factoring algorithm—but will not tell anyone. Yes/No—unclear.
  10. On April {1^{st}} over 3 billion people will type “google” into the Google search box, then precisely at 12:00 GMT they will all hit the enter key. This will cause the Internet to crash, and it will be down for 17 hours and 12 minutes. No.

  11. At least one of my predictions will be wrong, but not all of them. Yes.

Okay, I cheated. I never made the prediction about the {\mathsf{ACC}} lower bound, but I added it to make my point about the need to check previous predictions.

Total outcome out of the ten predictions: four yes’s, three no’s, three on the fence. Not great, not terrible, a fair score. No? Per the main point of my partner’s ongoing chess research, one needs to compare against some reasonable “prior probability” of each prediction. Some of the hits were like obvious chess moves, but I’ll take credit for the Smale prediction, and give back some on the quantum computer and Nobel Prize misses.

My 2011 Predictions

My predictions for the coming year are the following:

  1. No circuit lower bound of {1000n} or better will be proved for SAT.

  2. A Computer Scientist will win a Nobel Prize.

These are two holdovers from last year. I will let them “ride” and make them again. Note there is no CS Nobel, so the latter means that someone trained in, or currently employed in, computer science will be among those cited for a science or economics prize. Note also that with prediction 10. below I am “doubling down” on the past success.

  3. The complexity class {\mathsf{BQP}}, bounded-error quantum polynomial time, will be shown to lie in the polynomial hierarchy. This is rather nervy, since it runs contrary to the clear drift of this paper, this major followup, and this recently revised paper.
  4. A conceptually new algorithm will be discovered for an important practical problem; it will be a breakthrough in asymptotic time but in concrete terms will be even more galactic than previous ones.
  5. At least five claims that {\mathsf{P}=\mathsf{NP}} and five that {\mathsf{P} \neq \mathsf{NP}} will be made.
  6. Graph Isomorphism will be proved to be reducible to the Graph Reconstruction Problem. For background, see this paper.
  7. Apple will invent a new type of alarm clock: the iClock. Pressing the “snooze” button will take you back one hour via your own personal time travel device. It will require a monthly subscription fee and will be wildly successful—however, see this for a caveat.
  8. The world’s population at exactly 12:00am GMT on April {1^{st}} will be even.
  9. A “provably” secure crypto-system will be broken.

  10. A Clay problem will be solved by two independent researchers at about the same time.

Open Problems

What are your best predictions? I further predict that next year many of my predictions will be wrong.

Happy New Year.

[fixed typos]

28 Comments
  1. January 2, 2011 10:29 pm

    Oops—looks like my first New Year’s resolution will have to be to itemize limitations of the LaTeX-conversion scripts. They didn’t understand my use of fixed “enumerate” labels [1.]–[2.] and [3.]–[10.].

    The predictions are all Dick’s—he took out my qualifying suggestion of saying “Clay or Smale problem” in 10., so I’d say he’s well more than doubling down… 🙂

  2. Ross Snider permalink
    January 2, 2011 10:37 pm

    Concerning your prediction “4. 6.”:

    I have had the wonderful experience of being taught about the graph reconstruction problem by Professor Radziszowski himself (who is perhaps the best professor in TCS I’ve had the blessing to have). It was very interesting to learn that a Master’s Thesis he advised dealt with computing existential and universal graph reconstruction numbers for all graphs up to a certain size (I think somewhere around 10 vertices). In this computation, a table is built composed of graph cards of G and graphs of size |G|. Each entry of the table is a subgraph isomorphism problem and to create the rows and columns of the table (checking for redundancies) one must compute graph isomorphism.

    https://ritdml.rit.edu/handle/1850/2773

    If you look closely at the existential reconstruction number table you should note that for all graphs up to cardinality 10, their existential reconstruction number is always three if and only if |G| is prime. Of course this small table is hardly enough to make a conjecture, but there we have it. My prediction for the next year is that someone will compute the existential graph reconstruction numbers for all graphs with 11 vertices and that every graph will have a number equal to three.
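
    For readers who want to experiment with this, here is a minimal sketch of the basic deck computation underlying such a table. It is only an added illustration, assuming Python with the networkx library, and is not code from the thesis: it builds the “cards” of a graph and tests whether two graphs have isomorphic decks, with each comparison being a graph-isomorphism call.

      import networkx as nx

      def deck(G):
          # The deck of G: one "card" G - v for every vertex v of G.
          return [G.subgraph(set(G.nodes) - {v}).copy() for v in G.nodes]

      def same_deck(G, H):
          # Test whether G and H have isomorphic decks.  Since isomorphism is an
          # equivalence relation, greedily pairing off isomorphic cards suffices.
          if G.number_of_nodes() != H.number_of_nodes():
              return False
          remaining = deck(H)
          for card in deck(G):
              match = next((c for c in remaining if nx.is_isomorphic(card, c)), None)
              if match is None:
                  return False
              remaining.remove(match)
          return True

      # Tiny usage check: the path and the star on 4 vertices have different decks.
      P4, S3 = nx.path_graph(4), nx.star_graph(3)
      print(same_deck(P4, S3))   # False
      print(same_deck(P4, P4))   # True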

  3. delta permalink
    January 2, 2011 11:50 pm

    Perhaps I’m missing some very arch humor, but (re: 2010 #7) my understanding is that spam actually “significantly declined” last year, to a low of 74% in December.

    http://www.net-security.org/secworld.php?id=10381

    • rjlipton permalink*
      January 3, 2011 9:12 am

      delta,

      I think you are right. But it is still pretty large.

  4. January 3, 2011 1:09 am

    2010 #3 has been done. It just hasn’t been published yet, since it’s waiting on a precursor paper to get published first (the one proving it is both quantum and a computer that uses that quantumness to compute), and/or for me to finish writing up the supplementary material. Okay, maybe “non-trivial” is a stretch, but it’s solved other, significantly less trivial problems that are just way too practical for journals to care about.

    In other news, you may be interested in this off-the-wall, satirical take on peer review, what I’m calling “DAMN Cool Science”. 🙂

    • rjlipton permalink*
      January 3, 2011 9:14 am

      Neil,

      Sounds great. But the prediction was for last year. Still love to see the result.

      • January 4, 2011 2:23 am

        Aye, it was done last year, and one of the two papers was submitted last year; they just haven’t gotten published yet due to slow peer review.

        Alternatively, if random, 8-qubit problems count as non-trivial, there was this paper published, showing results for solving 800 random, 8-spin, 4 bit-of-precision Ising model instances at multiple temperatures. It doesn’t prove that it’s non-classical (though acting classically is physically impossible), but shows an excellent match to the quantum model simulations.

        Also, I’ve only just recently gotten accurate simulation of up to 16 qubits working, so comparing against accurate models for systems > 8 qubits wasn’t feasible before.

  5. Anonymous permalink
    January 3, 2011 1:50 am

    I will make a very easy prediction: there will be lots of wonderful posts on GLL. 🙂

    Happy New Year

  6. Arno Nym permalink
    January 3, 2011 3:43 am

    What is your motivation to predict 1. 3.? Is it based on some deep intuition of yours or just a challenge thrown out to get people away from conventional wisdom and to think through the contrarian point of view?

    • January 3, 2011 4:59 pm

      Dick and I have separate intuitions about this; I’ve backed off mine a little but we’ve just started talking about his. The basic problem is that Stockmeyer approximation of f and g does not help you approximate a difference (f(x) – g(x))/D, especially when the denominator D is small and you’re relying on a “promise” that the difference (is non-negative and) never exceeds D. My idea is to “morph” the representation in ways that may be quantumly illegal but conserve the range of f and g while raising D; Dick’s is more substantial relating to recent sparse-approximation papers. In both cases the hope is similar to the IP = PSPACE proof: concrete arithmetic avoids relativization issues.
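
      To make the cancellation difficulty concrete, here is a minimal numerical sketch with made-up values (an added illustration, not part of the argument above). Suppose an approximation scheme returns each of f(x) and g(x) to within 1% relative error, and

      \displaystyle  f(x) = 10^{6}, \qquad g(x) = 10^{6} - 1, \qquad f(x) - g(x) = 1.

      Then each reported value can be off by about 10^{4}, so the estimated difference can be wrong by roughly 2 \times 10^{4}, twenty thousand times the true difference of 1. Good relative approximations of f and g individually say essentially nothing about a small difference (f(x) - g(x))/D when the denominator D is small.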

      • January 4, 2011 3:23 am

        Complexity theorists are starting to sound like economists.

        We physicists never disagree. Does who do mysteriously disappear.

      • January 8, 2011 11:49 pm

        Dr. Tucci—
        Predictions of the Higgs mass? 🙂

        OK, I see a convergence of opinion that it’s between 114 and 153 GeV/c^2. But individual experiments give that only with 95 percent confidence—just like in economics :-).

        [note: Dr. Tucci meant to write “Those who do mysteriously disappear…”]

      • January 9, 2011 7:29 am

        “does who do” meant “those who do” but was used intentionally. I was trying to sound like a gangster, Damon Runyon like. Physicists often behave like gangsters 🙂 Also, I find the phrase “does who do” funny (I’m by no means the first one to use it) because it uses the verb “do” twice, and it’s a tautology.

  7. Carsten Milkau permalink
    January 3, 2011 4:24 am

    Happy New Year =)

    I may be repeating something already well known, since it received noticeable press attention, but I’d like to add that last year a physical implementation of quantum cryptography was broken (http://arxiv.org/abs/1005.2376v1). The attack exploited weaknesses of the underlying hardware, so the cryptosystem itself wasn’t actually broken, of course, but it shows that even such strong notions of security must be treated warily in practice.

    • rjlipton permalink*
      January 3, 2011 9:13 am

      Carsten,

      Believe we have talked about that last year.

      • Carsten Milkau permalink
        January 3, 2011 12:29 pm

        Shoot! Sorry, I should’ve checked. Oh well, next time!

  8. January 3, 2011 7:06 am

    Here are two predictions that are reasonably well-supported by the literature:

    (1) Simulation capabilities (both classical and quantum) will continue to double every nine months, not only for the coming year of 2011, but throughout the coming two decades of 2011-30.

    (2) In 2011, in physics, chemistry, and molecular biology, the ratio of “simulationist” hires of post-docs and faculty, to “wet-bench” hires of post-docs and faculty, will rise above unity in North America and Europe … and for at least the next twenty years this ratio will never again drop below unity.

    Prediction (1) is inspired largely by Robert Bixby’s Solving real-world linear programs: a decade and more of progress (2002), which documents a speed-up of 1.9×10^6 in linear programming capability between 1988 and 2002: the speed-up from better hardware is a factor of 800× and the speed-up from better algorithms is 2360×.
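
    As a quick check on those figures (just multiplying the quoted factors, nothing more):

    \displaystyle  800 \times 2360 = 1{,}888{,}000 \approx 1.9 \times 10^{6},

    so the hardware and algorithm contributions do compound to the overall speed-up Bixby reports.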

    Here the natural size-of-problem metric is the number of atoms/electrons/spins in the simulation, times the Schmidt rank of the state-space on which dynamical trajectories are simulated; it matters little whether the state-space is classical or quantum, discrete or continuous. In essence, Prediction (1) simply recognizes that well-established trends in simulation/computation capability are likely to continue.

    Prediction (2) similarly recognizes well-established trends whose raison d’être is well-summarized by a (wonderful) article that Dick and Ken reference in their Prediction (3), namely Scott Aaronson and Alex Arkhipov’s arXiv:1011.3245. As Scott and Alex remind us:

    Simulating quantum systems is one of the central computational problems of modern science, with applications from drug design to nanofabrication to nuclear physics …

    … The difficulty is that it is not clear how to interpret these systems as solving computational problems. For example, what is the “input” to a Bose-Einstein condensate?

    In other words, while these systems might be [hard] easy to simulate, we would not know how to justify that conclusion using the one formal tool (reductions) that is currently available to us.

    (emphasis as in the original). Here Scott and Alex’s original phrase “hard to simulate” has been amended to “easy to simulate” to accord with Bixby’s findings. Since Scott and Alex are expressing a Great Truth about simulation-versus-computation, we need not view this hard → easy amendment as a substantial alteration to their main idea … because an outstanding virtue of arXiv:1011.3245 (IMHO) is that it can be read in very many fruitful ways.

    The point is simply that physical systems (in physics, chemistry, or biology) that are not reducibly performing computations seem to be generically easy to simulate. Moreover, empirically it makes surprisingly little difference whether their nonreducible dynamics is classical, quantum, or a hybrid mix of the two. This is true of the specific instances that Scott and Alex cite (namely drug design, nanofabrication, and nuclear physics), and it is broadly true of condensed matter physics, physical chemistry, systems and synthetic biology, and most forms of classical and quantum systems engineering.

    Accompanying this inexorably exponentiating increase in simulation capability is an accelerating shift in hiring practices throughout the STEM enterprise. Whereas once the worst mistake a post-doc could make was to botch an experiment … nowadays an even worse mistake is to botch the simulations that plan a whole batch of experiments … experiments that will be carried out (in practice) largely by robots. Thus experimental groups that lack a skilled simulationist now find themselves unable to compete with experimental groups that have one.

    These shifts are creating a major opportunity for CS/CT/QIT to play a leading role throughout the STEM enterprise, provided that young people (especially) do not define their career options too narrowly. It seems to me that blogs like Gödel’s Lost Letter and P=NP are doing a terrific job of broadening the horizons of computer science and complexity theory, for which they deserve all of our appreciation & thanks.

  9. January 3, 2011 10:51 am

    Dick Lipton says: I have often been on committees, of various kinds, to help make a ten-year strategic plan. … I have asked many times to see how well the previous committee did, to help gauge how we might do. The answer is always a polite no–or a we will get the report to you soon, but we never actually get it. Oh well.

    For me, and I think for many folks, STEM roadmaps are deeply fascinating documents … here is a link to a directory containing two older ones: (1) The year 2003 International Technology Roadmap for Semiconductors (ITRS), and (2) The year 2002 and 2004 Quantum Information Science and Technology Roadmaps (QIST).

    The 2003 ITRS Roadmap looked ahead to technologies that would change the world:

    Since its inception in 1992, a basic premise of the [ITRS] Roadmap has been that continued scaling of microelectronics would further reduce the cost per function (historically, ~25% per year) and promote market growth for integrated circuits (averaging ~17% per year). Thus, the [ITRS] Roadmap has been put together in the spirit of a challenge–essentially, “What technical capabilities need to be developed for us to stay on Moore’s Law and the other trends?”

    With respect to multiple key technology milestones (for example half-pitch optical lithographic resolution of 32 nm by 2012), the 2003 ITRS Roadmap got it pretty much right.

    At first sight, the 2002 and 2004 QIST Roadmaps were less accurate. From the 2002 QIST Roadmap:

    By the year 2012, to implement a concatenated quantum error-correcting code. [This requires] on the order of 50 physical qubits, exercises multiple logical qubits through the full range of operations required for fault-tolerant QC in order to perform a simple instance of a relevant quantum algorithm, and approaches a natural experimental QC benchmark: the limits of full-scale simulation of a quantum computer by a conventional computer.

    If we think about it, we begin to appreciate that progress toward the QIST 50-qubit objective has been slower than foreseen, for a reason whose converse has similar fundamental significance for 21st century STEM enterprises as Moore’s Law progress in classical computation. Namely: the feasibility of full-scale simulation of quantum dynamical systems by conventional computers has turned out to be much greater than was foreseen by the 2002 and 2004 QIST Roadmaps.

    As students and entrepreneurs, in particular, arrive at a QIST-driven appreciation that noisy quantum dynamical systems are (empirically at least) similarly easy to simulate with classical computers as noisy classical systems … well … there is no obvious limit to the scale of enterprises that we can now begin to contemplate, with simulation-enabled vision, planning, and confidence similar to the 2003 ITRS Roadmap.

    This wholly unforeseen, global-scale, enterprise-enabling simulation capability is IMHO a tremendously valuable payoff from QIST-related research … in the near-term this capability is likely to prove to be substantially more valuable than any 50-qubit quantum computer.

  10. Curious permalink
    January 3, 2011 11:48 pm

    My prediction: There will be at least 10 BREAKTHROUGH result announcements by Complexity Theory bloggers in 2011.

  11. joro permalink
    January 4, 2011 5:50 am

    i would bet on a meta prediction: at least 1 of your predictions will be wrong.

  12. joro permalink
    January 4, 2011 6:12 am

    dick, if you want to make your predictions a spectator sport you can make online polls for them. 2 minutes of search turned up this example: http://www.twiigs.com/recent/Science/290000

  13. C.Curious permalink
    January 5, 2011 4:21 am

    Hi,

    May I ask about 2010 #9 : Someone will discover a polynomial time factoring algorithm—but will not tell anyone. Yes/No—unclear.

    I can understand why you wrote no, unless you have reasons that you did not reveal in your blog. I can also understand why this is unclear if the Yes and No statements coexist. But what led you to think that this is a yes, even partially?

    Thank you.

    • rjlipton permalink*
      January 5, 2011 9:09 pm

      C,

      I think factoring can be easy. So I think it is a maybe.

  14. January 5, 2011 5:42 am

    Throughout mathematical history, one hallmark of great theorems has been that they inspire higher levels of mathematical abstraction, in which broader & deeper versions of the theorem find a natural expression.

    Now the same trend toward higher levels of abstraction is becoming evident in the prediction business. Specifically, the International Roadmap Committee (IRC), which is the executive steering committee of the most fabulously successful STEM prediction enterprise in history, the International Technology Roadmap for Semiconductors (ITRS), has abstracted its best practices for prediction in an on-line document, The “More-than-Moore” White Paper.

    To summarize the IRC’s findings, the following five elements, known in the roadmap business as “FOM-LEP-WAT-SHR-ECO”, allow predictions that are sufficiently reliable and encompassing as to constitute a roadmap:

    (1) Figures Of [mathematical] Merit (FOM)
    (2) Laws of Expected [mathematical] Progress (LEP)
    (3) Wide Applicability of [proof/algorithm] Technology (WAT)
    (4) Willingness to SHaRe (SHR)
    (5) An Existing COmmunity (ECO)

    The role of the resulting roadmap is to establish, stabilize, and expand a “virtuous circle” of confidence, progress, and enterprise.

    In mathematics and computer science, are the elements necessary for reliably predictive roadmaps all present? Is there already in place a virtuous circle of mathematical confidence, progress, and enterprise? We can hope so! 🙂

  15. January 5, 2011 10:14 pm

    I’m waiting for the 2011 prediction about WikiLeaks: will it survive?

Trackbacks

  1. Polls And Predictions And P=NP « Gödel’s Lost Letter and P=NP
  2. Predictions For 2012 « Gödel’s Lost Letter and P=NP
  3. Predictions We Didn’t Make | Gödel's Lost Letter and P=NP
