Predictions We Didn’t Make
With wishes for a memorable New Year 2018
Muhammad Afzal Upal is Chair of the Computing and Information Science Department at Mercyhurst University. He works in machine learning and cognitive science, in particular on making inferences from textual data. In joint papers he has refined a quantitative approach to the idea of postdiction originally suggested by Walter Kintsch in the 1980s.
Today we review some postdictions from 2017 and wish everyone a Happy New Year 2018.
In a 2007 paper, the postdictability of a concept is defined as “the ease with which a concept’s inclusion in the text can be justified after the textual unit containing that concept has been read.” This contrasts with “the ease with which the occurrence of the concept can be predicted prior to the concept having been read.” The main equation defines the extent to which the concept $c$—or event, I may add—is memorable by

$$M(c) = D(c) - \kappa\, P(c),$$

where $P(c)$ is the prior likelihood of $c$ emerging, $D(c)$ is its postdictability, and $\kappa$ is a constant. It says that the concept is most memorable if you couldn’t have predicted it but after you see it you slap your forehead and say, “Ah, of course!” It relates to what makes ideas “stick.”
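Upal’s measure—postdictability discounted by prior predictability—can be sketched in a few lines of Python. The function name, symbol names, and the default value of kappa are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of Upal's memorability measure: a concept scores
# highest when it was hard to predict (low prior likelihood) but easy
# to justify after the fact (high postdictability).
def memorability(postdictability, prior, kappa=1.0):
    """M(c) = D(c) - kappa * P(c); high D and low P maximize M."""
    return postdictability - kappa * prior

# A surprising-but-justifiable concept beats a fully expected one:
surprise_but_justified = memorability(postdictability=0.9, prior=0.1)
fully_expected = memorability(postdictability=0.9, prior=0.9)
print(surprise_but_justified > fully_expected)  # True
```

The forehead-slap reading falls out directly: holding postdictability fixed, lowering the prior raises the score.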
Mercyhurst is in Erie, Pennsylvania. Erie had lots of snow this past week. Record-breaking snow. More than Buffalo usually gets. We had several relatives and friends who had to drive through it on their way to Michigan and Pittsburgh and points further south. Was that karma? coincidence with this post? memorable in a way that fits the framework?
And how about the Buffalo Bills making the playoffs after a miracle touchdown by the Cincinnati Bengals on fourth-and-12 from midfield in the final minute knocked out the Baltimore Ravens? In designing a two-week unit on data science for my department’s new team-taught Freshman Seminar on “The Internet,” I had the foresight to use years-since-a-team’s-last-playoff-win (not last playoff appearance) as the definition of “playoff drought” in activity examples. Hence—unless the Bills upset the Jacksonville Jaguars next Sunday—the local “nudge” of my materials will work equally well for next fall’s offering. Can one quantify my prescience as prediction? postdiction? Let’s consider some more-germane examples.
Some Standing Predictions
Last January we did not do a predictions or year-in-review post as we had done in all seven previous years. We were caught up in questions over László Babai’s graph isomorphism theorem and other matters. Several predictions were recurring, so let’s suppose we made them also for 2017:
- No circuit lower bound of 1000n or better will be proved for SAT: Well, that’s a freebie, but note this about how P=NP implies strong circuit lower bounds elsewhere.
- A computer scientist will win a Nobel Prize. No—indeed, less close than other years.
- At least five claims that P=NP and five that P≠NP will be made. We know of one of the latter and will say more about it below. Alas, Gerhard Woeginger’s page tracking these claims was not updated in 2017. We wish him the best.
- A “provably” secure crypto-system will be broken. For this one we don’t have to check any claims. We just pocket the ‘yes’ answer. Really, could you ever prove the opposite?
- An Earth-sized planet will be detected orbiting within the habitable zone of its single star. In 2017 they came in bunches.
- A Clay problem will be solved, or at least notable progress made. We are saddened to note that the institute’s founder, Landon Clay, passed away in August. For 2016 we might have noted a correspondence shown between part of the Yang-Mills problem and the ABC Conjecture. Burt Totaro wrote a nice survey last June about progress on the Tate Conjecture with implications for the Hodge and Birch and Swinnerton-Dyer problems. There was an interesting new approach to the Riemann Hypothesis via quantum Hamiltonians published last spring. Some very recent news on the Navier-Stokes equations might instead be a kind of “regress.” Maybe we should look for progress on our own problem, P=NP? You can tell us how notable it has been.
Postdictions From 2017
Since some of our perennial questions have entered a steady state, it is time to find new categories. A week-plus ago we noticed that Altmetric publishes a top-100 list based on their “Altmetric Attention Score” every November 15. So it is natural to suppose we postulated:
- A paper in computational complexity or algorithms will make the Altmetric Top 100 list.
The answer with regard to the 2017 list is “yes” but the reason is unfortunate—it is Norbert Blum’s incorrect paper coming in at #38. Blum gave a formal retraction and subsequent explanation, which we added as updates to our own item on the claim. The only (other) paper labeled “Information and Computer Sciences” is the AlphaGo Zero paper at #74. Actually, Blum’s paper was tagged “Research and Reproducibility.”
- An algorithm with zero supervision will unexpectedly demonstrate superiority in a human-centric capability.
AlphaGo Zero and most recently AlphaZero spring to mind. With much swallowing of pride from my having started out as a chess-player in the early 1970s when computers were minimal, I’m not sure that games of perfect information should ultimately be regarded as “human-centric.” Based on my current understanding of the AlphaZero paper and comments by Gwern Branwen in our previous post, what strikes me as the most stunning fact is the following:
Chess can be encoded into a ‘kernel’ of well under 1GB such that kernel + small search comprehensively outperforms an almost 1,000x larger search.
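The “almost 1,000x” can be sanity-checked with a line of arithmetic. The search rates below are approximate figures as reported in the AlphaZero preprint (treat the exact numbers as assumptions from memory, not quotations):

```python
# Rough check of the "almost 1,000x" search-size gap between
# AlphaZero and Stockfish, using the approximate per-second search
# rates reported in the AlphaZero preprint.
alphazero_pos_per_sec = 80_000       # ~80 thousand positions/second
stockfish_pos_per_sec = 70_000_000   # ~70 million positions/second

ratio = stockfish_pos_per_sec / alphazero_pos_per_sec
print(ratio)  # 875.0 -- i.e., "almost 1,000x"
```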
More on the human-centric side, however—and allowing supervision—the most surprise and attention seems to have gone to the 2017 Stanford study adapting off-the-shelf facial-analysis software to distinguish sexual orientation from photographs with accuracy upwards of 90%, compared to human subjects at 52% from a balanced sample, which is barely better than random. For utility we would nominate LipNet, which achieves over 95% accuracy in lip-reading from video data, but the paper dates to December 2016.
The lip-reading success may be the more predictable. The extent to which it and the “gaydar” application are postdictable appears to be the same as our reaching a community understanding of what deep neural nets are capable of—which does not require being able to explain how they work. Setting up grounds beforehand for the justification by which Upal and his co-authors define postdiction might be a fair way of “giving credit for a postdiction.”
- Advances on the P=NP question will be channeled through the concept of dichotomy.
Per Lance Fortnow in his own 2017 review, the complexity result of the year is split between two papers claiming to prove full dichotomy for nonuniform CSPs—where dichotomy means that they are either in P or NP-complete. Meanwhile we have devoted numerous posts to Jin-Yi Cai’s work on dichotomy between P and #P-completeness, including recently. So can we get some credit for prediction? or for postdiction? Anyway, we make it a prime prediction for 2018 that there will be notable further progress in this line.
- ‘Quantum Supremacy’ will be claimed and rebutted in the same year.
We specify quantum supremacy to mean building a physical device that achieves a useful algorithmic task that cannot be performed in equal time by classical devices using commensurate resources. The words “useful” and “commensurate” are subjective, but the former rules out stating the task as “simulating natural quantum systems” and furthers John Preskill’s emphasis in his original 2012 supremacy paper that the quantum device must be controllable. The latter rules out using whole server farms to match what a refrigerator-sized quantum device can do. The notion involves concrete rather than asymptotic complexity, so we are not positing anything about the hardness of factoring, and intensional tasks like Simon’s Problem don’t count—not to mention our doubts on the fairness of its classical-quantum comparison. Aram Harrow and Ashley Montanaro said more about supremacy requirements in this paper.
Our “postdiction” gets a yes for 2017 from the claims in this Google-led paper that 50-qubit devices would suffice to achieve supremacy and are nearly at hand, versus this IBM-led rebuttal showing that classical computers can emulate a representative set of 50-qubit computations. The notion of emulation allows polling for state information of the quantum circuit computation being emulated, so this is not even confronting the question of solving the task by other means—or proving that classical resources of some concrete size cannot solve all length-n cases of the task at all. Recent views on the controversy are expressed in this November 14 column in Nature, which links this October 22 post by Scott Aaronson (see also his paper with Lijie Chen, “Complexity-Theoretic Foundations of Quantum Supremacy Experiments”) and this December 4 paper by Cristian and Elena Calude which evokes the Google-IBM case.
- Algorithms will increasingly be seen as interactive processes with multiple parties and goals.
That is, the notions of algorithm and protocol will fuse into greater structures with multiple objectives besides solving a task. In 2016 we noted Noam Nisan’s 2012 Gödel Prize-winning paper with Amir Ronen titled “Algorithmic Mechanism Design.” Noam’s 2016 Knuth Prize citation stated, “A mechanism is an algorithm or protocol that is explicitly designed so that rational participants, motivated purely by their self-interest, will achieve the designer’s goals.” In November we covered mechanisms for algorithmic fairness. There is a nicely accessible survey titled “Algorithms versus mechanisms: how to cope with strategic input?” by Rad Niazadeh, who works in this area. It alloys techniques from many veins of theory and has a practical gold mine of applications. What we are watching for is the emergence of single powerful new ideas from this pursuit.
- Mathematical logic and descriptive complexity will play a larger role in advances.
We see some sign of this in the dichotomy-for-CSPs result, but we have thoughts that we will talk more about in a later post.
Open Problems
What concepts do you think will have the highest memorability in 2018?
Prediction: 2018 will be the year in which mainstream scientists will start to seriously question whether large-scale quantum computing is even possible, given that Google and IBM will fail to produce the working 50 qubit universal quantum computer that they have tried to create.
Already the media is becoming a little bit skeptical. The title of the article is “The Era of Quantum Computing Is Here. Outlook: Cloudy”. See https://www.quantamagazine.org/the-era-of-quantum-computing-is-here-outlook-cloudy-20180124/
thx Clay for your scientific philanthropy and semi-advertent “funding” of one of modern science’s greatest problems, namely P vs NP, RIP. who would have guessed it’d be unclaimed after nearly 2 decades? coming up “shortly” on its ½-century mark. 😮 😐
Any guesses as to why NP != P proof papers get 2 orders of magnitude more attention than those proving NP==P?
After Deolalikar’s, this is the second paper which has gotten so much attention.
I meant Norbert Blum’s paper.
Because if the contention is that P=NP the author can reasonably be asked to provide an implementation of their algorithm. Nothing shows up flaws in an algorithmic idea like trying to implement it!
So the assumption is that every polynomial time algorithm will have a feasible implementation, even if the exponent is 100?
And an implementation w/o a demonstration for a meaningful size, would it have any meaning?
Wonder if anyone has tried to estimate the run-time even for an O(n^20) kind of time complexity, for n as small as 100, on the fastest available CPU.
@javaid aslam
Keep in mind that for n = 100, n^20 > 2^n.
I believe the hope is to get it down to less than n^10. Much more than that, and there’s not much practical difference between polynomial and exponential.
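The arithmetic in this thread is easy to check. Here is a small Python sketch; the 10^9 operations-per-second figure is an assumed nominal CPU speed, not a benchmark:

```python
# Back-of-envelope check for the thread above: for n = 100,
# n^20 really does exceed 2^n, and both are hopelessly infeasible
# on a CPU assumed to run 10^9 operations per second.
n = 100
poly = n ** 20          # 10^40 steps
expo = 2 ** n           # ~1.27 * 10^30 steps
ops_per_sec = 10 ** 9   # assumed nominal CPU speed

print(poly > expo)      # True: n^20 > 2^n at n = 100
years = poly / ops_per_sec / (3600 * 24 * 365)
print(f"{years:.1e} years")  # on the order of 10^23 years
```

So at n = 100 an O(n^20) algorithm is, step-for-step, even worse than brute-force 2^n, which is why the hope of getting exponents down below 10 matters for any practical reading of “polynomial time.”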
Finally, I found an elegant solution of P vs NP problem
(based on Cook-Hartmann theory):
http://vixra.org/abs/1801.0100