Predictions For 2019
The problem of predicting ‘when’ not just ‘what’
Cropped from Toronto Star source
Isaac Asimov was a prolific writer of science fiction and nonfiction. Thirty-five years ago, on the eve of the year 1984, he noted that 35 years had passed since the publication of George Orwell’s 1984. He wrote an exclusive feature for the Toronto Star newspaper predicting what the world would be like 35 years hence, that is, in 2019.
Today we give our take on his predictions and make our own for the rest of 2019.
Asimov’s essay began by presupposing the absence of nuclear holocaust without predicting it. It then focused on two subjects: computerization and use of outer space. On the spectrum of evaluations subtended by this laudatory BBC piece and this critical column in the Toronto Star itself, we’re closer to the latter. On space he predicted we’d be mining the Moon by now; instead nothing landed on the Moon again until the Chinese Chang’e 3 mission in 2013, with Chang’e 4 happening now. His 35-year span should be lengthened to over a century.
On computerization and robotics he was mostly right except again for the timespan: he said the transition would be “about over” by 2019 whereas it may be entering its period of greatest flux only now. However, for the end of 1983 we think the “whats” of his predictions were easy. Personal computers had already been around for almost a decade. Computer systems for business were plentiful. The Internet was already a proclaimed goal and the text-based Usenet was already operating. Asimov’s essay seems to miss how the combination of these three would soon move points of control outward to end-users.
We still think what he wrote about space and robots will happen. This shows the problem of predictions is not just ‘what’ but ‘when.’ For another instance of being wrong on ‘when’ too soon, Ken told a Harvard Law graduate who visited him in Oxford in 1984 that what we now call deepfake videos were imminent. We’ll make the rest of this post more about ‘when’ than ‘what.’
Predictions in Past Years
Here are some predictions that we have made before. Seems we did not make any new predictions last year—oh well—but see this.
No circuit lower bound of 10n or better will be proved for SAT. Well, that’s a freebie.
A computer scientist will win a Nobel Prize. No—indeed, less close than other years.
At least five claims that P = NP and at least five claims that P ≠ NP will be made.
A “provably” secure crypto-system will be broken. For this one we don’t have to check any claims. We just pocket the ‘yes’ answer. Really, could you ever prove the opposite? How about the attack on Diffie-Hellman in the current CACM?
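To see why attacks like the one on Diffie-Hellman come down to parameter choices rather than the protocol itself, here is a minimal sketch of a Diffie-Hellman exchange over a deliberately tiny prime, plus the brute-force discrete-log search that breaks it. All parameters here are hypothetical toy values; real deployments use moduli of 2048 bits or more precisely so that this kind of search is infeasible.

```python
import secrets

# Toy parameters: a small prime modulus and base. Real systems use
# 2048+ bit primes; this is only to make the exchange concrete.
p, g = 10007, 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both sides derive the same shared secret g^(ab) mod p.
shared = pow(B, a, p)
assert shared == pow(A, b, p)

def brute_force_dlog(h, g, p):
    """Find some x with g^x = h (mod p) by exhaustive search.
    Feasible only because p is tiny."""
    x, cur = 0, 1
    while cur != h:
        cur = (cur * g) % p
        x += 1
    return x

# An eavesdropper who sees only (p, g, A, B) recovers the secret.
a_recovered = brute_force_dlog(A, g, p)
assert pow(B, a_recovered, p) == shared
```

The recovered exponent may differ from Alice’s by a multiple of the order of g, but it yields the same shared secret, which is all an attacker needs.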
An Earth-sized planet will be detected orbiting within the habitable zone of its single star. The “when” for this one came in 2017 already. We are retiring it.
A Clay problem will be solved, or at least notable progress made. Again we sense that the answer on progress is “no.” This includes saying that nothing substantial seems to have emerged from Sir Michael Atiyah’s claim of proving the Riemann Hypothesis. However, we note via Gil Kalai’s blog that a longstanding problem called the g-conjecture for spheres has been solved by Karim Adiprasito.
Predictions This Year
We will add some new predictions—it seems unfair to keep repeating sure winners.
Deep learning methods will be found able to solve integer factoring. This would place current cryptography in trouble.
Deep learning methods will be found to help prove that factoring is hard.
These may not be as contradictory as they seem. There is a long-known connection between certain learning algorithms and the natural proofs of Alexander Razborov and Stephen Rudich. The hardness predicate at the core of a natural proof is a classifier to distinguish (succinct) hard Boolean functions from easy ones. There is a duality between upper and lower bounds that in particular leads to the unconditional result that the discrete log problem, which is related to factoring and equally amenable to Peter Shor’s famous polynomial-time quantum algorithm, does not have natural proofs of hardness—because their existence would make discrete log relatively easy.
Talking about quantum, we predict:
Quantum supremacy will be proved—finally. But be careful: there is a problem with this whole direction. See the next section.
An algorithm originating in a theoretical model will be enshrined in law.
There are several near-term opportunities for this. The Supreme Court yesterday agreed to hear two cases on partisan gerrymandering, at least one of which promises to codify an algorithmic criterion for excessive vote dilution. Maine adopted an automatic-runoff voting system whose dependence on computer implementation gave grounds for an unsuccessful lawsuit. Algorithmic fairness is a burgeoning area which we discussed a year-plus ago. Use of differential privacy by the U.S. Census could involve legislation. We distinguish legal provisions from the myriad problematic uses of algorithmic models in public and private policy ranging from credit evaluations to parole decisions to college admissions and much else.
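To see how such a voting rule is itself an algorithm one could enshrine in law, here is a minimal sketch of an instant-runoff tally of the kind behind Maine’s system. The candidate names are hypothetical, and we deliberately omit tie-breaking provisions, which real statutes must specify.

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal instant-runoff tally. Each ballot is a ranked list of
    candidate names, most-preferred first. Repeatedly eliminate the
    candidate with fewest first-place votes until one has a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count current first choices; exhausted ballots drop out.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return winner
        # Eliminate the weakest candidate (no tie-breaking rule here).
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Example: C is eliminated first and that ballot transfers to B.
print(instant_runoff([["A","B"], ["A","B"], ["B","A"], ["B","A"], ["C","B"]]))
```

The point of the sketch is that every clause of the statute (“fewest first-place votes,” “majority,” transfer of exhausted ballots) corresponds to a concrete line of code, which is exactly what the lawsuit contested.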
The lines between heuristically solvable and really hard problems will become clearer. We have previously opined that the great success of SAT solvers in particular renders the question moot for many purposes. Well, now we say the opposite: SAT solvers will hit a wall.
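The skeleton beneath modern SAT solvers is the classic DPLL procedure; the sketch below is a minimal illustrative version, not a real solver. Production CDCL solvers add clause learning, branching heuristics, and restarts on top of this recursion, which is where their great success comes from.

```python
def assign(clauses, lit):
    """Simplify a CNF formula under the assumption that `lit` is true."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause already satisfied
        out.append([x for x in c if x != -lit])
    return out

def dpll(clauses):
    """Return True iff the CNF formula is satisfiable. Clauses are lists
    of nonzero ints; a negative int denotes a negated variable."""
    if not clauses:
        return True                       # no clauses left: satisfied
    if any(len(c) == 0 for c in clauses):
        return False                      # empty clause: contradiction
    for c in clauses:
        if len(c) == 1:                   # unit propagation
            return dpll(assign(clauses, c[0]))
    lit = clauses[0][0]                   # branch on a literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))
```

For example, `dpll([[1, 2], [-1, 3], [-2, 3]])` is satisfiable while `dpll([[1, 2], [-1, 2], [-2]])` is not; the “wall” we predict is about how this recursion scales on structured industrial instances versus adversarial ones.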
Quantum Supremacy and Advantage
Ken recently attended a workshop in central New York that aimed to bring together researchers in many fields working on quantum devices. Materials for the workshop led off with the question of building quantum computers and highlighted Gil Kalai’s skeptical position in particular. An eight-part debate between him and Aram Harrow which we hosted in 2012 involved also John Preskill and ended with a discussion of quantum supremacy, a term advanced that year by Preskill. The workshop preferred the term quantum advantage. We interpret these terms as having the following distinction:
- (a) Quantum supremacy means that a quantum device can perform general-purpose computations that no classical program or device can emulate in comparably feasible time.
- (b) Quantum advantage means that some particular practical task can be achieved by available quantum devices at lower costs than near-term available classical devices.
As theoreticians we tend to think about (a) but many businesses and public-sector organizations would be ecstatic to have (b) in important applications.
A new angle on (a) was shown by the new construction by Ran Raz and Avishay Tal of an oracle relative to which BQP is not contained in PH. This was hailed as the “result of the year” by Lance Fortnow (his second and our first is this progress on the Unique Games Conjecture), and Scott Aaronson furnished a great discussion of its genesis and further ramifications in complexity theory. Several popular articles tried to pump this as non-oracle evidence for (a). But there is the over-arching problem:
We know that BQP contains P, but we don’t know that BQP ≠ P.
So how are we ever going to be able to prove any form of supremacy? Even if we replace ‘polynomial time’ as our definition of ‘feasible’ by something more concrete, how can we prove that successful classical heuristics do not exist? On a certain practical problem of general import, Ewin Tang, a teenager in Texas advised by Scott, designed an improved classical algorithm for low-rank matrix completion that eliminated a previous quantum exponential advantage in the time dependence on the rank parameter. It is not just a case of whether we can prove supremacy, but judging when general quantum computers will be built to realize it.
The ‘when’ involved in (b), by contrast, is now. If a quantum device can do something useful now that classical methods are not delivering now, then it does not matter if the latter could be improved at greater hardware and development cost to work a year from now. This has been the gung-ho tenor of many responses to the recently-signed National Quantum Initiative Act. We do, however, still need to find and build said devices…
As for the status of (a), we don’t know any better thought for January than the Janus-like title of this paper by Igor Markov, Aneeqa Fatima, Sergei Isakov, and Sergio Boixo:
“Quantum Supremacy Is Both Closer and Farther than It Appears.”
Open Problems
What are your predictions for 2019? What are the most important matters we’ve left unsaid?
[added some words to end of intro]
Last year on this blog, I predicted that Google and IBM would fail to produce the working 50 qubit quantum computer that they aimed for. I was right about this.
I also predicted that mainstream scientists would start to seriously question whether large scale quantum computing is even possible. I was wrong about this. I think it will be a while until scientists give up on QC.
Maybe P versus PSPACE will be solved this year, showing that P is not equal to PSPACE. I hope this paper could help with that
As someone familiar with deep learning but not integer factorization, I’m surprised by the first prediction. Would you be willing to share the intuition behind it?
am a big asimov fan and like his prognostications spanning decades. his earlier ones/ stories about the rise of robotics dating to early 1950s or so are maybe more prescient. here is one increasingly glaring concept that nearly all the technologists of the 20th century missed: the rise of economic/ social inequality. inequality counteracts/ zaps many technological gains/ egalitarian benefits. asimov and others seemed to assume progress measured by social egalitarianism was consistent/ inevitable. however, for example, if factory robotics spreads but workers are out of jobs, what mass social benefit actually accrued? it does look like Huxley had some of this accurately nailed in his highly class-based society of Brave New World. do think Huxley deserves more attention than he gets esp wrt Orwell. Huxley was really dead-on in some general predictions, it’s eerie. Orwell seemed to anticipate globalization with his 3 warring superstates. in fact many political analysts now agree we live in a tripolar world: US/ China/ Russia. etc… it would be great to see a detailed analysis of these (20th century) predictions vs reality, it could easily fill a very interesting book.
What? This is totally false. Decentralized systems are inherently less efficient than centralized ones (cf. “price of anarchy”), and since faster computers allow a bigger scale of effective bureaucratic control, centralization has been the relentless leitmotif of the computer era.
For example, discussion boards have moved from Usenet (a completely decentralized network of servers) to web bulletin boards (centralized fora for each topic) to Facebook groups (totally centralized). In the world we actually live in, if you buy a CPAP machine for sleep apnea, your insurance company will cut your insurance if the remote telemetry says your usage levels don’t meet their requirements. This is the furthest possible thing from pushing control outwards.
We did get the future 1984 SF predicted, though — that was the year William Gibson’s Neuromancer was published, and sure enough we live in a cyberpunk dystopia.
The thought was more along the lines of consumer systems like online reservations, banking, ride-sharing, local currencies…
The Good News: P=NP
See https://www.academia.edu/29841631/A_Universal_Algorithm_For_A_Hypothetical_Machine_Learning
Happy 2019!!!
You may have addressed this already, but can a reinforcement learning engine like AlphaGo Zero, by (somehow) casting the generation of proofs as a game, be used for proving mathematical theorems?
I have not had any definite thoughts beyond what I wrote a year ago here—see also its comments section and the subsequent predictions-for-2018 post. The full AlphaZero paper was not accompanied by releasing the full 100-game match against Stockfish or any full sample of games. I am hoping to get time to investigate these matters with the analogous public project, Leela Chess Zero.
I’ve invented Quaternionic Optimization (Programming)
http://vixra.org/abs/1812.0174
I predict that the GLL will stop thinking I am a robot and print my comments without me having to email them `I posted a comment but it didn’t show up’. We’ll see if this comment shows up. I avoided spelling mistakes and capital letters which may have caused the problem in the first place.
More seriously- ML solves factoring would be very interesting.
My prediction- not sure if it will be this year or later but there will come a time when ML hits a wall.
Hi, Bill: you could predict whether we will have to change the moderation settings. To answer also John S above about factoring, in fact it is the first item in the ToughSat source of hard problems for heuristics. Those two predictions were originally written by Dick and came off as jocular, but I added the paragraph below them to make them less so.
I predict that your moderation settings are fine.
A more interesting question: ML has been very good at detecting spam. Will the tide turn in this battle? Will spammers get better? At this point do so few people fall for scams that spamming is not worthwhile anymore? I hope so; however, soon I won’t have to worry about these issues since I’ll be getting 10,000,000 dollars from a Nigerian trillionaire!
New version update (March 10, 2019)
P versus NP is considered one of the great open problems of science. It consists in knowing the answer to the following question: Is P equal to NP? This problem was first mentioned in a letter written by John Nash to the National Security Agency in 1955. However, a precise statement of the P versus NP problem was introduced independently by Stephen Cook and Leonid Levin. Since that date, all efforts to find a proof for this huge problem have failed. Another major complexity class is coNP. Whether NP = coNP is another fundamental question that is as important as it is unresolved. We prove there is a problem in coNP that is not in P. In this way, we show that P is not equal to coNP. Since P = NP implies P = coNP, we also demonstrate that P is not equal to NP.
https://zenodo.org/record/2589001