
Predictions For 2019

January 6, 2019


The problem of predicting ‘when’ not just ‘what’

Cropped from Toronto Star source

Isaac Asimov was a prolific writer of science fiction and nonfiction. Thirty-five years ago, on the eve of the year 1984, he noted that 35 years had passed since the publication of George Orwell’s 1984. He wrote an exclusive feature for the Toronto Star newspaper predicting what the world would be like 35 years hence, that is, in 2019.

Today we give our take on his predictions and make our own for the rest of 2019.

Asimov’s essay began by presupposing the absence of nuclear holocaust without predicting it. It then focused on two subjects: computerization and the use of outer space. On the spectrum of evaluations subtended by this laudatory BBC piece and this critical column in the Toronto Star itself, we’re closer to the latter. On space he predicted we would be mining the Moon by now; instead, nothing more landed on the Moon until the Chinese Chang’e 3 mission in 2013 and the Chang’e 4 mission happening now. His 35-year span evidently needs to be lengthened to over a century.

On computerization and robotics he was mostly right, except again for the timespan: he said the transition would be “about over” by 2019, whereas it may be entering its period of greatest flux only now. However, we think that from the vantage of late 1983 the “whats” of his predictions were easy calls. Personal computers had already been around for almost a decade. Computer systems for business were plentiful. The Internet was already a proclaimed goal, and the text-based Usenet was operating. Asimov’s essay seems to miss how the combination of these three would soon move points of control outward to end-users.

We still think what he wrote about space and robots will happen. This shows that the problem of predictions is not just ‘what’ but ‘when.’ For another instance of getting ‘when’ wrong on the early side: in 1984, Ken told a Harvard Law graduate visiting him in Oxford that what we now call deepfake videos were imminent. We’ll make the rest of this post more about ‘when’ than ‘what.’

Predictions in Past Years

Here are some predictions that we have made before. It seems we did not make any new predictions last year—oh well—but see this.

{\bullet } No circuit lower bound of {1000n} or better will be proved for SAT. Well, that’s a freebie.

{\bullet } A computer scientist will win a Nobel Prize. No—indeed, less close than other years.

{\bullet } At least five claims that {\mathsf{P}=\mathsf{NP}} and five that {\mathsf{P} \neq \mathsf{NP}} will be made.

{\bullet } A “provably” secure crypto-system will be broken. For this one we don’t have to check any claims. We just pocket the ‘yes’ answer. Really, could you ever prove the opposite? How about the attack on Diffie-Hellman in the current CACM? (A toy sketch of the protocol under attack follows this list.)

{\bullet } An Earth-sized planet will be detected orbiting within the habitable zone of its single star. The “when” for this one came in 2017 already. We are retiring it.

{\bullet } A Clay problem will be solved, or at least notable progress made. Again we sense that the answer on progress is “no.” This includes saying that nothing substantial seems to have emerged from Sir Michael Atiyah’s claim of proving the Riemann Hypothesis. However, we note via Gil Kalai’s blog that a longstanding problem called the {g}-conjecture for spheres has been solved by Karim Adiprasito.
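
As promised above, here is a toy sketch, ours and not from the CACM article, of the textbook Diffie-Hellman exchange that such attacks target. The Logjam-style attack amortizes one massive precomputation against a single widely shared prime, after which individual 512-bit discrete logs become cheap; the exchange itself looks the same at any size.

```python
# A toy sketch (ours) of textbook Diffie-Hellman over Z_p*. The numbers
# here are illustrative only; real deployments use ~2048-bit primes.
import secrets

p = 2**127 - 1   # a Mersenne prime, chosen for the demo
g = 3            # public base (toy choice)

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A over the wire
B = pow(g, b, p)   # Bob sends B over the wire

# Both sides derive the same shared key g^(ab) mod p; an eavesdropper
# sees only (p, g, A, B) and must solve a discrete log to recover it.
assert pow(B, a, p) == pow(A, b, p)
```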

Predictions This Year

We will add some new predictions—it seems unfair to keep repeating sure winners.

{\bullet } Deep learning methods will be found able to solve integer factoring. This will place current cryptography in trouble.

{\bullet } Deep learning methods will be found to help prove that factoring is hard.

These may not be as contradictory as they seem. There is a long-known connection between certain learning algorithms and the natural proofs of Alexander Razborov and Stephen Rudich. The hardness predicate at the core of a natural proof is a classifier that distinguishes (succinct) hard Boolean functions from easy ones. There is a duality between upper and lower bounds that leads in particular to an unconditional result: the discrete log problem, which is related to factoring and equally amenable to Peter Shor’s famous polynomial-time quantum algorithm, does not have natural proofs of hardness, because their existence would make discrete log relatively easy.
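
To make the classifier framing concrete, here is a minimal sketch of our own devising, not from the Razborov-Rudich paper. A natural property must be constructive (decidable in time polynomial in the truth-table size {N = 2^n}) and large (true of a noticeable fraction of random functions); usefulness would further require that every function satisfying it lacks small circuits. The toy property below has the first two features only.

```python
# A minimal sketch (ours) of the Razborov-Rudich template: a "natural
# property" is a classifier on truth tables. This toy property is
# constructive and large but emphatically NOT useful against small
# circuits; finding one with all three features is exactly what their
# barrier says would break pseudorandomness assumptions.
from itertools import product

def truth_table(f, n):
    """Truth table of f over {0,1}^n, indexed lexicographically."""
    return tuple(f(x) for x in product((0, 1), repeat=n))

def is_roughly_balanced(tt):
    # Holds for most random functions (largeness-flavored condition).
    return abs(sum(tt) - len(tt) // 2) <= len(tt) // 8

def depends_on_all_variables(tt, n):
    for i in range(n):
        stride = 1 << (n - 1 - i)          # flipping input bit i...
        if all(tt[j] == tt[j ^ stride] for j in range(len(tt))):
            return False                   # ...never changes the output
    return True

def toy_natural_property(tt, n):
    # Runs in time polynomial in N = 2**n: the constructivity condition.
    return is_roughly_balanced(tt) and depends_on_all_variables(tt, n)

n = 4
parity = lambda x: sum(x) % 2
constant = lambda x: 0
assert toy_natural_property(truth_table(parity, n), n)
assert not toy_natural_property(truth_table(constant, n), n)
```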

Talking about quantum, we predict:

{\bullet } Quantum supremacy will be proved—finally. But be careful: there is a problem with this whole direction. See the next section.

{\bullet } An algorithm originating in a theoretical model will be enshrined in law.

There are several near-term opportunities for this. The Supreme Court yesterday agreed to hear two cases on partisan gerrymandering, at least one of which promises to codify an algorithmic criterion for excessive vote dilution. Maine adopted an instant-runoff (ranked-choice) voting system whose dependence on computer implementation gave grounds for an unsuccessful lawsuit. Algorithmic fairness is a burgeoning area which we discussed a year-plus ago. Use of differential privacy by the U.S. Census could involve legislation. We distinguish legal provisions from the myriad problematic uses of algorithmic models in public and private policy, ranging from credit evaluations to parole decisions to college admissions and much else.
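
As one concrete illustration, ours rather than anything from the parties’ briefs: the “efficiency gap” of Nicholas Stephanopoulos and Eric McGhee, which figured in the earlier Gill v. Whitford litigation, scores a district map by comparing the two parties’ wasted votes.

```python
# A sketch (ours) of the "efficiency gap," one much-discussed candidate
# for an algorithmic vote-dilution criterion. Wasted votes are all votes
# for a district's loser plus the winner's votes beyond a bare majority;
# one proposed trigger in the literature is |gap| above roughly 0.07.

def efficiency_gap(districts):
    """districts: list of (votes_A, votes_B), one pair per district.
    Assumes no exact ties. Positive result: the map hurts party A."""
    wasted_A = wasted_B = total = 0
    for a, b in districts:
        total += a + b
        need = (a + b) // 2 + 1            # bare majority to carry it
        if a > b:
            wasted_A += a - need           # A's surplus votes
            wasted_B += b                  # all of B's losing votes
        else:
            wasted_B += b - need
            wasted_A += a
    return (wasted_A - wasted_B) / total

# Party A polls 160-140 statewide but is packed into one district and
# wins only 1 of 3 seats; the gap flags the asymmetry.
print(efficiency_gap([(80, 20), (40, 60), (40, 60)]))  # ~0.237
```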

{\bullet } The lines between heuristically solvable and really hard problems will become clearer. We have previously opined that the great success of SAT solvers in particular renders the {\mathsf{P=NP}} question moot for many purposes. Well, now we say the opposite: SAT solvers will hit a wall.
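
One wall is already a theorem, as this sketch of ours illustrates: conflict-driven solvers refute by resolution, and Armin Haken proved in 1985 that resolution refutations of the pigeonhole formulas must be exponentially long, so timings on them blow up no matter how good the solver engineering gets.

```python
# A sketch (ours) of a provable wall for resolution-based SAT solvers:
# the pigeonhole formulas PHP(n+1 pigeons, n holes). Assumes the
# python-sat package (pip install python-sat), our tooling choice here,
# not anything used in the post.
import time
from pysat.solvers import Glucose3

def pigeonhole_cnf(n):
    """Unsatisfiable CNF asserting n+1 pigeons fit into n holes."""
    var = lambda p, h: p * n + h + 1       # variable: pigeon p in hole h
    clauses = [[var(p, h) for h in range(n)] for p in range(n + 1)]
    for h in range(n):                     # no hole holds two pigeons
        for p in range(n + 1):
            for q in range(p + 1, n + 1):
                clauses.append([-var(p, h), -var(q, h)])
    return clauses

for n in range(4, 11):
    with Glucose3(bootstrap_with=pigeonhole_cnf(n)) as solver:
        start = time.perf_counter()
        assert solver.solve() is False     # always UNSAT
        print(n, f"{time.perf_counter() - start:.3f}s")  # grows steeply
```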

Quantum Supremacy and Advantage

Ken recently attended a workshop in central New York that aimed to bring together researchers in many fields working on quantum devices. Materials for the workshop led off with the question of building quantum computers and highlighted Gil Kalai’s skeptical position in particular. An eight-part debate between him and Aram Harrow, which we hosted in 2012, also involved John Preskill and ended with a discussion of quantum supremacy, a term advanced that year by Preskill. The workshop preferred the term quantum advantage. We interpret these terms as having the following distinction:

  • (a) Quantum supremacy means that a quantum device can perform general-purpose computations that no classical program or device can emulate in comparably feasible time.

  • (b) Quantum advantage means that some particular practical task can be achieved by available quantum devices at lower costs than near-term available classical devices.

As theoreticians we tend to think about (a) but many businesses and public-sector organizations would be ecstatic to have (b) in important applications.

A new angle on (a) came from the recent construction by Ran Raz and Avishay Tal of an oracle {A} such that {\mathsf{BQP}^A} is not in {\mathsf{PH}^A}. This was hailed as the “result of the year” by Lance Fortnow (his second choice, and our first, is this progress on the Unique Games Conjecture), and Scott Aaronson furnished a great discussion of its genesis and further ramifications in complexity theory. Several popular articles tried to pump this as non-oracle evidence for (a). But there is an over-arching problem:

We know {\mathsf{P \subseteq BPP \subseteq BQP \subseteq PP \subseteq P^{\#P} \subseteq PSPACE}} but we don’t know {\mathsf{P \neq PSPACE}}.

So how are we ever going to be able to prove any form of supremacy? Even if we replace ‘polynomial time’ as our definition of ‘feasible’ by something more concrete, how can we prove that successful classical heuristics do not exist? On a certain practical problem of general import, Ewin Tang, a teenager in Texas advised by Scott, designed an improved classical algorithm for low-rank matrix completion that eliminated a previous quantum exponential advantage in the time dependence on the rank parameter. It is not just a question of whether we can prove supremacy, but of judging when general quantum computers will be built to realize it.

By contrast, the ‘when’ involved in (b) is now. If a quantum device can do something useful now that classical methods are not delivering now, then it does not matter if the latter could be improved at greater hardware and development cost to work a year from now. This has been the gung-ho tenor of many responses to the recently signed National Quantum Initiative Act. We do, however, still need to find and build said devices…

As for the status of (a), we don’t know any better thought for January than the Janus-like title of this paper by Igor Markov, Aneeqa Fatima, Sergei Isakov, and Sergio Boixo:

“Quantum Supremacy Is Both Closer and Farther than It Appears.”

Open Problems

What are your predictions for 2019? What are the most important matters we’ve left unsaid?


18 Comments
  1. Craig permalink
    January 6, 2019 2:37 pm

    Last year on this blog, I predicted that Google and IBM would fail to produce the working 50-qubit quantum computer that they aimed for. I was right about this.

    I also predicted that mainstream scientists would start to seriously question whether large scale quantum computing is even possible. I was wrong about this. I think it will be a while until scientists give up on QC.

  2. January 6, 2019 3:39 pm

    Maybe it will be proved this year that P is not equal to PSPACE. I hope this paper could help with that

    https://www.reddit.com/r/compsci/comments/ad21ux/requestreview_for_logspace_vs_p_paper

  3. John S permalink
    January 6, 2019 5:50 pm

    As someone familiar with deep learning but not integer factorization, I’m surprised by the first prediction. Would you be willing to share the intuition behind it?

  4. January 7, 2019 12:24 pm

    am a big asimov fan and like his prognostications spanning decades. his earlier ones/ stories about the rise of robotics dating to early 1950s or so are maybe more prescient. here is one increasingly glaring concept that nearly all the technologists of the 20th century missed: the rise of economic/ social inequality. inequality counteracts/ zaps many technological gains/ egalitarian benefits. asimov and others seemed to assume progress measured by social egalitarianism was consistent/ inevitable. however, for example, if factory robotics spreads but workers are out of jobs, what mass social benefit actually accrued? it does look like Huxley had some of this accurately nailed in his highly class-based society of Brave New World. do think Huxley deserves more attention than he gets esp wrt Orwell. Huxley was really dead-on in some general predictions, its eerie. Orwell seemed to anticipate globalization with his 3 warring superstates. in fact many political analysts now agree we live in a tripolar world: US/ China/ Russia. etc… it would be great to see a detailed analysis of these (20th century) predictions vs reality, it could easily fill a very interesting book.

  5. January 8, 2019 6:05 am

    “Asimov’s essay seems to miss how the combination of these three would soon move points of control outward to end-users.”

    What? This is totally false. Decentralized systems are inherently less efficient than centralized ones (cf. “price of anarchy”), and since faster computers allow a bigger scale of effective bureaucratic control, centralization has been the relentless leitmotif of the computer era.

    For example, discussion boards have moved from Usenet (a completely decentralized network of servers) to web bulletin boards (centralized fora for each topic) to Facebook groups (totally centralized). In the world we actually live in, if you buy a CPAP machine for sleep apnea, your insurance company will cut your insurance if the remote telemetry says your usage levels don’t meet their requirements. This is the furthest possible thing from pushing control outwards.

    We did get the future 1984 SF predicted, though — that was the year William Gibson’s Neuromancer was published, and sure enough we live in a cyberpunk dystopia.

    • January 8, 2019 11:21 am

      The thought was more along the lines of consumer systems like online reservations, banking, ride-sharing, local currencies…

  6. January 9, 2019 1:30 pm

    You may have addressed this already, but can a reinforcement learning engine like AlphaGo Zero, by (somehow) casting the generation of proofs as a game, be used for proving mathematical theorems?

    • January 9, 2019 4:11 pm

      I have not had any definite thoughts beyond what I wrote a year ago here—see also its comments section and the subsequent predictions-for-2018 post. The full AlphaZero paper was not accompanied by a release of the full 100-game match against Stockfish or any full sample of games. I am hoping to get time to investigate these matters with the analogous public project, Leela Chess Zero.

  7. January 11, 2019 5:24 pm

    I’ve invented Quaternionic Optimization (Programming)

    http://vixra.org/abs/1812.0174

  8. Bill Gasarch permalink
    January 12, 2019 2:00 pm

    I predict that the GLL will stop thinking I am a robot and print my comments without me having to email them `I posted a comment but it didn’t show up’. We’ll see if this comment shows up. I avoided spelling mistakes and capital letters which may have caused the problem in the first place.

    More seriously: ML solving factoring would be very interesting.

    My prediction: not sure if it will be this year or later, but there will come a time when ML hits a wall.

    • January 12, 2019 2:31 pm

      Hi, Bill: you could predict whether we will have to change the moderation settings. To answer also John S above about factoring, in fact it is the first item in the ToughSat source of hard problems for heuristics. Those two predictions were originally written by Dick and came off as jocular, but I added the paragraph below them to make them less so.

      • Bill Gasarch permalink
        January 13, 2019 5:08 pm

        I predict that your moderation settings are fine.
        A more interesting question: ML has been very good at detecting spam. Will the tide turn in this battle? Will spammers get better? At this point do so few people fall for scams that spamming is not worthwhile anymore? I hope so; however, soon I won’t have to worry about these issues since I’ll be getting 10,000,000 dollars from a Nigerian trillionaire!

  9. March 10, 2019 7:51 pm

    New version update (March 10, 2019)

    P versus NP is considered one of the great open problems of science. It consists in knowing the answer to the following question: Is P equal to NP? The problem was first mentioned in a letter written by John Nash to the National Security Agency in 1955. However, a precise statement of the P versus NP problem was introduced independently by Stephen Cook and Leonid Levin. Since then, all efforts to find a proof for this huge problem have failed. Another major complexity class is coNP. Whether NP = coNP is another fundamental question that is as important as it is unresolved. We prove there is a problem in coNP that is not in P. In this way, we show that P is not equal to coNP. Since P = NP implies P = coNP, we also demonstrate that P is not equal to NP.

    https://zenodo.org/record/2589001

