Predictions For 2021
This year’s number is a Blum integer: 2021 = 43 × 47, with both prime factors congruent to 3 mod 4.
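For the record, a Blum integer is a product of two distinct primes each congruent to 3 mod 4; a quick trial-division check (a minimal sketch, not an efficient primality test) confirms the claim for 2021:

```python
def is_blum(n: int) -> bool:
    """Check whether n = p*q for distinct primes p, q with p ≡ q ≡ 3 (mod 4)."""
    for p in range(3, int(n ** 0.5) + 1, 2):
        if n % p == 0:
            # p is the smallest odd factor found, hence prime; check q = n // p
            q = n // p
            if all(q % d for d in range(2, int(q ** 0.5) + 1)):
                return p % 4 == 3 and q % 4 == 3 and p != q
            return False
    return False

print(is_blum(2021))  # True: 2021 = 43 * 47 and 43 ≡ 47 ≡ 3 (mod 4)
```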
Allan Lichtman correctly predicted the 2020 presidential election, based on a 7-6 edge in “keys” to Joe Biden. His model, which he developed in 1981 with the late Vladimir Keilis-Borok, simply says that the incumbent party will lose the presidential election if 6 or more keys go against them. It has been right in every election since 1984.
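The tally rule itself is trivial to mechanize; the contested part is judging the 13 true/false keys, not the count. A minimal sketch (function name ours, purely illustrative):

```python
def incumbent_party_loses(keys_against: int, total_keys: int = 13) -> bool:
    """Lichtman's rule: the incumbent party loses the White House
    when 6 or more of the 13 keys count against it."""
    if not 0 <= keys_against <= total_keys:
        raise ValueError("keys_against must be between 0 and total_keys")
    return keys_against >= 6

# 2020 per the post: a 7-6 edge in keys to Biden, i.e. 7 against the incumbent party
print(incumbent_party_loses(7))  # True: the model predicted the incumbent's loss
```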
Today we make some predictions for the new year, 2021.
Lichtman has not ventured a prediction for today’s Senate runoff elections in Georgia. Last November 19, he penned an article eviscerating the performance of poll aggregation in the November 3 election. We have considered writing a post about that and about statistical predictions during the pandemic, in the direction of taking more moderate assessments of how various polls, predictions, and policies have fared.
Polls of today’s races currently show a 1–2 percentage point edge for the Democratic candidates, but there are multiple caveats: The difference is just about one standard deviation, well within the margin of error even after aggregation. The polls have gyrated with multiple lead flips this past month; two polls from the past few days collected here show both Republican candidates tied or ahead. Many pollsters have avoided these races, and the five most recent polls all have sample sizes well under 1,000. The count beginning tonight may show a zigzag blue-to-red-to-“blur” shift, owing to the order in which different kinds of votes are tallied, as we discussed generally here.
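To see why a 1–2 point lead on samples under 1,000 sits within the noise, here is the standard error calculation for a two-candidate lead; the 51%/49% split and n = 800 are assumed figures, not from any actual poll:

```python
import math

def margin_moe(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for the *lead* (p_A - p_B) in a
    two-candidate poll: the lead's standard error is about 2*sqrt(p(1-p)/n)."""
    se_lead = 2 * math.sqrt(p * (1 - p) / n)
    return z * se_lead

# Hypothetical poll: 51% vs 49% with n = 800 respondents
lead = 0.51 - 0.49
moe = margin_moe(0.51, 800)
print(f"lead = {lead:.1%}, 95% MoE on the lead ≈ {moe:.1%}")  # ≈ 6.9% on n = 800
```

A 2-point lead against a roughly 7-point margin of error on the lead is indeed not much more than statistical noise, even before aggregation issues.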
With that said, let us address non-election predictions.
Previous Predictions
Back in 2019 we made some predictions that were easy:
- No circuit lower bound or better will be proved for SAT. Well, that’s a freebie.
- A computer scientist will win a Nobel Prize. No—indeed, less close than other years.
- At least five claims that P = NP and five that P ≠ NP will be made. Gerhard Woeginger’s claims page is still paused at 2016, so we can only judge from our own window of communications made to us or commented in the blog. All we can say is that the number in each case is of the right order.
- A “provably” secure crypto-system will be broken. For this one we don’t have to check any claims. We just pocket the “yes” answer.
We skipped 2020—maybe that was smart of us? Our point about predictions in 2019 was the importance of “when” as opposed to “what.” With regard to some predictions, we can take the pandemic as a global postponement of “when.”
We claim complete and ongoing timeliness with another 2019 prediction, however. In full, we wrote:
- Quantum supremacy will be proved—finally. But be careful: there is a problem with this whole direction. See the next section.
The section two years ago talked about the difficulties of proving such a claim. A claim was duly made in October 2019, which we evaluated here, but arguments over “proof” have not abated. They have just recently been summarized by Gil Kalai here, and also here by Scott Aaronson in regard to an independent claim by researchers from USTC in China.
Predictions for 2021
Here are some new predictions for 2021. We will try to skip easy ones like those above. We will try to make some that are surprising. We’ve made predictions about evidence for life on exoplanets before; we do something different there too. We do not venture any about the course of the pandemic.
- Quantum algorithms: will be proved to be in sub-exponential time.
- Further work with AlphaFold will lead to a proof of a sub-exponential time algorithm for protein folding.
- In consequence or independently, a SAT algorithm will be discovered that runs in sub-exponential time, thereby refuting various forms of the Exponential Time Hypothesis. What do we mean by “sub-exponential” time here? Dick says the time for the SAT algorithm will be . Ken more modestly says time .
- Another Clay prize problem (besides P versus NP) will be claimed as solved, and then retracted.
- A new kind of sorting network of size O(n log n) will be discovered, inspired by a connection to integer multiplication.
- A new factoring algorithm will be discovered that beats the number field sieve.
- A lower bound argument will show that clique requires formula size over the basis.
- Significant chemical evidence will be observed for the existence of life on a non–exoplanet.
- Dogs will reach 25% of the proficiency of cats at navigating home obstacle courses, but not before the third quarter of 2021.
- Privacy-preserving apps for doodling during Zoom meetings will become hot sellers by midyear.
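To get a feel for what prediction 3’s “sub-exponential” would buy, compare exponential and sub-exponential step counts; the exponent 2^√n is our illustrative choice, not a bound from the list above:

```python
import math

def bits(log2_val: float) -> str:
    """Render 2**log2_val as a power of ten for readability."""
    return f"10^{log2_val * math.log10(2):.0f}"

n = 10_000  # number of variables in a hypothetical SAT instance
exponential = n            # 2^n steps: what ETH says is essentially required
subexp = math.isqrt(n)     # 2^sqrt(n) steps: a sub-exponential bound
print(f"2^n       ≈ {bits(exponential)} steps")  # 10^3010
print(f"2^sqrt(n) ≈ {bits(subexp)} steps")       # 10^30
```

The gap between 10^3010 and 10^30 steps is the difference between utterly hopeless and merely infeasible; that is why refuting ETH would be such an upheaval.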
Open Problems
What are your predictions for 2021?
I don’t claim to know a lot about this, but wouldn’t AlphaFold need to be able to handle arbitrarily sized inputs in order to have any hope of helping out with SAT?
I mean, couldn’t we train a SAT solver to some input size that just ends up memorizing within the neural net all of the solutions up to that bound?
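The memorization worry above is easy to quantify: the number of truth assignments doubles with each variable, so tabulating all solutions even up to a modest bound is hopeless:

```python
# Number of truth assignments on n Boolean variables is 2**n.
for n in (20, 100, 300):
    print(f"n = {n:>3}: 2**n = {2**n:.3e} assignments")
# Already at n = 100 there are ~1.27e30 assignments, far beyond any storage.
```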
Dear Dick & Ken, Happy New Year 2021 ❢
Riffs and Rotes are structures I discovered while playing around with Gödel numberings of graphs and digraphs. To my way of thinking they bear a deep connection to the mathematical infrastructure of logic. Here’s a link —
🙞 Riff and Rote for 2021
Riff 2021
Rote 2021
I’m surprised by this one:
> Further work with AlphaFold will lead to a proof of a sub-exponential time algorithm for protein folding.
From what I’ve read AlphaFold doesn’t contain any more “non heuristic” ideas than AlphaZero did for go. And it doesn’t look like we’ll ever have sub-exponential algorithms for (generalized) go or chess…
A generic idea (replying also to steveb3210 above) is to prove that you can come close enough in a structured-enough similarity metric so that the search space for an exact match is reduced below 2^{εn} for all ε > 0. Getting 90% or even 99% match alone does not suffice, but one of our running themes has been that lots of experience with large-scale concrete simulations can yield theoretical insights. For possibly relevant details and interaction with SETH, see our memorial post for Alberto Apostolico and Dick’s older post on protein folding.
My predictions will be:
Some researcher will probably discover that my proof of FP for #CLAUSES-3UNSAT can be used to show this problem is P_{N}-complete (P_{N}: counting with a natural-number weight for each accepting path). Since we know that #P = P_{N} (this person will use #MONOTONE 2UNSAT with a weight on each unsatisfying truth assignment, instead of my approach that uses 3UNSAT), we would obtain a non-constructive proof of P = NP. In this way, my proof will help with that:
https://doi.org/10.5281/zenodo.3355776
My proof of the Riemann Hypothesis will probably be accepted after major changes, and it may be the first such claim to be published by a respectable peer-reviewed venue. Or it will probably be rejected many times until some researcher helps me make the arguments clearer and it is finally published in some flagship journal at the end of this year. In this way, this will be another claimed solution of a Clay prize problem, but maybe without a retraction:
https://doi.org/10.5281/zenodo.3648511
8. should include “other than Earth”.
🙂
>Further work with AlphaFold will lead to a proof of a sub-exponential time algorithm for protein folding.
Is there any precedent of an ML algorithm contributing insights that help in the worst-case time complexity model? (especially for NP-hard problems) I think that even the celebrated successes of practical SAT and ILP solvers have helped very little or not at all in improving worst-case bounds for these problems.
The keys prediction method isn’t impeccable, it failed in 2000: https://en.wikipedia.org/wiki/The_Keys_to_the_White_House#Lichtman's_prediction_record
I forgot to elaborate the “with the exception of 2000” part. I had also originally intended to mention “survivorship bias” and why I thought Lichtman’s model escapes it. The post was delayed a couple of days, into when I felt I should mention the Georgia runoffs. My elaboration in brief is that as a similarity metric it does very well, noting also that the key values were 5 or 6 on four of ten occasions.
uh, did you guys actually look at this closely? a quick look/skim at the “keys” shows they are interesting and probably meaningful, but on the other hand it’s kind of like a Rorschach test or seeing shapes in clouds. they are not really “true/false” questions. they can be ANSWERED true/false but they’re highly subjective.
just eg: “12. Charismatic incumbent: The incumbent party candidate is charismatic or a national hero.”
oh, so it’s true/false whether someone is “Charismatic”, lol!
many other objections can be made on similar grounds.
so, it’s almost like a seemingly semi-scientific version of Nostradamus predictions, lol!
but not knocking Nostradamus either exactly; have read a lot of interpretations and find them entertaining/intriguing, lol!
it’s a little like poetry… hey, Taoism was basically an entire religion inspired by poetry…! right? 😀
deepmind is incredible and hope you write more about them, eg the protein folding breakthrough etc… how about how they are burning through nearly ~½B per year?
looks like hassabis’s motto is nearly “A(G)I or bust”!
personally think time is running out! not sure alphabet is going to have those deep pockets forever/much longer! AGI is achievable but not necessarily even with mountains of cash!
you at least gotta start with the right idea, and they still haven’t contacted me yet/offered me any funding! 😛
speaking of AI, hey JA, nice diagrams, what did you use to draw them? are they automated? and uh, do those diagrams have anything whatsoever to do with the contents of this post? kinda looks like math spam to me 😛
Dear
That was partly a spin-off from the allusion to Blum Integers and partly my customary way of celebrating New Years, though it looks like I’ve missed a few lately. There is a connection to models of computation, lambda calculus, etc.
https://www.researchgate.net/publication/348178657_Essential_Context_Free_Expression_part_1
A preprint on a system of string pattern specification,
notation ECFX for Essential Context Free Expression.
The approach of ECFX is seen in three methodological principles:
1. (syntax) maintaining context-free syntax as the kernel structure
2. (semantics) allowing optional semantic conditions for finer logical control
3. (complexity) imposing a universal complexity bound over all semantic conditions
Keywords: string pattern matching, pattern specification, complexity completeness, stringology, context-free grammar, back referencing, semantic condition, semi-syntactic, cumulative time
Comments and feedback appreciated.
Any predictions on results by novices/hobbyists?
There has been no progress on the theory front of factoring since the NFS. I asked Sam Wagstaff (author of The Joy of Factoring) why this is and he responded and I blogged about it here:
https://blog.computationalcomplexity.org/2019/08/obstacles-to-improving-classical.html
NFS was in the 1990’s (I think), so that’s been a long time.
There has been better hardware, better implementations, and some tricks to speed it up (which is great), but no math breakthroughs.
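For scale: the number field sieve’s heuristic running time is exp(((64/9)^{1/3} + o(1)) (ln N)^{1/3} (ln ln N)^{2/3}). Dropping the o(1) term, a rough evaluation (a back-of-envelope sketch, not a cost model for real implementations):

```python
import math

def nfs_cost_bits(modulus_bits: int) -> float:
    """log2 of the heuristic NFS running time for an n-bit modulus N,
    ignoring the o(1) term in the exponent."""
    ln_n = modulus_bits * math.log(2)   # ln N for an n-bit N
    c = (64 / 9) ** (1 / 3)             # ≈ 1.923
    exponent = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return exponent / math.log(2)       # convert from natural log to bits

for nbits in (512, 1024, 2048):
    print(f"RSA-{nbits}: ~2^{nfs_cost_bits(nbits):.0f} operations")
```

The sub-exponential but super-polynomial shape of that formula is exactly what any "beats the NFS" algorithm would have to improve on.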
So why do you think we are poised for an advance this year?
There has to be some year!
Unless we have some data to predict a higher probability for any specific year, this year is as likely as many more years to come.
On the other hand, the probability should keep increasing every year… or not?
Lindy’s Effect in this context: if a problem has been open for x years, then guess that it will take x more years to solve it. So the probability of solving it soon goes down. Of course this has to stop someplace.
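One way to make the Lindy rule concrete is a Pareto prior with survival function P(T > t) = x0/t: conditioned on surviving to age x, the median remaining lifetime is exactly x, and the chance of resolution within the next year is 1/(x+1). A toy illustration under that assumed prior:

```python
def p_solved_next_year(age: float) -> float:
    """Under the Pareto prior P(T > t) = x0/t, the chance a problem
    open for `age` years is solved within one more year is
    1 - P(T > age+1 | T > age) = 1 - age/(age+1) = 1/(age+1)."""
    return 1 - age / (age + 1)

def median_remaining(age: float) -> float:
    """Median additional lifetime equals the current age (the Lindy rule):
    P(T > age + m | T > age) = age/(age+m) = 1/2 gives m = age."""
    return age

for age in (10, 50, 100):
    print(f"open {age}y: P(solved next year) ≈ {p_solved_next_year(age):.3f}, "
          f"median years remaining = {median_remaining(age)}")
```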
Actually I can see both arguments:
There has been so much work on factoring since the 1990’s that they could be just one good idea away from a better (though likely not in P) factoring algorithm, and all this work will help, so the probability it will happen increases
OR
Despite all this good work we have made no progress. This must be a really hard problem, and it may even be that there is no better algorithm than the NFS.
Only if we live long enough to track and witness…