
Predicting When P=NP is Resolved

December 22, 2019


Has it outlasted the ability to estimate when?


Ryohei Hisano and Didier Sornette wrote in 2012 a paper titled, “On the distribution of time-to-proof of mathematical conjectures.”

Today Ken and I discuss predicting the end to mathematical conjectures. This is apart from considering odds on which way they will go, which we also talk about.

Nine years ago the Christmas issue of the New Scientist magazine analyzed a small set of solved mathematical conjectures and used it to forecast when the P vs. NP conjecture would be solved. The article estimated that the “probability for the P vs. NP problem to be solved by the year 2024 is roughly 50%”.

New Scientist 2019 “biggest problem” source

This was done without any special insight into P vs. NP, just its prominence among the world’s best mathematicians and time since inception. How can one make the case that this argument is not silly, that it is not a wild guess?

Time to Solve a Conjecture

The 2010 New Scientist article could be considered a bagatelle since it looked at only 18 conjectures, but Hisano and Sornette took the idea seriously. They widened the data set to Wikipedia’s list of mathematical conjectures. Removing a dozen-plus conjectures whose origin dates and/or solution dates are not firmly known left 144 conjectures: 60 solved and 84 still open as of 2012.

It should be understood that their conclusions about the whole enterprise are negative. They say in their abstract:

We find however evidence that the mathematical process of creation is too much non-stationary, with too little data and constraints, to allow for a meaningful conclusion. … In conclusion we cannot really reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset that we have studied, translating into an average waiting time to proof of 100 years.

They found exponential growth in notable conjectures being formulated, which they ascribe to growth in the number of mathematicians. They try adjusting for this by invoking the overall growth curve of the world population.

Then they find a dearth of conjectures with midrange solution times, compared to the number needed to give strong fits to simple models. They try cutting quickly-resolved conjectures (ones solved in under 20 years) from the dataset to improve the fits, and obtain only an enhancement of evidence for a slow exponential curve of solution time. Here is a figure from their paper showing the exponential curves:

[Figure from Hisano-Sornette: exponential fits to conjecture solution times]

In the end they are not able to find many hard distinctions between possible models. In particular, they conclude by calling the simple {0.01}-solution-chance-per-year model “our best model.”
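
To make the model concrete, here is a minimal sketch in Python (ours; the paper gives no code) of what the memoryless {0.01}-per-year model asserts: an open conjecture survives {n} more years with probability {0.99^n}, no matter how old it already is, and the mean waiting time to a proof is {1/0.01 = 100} years.

    # Memoryless model: each year an open conjecture is proved with
    # probability p = 0.01, independent of how old it already is.
    p = 0.01

    def survival(n):
        """Chance the conjecture is still open after n more years."""
        return (1 - p) ** n

    print(survival(50))    # ~0.605
    print(survival(100))   # ~0.366
    print(1 / p)           # mean waiting time to a proof: 100.0 years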

But that model in turn, without memory of the age of the conjecture, struck me—Ken writing here—as running counter to a famous controversial argument of recent vintage. So Dick and I put our heads together again…

Doomsday For Conjectures?

The Doomsday Argument can be phrased as a truism:

If your hearing about {X} is a uniformly random event in the lifespan of {X}, which began {t} years ago, then the odds are 50-50 that the future lifetime {T} of {X} will be no less than {\frac{t}{3}} years and no more than {3t} years.

This extends to say the probability is 90% that {T} will be no less than {\frac{t}{19}} and no more than {19t}. The numbers come from considering the middle half or middle 90% of the span.
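
For concreteness, here is a minimal sketch of the arithmetic behind these intervals; the helper name doomsday_interval is ours, purely for illustration. If your encounter falls at a uniformly random fraction {f} of the total lifespan, then {T = t(1-f)/f}, and the middle mass {c} of {f} gives the interval.

    # Doomsday interval: arriving at a uniformly random fraction f of the
    # lifespan means the future lifetime is T = t*(1-f)/f.  Taking the
    # middle mass c of f, i.e. f in [(1-c)/2, (1+c)/2], bounds T as below.
    def doomsday_interval(t, c):
        lo_f, hi_f = (1 - c) / 2, (1 + c) / 2
        return t * (1 - hi_f) / hi_f, t * (1 - lo_f) / lo_f

    print(doomsday_interval(1, 0.50))   # (0.333..., 3.0)   -> t/3 to 3t
    print(doomsday_interval(1, 0.90))   # (0.0526..., 19.0) -> t/19 to 19t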

The key word in our phrasing is if: if your cognizance is a uniformly random event in the lifespan. The argument’s force comes from taking your birthday as a random event in the span of humanity. We can adjust for the population growth curve and our ignorance of prehistoric times by taking {t} to be your ordinal in human lives ordered by birthday and projecting {T} in the same units. If there were 50 billion people before you, this gives 95% confidence that the sum of human lives will not reach a trillion, since at 95% confidence the total is at most {20t}, which here is one trillion lives.

We can try the same reasoning on conjectures. Suppose you just learned about the P vs. NP problem from catching a reference to this blog. By Kurt Gödel’s letter to John von Neumann we date the problem to 1956, which makes {t = 63} years. Presuming your encounter is random, you can conclude a 50-50 chance of the conjecture being solved between the years 2040 (which is 21 years from now) and 2208 (which is still shorter than Fermat’s Last Theorem lasted). The less you knew about P vs. NP, the more random the encounter—and the stronger the inference.
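
Plugging {t = 63} into the doomsday_interval helper sketched above reproduces these dates:

    t = 2019 - 1956                       # 63 years since Gödel's 1956 letter
    lo, hi = doomsday_interval(t, 0.50)   # (21.0, 189.0)
    print(2019 + lo, 2019 + hi)           # 2040.0 2208.0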

Perhaps Wikipedia’s 1971 origin for P vs. NP from its precise statement by Steve Cook is more realistic. Was my learning about it in 1979 ({t=8}) a random event? If so, I should have been 75% confident of a solution by 2003. Its lasting 16 more years and counting busts the 83.3% confidence interval. Well, Dick learned about it within weeks, at most months, after inception. If that was random then its longevity counts as a 99.9% miss. Of course, Dick’s learning about it was far from random. But.
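
The general rule behind these figures, under the same uniform assumption, is that a conjecture of age {t} gets resolved within {kt} more years with probability {\frac{k}{k+1}}. A minimal sketch:

    def confidence_within(t, more_years):
        """P[T <= more_years] for a conjecture of age t, under uniform sampling."""
        k = more_years / t
        return k / (k + 1)

    print(confidence_within(8, 2003 - 1979))   # 0.75:  75% confident of a proof by 2003
    print(confidence_within(8, 2019 - 1979))   # 0.833: the 83.3% bound busted by 2019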

Doomsday Conflict

Our point, however, is this: Consider any conjecture that has lasted {t = 87} years. That’s the point at which the {0.01} exponential and Doomsday models come into conflict:

  • By Doomsday, there is 75% confidence of the conjecture lasting at least {\frac{t}{3} = 29} more years.

  • But since {0.99^{29} < 0.75}, the H-S best model says one should have less confidence; a quick numerical check is sketched right after this list.
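
Here is that check; the snippet is ours and just evaluates survival probabilities under the {0.01} model.

    # Survival chances under the memoryless 0.01-per-year model
    print(0.99 ** 28)   # ~0.755, still above the Doomsday 75%
    print(0.99 ** 29)   # ~0.747, below 75%: the conflict begins once t/3 = 29
    print(0.99 ** 87)   # ~0.417, the model's chance of lasting at least 87 years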

The longer a conjecture lasts, the greater this conflict. We are not addressing the issue of uniform sampling over the lifespan of the conjecture, but this conflict applies to all observers past the 87-year mark. The honeycomb conjecture was documented by 36 BCE and finally solved by Thomas Hales in 1999.

Note that since {0.99^{87} = 0.417\dots}, one cannot evade our model conflict by saying only a small fraction of conjectures last that long. Despite H-S noting that the honeycomb conjecture and Fermat and some others break their curve, their curve at left below shows excellent agreement up to and beyond the 100-year mark.

[Two-panel figure from Hisano-Sornette: the fit at rate 0.01 (left) and, with short-lived conjectures removed, at rate 0.02 (right)]

The curve at right shows that a {0.02} rate fits even better when short-lived conjectures are deleted. Since {0.98^{14.33} < 0.75}, the conflict then sets in at {t = 43} years. Then it applies even for P vs. NP dated to Cook. At {t = 100} the conflict is considerable: Doomsday gives 66.7% confidence of the conjecture lasting at least 50 more years, versus a {0.98^{50} \approx 36.4\%} chance (just under {\frac{1}{e}}) of its lasting that long by H-S.
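
The same kind of quick check, again just a sketch of the arithmetic, for the {0.02} rate:

    # Survival chances under the 0.02-per-year variant
    print(0.98 ** (43 / 3))   # ~0.749 < 0.75: the conflict now starts around t = 43
    print(0.98 ** 50)         # ~0.364, versus the Doomsday 2/3 at age t = 100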

This conflict says there must be a factor that invalidates either or both lines of reasoning. But the Hisano-Sornette estimates are supported by data, while our Doomsday representation is quite generic.

Do Odds Matter?

A further wrinkle is whether the collective balance of opinion on whether a conjecture is true or false has any bearing. If the field is evenly divided, does that argue that the conjecture will take longer to resolve?

The idea of “odds” is hard to quantify and has come into play only recently. Despite lots of chat about betting on P vs. NP, and Bill Gasarch’s wonderful polls on which side is true, data for other problems is scant.

This prompts us to mention that the Springer LNCS 10,000 anniversary volume came out in October. It includes a contribution by Ken titled, “Rating Computer Science Via Chess.” The title refers both to how the ratings of the best chess computers have mirrored Moore’s Law and to how those programs draw on many general CS achievements in both hardware and software. For now we note also the contribution by Ryan Williams titled, “Some Estimated Likelihoods for Computational Complexity.” Ryan places only 80% on {\mathsf{P \neq NP}} and puts {\mathsf{NC}^1} versus {\mathsf{TC^0}} at 50-50.

There are also non-betting and non-opinion senses in which conjectures are regarded as “almost certainly true.” We recently posted about this. Perhaps something more can be gleaned from highlighting those conjectures in the H-S data set to which such criteria apply.

All of this speculation may be less idle and less isolated if it can be carried over to unsolved problems in other walks of life. Corporations and consortia doing open-ended research may already need to quantify the average time to achieve potential breakthroughs—or dismiss them—because they deal with many “feelers” at a time. The Doomsday assumption of random encounters probably doesn’t apply there. But there could be large-scale tests of it by “punters” seeking out random lifespans they can bet on, such as runs of shows or the time to make a movie of Cats, and profit by applying it.

Open Problems

What do you think about attempts to forecast the time to solve P vs. NP, or to forecast breakthroughs on larger scales?

There are many rebuttals to the Doomsday Argument, some general and some context-specific. Is our argument that it conflicts generally with memoryless exponential models subsumed by any of them?

32 Comments
  1. December 22, 2019 1:52 pm

    P vs NP was already solved in:

    https://www.preprints.org/manuscript/201908.0037

    • rjlipton permalink*
      December 22, 2019 3:38 pm

      Dear vegafrank:

      I do not agree that you have solved P=NP. I believe that it is still open. Hence the whole point of this post.

      Thanks for your continued interest in P=NP. But I do not see that it is solved yet.

      Best

      dick

      • December 25, 2019 3:57 pm

        Dear dick,

        I am the one who owes you thanks, for reviewing some of my several past attempts and answering some of my emails. I can say without hesitation that you are one of the few top researchers who take a piece of their time to answer ordinary people who are deeply interested in the P=NP problem. I see that your answer in this post does not say exactly whether my arguments are correct or not, so I hope some day I will have the opportunity to explain this latest work to you: maybe after a major revision at some journal or conference, where you could see it better once the fixable details are cleaned up (who knows!!!).

        Best

        frank

  2. December 22, 2019 3:41 pm

    Why didn’t Grisha Perelman or Peter Scholze try to solve it?

    • Ivan Gusev permalink
      December 29, 2019 6:40 am

      Grisha is busy; he is working somewhere in northern Europe.

  3. December 22, 2019 5:51 pm

    Heh, the year 2024 appears from another (tongue-in-cheek) argument: from empirical observations, it seems that the following formula holds

    (Δ t-historic) = (Δ t-today)^(log 5/ log 2)

    where
    Δ t-historic is the time in decades from the invention of the printing press (1436) and
    Δ t-today is the time in decades from the opening of the ARPANET (1969).

    According to this formula, for

    Δ t-historic = 1969 – 1436

    we get

    Δ t-today = 2024 – 1969
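
    A quick check of this arithmetic (a minimal sketch, added here for illustration):

        from math import log

        dt_historic = (1969 - 1436) / 10              # 53.3 decades since the printing press
        dt_today = dt_historic ** (log(2) / log(5))   # invert the log5/log2 exponent
        print(1969 + 10 * dt_today)                   # ~2024.4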

  4. javaid aslam permalink
    December 23, 2019 12:22 am

    Unless the TCS community becomes more open to reading the “conjecture proofs,” I doubt that there can be any reasonable model for making any prediction at all.

    This “simple .01-solution-chance-per-year model” does not make much sense, because there are some tacit assumptions here:
    - The solution will come only from recognized computer scientists/mathematicians.
    - The solution is more likely to be NP != P. When 95+% of researchers do not believe NP == P, how can the solution (if indeed NP == P) be created by a much smaller subset of “recognized believers”?
    I submit that even those in the smaller subset are not disclosing their work, out of apparent fear.

    Who is going to proof-read the solution, especially if it is P == NP, from an unrecognized researcher?

    I was hoping to see some prediction on graph isomorphism being in P, but nothing has been mentioned.
    What if the graph isomorphism and NP = P proofs are tied together? Is there no possibility that the foundation of NP-completeness (and especially #P-completeness) proofs has some subtle bug? A surprise may be only months away for open-minded readers. Or perhaps never!

    • rjlipton permalink*
      December 23, 2019 8:49 am

      Dear javaid aslam;

      What a thoughtful and interesting comment. You make several interesting points. Let me go over them.

      Who is going to read a proof that P=NP? I agree. There are many claimers and few readers. This could be an issue. We have talked about this issue before. Of course it remains a problem. I think you have raised a problem that is basic and has no easy answer.

      Relationship to Graph Isomorphism? I was a bit surprised to see this brought up. You could have also added factoring integers. We could work out a prediction on these. This might be quite interesting since both have been around much longer than P=NP. Interesting suggestion.

      Bug in whole structure of P=NP world? This is least likely. Of course even some strong mathematicians have claimed that Peano arithmetic itself is inconsistent. So it is possible that there is a problem. But unlikely.

      Thanks again.

      Best

      Dick

      • javaid aslam permalink
        December 23, 2019 4:47 pm

        Dear Prof. Dick Lipton,
        Thank you for taking the time to read and respond with details.

        Following your past advice, I have been working on applying my “FP = #P” construction to the graph isomorphism problem.

        Ironically, my generators of the perfect matchings in K_{n,n} apply so naturally to the bijection between the vertex sets, an insight from the previous symmetric-group-generators-based isomorphism solution.

        Yes it is just a claim as of now, but my solution is going to show that the isomorphism (and the associated counting) problem is no harder than counting all the perfect matchings in a bipartite graph.
        Best Regards,

      • javaid aslam permalink
        December 23, 2019 5:34 pm

        oops.. ” isomorphism (and the associated counting) problem is no harder than counting all the perfect matchings”..

        I meant isomorphism (and the associated counting) is as easy as “counting all the perfect matchings” ..

      • January 6, 2020 11:25 am

        On the topic of verifying proofs:
        I wish we had robust automated proof checkers, so that a proof’s author could bear the burden of verifying his or her own proof.
        Motivating folks to work on automated proof checkers seems to be a perennial challenge. It seems the work is too mathy for software developers and not mathy enough for mathematicians ☹️

    • December 23, 2019 8:49 am

      Just become recognized :-))

  5. December 25, 2019 4:02 am

    The title “Predicting When P=NP…” implies that the problem is already solved. I think, rather, that a study should be done on the prediction of when a solution of the P vs NP problem will be accepted as true. Happy New Year!!

    • December 27, 2019 12:46 am

      This is a case where the lack of a subjunctive in English causes confusion. Changing “is” to “Will Be” might have been clearer. Mind you, “never” is a possibility…

      • December 27, 2019 4:03 am

        I read it like this: “We know that P=NP and it will be solved in year x.” This moved me to read Gödel’s unpublished essay “Is Mathematics Syntax of Language?” (Gödel, Collected Works, Vol. III). Interesting in the case that a machine can make distinctions.

  6. Ivan Gusev permalink
    December 29, 2019 12:44 pm

    I would like to add that these statistics miss the state of the framework developed before each problem was solved. Instruments and tools are pretty important. It would be more honest to track a graph of the framework’s “readiness,” raising it from zero to one when the problem is solved, and then to look at its average growth rate instead of just time. Maybe they would find that after every “spike” (meaning the arrival of a new tool) the chance of a problem being solved increases dramatically and after some time fades away. It is pretty obvious why.

    And I would like to ask you exactly about tools, and more specifically about the mechanical part of mathematical language in proofs. Suppose one sees a graph and notices that it is above zero on the right half and below zero on the left half, that it crosses zero, and that it does not appear to end at all nor to approach anything. One could conclude a lot from the graph, but without the epsilon-delta tool one would struggle to apply those observations rigorously in any proof. So, if I have a cool vision of some objects, I have a new philosophical basis for them, and I have developed some fancy tricks no one has thought about, how do I invent mechanical tools for them, like the epsilon-delta mechanism (sorry, I don’t know what it is called in English), that could be the basis of mechanical work with my constructions?

    I suggest tl;dr for the following.

    For example,
    I have a logic called the “Logic of Mutual Determinations.” It asks how one would rationally think in a world where EVERYTHING is the work of hidden parameters and one is sealed inside that world with only one’s observations. How would such a being conceive the world, and how would it be able to gather information about it? I started to use prime numbers (the first tool) to describe the “asymmetrical property” of every object, because if an object is symmetrical it doesn’t affect anything at all (you just can’t detect it in your observations); hence an object exists only if its appearance has an asymmetrical property. Imagine an HMM (hidden Markov model) with quantized randomness, or absolutely deterministic quantized quantum chaos (lol, what a tautology). In an HMM, after every observation we don’t have to work with randomness, because we can use prime numbers to quantize the system and deduce, from the actions we apply to the system, what it is made of, simply by basic arithmetic operations.

    This logic is pretty useful for P/poly vs. NP, because for every instance of the problem we can create its asymmetrical property under the mechanics of TMs and then ask questions about the set of strings that we got. I call such sets mutually determined sets, and the strings extracted from them the “asymmetrical property” under the vision of (*name of another mathematical model*). I just saw that some strings must be infinite in the case of NP problems, or the complexity of Markov algorithms grows exponentially over the set of such strings plus substrings.

    Unfortunately, I’m not even close to P != NP, because my proof got stuck when RH arose in it. I tried to bypass it and made another tool that allowed me to connect L-functions (when non-natural primes (Q, R, C) appear I call such functions U-functions) with those MD sets through some convoluted topology; I called it the “two-compound derivative,” and it is used to define LoMD on sets of continuous objects. When I did that, I saw exactly why NP-complete problems are so hard. I have found the exact explanation, the exact reason for it, but I can’t catch that fish at all. My hands are short and my net is poor. I don’t have proper mechanical tools to rephrase the things I do into mathematical sentences that would form a mathematical proof. So how do I eliminate this gap between what I know and what I have to know in order to be mathematically correct in my thoughts and reasoning?

    • December 30, 2019 4:04 pm

      Interesting what Ivan says here: I had to understand my own proof as best I could, a posteriori. I summarize a bit: if it is true that set theory is a reduction of mathematics, and if it is true that distinction creates complexity while reducing it (Luhmann, Spencer-Brown), then to reduce the complexity of the data set (NP problems), another set (of distinctions) had to be created and put into correspondence.

      Task for quantum technologists: how to create a set of distinctions and relate them to the data?

      • Ivan Gusev permalink
        December 31, 2019 4:47 pm

        LoMD was originally defined from my thought experiment about a physically correct logic. I can clarify how it works, but some nonsense has to be introduced in support. The following isn’t true, but it is clarifying.

        Nonsense number one: a counterexample to the halting problem.
        Let M be a machine with a black box (BB) that has to determine the halting of another machine S. It can do so only when it enters the state AGREED with the TM that it simulates. Now M can write into the black box the “asymmetrical property” of the machine it simulates, but S does not have access to the BB and doesn’t have its own. At the end, M shows the proof it pulls out of the black box to S; if they agree with it, they halt, and if not, both run forever. Now if S = M they both halt, saying that they would run forever. Well, it doesn’t make much sense yet.

        Nonsense number two: there exists one and only one black-box hierarchy for extensions of the TM world. Let M now be a machine with an oracle that knows what M did in (1); we can take M' that does the same to M, creating another black box, and that BB gets its own machine with an oracle, and the chain continues. So now we know from (1) that all of the TMs in the chain agree with each other, and those that never halt create “dead worlds” that we simply exclude from The Hierarchy.

        Nonsense number three: the Church-Turing thesis is wrong. Now let us put a map of locality on every world with its own “counterexample to the halting problem,” and define the natural numbers to be the map of locality of the P-related world. And let’s assume that every next world in our hierarchy has its own set of numbers with ITS OWN LOCAL FUNDAMENTAL THEOREM OF ARITHMETIC. So it can have unnatural PRIME NUMBERS (Q, R, C) with its own RZF with its own Euler product, what a twist! which describes through analytic continuation its own consistency(!!!). Now every action of a TM can be modified deterministically according to its own set of numbers. If we have a certain algorithm that runs in polynomial time, we can show that there exists another algorithm that does MORE in the same time because its world has a different locality. Now we can easily show that if they have different sets of prime numbers, with different distributions and gaps among them, and different local rings, we have to slow down and add extra actions to simulate them. But we have to be very careful with queer polynomials; I call them “continuum-related” polynomials because they describe algorithms and the mechanics of locality from different worlds.

        Now for the failed attempt on P!=NP.

        Let a set of graphs G be defined through the “Logic of Mutual Determinations” (Rus. Логика взаимоопределения) to construct a mutually determined set that has an “asymmetrical property” extractable from it. Now we assume that it has its own “hidden parameters,” like an HMM, and that we can extract them. They are not only capable of describing every action that we perform on graphs (cut/insert an edge); they can also distinguish every edge/vertex, the actions done to them, and the graphs they are in. Now we can simply FACTOR our graph into its set of prime numbers. That makes us capable of giving a number to every vertex/edge, and every operation on them allows us to construct another graph that has its own set of prime numbers. Mind blown! After that step we have to analyze the set of prime numbers that belongs to K-Clique. What exactly makes K-Clique hard, then? I have two (or 3?) worlds: first, the cardinality of the P world and the K-Clique-P world is different; and second, the topological property of K-Clique has an inner property that naturally makes it hard to solve, because there is not enough information to extract. So we establish a barrier of independence, saying that we cannot create a TM whose behaviour would be provable and which would solve K-Clique in polynomial time. And there is my great disappointment: I cannot go further and say that there is nothing beyond this barrier. I cannot show that the independent TMs are an empty set. Sorry for being rough and pretentious; I’m just sick, feel severe pain, and want to sleep (4:46). Also, the topological property is extracted with the “two-compound” derivative that I use for LoMD on continuous sets.

  7. January 5, 2020 3:02 am

    A little clarification: today I would change a word in the name of my algorithm GDSD (Given Data Set Distinction): “Select” to “Set” (establish). Happy 2020!!!

  8. January 10, 2020 3:07 am

    I found the best definition of mathematical correspondence in the Spanish Wikipedia. I translate: Given two sets X and Y, and a function f that determines some binary relation between some element of X and some element of Y, we say that this function f defines a correspondence between X and Y, which we represent as

    f: X → Y,

    when at least one element of X is related to at least one element of Y.
