# Theory Has Bet On P=NP

*Theory has made a bet that P≠NP in many ways.*

Pierre-Simon Laplace was a famous mathematician of the eighteenth and nineteenth centuries, known for many things. How about having a transform named after you—the Laplace transform—pretty cool.

Today I want to discuss our version of his question: “What is the probability that the sun will rise tomorrow?” Our question is: what is the probability that P=NP? Okay, it is not really the same as his question, but it is an interesting question.

**You Bet Your Life?**

I raised the issue of what is the right bet about P=NP in my recent post. There was a great deal of discussion, which was extremely interesting. I thank everyone who was kind enough to make a comment.

The central discussion, however, moved, I think, to beliefs vs. betting. This is a cool topic, but not exactly what I had in mind. I am not really interested in starting a betting pool on P=NP. I did want to make this point:

Theory, as a field, has placed a bet that P=NP is impossible.

Here is how the field makes that bet every day:

*Money:* Try sending a proposal to NSF, for example, and state that you wish to work on proving P=NP. Or even a weaker goal, that you wish to look at the possibility that factoring is in P. The chances that you will get funded are close to zero.

I tried an experiment a few years back: I wrote a reasonable—in my opinion—proposal on attacks on factoring. I am still thinking about them, and have posted on one of my ideas. The proposal got no dollars. Perhaps it was poorly written, or perhaps funds were tight that year, or perhaps my ideas were stupid; all of that could have been true. But the written reviews made it clear that I could not be funded without a “partial result.” If I had a partial result, then I probably would not need NSF funding: you either can factor or you cannot.

The point is that the field has made a huge bet that P is not equal to NP through the types of proposals that are funded. If the probability that P=NP is really closer to even, then I would argue that more dollars should flow into different projects.

*Eye Balls:* Try sending a paper to a conference that goes against the conventional wisdom that P=NP is impossible—I think you will get nowhere. The same for journals or other places that control what we get to see. Again I think that the field has made a bet; the bet is that such papers are uninteresting. For example, would a paper that assumed that factoring was easy and then derived consequences of this assumption get attention? I think that it would not.

*Mind Share:* Try getting people to think out-of-the-box about P=NP; it is hard. I started this blog mostly to do just that. I am happy that people seem to enjoy the blog; however, I would like people to think about P=NP in a serious way.

Just the other week Princeton/IAS held a meeting on Barriers in Computational Complexity. The organizers and the speakers consisted of some of the best theorists in the world. I really wish I could have been there to hear such top researchers speak on their work. However, my understanding is that the tone of the meeting was:

“How can we finally prove what we already know is true?”

My student Shiva Kintali, who was at the meeting, had two cool comments to report. The first: one speaker mentioned that a Nobel prize winner in physics was once asked, “If you could ask one bit of information from an all-powerful alien, what would you ask?” He said, “I would ask if P=NP.” At this point many people in the audience said,

“We all know that bit; all we need is a proof.”

This was surprising to Shiva. The second was that in almost every panel discussion, everyone unanimously said, “I believe P≠NP, since a world with P=NP would be hard to imagine.” The most common reason they believed P=NP is impossible, Shiva says, is that in such a world creativity would be automated.

That is a pretty strong bet that P=NP is impossible. They may be right, but let’s at least agree that they are making a strong bet.

*National Security:* Try getting security people to worry about a classical factoring algorithm. Good luck. I think there are national security issues here—by “national” I mean more than the US. Any country’s safety and economy is potentially at risk.

What if the “wrong people” figure out how to factor, or how to compute discrete logarithms, or in general how to solve hard problems? What would the consequences be? I think at a minimum there should be plans for how to react quickly if modern cryptography is destroyed.

This is another implicit bet. If you think that it is impossible that P=NP, then you will not plan for the failure of your crypto assumptions. Is this not a bet of a million to one? I say this because the consequences of a failure could cost hundreds of billions, perhaps trillions—the latter if national security is compromised.
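To make the stakes concrete, here is a small sketch, with toy non-cryptographic numbers, of why efficient factoring would destroy RSA: once the factors of the public modulus are known, the private key follows from a few lines of arithmetic.

```python
# Toy RSA example: the private key falls out of the factors of n.
# Real keys use primes hundreds of digits long; the only thing keeping
# d secret is that factoring n is believed to be hard.

p, q = 61, 53            # the secret factors of the public modulus
n = p * q                # public modulus (3233)
e = 17                   # public encryption exponent

# Anyone who can factor n can compute the private exponent d directly:
phi = (p - 1) * (q - 1)          # Euler's totient of n
d = pow(e, -1, phi)              # modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)  # encrypt with the public key
recovered = pow(ciphertext, d, n)
print(recovered == message)      # True: the "attacker" reads the message
```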

**Open Problems**

I wanted to point out that we do bet every day on P=NP, whether we realize it or not. I think that, given the potential for many outcomes to the P=NP question, we should be more open-minded about the computational world. As I have pointed out, P=NP could be true without “automating creativity” or yielding a practical algorithm. But it still could be true.

“…would a paper that assumed that factoring was easy and then derived consequences of this assumption get attention? I think that it would not.”

Well, it certainly would if you showed e.g. easy factoring => Riemann hypothesis is false or something like that. Taking the contrapositive might be a better way to present the results, though. 🙂

Actually, I think various papers that assume factoring and discrete log are easy have gotten some attention. They tend to involve computer scientists (OK, they’re made out of quantum particles) figuring out the consequences of having a quantum computer. Like, I think there are papers by Okamoto and collaborators on public key cryptography intended to be secure against quantum attack—only hitch is, you need a quantum computer to do the encryption and/or decryption; the results are obtained by assuming discrete log is easy.

Here’s one by Okamoto, Tanaka, and Uchiyama:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.108.4896

They say: “The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.” QPT = quantum poly-time Turing.

As an engineer reading the complexity theory literature, one big obstacle is that the notion of polynomial reducibility seems mighty blunt.

Relative to engineering-oriented ways of thinking, the notion of reducibility in complexity theory seems to broadly correspond to model order reduction (MOR) in simulation theory, which in turn broadly corresponds to the notion of pullback in algebraic geometry. And yet the mathematical notions of MOR and pullback—even their notation—exhibit a pleasing level of flexibility and refinement that (at least for me) are mighty tough to discern in the complexity theory literature on reducibility.

Is the idea of reduction in complexity theory somehow like the idea of integration in analysis? Whose enrichment was necessary to the solution of (for example) long-standing problems in harmonic analysis? What are the opportunities for enriching the notion of reducibility in complexity theory?

I would be especially grateful for recommendations to articles on reducibility that explicitly link to the literature on MOR and/or algebraic geometry—it is very likely (almost certain in fact) that I just haven’t selected the best articles to read.

I had two comments: first of all, you say that “at a minimum there should be plans on how to react quickly if modern cryptography is destroyed.” I would assume that national security agencies *are* actually prepared for this. At the very least, I assume they’re prepared for easy factoring.

Second, can you fix the link to your ideas for factoring? Thanks!

I have no idea what they are prepared for. But my expert friends in security, with whom I have raised this question, have said that they know of few (no?) papers that give good ideas on what to do if the “unthinkable” happens.

I have wondered exactly the same thing: how would the national security people react to the “unthinkable”? What if someone were to figure out a polynomial-time algorithm for an NP-complete problem and then post it on the arXiv, where it is publicly available? Once it is learned that the paper speaks the truth, and many people have already downloaded it to their computers, how would the security people deal with that? Once it is publicly known, that is it.

Now what if the person who figures it out doesn’t post it on the arXiv, perhaps because he is aware of the consequences, and just sends it to one journal, sharing his work with no one? Would the journal be obligated not to publish such sensitive information and to keep it secret? Would such good news be released to the world, or would even that be kept secret? Once such news is known, someone working for the journal could be bribed into giving away a copy of the paper.

Really, what would happen? I have wondered about this a lot.

Neither a proof of P=NP nor the existence of a polynomial-time algorithm for factorization would necessarily endanger national or international security … as long as factorization remains slow enough in practice (as an illustration, think of a running time of n^{100} for factorizing n-bit numbers).

On the other hand, a sudden practical breakthrough in factorization algorithms could actually vaporise much of the cryptographic infrastructure the world has today — even if the run time remains in the superpolynomial area.

Proving P=NP would only vaporize the common (and very convenient!) theoretical notion of classifying algorithms as “efficient” if they run in polynomial time, and as “inefficient” otherwise. Which would be very regrettable, of course …
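That convention can be illustrated with exact arithmetic. The sketch below, using the arbitrary exponent 100 from the comment above, finds the first input size at which the “inefficient” 2^n actually overtakes the “efficient” n^100; below that size the polynomial algorithm is the slower one.

```python
# Where does 2**n finally overtake n**100? Exact integer arithmetic,
# so no floating-point rounding is involved.

def crossover(exponent):
    """Smallest n >= 2 with 2**n > n**exponent."""
    n = 2
    while 2**n <= n**exponent:
        n += 1
    return n

# Sanity check on a small exponent: 2**10 = 1024 > 10**3 = 1000.
print(crossover(3))    # 10
# For n**100 the crossover is near n = 1000, so a "polynomial"
# n**100 algorithm loses to brute force on all smaller inputs.
print(crossover(100))
```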

Mathematicians often have strong hunches regarding unproved theorems, but the confidence that P != NP seems less justified than the confidence in other conjectures since it asserts the non-existence of an algorithm. We don’t have that many theorems regarding algorithm non-existence to go by. Most of the ones we do have are variations on one result: the halting problem. This seems very different from say the Riemann hypothesis where we have a great deal of numerical evidence consistent with the conjecture.

This post expresses better what I tried to say, that when it comes to practical engineering, reduction to a halting problem somehow “feels” like an excessively blunt mathematical instrument compared to suppler tools like number theory, or pullback methods in algebraic geometry and category theory.

Am I just selecting the wrong literature on this?

Although I agree that it is good to question the conventional wisdom on P vs. NP, I do not agree that the community ignores serious (technical) attempts to question the validity of the conjecture. And I would be even more surprised if a paper that gave surprising consequences of a polynomial-time algorithm for factoring were ignored.

It’s amazing how, even in an academic setting, a bias quickly grows against ideas that are deemed unacceptable. It makes the undercurrent of this video all the more relevant even in modern times:

http://video.google.com/videoplay?docid=-5122859998068380459#

As a species we are clearly not as open-minded as we like to think. In this link, towards the middle, I love how Alan Kay distinguishes between news and new. We crave news, but are generally afraid of new things. It seems to accurately describe the restrictions in our pursuit of knowledge.

Paul.

If someone would like to obtain grant money to work on efficient factoring algorithms, isn’t it possible to propose improvements to the running time of existing factoring algorithms? (By shaving log factors, so to speak.)

That would not look so much like an “all or nothing” bet as proposing to work on a polynomial-time factoring algorithm.

And if you end up with shaving more than log factors, the grant agency probably won’t mind (maybe the banks will).

Perhaps you don’t like this idea because a polynomial-time algorithm would have to use a radically different approach than the existing “efficient” factoring algorithms?

“For example, would a paper that assumed that factoring was easy and then derived consequences of this assumption get attention? I think that it would not.”

Perhaps not from an algorithms perspective, but from a complexity perspective: sure! For example, if someone were to prove that factoring in P implies PH collapses, I think that would get some attention. This, of course, would have the effect of reducing the conventional wisdom on factoring (that it is not in P) to the conventional wisdom on PH (that it doesn’t collapse).

Richard —

I also think you’re being far too pessimistic here. I think there’s a large (and growing) tradition of papers of the form: “If this is true, then the following interesting consequences hold…” in theoretical CS, particularly complexity. If you wrote a paper assuming that factoring was in P and derived interesting consequences, I’m SURE it would be published (in Complexity, if not in FOCS/STOC). The key, of course, is that they’d have to be interesting consequences.

Similarly, you could undoubtedly get funding for research into factoring, but you’d have to write a suitable proposal. Maybe instead of writing it as “Factoring is in P” one would be better off writing on “New directions for faster factoring algorithms.” But at least when I was in graduate school, there seemed to be plenty of people working on factoring (e.g., the Lenstras) whom I assumed were funded.

You might NOT get such funding from the national security establishment, which naturally has a vested interest in keeping factoring-based cryptography hard. But I think you’re being naive if you don’t think that there are groups funded by the NSA or similar organizations doing such work — but not publishing, and requiring clearance to work on it. I’m sure they’re also working on variations of crypto schemes with different assumptions — although my impression was that that is also mainstream cryptography. That is, if you wrote a paper on a new crypto algorithm that would be secure even if factoring was in P, and the algorithm had other suitably nice properties (efficiency, etc.), it would be published in standard crypto conferences.

I would suggest to you that the implicit bet we have (in the scientific community — who knows what’s going on in the “money circles”) on P ≠ NP is maybe more in the 100-to-1 or 1000-to-1 range, not the millions-to-one you claim. Which doesn’t seem unreasonable.

While I agree in general about how the field has bet, there is one part of your post that assesses the result of this bet incorrectly:

“For example, would a paper that assumed that factoring was easy and then derived consequences of this assumption get attention? I think that it would not.”

The paper “A polynomial-time theory of matrix groups” by Babai, Beals, and Seress (which you even blogged about) from STOC 2009 does exactly that. It assumes both factoring and discrete log are easy and proceeds to show that if this is so then matrix group problems in odd characteristic are also easy.

In fact I would conjecture that any paper that derived anything non-obvious and of significance from the hypothesis that factoring is easy would get plenty of attention, because of what we know about quantum computing and since it is a TFNP problem and not known to be NP-hard. (Deriving non-trivial results about an associated complexity class for factoring analogous to PPAD would have major interest.)

Mike and Paul,

I probably did go a bit too far on factoring. It is true, I still think, that not many people are thinking in this direction. But perhaps they are, and no great result has yet emerged.

Yes, I would love to see more results about if factoring is easy then X is true.

I guess you know this already, but a recent result along those lines is: if factoring is easy, then the matrix group membership and order problems over finite fields are efficiently solvable (as they are on quantum computers). See the STOC ’09 paper of Babai et al.

You should apply for quantum computing funding 🙂 Even if you don’t think the model is realistic/physical/etc. people in quantum computing are very interested in understanding what the implications are for a world in which factoring is easy. Indeed I’d even go so far as saying the field as a whole is more interested in the complexity of factoring than most of theoretical CS (at least those of us who are in the field because we want to understand how the world works, and not how it should work to let us continue to be funded.)

Do you think that quantum computers make this research harder to fund? We already have a polynomial-time factoring algorithm. Running it requires computers that do not exist. However, the research path to building a quantum computer might be more straightforward than for a classical factoring algorithm. There are clear partial results and milestones all along the way. Yes, it is more expensive, but the aggregate long-term risks are low. Even if quantum computers are far off, the partial results are extremely interesting, for physics if not computer science. Funding agencies might prefer this. Yes, I realize it is a mistake to think of all funding agencies as one entity.

I think there is an interesting question here. Some believe that quantum machines will never work, others that it will take decades, and others are more optimistic. I would argue for quantum funding—no problem, for all your reasons. I would argue, however, that there may be a classical algorithm out there that could factor much faster than anything we know today.

Physics is pretty clear that quantum machines should work. There’s a small chance that we’ll hit some insurmountable engineering problems for big quantum computers, but the chance of practical quantum computers being developed in our lifetimes is vastly greater than the chance that P=NP.

“Try sending a proposal to NSF, for example, and state that you wish to work on proving P=NP. Or even a weaker goal, that you wish to look at the possibility that factoring is in P. The chances that you will get funded are close to zero.”

I assume you have first- or second-hand evidence, so I can’t really argue. But I find this hard to believe. All I can say is that I would have no problem recommending funding a proposal on factoring, as long as the proposal made a clear case for what techniques would be tried, and why there is hope they would work. (I.e., no one will fund someone to just go off and think about the problem without a clear line of attack.) For that matter, that goes for any proposal, on any topic.

(Sorry, I wrote my comment before getting to the part in your post where you mention having your proposal turned down. Still, I think you are being harsh by generalizing from that one instance to the general case.)

I agree. Sometimes I get carried away. I apologize to all if I was too tough. I do think, however, that theory, like many fields of science, is driven by “fads” and “what is hot.” And sometimes this restricts the ideas that are listened to.

But again, sorry if I was too tough.

This has been said before in the comment thread, but just so we are clear… P=NP does not imply that there would be no hard problems anymore. Most likely, P=NP would have no practical consequence. At least, that’s what I believe.

It is also unrelated to whether creativity can be automated. You are sort of assuming that our brain is somehow capable of solving NP-hard problems. I have no such evidence. Computers are very creative checkers players, for example (even though checkers is a solved game).

How exactly would the national security of any country be compromised by P=NP, let alone a fast factoring algorithm? Everyone would need to update some cryptography, or maybe, if the worst came to it, switch to one-time pads. That would be some expense, but not massive. And even if one country broke all the crypto of another and used that for a surprise attack, it would make some difference, but it would be quite small.

I think that history has shown that breaking codes can have a huge impact. The breaking of the German Enigma machine was one of the reasons that the Germans were defeated. I find it hard to believe that this would not be the case today. Also if someone figures out how to factor, for example, they might choose not to tell anyone–no?

I think part of the problem may be less that there is some bias against attempts to prove that P=NP, and more that it is harder to write those attempts up as a conventional mathematical publication. For instance, I find your blog posts about why we should not take the conventional wisdom for granted extremely thought provoking, and I regard the ideas they contain as serious, valuable ideas. However, they are also the kinds of ideas that are much better suited to a blog post than a paper, since they aren’t statable in the usual Definition/Lemma/Theorem language.

Why is that any different from attempts to prove that P does not equal NP? I’m not certain I’m right here, but perhaps it is simply easier to prove rigorous partial results. I’m sure if somebody could prove a result that said, “If the solution to this purely combinatorial problem is ***, then a polynomial-time algorithm for SAT can be built as follows,” then it would attract a lot of attention, particularly if the premise was not obviously false. But I know of no results of this kind. Similarly, if somebody could prove in a highly nontrivial way that a polynomial-time algorithm existed for some problem in NP, and people had previously had the intuition (without proof) that that problem was NP-complete, then it might count as evidence that P=NP. But again, I know of no such result.

I may have misremembered, but I think Dominic Welsh once speculated that a proof that P=NP might come from knot theory: the problem of whether two knots are equal somehow feels NP-complete and also feels as though it might conceivably have a polynomial-time algorithm. What I’ve just written may be obviously stupid, but I think it is in the neighbourhood of something that wasn’t. But if a serious programme along those lines were developed, then I’d have thought it should be possible to get funding for partial results (partly because the partial results would be very interesting in themselves).

Hi Everybody,

I will add my two pence worth.

As an outsider who is a science lover, I see that my contribution would be to act in a contrariwise manner. I have done research into P=NP and have been burnt twice! However, if as an outsider I did discover that P=NP, I would use it to revolutionise software programming in the private sector and make some money! This leads to the question of incentives, which I will not go into here.

What really is fascinating to an outsider about the P=NP phenomenon is how it has blotted out discussion of other, much more realistic propositions like BPP = NP. In fact I would love a post about the complexity community’s majority view on BPP = NP! Are there researchers working towards BPP = NP?

What do people on this blog think is evidence in favour of BPP = NP? As a starting list I would put down: 1. the Valiant–Vazirani lemma, 2. Toda’s theorem, 3. FPRASes and FPTASes for #P problems, 4. the FPTAS for 0/1 knapsack and, finally, the amazing FPRAS for counting 0/1 knapsack solutions.

Finally, I would take up Geordie’s offer of a $1000 to £1,000,000 bet regarding P=NP, assuming that if no proof comes up there is no payout by either side!

Zelah

I think your question about the relation of BPP to NP is very much along the line of our host’s ongoing concern about “conventional wisdom.” At one point, it was “clear” that BPP was stronger than P; now, however, I bet almost everyone expects that P=BPP, because of the algorithmic similarity between hardness and randomness. That is to say, if a sequence of bits is hard to compute, then perhaps it is just as useful to provide that sequence to a not-very-strong algorithm as it would be to provide the same algorithm truly random bits. The hardness of the sequence “fools” the algorithm into thinking the bits are random, since the algorithm cannot detect a pattern. Hence, P=BPP unless “widely believed” conjectures about pseudorandomness are false.

“However, If as a outsider I did discover that P=NP, I would use it to revolutionise Software Programming (…)”

I’m sure whoever proves that P=NP will become wealthy, if only from the prize money… However, why would it mean anything at all for software programming? What if I could show that all problems are in O(n^x), where x is the number of electrons in the universe? How would you make smart programs out of that?

That seems to be another implicit belief: that a proof of P=NP would likely lead to practical algorithms. As you said, O(n^x) would be disappointing if x were impossibly large, so a proof would not *necessarily* be of practical importance, but it seems reasonable to expect that such a proof could lead to practical algorithms for at least some currently hard problems. Also, a proof could involve such new ideas that there would be other indirect practical benefits.

Maybe one could ask a slightly more tricky question of the all-powerful alien. Instead of asking “Does P=NP?” I would ask, “Is there a proof that P!=NP?”

If the answer is “yes,” then we know that the prevailing belief that P!=NP is correct and that there is a proof to confirm it. If the answer is “no,” then at least everyone can stop looking for what they are so certain exists.

I would ask the Alien: “What metrics for complexity shed the most light on P!=NP?” Very plausibly, this would be enough for us to work out the proof for ourselves: “The sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it. … yet it finally surrounds the resistant substance.”

Instead of seeing proofs that certain proof classes do not contain a proof of P != NP, I would love to see proofs of why no algorithm in certain (interesting) classes can solve SAT in polynomial time.

For example, prove that no dynamic programming algorithm can solve SAT in less than exponential time. Of course, you first have to define formally what is a dynamic programming algorithm.
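As a hint of what such a formal class would have to capture, here is the textbook dynamic programming routine for subset sum, a cousin of SAT among NP-complete problems. Its table is polynomial in the numeric target, which is exponential in the input length, so the routine is pseudo-polynomial rather than polynomial time; yet any definition of “dynamic programming algorithm” presumably has to include it.

```python
# Classic dynamic program for subset sum: maintain the set of sums
# reachable with the items seen so far. Time O(n * target) -- polynomial
# in the *value* of target, exponential in its bit length.

def subset_sum(nums, target):
    """Return True iff some subset of nums sums exactly to target."""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```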

I love the idea of restricted algorithms. I think it would be very neat to prove any sort of negative result for such a class. DP is a great idea, by the way. Very cool.

rjlipton

In some sense, much of Proof Complexity has the “restricted algorithms for SAT” (well, SAT complement) flavor: if you want proofs of a certain class for tautologies, you’re out of luck.

@stefan, some work in this direction is here: http://cseweb.ucsd.edu/~russell/abboimp.ps


Hello there,

Good blog, I just stumbled upon it and I am already a fan.

The correct “translation” of Laplace’s question into the P = NP arena should be: “what is the probability that P not= NP tomorrow?”

If you can calculate this probability, then you can also calculate the probabilities of some quite interesting events: “P not= NP for the next k days” and from that, letting k go to infinity, “P not= NP forever”.

I find it “funny” that this question has not been answered yet… I mean, every now and then we see someone popping up with a different answer. Just yesterday I was talking with a friend in Recife, Brazil, and there is a professor there called Sostenes Lins who claims he has found yet another solution, and his answer is P=NP…

Let’s see if, after he publishes his results, he’ll be right… 🙂

How ‘hard’ is factoring?

Define a TRIM number as one all of whose prime factors are less than the difference between the number and the previous prime.

Define a COMPACT number as one all of whose prime factors – except at the most one – are less than the difference between the number and the previous prime.

TRIM numbers – which would surely be considered ‘round’ by Hardy’s definition of round numbers as products of considerably smaller factors – are rare, but simple to generate.

COMPACT numbers are equally simple to generate, and may be of the same order of magnitude as the primes.

I don’t know (as I’ve never taken the trouble to study the math adequately) whether the ‘factoring’ algorithms that I discovered in 1964 for generating TRIM and COMPACT numbers (both of which were intended essentially to generate the prime series) are ‘hard’ or not, but I suspect it should not be hard to determine that, nor to show that the TRIM algorithm is a minimal prime-generating (factoring?) algorithm.

http://alixcomsi.com/A_Minimal_Prime.pdf

http://alixcomsi.com/Prime_number_generator.htm

Note that – at most – we only need to store the first n primes and n residues in order to generate p(n+1) by a simple sieving method for finding the least even integer – generally the next prime difference – that does not occur amongst the n residues.
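For what it is worth, here is a quick sketch of the TRIM test as I read the definition above: every prime factor of n must be smaller than the gap between n and the largest prime below n. The helpers are naive trial division; the linked notes may define things differently.

```python
# Naive sketch of the TRIM-number test (my reading of the definition:
# all prime factors of n are less than n minus the previous prime).

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prev_prime(n):
    """Largest prime strictly below n (requires n >= 3)."""
    p = n - 1
    while not is_prime(p):
        p -= 1
    return p

def prime_factors(n):
    """Set of distinct prime factors of n."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_trim(n):
    gap = n - prev_prime(n)
    return all(f < gap for f in prime_factors(n))

print([n for n in range(3, 100) if is_trim(n)])  # [16, 27, 36, 64, 96]
```

Under this reading TRIM numbers are indeed rare and heavily composite, which seems consistent with the “round number” remark above.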

Can that really endanger national security?!