# P=NP: Perhaps I Change My Mind

*An old result put a new way (in a now-fixed-up post)*

Albert Meyer knows circuit lower bounds. He co-authored a paper with the late Larry Stockmeyer that proves that small instances of the decision problem of a certain weak second-order logical theory require Boolean circuits with more gates than there are atoms in the observable universe. The instances almost fit into two Tweets using just the Roman typewriter keys.

Today Ken and I discuss a simple but perhaps overlooked connection between P = NP and circuit lower bounds.

Albert recently co-authored, with Eric Lehman of Google and his MIT colleague Tom Leighton, the textbook *Mathematics for Computer Science*. It looks familiar to us because it uses the same MIT Press fonts and layout package as our quantum computing book. They say the following in their Introduction:

Simply put, a proof is a method of establishing truth. Like beauty, “truth” sometimes depends on the eye of the beholder, and it should not be surprising that what constitutes a proof differs among fields.

We would say that the game is not only about making truth evident through a proof but also through the way a theorem statement is expressed. This post uses an old result of Albert’s as an example.

## New Statement of Old Result

Well, any criticism of Albert for how his theorem was stated is really criticism of myself, because Dick Karp and I were the first to state it in a paper. Here is exactly how we wrote it in our STOC 1980 version:

There was no paper by Albert to cite. In our 1982 final version we included a proof of his theorem:

As you can see, our proof—Albert’s proof?—used completeness for EXP and some constructions earlier in the paper. In a preliminary section we wrote that our proofs about classes such as EXP involved showing inclusions

“where the set of strings A is complete in C with respect to an appropriate reducibility.”

But in this case the proof does not need completeness for EXP. I came up with this realization on Wednesday and Ken found essentially the same proof in these lecture notes by Kristoffer Hansen:

This proof uses polynomial-size circuits not only for L but also for the tableau language L_M. For the latter it suffices to have L_M polynomial-time reduce to L. This follows so long as L is complete for some class to which L_M also belongs.

Suppose M runs in deterministic time t(n). Then both L and L_M belong to DTIME(poly(t(n))), the latter because on input (x, i, j) we can run M for t(|x|) steps. With minimal assumptions on the function t, DTIME(poly(t)) has complete sets L, and then L_M ≤_p L follows from L_M ∈ DTIME(poly(t)). So we can state the theorem more generally:

Theorem 1. Let t(n) be time-constructible. If DTIME(poly(t(n))) ⊆ P/poly, then DTIME(poly(t(n))) ⊆ $\Sigma_2^p$.
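To sketch why small circuits for both L and L_M place L in $\Sigma_2^p$ (our paraphrase of the standard argument, with symbols of our choosing):

```latex
x \in L \;\Longleftrightarrow\; (\exists C)(\forall i,j)\,\bigl[\,
  C \text{ describes row } 0 \text{ as the initial configuration of } M \text{ on } x,\;
  C\text{'s cell } (i,j) \text{ follows from row } i-1 \text{ by } M\text{'s transition rules, and }
  C \text{ shows acceptance in the last row}\,\bigr].
```

Here C ranges over polynomial-size circuits purporting to compute L_M. Local consistency at every cell forces C to describe the true tableau, and although the tableau has exponentially many cells, the indices (i, j) are only polynomially long strings, so the single ∃∀ alternation with a polynomial-time checkable matrix is exactly a $\Sigma_2^p$ predicate.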

In fact we would get into even smaller classes than $\Sigma_2^p$, but that’s going beyond our simple point.

## The Point

Our point comes through if we think of a concrete case like deterministic time 2^{(log n)^{O(1)}}, called QP for *quasipolynomial time*. So we have:

Corollary 2. If QP ⊆ P/poly, then QP ⊆ $\Sigma_2^p$.

Now mentally substitute QP for EXP (and ‘quasipolynomial’ for ‘exponential’) in the way Karp and I summarized the final implication in our paper:

What you get after contraposing and using the hierarchy theorem for QP (namely P ⊊ QP) is:

Corollary 3. If P = NP, then QP ⊄ P/poly.
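Spelling out the chain behind Corollary 3 (our notation; the middle step is Corollary 2):

```latex
\mathsf{P}=\mathsf{NP} \;\Rightarrow\; \Sigma_2^p = \mathsf{P};\qquad
\mathsf{QP} \subseteq \mathsf{P/poly} \;\Rightarrow\; \mathsf{QP} \subseteq \Sigma_2^p = \mathsf{P},
```

which contradicts P ⊊ QP from the time hierarchy theorem; hence if P = NP then QP ⊄ P/poly.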

The point is that we can also do this for time bounds such as n^{log log n} and even smaller proper super-classes of P. What follows is:

Any attempt to prove P = NP entails proving strong nonuniform circuit lower bounds on languages that are arbitrarily close to being in P.

## Reflecting…

Again, in the case of EXP this implication too has been variously noted. Scott Aaronson mentions it in one sentence of his great recent 121-page survey on the P versus NP question (p. 65):

“[I]f someone proved P = NP, that wouldn’t be a total disaster for lower bounds research: at least it would immediately imply EXP ⊄ P/poly (via Meyer’s Theorem).”

Maybe I (Dick) considered this in terms of EXP in weighing my thoughts about P versus NP. But that it applies to QP in place of EXP gives me pause. This greatly amplifies idle thoughts about the irony of how proving P = NP yields the same type of lower bounds against P/poly that are involved in the “Natural Proofs” barrier against proving P ≠ NP. Ryan Williams had to combine many ideas just to separate NEXP from nonuniform ACC^0—not even getting EXP on the left nor P/poly on the right. (Incidentally, we note this nice recent MIT profile of Ryan.) So having such lower bounds for QP just drop from the sky upon P = NP seems jarring.

So I’m rethinking my angle on P versus NP. I’ve always propounded that good lower bounds flow like ripples from new upper bounds, but the wake of P = NP seems a tsunami. We wonder if Bill Gasarch will do a 3rd edition of his famous poll about P versus NP. Ken and I offset each other with our votes last time, but maybe not this time.

We also wonder whether Theorem 1 can be given even stronger statements in ways that are useful. In the original version of this post we overlooked a point noted first by Ryan Williams here and thought we had EXP ∩ P/poly ⊆ $\Sigma_2^p$. To patch it, call a language L in EXP “reflective” if there is a TM M running in exponential time such that L(M) = L and L_M (namely, the “tableau” language defined above) polynomial-time reduces to L. The complete sets mentioned above for classes within EXP are reflective. If we let EXP_r denote the subclass of reflective languages, then we can say: EXP_r ∩ P/poly ⊆ $\Sigma_2^p$.

Note that, per Lance Fortnow’s comment here, sparse languages are candidates for being non-reflective: the tableau language L_M, which we would wish to polynomial-time Turing reduce to L, is generally dense.

## Open Problems

Is this realization about P = NP and strong circuit lower bounds for classes arbitrarily close to P really new? Can our readers point us to other discussions of it?

Is the notion of “reflective” known? useful?

[fixed error in original Theorem 1 and surrounding text; added paragraph about it before “Open Problems”; moved query about “cosmological” formulas to a comment.]

Regarding Thm 1: you start with some L in EXP ∩ P/poly, and take an exptime machine M that accepts L… then you also need to be able to infer that L_M is in EXP ∩ P/poly. How do you do that?

Also note that P = NP implies that E contains functions of MAXIMUM circuit complexity. That follows from E^(Sigma2 P) having a function of max circuit complexity. In general, TIME(poly(s(n))) will have circuit complexity at least s(n), for s(n) << 2^n/n.
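A back-of-the-envelope version of the counting fact in this comment (our gloss, not the commenter's words): a size-s circuit can be written down in O(s log s) bits, so

```latex
\#\{\text{circuits of size } s \text{ on } n \text{ inputs}\} \;\le\; 2^{O(s \log s)}
\;<\; 2^{2^n} \;=\; \#\{\text{Boolean functions on } n \text{ inputs}\}
\qquad \text{for } s \ll 2^n/n,
```

so some n-input function has complexity above s. Under P = NP the polynomial hierarchy collapses, and the $\Sigma_2$-flavored prefix search for the lexicographically first hard truth table runs in time poly(2^n), which is what places a maximally hard function inside E.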

Ah—you may be right, making my perception of a shortcut wrong, but Dick’s Cor 3 and the original if-then form of Cor 2 may still work. I’ve put in a temporary update there for now.

To emphasize Ryan’s point, Theorem 1 would imply every unary language in EXP is in $\Sigma_2^p$, which seems unlikely.

Is there a term like “reflective” for the property that (for some suitable M) L_M poly-time reduces to L(M)? Completeness makes that automatic, but has the concept (and results like “every reflective pig that whistles can also fly”) been framed apart from that? Anyway, it seems Cor. 3 is still right.

A more natural approach to show that if P=NP then there are explicit hard functions is to note that in such a case you can find in time poly(2^k) the truth table of the lexicographically first function on k bits that has circuit complexity at least 2^k/(100k).

Using this for k=log^2 n gives the corollary you mention.
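To see the commenter's “lexicographically first hard function” idea at toy scale, here is a sketch (the function names and brute-force method are ours; it uses minimum formula size over AND/OR/NOT as the complexity measure, a stand-in for circuit size, and enumerates directly rather than exploiting P = NP):

```python
from math import inf

def min_formula_sizes(k):
    """For every Boolean function on k inputs (truth table encoded as an
    integer whose bit i is f(assignment i)), compute the minimum number of
    AND/OR/NOT gates in a tree formula for it. Variables and constants are
    free (cost 0); each gate adds 1 to the cost of its operands."""
    n = 2 ** k
    full = (1 << n) - 1
    cost = [inf] * (1 << n)
    for j in range(k):                       # truth table of variable x_j
        tt = sum(1 << i for i in range(n) if (i >> j) & 1)
        cost[tt] = 0
    cost[0] = cost[full] = 0                 # constants 0 and 1
    changed = True
    while changed:                           # relax until a fixpoint
        changed = False
        known = [t for t in range(1 << n) if cost[t] < inf]
        for a in known:
            if cost[a] + 1 < cost[~a & full]:         # NOT gate
                cost[~a & full] = cost[a] + 1
                changed = True
            for b in known:
                c = cost[a] + cost[b] + 1
                for t in (a & b, a | b):              # AND / OR gate
                    if c < cost[t]:
                        cost[t] = c
                        changed = True
    return cost

def first_hard_table(k, s):
    """Lexicographically first truth table needing at least s gates."""
    cost = min_formula_sizes(k)
    return next((t for t in range(len(cost)) if cost[t] >= s), None)
```

For k = 2 and threshold 3 this returns 6, i.e. the truth table 0110 of XOR, the first two-variable function needing more than two gates. The real use of the idea takes k = log² n, so that the whole search over 2^{2^k} tables runs in quasipolynomial time given P = NP.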

[This was moved from the end of the original post.] It is fun to play with the wonderfully explicit formulas in Albert’s paper with Stockmeyer: If you have positive integers such that

then size- instances of their logic problem, encoded naturally over a 63-letter alphabet, require circuits of size . Estimates of the number of atoms in the observable universe range between 10^78 and 10^82 according to this article. Using for a target , we get so we need . This exceeds by only the character limit for two Tweets. Can the estimates or their proof be tweaked to get ? Alternately, packing their alphabet into UNICODE’s 16-bit encoding could get it under the 280-character allowance for one Tweet. Then we would rigorously prove that even the entire intelligence resources of the Universe can be stumped by a single Tweet.
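To make the packing arithmetic concrete (a sketch of the encoding idea only; the specific constants of the Stockmeyer–Meyer bound are left as in the text): each symbol of a 63-letter alphabet fits in 6 bits, while a 280-character Tweet of 16-bit UNICODE code units carries 4480 bits:

```python
import math

ALPHABET_SIZE = 63        # letters in the Stockmeyer–Meyer encoding
TWEET_LIMIT = 280         # characters allowed in one Tweet
BITS_PER_CHAR = 16        # UNICODE's 16-bit encoding

bits_per_symbol = math.ceil(math.log2(ALPHABET_SIZE))     # 6 bits per symbol

# Conservative packing: whole symbols per 16-bit character.
symbols_per_char = BITS_PER_CHAR // bits_per_symbol       # 2 symbols
conservative = TWEET_LIMIT * symbols_per_char             # 560 symbols

# Ideal packing: treat the Tweet as a raw 4480-bit channel.
ideal = (TWEET_LIMIT * BITS_PER_CHAR) // bits_per_symbol  # 746 symbols

print(conservative, ideal)
```

So a single Tweet could carry between 560 and 746 symbols of the logic alphabet; the low end already equals the 560 plain-text characters of two Tweets.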

great to hear of your enthusiasm for an apparent paradigm shift, but boy this is hard to follow from a nontechnical pov, and it seems like you’re not all that clear or don’t really summarize the basic shift. how about just stating the contrapositive of a supposedly unlikely statement? apparently here it’s “if QP is in P/poly then P != NP”. apparently the basic idea/intuition outlined here is that “P/poly is relatively more powerful than P and QP is not that much more powerful than P”, making it all more plausible? also it’s great that you two sometimes blog as 2 ppl with one brain, but if you’re talking about different povs and shifts to new ones as here, it’s hard to follow individual attribution: who is thinking what? maybe be a little less coy on that one…

I find the consequences of “EXP in P/poly” intriguing. On one side it means Turing machines cannot solve NP problems in polynomial time (because it implies P≠NP), but on the other side it means that circuits can (because NP is in EXP).

I am not sure what to think of that from a practical point of view. Would we say that NP problems are intractable? Maybe that means that the right paradigm for computation should have been circuits and not TMs (but this is pure speculation at this point).

Perhaps, we have to get it all over with this.

Dick is a bit too late. Polls are frozen in history. He should perhaps confess, telling the world what he did with the proof for P≠NP that he received a while ago, or it perhaps should be the other way round.

It is diabolically amusing that we need to take such a long journey to come round, finding us chasing our tails at the same spot, hard running for a perfect standstill.

We may have missed out big as well. 🙂 Circuit is nothing but TM in nature. We have only one computing/computational model in actual practice, quite fortunately or unfortunately. I am not sure if “EXP in P/poly” meaningfully implies TM cannot solve NP problems polynomially, regardless of the truth value of the logical conclusion because of a false antecedent. But keep reading.

The actual creation of the pile really started at an earlier point as shown somewhere. It roughly paraphrases to the following:

Anything we perceive, through direct experience or imagination, either exists or does not, but cannot be both, independent of our knowledge. NDTM (or any mechanism we realize or ‘theorize’) is no exception. [I can find no fault with this.]

Case I. If NDTM exists, then we will have a world to prove with NDTM defining P=NP, because the existent or theoretically procurable NDTM can solve all NP problems, by definition, polynomially. [I have no basis to ever refute this a tiny bit.]

Case II. If NDTM is purely due to delusion, then NDTM defines |NP|=0, because non-existence, just like the Perpetual Motion Machine nonsense, can not (be used to) define a non-trivial set (of things/problems it can do). [I find myself a total idiot to not totally agree.]

This is basically the simple fact that non-existence of NDTM is equivalent to P≠NP.

Given, or suppose, P≠NP, we immediately lose the NP-completeness we cooked up but see that we have proved the equivalence between an empty set (NP by NDTM) and a non-empty one (NP by verification)!

It is a bit of a shocking composure that all our efforts are firmly based on and intimately tied to such blaring falsehood of the one-equals-two type. Perhaps we should realize the fundamental flaws, such as the inanity of circuit complexity, to understand why we have been failing so miserably.

In 1973, Larry J. Stockmeyer and Albert R. Meyer proved that the problem SIMULTANEOUS INCONGRUENCES is NP-complete with a transformation from 3SAT. In December 2017, a proof showed that IF the following problem:

Definition 1: OTHER–3SAT

INSTANCE: A Boolean formula in 3CNF and a satisfying truth assignment.

QUESTION: Has this formula another satisfying truth assignment?

Could be reduced in polynomial time to another problem:

Definition 2: OTHER–SIMULTANEOUS INCONGRUENCES

INSTANCE: An instance of SIMULTANEOUS INCONGRUENCES and a certificate x.

QUESTION: Is there an integer y which is a certificate different from x?

THEN P = NP. The question would be:

Could Albert R. Meyer (or any other Theoretical Computer scientist) prove that P = NP using this argument?

In 1978, Kenneth L. Manders and Leonard Adleman proved that the problem Quadratic Congruences is NP-complete too. In December 2017, the same proof showed that this problem can be solved in polynomial time for the average case. The other question which immediately appears would be:

Could this problem still be solved totally efficiently by considering “most” of the polynomial-time inputs that occur in practice, just ignoring the inefficient “small” number of inputs, or maybe by finding a method to relate the two?

The detailed proof will be available in the HAL preprint soon:

https://hal.archives-ouvertes.fr/hal-01665246/document

But now is only available in

https://drive.google.com/file/d/1XBh0lJNq-nYBye_K0S2xd412yMwEMdYN/view?usp=sharing

This study about some properties of the class NP-complete might lead one to think about the possibility of a P = NP answer. However, the paper does not explicitly conclude with an answer to the P versus NP problem. Scott Aaronson predicted that a serious P = NP approach would certainly unveil several other questions that are still unresolved. For that reason, we think some speculative proofs derived from this paper in the future might help to contribute to the field of Computer Science and many other fields in case of being correct. Would you dare to try…?

@KR, sorry to get off topic but will you be posting a blogpost on the AlphaMind chess programme? and its “difference” from the current chess engines? and your evaluation? Would really appreciate your thots, thots from an expert in computer science, maths and chess.

Thanks for asking and showing interest! I am in fact intending exactly that. Besides expressing the “difference” you mention in intuitive terms, I’d intend to try some of my quantitative methods to see if some kind of “personality” difference could be determined. Alas that is hindered by the authors’ having made available only 10 games of the 100-game match. They are said to be working on a full paper—and that will take time—but the games are already in a circulatable format and I don’t see why they haven’t been fully released.

@KRegan, thanks. Looking forward to reading …… Dana Mackenzie has made some interesting comments on his blog.

And yes, it would be good if they release the whole set of games or, even better, if they agree to a better and fairer contest with SF.

But definitely a game changer, I think……