
Have Ten Years Brought Us Closer?

February 18, 2019


To solving the big questions, that is

[Image cropped from Device Plus source]

Tetsuya Miyamoto is a mathematics teacher who divides his time between Tokyo and Manhattan. He is known for creating in 2004 the popular KenKen puzzle, which the New York Times started running ten years ago. As with its sister puzzles Sudoku and Kakuro, unlimited-size versions of it are {\mathsf{NP}}-complete.

Today we observe the 10th anniversary of this blog and ask what progress has been made on the {\mathsf{P=NP}} question.

The {\mathsf{P=NP}} problem is a question about asymptotic complexity. From time to time we have tried to raise corresponding questions about concrete complexity that might yield more progress. What catches our eye about the KenKen puzzles is that their generation is a full-blown application within concrete complexity. The NYT’s KenKen puzzles are all generated using software by David Levy that can tailor their hardness. Quoting the NYT anniversary article by Will Shortz:

[Levy’s] program knows every possible method for solving a KenKen, which he has rated in difficulty from easy to hard. Thus, when a KenKen has been made, the computer knows exactly how hard it is.

This seems to say there is a hardness measure that is objective—quite apart from the idea of having human testers try the puzzle and say how hard they found it to be. We surmise that it is lower for instances that have more forced plays at the start. We wonder whether Levy’s criteria can be generalized.
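We do not know Levy’s criteria, but one candidate for an objective measure is easy to sketch. The following Python fragment is a minimal illustration with a toy cage encoding of our own devising, not Levy’s program: it counts how many cells are forced to a single value by plain row-and-column propagation from the single-cell “given” cages, before any search begins.

    from itertools import product

    def forced_cells(n, cages):
        # Count cells forced to one value by simple propagation alone.
        # cages: list of (target, op, cells) with cells as (row, col) pairs.
        # This toy handles only single-cell '=' cages plus row/column
        # elimination, a stand-in for the full menu of KenKen deductions.
        cand = {(r, c): set(range(1, n + 1)) for r, c in product(range(n), repeat=2)}
        for target, op, cells in cages:
            if op == '=' and len(cells) == 1:
                cand[cells[0]] = {target}
        changed = True
        while changed:
            changed = False
            for (r, c), s in cand.items():
                if len(s) == 1:
                    v = next(iter(s))
                    for (r2, c2), s2 in cand.items():
                        if (r2, c2) != (r, c) and (r2 == r or c2 == c) and v in s2:
                            s2.discard(v)
                            changed = True
        return sum(1 for s in cand.values() if len(s) == 1)

More forced cells at the start would then suggest an easier instance. A real rater would presumably weight which deduction rules an instance needs, not merely count settled cells, but the flavor is the same: difficulty as how far pure logic gets before guessing must begin.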

Incidentally, this is the same Levy who won a challenge chess match in 1978 against the computer Chess 4.7 to complete his win of a famous ten-year $1,000+ bet. He lost $1,000 back when he was defeated by Deep Thought in 1989. He later became president of the International Computer Games Association (ICGA), whose Journal published a nice paper on the {\mathsf{NP}}-completeness of the aforementioned puzzles and many others.

GLL’s Tenth Anniversary

GLL’s first post, on 12 Feb. 2009, featured Steven Rudich and his work on “Natural Proofs.” Two other posts that day covered other aspects of why the {\mathsf{P=NP}} question is hard. Our question, dear readers, is:

Has anything happened in the past ten years to make any part of those posts out-of-date in the slightest way?

We won’t claim any such progress, though we have tried to stir ideas. In the meantime, we have written 806 other posts:

  • Some have featured our own work and ideas (considering Ken’s chess research in a separate vein);

  • some have featured others’ direct attempts at breakthrough lower and upper bounds (with a few successes);

  • many have featured other kinds of results by others;

  • many have pulled “idea nuggets” from the past;

  • many have been humor and social commentary.

To date, we’ve had 18,575 comments plus trackbacks on these posts and just over 2.1 million views. We are less able to quantify impacts, beyond occasionally seeing citations of articles on the blog as sources. We try for precision as well as readability and are grateful for reader comments with fixes when we slip up on the former.

P vs NP

There continue to be claims of proofs that {\mathsf{P \neq NP}} and some that {\mathsf{P=NP}}. While these proofs do not seem to be correct, there is something that we wish to remark about them. Many argue as follows:

There is some problem, say {S}, that seems to require a search of exponentially many objects. Then the proof states that any algorithm for {S} must actually look at all, or most, of the exponentially many objects. This, of course, is where the proof is incomplete.
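The gap is easy to illustrate. Subset sum over {n} numbers appears to require examining all {2^{n}} subsets, yet when the target is polynomially bounded a textbook dynamic program sidesteps the search entirely. A short sketch of our own, not drawn from any claimed proof:

    def subset_sum(nums, target):
        # Decide subset sum in O(len(nums) * target) steps, not 2^n:
        # a reminder that "apparently needs exhaustive search" is no proof.
        reachable = {0}
        for x in nums:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    assert subset_sum([3, 34, 4, 12, 5, 2], 9)   # 4 + 5 = 9
    assert not subset_sum([3, 34, 4], 10)

Any “must look at all the objects” argument owes us an explanation of why no such shortcut can exist for {S}.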

There is some sense to these proofs. They seem related to the oracle proofs that, for example, show that for some oracle set {A} it is the case that

\displaystyle  \mathsf{P}^{A} \neq \mathsf{NP}^{A}.

We have discussed these types of proofs before—we even said that we did not like them.

The trouble with these rigorous results is that they change {\mathsf{P}} vs {\mathsf{NP}} in a central manner, and this seems to make the results much less interesting. Roughly, here is how they argue: Imagine that for each {n} we either put a string of length {n} into {A} or we do not. The point is that if we do this in an unpredictable manner, then a polynomial-time machine will not be able to decide whether there is or is not a string of length {n} in {A}. But a nondeterministic machine will just use its power and guess. This shows, essentially, that

\displaystyle  \mathsf{P}^{A} \neq \mathsf{NP}^{A}

is true.
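For concreteness, the test language in the classic construction of Baker, Gill, and Solovay is

\displaystyle  L_{A} = \{\, 0^{n} : A \text{ contains a string of length } n \,\}.

A nondeterministic machine simply guesses a string of length {n} and queries the oracle, so {L_{A} \in \mathsf{NP}^{A}} for every {A}; the set {A} is then built in stages so that each polynomial-time oracle machine errs on some input {0^{n}}, giving {L_{A} \notin \mathsf{P}^{A}}.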

There is some relationship to many attempts to show {\mathsf{P} \neq \mathsf{NP}}. The proofs often argue that one must look at all the objects. The counterpart here is that a polynomial-time machine will not have enough time to check the {2^{n}} strings of length {n} to see if they are in {A}. But this works in the oracle case because we allow the rule that decides whether or not a string is in {A} to be very complicated. In the real world, the world where we study the real {\mathsf{P} \neq \mathsf{NP}} question, we cannot assume that {\mathsf{NP}}-complete problems use a complicated rule. That is precisely what we are trying to prove.

State of the Game

What can we say? Mostly the big open questions remain. We still have no non-linear lower bounds on circuit complexity and no progress of any definite kind on {\mathsf{P}=\mathsf{NP}}. What do you think?

What is commonly hailed as one of the two biggest results in our field last year was a positive solution to what is, intuitively, a slightly weaker form of the Unique Games Conjecture (UGC). For UGC we can refer you to Wikipedia’s article:

[Screenshot of Wikipedia’s article on the Unique Games Conjecture]
The note [2] is in turn a reference to a 2010 post here. The new paper proves hardness for the relaxed situation where, roughly speaking, a trial assignment to a node in a constraint graph limits the other node on any connecting edge to at most two possible values, rather than a unique value as in UGC. This relaxation retains many properties that had caused disbelief in the original UGC, yet it was proved—in that sense a big deal.
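In symbols, a unique game places on each edge {(u,v)} of a constraint graph a bijection {\pi_{uv}} on the label set {\{1,\dots,k\}}, the constraint being

\displaystyle  x_{v} = \pi_{uv}(x_{u}).

UGC asserts that for every {\epsilon > 0} there is a {k} for which it is {\mathsf{NP}}-hard to distinguish instances where at least a {1-\epsilon} fraction of the constraints can be satisfied from those where at most an {\epsilon} fraction can. The 2-to-2 relaxation replaces each bijection by a relation in which every label for {u} is consistent with at most two labels for {v} and vice versa; the new theorem proves the corresponding hardness statement, which in turn yields unique-games hardness with completeness {\frac{1}{2}-\epsilon} in place of {1-\epsilon}.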

Nevertheless we note that UGC, at its core, just asserts that for arbitrarily small {\epsilon}, with our power to make other parameter(s) as large as desired, we can execute an {\mathsf{NP}}-hardness proof. We have been executing {\mathsf{NP}}-hardness proofs for almost fifty years. That is something we in the field have proven good at. True, these hardness results become lower bound proofs if and when {\mathsf{NP \neq P}} is proved, and true, we have been as vocal as any on the standpoint that significant lower bounds will come from constructions that are usually thought of as being for upper bounds. But the new proof from a year ago doesn’t feel like that. We invite readers to tell us about connections from UGC to the possibility of actually constructing lower bounds.

Open Problems

We at GLL thank you all for your help and support these ten years. Ken and I plan to continue doing what we have done in the past. Plan on a visit from our special friend on St. Patrick’s Day, for example. Thanks again, and let us know how we are doing.

[fixed date of first GLL post]

11 Comments
  1. February 18, 2019 4:59 pm

    Happy anniversary!

  2. February 18, 2019 6:06 pm

Not all remarkable problems are P vs NP: there are several other approaches that could lead us to P vs NP

  3. February 18, 2019 6:50 pm

Happy anniversary to the only defenders of a cause (letter) apparently lost!!

  4. February 19, 2019 12:03 am

💡 hey guys as for impact, 2M views is very high impact, and you didn’t mention that CS award you won that cited your blogging. also, did you ever review Fortnow’s book on the subject? excellent popsci treatment.

P vs NP: in a word, bittersweet. the sweetness of a world-class problem, the bitterness of its remoteness, not even Everest scale: now like trying to scale Olympus Mons on Mars, in the sense of “maybe not in our lifetime(s), one for our descendants,” like the Riemann problem. think there is some hope from the last decade in 2 forms that could come to fruition. (1) machine learning is a very big deal and believe it will be used to generate mathematical breakthroughs in the near future; this has already happened to some degree but the results are rather isolated/disconnected. (2) understanding of fractals is increasing. it appears that maybe computational complexity is actually all about fractals. unfortunately there is not a lot of deep research on this, but think the following paper is a very big deal that needs followup. haven’t seen much attention paid to it, think it deserves much better.

    Fractal dimension versus process complexity
    https://arxiv.org/abs/1309.1779

as for solving P vs NP, would like to send out a call/plea to a wealthy patron to fund targeted, incremental, cyber-collaborative research instead of a prize, somewhat akin to the Simons Foundation model and as outlined in Nielsen’s book on networked science. believe that could be a gamechanger. the Clay Math prize is both a success and a failure. it has motivated much work, that’s a success. after 2 decades a solution is not even in sight, that’s a failure.

    • February 19, 2019 2:04 am

For me, Gödel raises the problem and the solution as well. It is so obvious that nobody sees it. Like Poe’s “The Purloined Letter.”

  5. Anon Postdoc permalink
    February 19, 2019 5:50 am

    I became interested in computational complexity in 2010, and later returned from industry to do a PhD in the topic. Continuing research in the topic has become the primary focus of my working life – one I deeply enjoy and am very grateful to have found.

This blog played no small part in my journey: explaining concepts with clear exposition as I muddled through the basics, and providing interesting content, discussion, and updates on both my area of the field and others. It is a treasure, and long may it continue.

  6. Serge permalink
    February 19, 2019 7:01 pm

    P vs NP looks like an NP-complete problem, and all the partial results already achieved look like P problems. Just because some P problems have been solved doesn’t mean the NP-complete problem will eventually get solved… In any case, congratulations for your awesome blog!

  7. February 20, 2019 12:45 pm

    Many congratulations on reaching this anniversary. Were there times when you were tempted to draw a line under it and walk away? I for one am glad you didn’t; this is one of my regular stops.

  8. February 21, 2019 2:24 am

    Happy 10th anniversary to GLL! Can’t believe it’s been 10 years. Waiting for more from you both!

  9. Pascal permalink
    February 25, 2019 4:03 am

Showing that P is not NP amounts to proving a very strong lower bound. You could survey some of the progress on lower bounds that has been achieved in the last 10 years (as well as the progress on understanding the barriers that they face), and the outlook might not look so bleak.
Perhaps a topic for another post?

Trackbacks

  1. Problems With a Point | Gödel's Lost Letter and P=NP
