
The P=NP Question and Gödel’s Lost Letter book is now available

Kurt Gödel is one of the main reasons I started this blog almost two years ago. We all owe him a great deal for so many things: for his completeness theorem, his incompleteness theorem, his notion of recursive functions, for Gödel numbers, for his work proving the relative consistency of set theories, and much more.

Today I want to thank Gödel and all of you for making writing this blog so much fun. Sometimes the time and energy required to write the next post is large, but it is worth it. Thanks again. Here is a photo mosaic of many of you that have been part of this enterprise—especially its first year, 2009.

If you cannot see the faces look at this larger version.

I did not set out in 2009 to write a blog. I thought there were plenty of excellent math and theory blogs, so I began instead to write a book on P=NP. I actually had a large part of the book written—not all—when one day I thought—forget the book. I would just write a blog. I opened a WordPress account, started to write, figured out how to handle HTML, and created my first post. Of course no one read it, but I wrote a second one, and then a third. I pressed on with the writing. It sounds a bit like the lines from one of the great scenes in Monty Python and the Holy Grail:

When I first came here, this was all swamp. Everyone said I was daft to build a castle on a swamp, but I built it all the same, just to show them. It sank into the swamp. So I built a second one. That sank into the swamp. So I built a third. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. And that’s what you’re going to get, Lad, the strongest castle in all of England.

Eventually the blog stopped falling down or burning up, and found a set of readers—you. I thank you for your continued readership. Some of my posts have been fun, some serious, some technical, more than a few wacky. I thank you for reading them, for making comments on them, and in general telling me directly or indirectly “thank you.”

$\displaystyle \S$

I am pleased to announce that my book, The P=NP Question and Gödel’s Lost Letter, is now published. It is available here at Amazon.com, for example. I started writing a book that became a blog. Now the blog has become a book.

The book was a great deal of work—more than I would have guessed. After all it “just” consisted of selected posts from the first year of this blog. But books are very different from blogs, and I had to do quite a bit of extra work to get the book into the shape it is. Susan Lagerstrom-Fife is my editor at Springer, who helped me this last year to put the book together. I thank her.

Some additional material is included in the book, and all essays have been edited and updated. So if you enjoy this blog you should enjoy the book. Some of you who have started reading this blog more recently may especially enjoy the book, since it covers posts from only the first year.

Please take a look at the book, post a review at Amazon.com if inclined, and perhaps even buy a copy. I have no doubt about Susan’s first statement to me when we discussed writing the book:

You will not make any money on this book.

I have no illusions about that, but I felt it would be fun to get a book out. A book is still a more permanent object than a blog. Also, as I said, the book has some additional essays, and some additional information.

Finally, the book has a secret code embedded in it that contains a fact about P=NP that I have never told anyone. If you look at every ${57^{th}}$ character of the prime-numbered chapters, then ${\dots}$

Just kidding. But I hope you enjoy the book.

I plan further books based on this blog and on other topics. For example I am thinking about one on quantum algorithms without qubits and another on mathematical surprises and embarrassments.

I have been awed at the number of you reading this blog and am very grateful for your interest and support. The book tries to thank you all—you will see that I have tried hard to acknowledge all who have commented on posts. Check the book’s index to see if you are mentioned.

Open Problems

Finally, I want to again thank all of you who have supported this effort for close to two years now. It started in February of 2009: we are well beyond 200 posts, and I plan on going on for some time. I thank you all for making this effort fun.

Since I always like to ask open questions: should I put out other collections of these essays? What do you think? A final question: how about a collage T-shirt? I have thought about having them made up and sold at cost—about ten dollars each. Would anyone want one?

September 17, 2010 5:05 pm

Well, thank you, sir. You just rendered useless part of the 500 pages I printed two weeks ago, for which I spent quite some time on the formatting; it is quite a task to fit so many goodies into fewer than half a dozen thousand pages!

Funny coincidences apart, thank you so much for your blog, and my hugest congratulations for the book: this is precisely what I was searching for… two weeks ago

September 17, 2010 5:16 pm

Congratulations!

3. September 17, 2010 6:03 pm

A Summary or Explanation of my paper “A Polynomial Time Algorithm for Hamilton Cycle and Its Proof”

For the paper, see arXiv:1004.3702, Lizhi Du

1. The number of break points is less than n(n-1)/2 in an n-vertex graph. Each time the algorithm must get a new main break point (or no break point, which is a bounded case), so the time complexity is polynomial.

2. If there are Hamilton cycles (paths) in the graph, the algorithm can surely find a Hamilton cycle (path) before using up all main break points; conversely, if it has used up all main break points and still cannot get a Hamilton cycle (path), this means there is no Hamilton cycle (path) in the graph.

3. How to prove 2? If we could exhaustively compute all undirected graphs, with any number of vertices and any edges, in all possible step orders, and all fulfilled 2, then surely we could prove 2. But the number of undirected graphs is infinite. What to do?

4. My idea is: use some 12-vertex graphs (sometimes maybe 11 or 10) to constitute any graph with any bigger number of vertices, so we only need to exhaustively compute all graphs with 12 vertices (maybe 11 or 10). This means that, at any step of the computation for any big graph, we can combine some 12-vertex graphs (maybe 11 or 10) to form the bigger graph.

5. The three items mentioned in the paper can prove 4, especially when the maximum degree of the graph is three or four (NP-complete too). For graphs with larger vertex degrees, I also have a way to prove correctness.
Each time, we need to consider more than 12 vertices (by combining some 12-vertex graphs) to compute, then split it into several 12-vertex graphs by all possible deletions.
Note: a big graph can become many small graphs (with 12, 11, or 10 vertices) by all available deletions, and the other direction is also true (the word “all” is important)!

6. So we can prove the polynomial-time algorithm correct by computing all 12-vertex graphs, each in all possible step orders (with combining and splitting).

Now, if someone wants to reject my paper, he must reject one of the above 6 items. Item 1 is only a fact, 2 and 3 are apparently logical, and 4 is only an expression; so the only way is 5 or 6, but 5 is absolutely true, with no possibility of being wrong. So the only opportunity is 6. For 6, what I say is meaningless: you can do it.

Dear Dick: this is the last time I will bother your web page; very sorry.

• September 17, 2010 7:06 pm

Congratulations on a fine book, and a wonderful weblog … and most of all, on an outstanding job of teaching and community-building … all of which are tough-yet-vital jobs.

Please allow me to quote Edmund Blackadder—from the Blackadder episode Ink and Incapacity—in conveying to you, on behalf of the all your readers, “our most sincere contrafibularities! We are anaspeptic, frasmotic, even compunctuous with anticipation for your book!”

• September 20, 2010 11:43 am

A concrete algorithm like this could easily (if it is correct) gain attention by translation into SAT and the solution of an RSA factoring challenge number in SAT form. I do not understand why authors claiming constructive algorithms for P=NP do not attempt this, if they are so confident in their results.

September 17, 2010 6:09 pm

Love this blog and congratulations on the book!

I am perhaps slightly disappointed by the fact that the book is \$80 for about 200 pages, compared to, say, the Arora-Barak text book which is \$50 for 600 pages.

5. September 17, 2010 8:22 pm

Thank you for writing interesting posts. I discovered your blog about six months ago (along with a few others I enjoy reading), and reading interesting blog posts like yours motivated me to continue my education at the graduate level in computer science, where I will be studying algorithms.

Your book seems to have a lot of great articles; I added it to my bookmark folder of books to buy. Looking forward to eventually reading it.

September 17, 2010 11:11 pm

Dear Prof. Lipton,

I would like to thank you for your great posts. My major is mathematics, but I believe that by reading your blog I learn a lot about theoretical computer science.
I hope you also publish a cheaper paperback edition, which would be good for a student budget.

Thanks

September 17, 2010 11:30 pm

Congratulations on a great book. I’ve been reading it on Amazon’s “Look Inside the Book.”

I really love this blog. I only found it because of the Deolalikar festivities, but it gives me the same warm feeling I used to get as a child when I grabbed the latest Scientific American and turned immediately to Martin Gardner’s “Mathematical Games” column.

8. September 18, 2010 12:08 am

Congratulations! I just ordered a copy and look forward to reading it.

Having written books both the traditional way, and via blogs, I have to say that the blog route is ultimately more efficient in many ways (though not every book could be written as an unfolding blog). The near-real-time feedback is particularly valuable. Perhaps this method of writing will become more common in our discipline in the future…

September 18, 2010 10:28 am

But shouldn’t everyone be told in advance that the blog will/might be used to write a book?

Otherwise people who contribute helpful feedback might feel used.

September 18, 2010 11:17 pm

Amir: why would you feel used at all?

First, I’m sure that only a small portion of the comments are truly useful, so the ratio of work caused to payoff is not really in the book author’s favor.

Second, the book project may never materialize, for a variety of reasons (including when the material turned out less interesting than one expected — obviously not a problem for this blog!) so I see why one wouldn’t want to talk about it upfront.

And third, if you’re going to spend some time composing a thoughtful comment and articulating a tricky technical point on a blog, isn’t it more gratifying to think it may affect the book in some way rather than languish in the far reaches of the comments section?

September 19, 2010 9:06 am

Amir,

I am sorry if that is a problem. That was never what I wanted.

September 19, 2010 6:48 pm

I’ve only made a few comments on this blog, none of them contributing to CS theory in any way. So I don’t feel used, but others might.

September 18, 2010 11:29 pm

Professor Tao: I must confess to picking up one of the volumes from your blog purely for the novelty value. Still, I was struck straight away by how much the many comments that your blog receives shaped the presentation of the material (and this was made especially clear since comment contributions were very carefully documented in the book).

I don’t necessarily think that all books will be written this way in the future, but your books certainly show the immense improvements that the collaborative web model has to offer over traditional publication. It’s like getting the 3rd or 4th edition of the book in first printing! Or even better, really.

So thanks to all the mathematicians and scientists who work so hard on their blogs to keep us abreast of what they’re thinking about. And now I look forward to receiving my copy of Gödel’s Lost Letter.

September 18, 2010 2:10 am

Congratulations!

Thanks for writing these wonderful posts. My answer to your open question is: definitely.

10. September 18, 2010 2:31 am

Congratulations!

11. September 18, 2010 5:02 am

Congratulations! Very happy to see the wonderful discussions archived in this manner.

Having said that, I must admit that I am a bit disappointed with Springer’s proof-reading job with your book. A cursory look for 10 seconds and typos already:

“To make the it fun…”
“He use to walk…”

Will buy the book, anyway. Thanks!

September 18, 2010 10:46 am

There are some proofreading issues. The Maxwell Smart quote should be “Missed it by that much.”

September 19, 2010 9:07 am

Fnord,

Yes, I will set up an errata page. It is easy to make errors.

12. September 18, 2010 6:37 am

Thank you for your blog. In case I can’t get your book, I hope you won’t wrap up your blog now that the book is out.

• September 18, 2010 10:00 pm

There are on the one hand all kinds of hi-tech blogs, XHTML, Dreamweaver, Amazon, Kindle, Wi-Fi, CT scans, etc., and on the other hand getting caught in the proper covers of the wells just around the curvatures of a passing street. Who’s to say that there are no Persian poetry masters such as Molana, one of whose verses is: the seven cities of love Attar [who almost happened to be Molana's contemporary poet] traveled, while we are still caught in the curvature of a passing street. All that is needed is visiting the market and buying what you need, but attempting to get around the curvature of submitting your paper to SIGACT NEWS or publishing your poetry or a finely tuned Springer book must be a bit trickier.

September 18, 2010 7:01 am

I think it’s a wonderful idea to turn this excellent blog into a book — thank you so much for taking the time to do so! On the other hand I must admit that I find the book quite pricey. I don’t mean to say that the content isn’t worth the money, it’s just that you obviously don’t get very much of it (judging from the reaction of Lagerstrom-Fife).

14. September 18, 2010 11:15 am

Based on the publication date on Amazon, the manuscript surely must have been complete or very nearly so when Deolalikar’s “proof” emerged. Was there ever a time during that furor that you were concerned that the book may have barely missed being able to discuss the proof in the context of the broader theme, or even mention it?

September 19, 2010 8:59 am

Warren,

It went to Springer months ago. Their process takes much longer than you, or I, might have thought.

September 18, 2010 11:28 am

1) On Amazon it’s 40 dollars cheaper if bought NEW rather than USED. I can see that—
I like to have books that are already broken in.

2) Not available on Kindle yet. Kindle WOULD make sense since you can easily have

3) I’ll be reviewing both your book and one of Terry Tao’s blog-books.
Where will the reviews appear? On complexity blog of course (as well as in
SIGACT NEWS).

5) Congrads or Congrats on getting the book out!

September 19, 2010 9:08 am

Bill,

Thank you very much.

16. September 18, 2010 3:34 pm

Congratulations!

I am thinking about one on quantum algorithms without qubits and another on mathematical surprises and embarrassments.

Mathematical surprises and embarrassments? That sounds quite interesting.

September 18, 2010 7:17 pm

Congratulations! Thanks for both the blog and the book!
I’m looking forward to the book on quantum computation.

September 19, 2010 8:44 am

Congratulations, Dick. My Amazon order went in right away. What a great contribution.

September 19, 2010 9:08 am

Congratulations to the CS community for receiving this gem of a book.

It wasn’t surprising that people interested in the core of theory would follow your posts with high enthusiasm, given your high reputation as a theoretician.

What surprised me was that after the Deolalikar episode the ‘popularity’ index skyrocketed.
My wife, who had no interest in theoretical CS, started taking an immense interest in following your blog; of course not at the level where she’d understand everything, but definitely at the level where a subset of the material would spark her interest to go and read up more.
From many of my friends, who do not actively do research, or even actively follow it, I started hearing about the ‘Lipton blog’.

I think Vinay Deolalikar deserves some thanks for that as well.
I am almost certain that if one did a frequency distribution of words in your blog (including comments), “Deolalikar” would rank very high, along with P and NP.

September 19, 2010 9:20 am

But hasn’t Deolalikar’s career been adversely impacted by his P vs NP proof attempt and its subsequent popularization?

Sure, this whole episode has given a popularity boost to this blog in particular and CS theory in general, but it came at a significant cost to one individual.

September 19, 2010 9:40 am

I feel that’s an orthogonal question.

As Scott Aaronson put it, countless severed heads already lined that path. So it was a really bold attempt, and one should get credit even for a failed attempt of this order, trying to bind together ideas from FMT, complexity, and statistical mechanics.

September 19, 2010 2:50 pm

Why do you say that? I see no evidence his career has suffered. I don’t see why it should.

September 19, 2010 5:09 pm

Why would his career be affected? Wasn’t it a genuine attempt?

September 19, 2010 6:53 pm

I think the problem is that he still claims to have a proof after experts have concluded that the approach is unworkable.

It’s unlikely that he would have found fixes to his proof given the difficulty of the problem and in such a short time.

So unless he really does have a correct proof at some point, I think that this could impact negatively on his career.

September 19, 2010 11:45 pm

Everybody has a right to ‘save face’. Also, history has taught us that going against a barrage of criticism can be worthwhile, as even ‘experts’ can be proven wrong. So, I’m quite anxious to see his next move.

• September 20, 2010 10:15 am

There is a saying in philosophy that “Philosophical problems are never solved, they are dissolved.” The same can happen in every branch of the STEM enterprise … think about the problems of vacuum-tube reliability and urban horse-manure disposal, for example. These problems were never solved … but they were dissolved.

In the context of TCS, even failed attempts to prove P≠NP can exert the same salutary effect … when the attempt introduces new proof technologies that are sufficiently strong that the problems solved by the new technologies come to be viewed as more natural/interesting than the P≠NP problem as originally conceived.

This point-of-view has been (for me) illuminated particularly by Oded Goldreich’s memorial essay “On Promise Problems: in Memory of Shimon Even (1935–2004)”, which greatly helped me towards an (in retrospect obvious) appreciation that multiple avenues lead to—and therefore multiple natural definitions bear upon—the P≠NP problem.

More broadly, Dick’s weblog (and now, a book too!) has helped everyone appreciate that P≠NP isn’t just a deep problem, it also is a wide problem. For which, well-deserved thanks and appreciation are arriving from all quarters.

September 19, 2010 12:51 pm

Congratulations on the book!

And I, for one, would buy such a t-shirt.

21. September 19, 2010 5:36 pm

Richard,

I think that your open question

should I put out other collections of these essays?

has a very easy answer. The success of your blog posts and the responses to the publication of your book clearly indicate that you should, provided you feel that you have the time to keep writing such informative and thoughtful posts. I am truly amazed by the quality of the posts on your blog, which offers a great service to the TCS community. Keep it up!

In case they may be of use, here are two small typos that I spotted while having a look at the preview of the book’s initial pages on Amazon.com.

Line 1 of 1.1. He use ==> He used

Line 3 of 1.2 the the ==> the

I will ask our library to buy a copy of the book straight away.

September 20, 2010 10:44 am

Dick,

“You will not make any money on this book. ”

But someone seems to be making money out of this book if it costs almost \$80.

If you are not making any money out of it, why not put a PDF file online rather than have the interested reader pay \$80 to Springer?

Nice book!

• September 20, 2010 11:59 pm

If you are not making any money out of it, why not put a PDF file online rather than have the interested reader pay \$80 to Springer?

Sheesh…
You really don’t understand human nature, do you?
Dick Lipton is in academia; he doesn’t get his kicks from money.
He gets much more than \$80 from each book: he gets status (as if he didn’t have enough…).
So he doesn’t mind spending other people’s money for this purpose, and many are happy to oblige and even add a little topping on the cake by commenting on this post.
OK, I will oblige too, “Terrific book, Professor Lipton,” but I won’t buy it!

September 20, 2010 11:26 am

Apologies in advance, I don’t know about the history of this theorem. I notice in the prologue to your book, you discuss Steve Cook, but not Levin, and refer to his theorem as “Cook’s Theorem.” Elsewhere I’ve heard it referred to as the “Cook-Levin Theorem”. Is this controversial? What is the status of Levin’s contribution?

September 20, 2010 12:54 pm

Tony,

It is called both. I used Cook only; the history is a bit complex, and both are fine names for the great result.

24. September 20, 2010 11:30 am

congratulations! i am just an interested amateur, but yours is my favorite math/theory blog.

September 20, 2010 5:05 pm

Congratulations! I can’t wait to get a copy.

26. September 20, 2010 8:24 pm

Congratulations! Probably polishing blog posts into a book is a lot of work. There is something appealing in the tentative and non-committing blog format. But there is also something very appealing in the definite and committing book format…

One remark (or rather a question) about the NP=P episode, apropos the nice talk about it mentioned in the previous post. (And I cannot resist the temptation of mentioning Ken’s beautiful and mysterious algorithm for LP http://www.almaden.ibm.com/u/kclarkson/lp/p.pdf; I always wanted to understand it much better.)

This was an opportunity for me (as for other non-experts) to realize the gaps in my computational complexity education. One gap that I was always aware of (and that is often mentioned here, and was also mentioned in some comments by Niel and Tim and others) is between non-uniform and uniform computational complexity, where circuit complexity represents the non-uniform version and Turing machines are where things are uniform. (But what uniformity really says was always mysterious.) My question is this: Is there a way to describe, say just using circuits, some perhaps even stronger conditions of uniformity? So if you have a sequence of polynomial-size circuits for 3-SAT, this is non-uniform, but maybe you also demand that there is a sequence of circuits describing these circuits efficiently, and a sequence of circuits describing those other circuits, and so on…

So my questions are: 1) Is there a description of uniformity just in terms of circuits, and 2) Is it possible to come up with an even stronger form of uniformity to give an even more restricted notion of what effective algorithms in P are?

• September 21, 2010 7:26 am

I would like to echo Gil’s question … and say more broadly that any discussion at all of uniformity would be very welcome, and very interesting, because if these points are challenging for mathematicians like Gil, imagine how challenging they are for engineers like me!

Especially, the challenge of validating a Turing machine that claims to accept a language in P is natural to engineers as a validation problem, and to mathematicians as a decision problem. Is it a decidable problem? If not, can we restrict P so that it *is* a decidable problem? If so, is the validation problem itself in P? If we decide to ignore these issues, so that (in effect) an oracle decides membership in P, then does that mean the problem P≠NP is a promise problem? And if so, what is the computational power of the oracle that provides that promise?

The dovetailed intricacies of these (to engineers, very natural) questions illustrate a wonderful lesson of this past summer … the P≠NP problem is not only deep, it is wide.

• September 21, 2010 3:21 pm

One notion that doesn’t quite work comes from how you prove that circuits can do what Turing machines can do. If you follow that proof, then you find that you can represent each step of a Turing machine by a circuit that acts on the current values along the tape, the position of the machine, and the state of the machine (the latter two represented as binary sequences). But it turns out to be much more convenient to get the tape to move at each step and the Turing machine to stay still. So I think a circuit that represented a Turing machine would have one part that is the same from step to step and another part that sends all the variables to the right or left according to what one of the variables says, or something like that.

In any case, informally I think of a uniform family of circuits as being one that iterates a very simple function, whereas a non-uniform family can do what it likes. (But I’m not sure how precise I can make that view of it.)

September 21, 2010 9:53 pm

1) Is there a description of uniformity just in terms of circuits, and 2) Is it possible to come up with an even stronger form of uniformity to give an even more restricted notion of what effective algorithms in P are?

Gil, these are great questions. Unless I am missing something, it is non-trivial to give short, quick answers to them. Let me encourage you to ask them on
http://cstheory.stackexchange.com/

P.S. Many congratulations, Dick!

September 24, 2010 2:55 pm

1) Is there a description of uniformity just in terms of circuits?

Yes (I think). Here are two, of different kinds:

a) If there is a P-time algorithm (in the problem input size) which generates the circuits in a circuit family (as a function of the input size), then we can consider the family to be uniform. (I think this is the standard definition of the concept, but I am not an expert, so probably there’s some extra subtlety I don’t know about.)

This covers any circuit family that we could want to call uniform, but as a definition of the concept of P-time, we could complain that it’s circular.
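For what it is worth, a toy runnable instance of definition (a) might look like this (my own sketch, not from the thread; parity is of course trivially in P, the point is only that the circuit descriptions are generated by a polynomial-time algorithm as a function of n):

```python
# Toy P-uniform circuit family: given n, emit in O(n) time a description
# of a circuit computing the parity (XOR) of n input bits as a gate chain.

def parity_circuit(n):
    """Return the circuit as a list of gates (out_wire, 'XOR', in1, in2).
    Wires 0..n-1 are the inputs; the last gate's output wire is the output."""
    gates = []
    acc = 0                                 # wire holding the running XOR
    for i in range(1, n):
        out = n + i - 1                     # fresh wire for this gate
        gates.append((out, 'XOR', acc, i))
        acc = out
    return gates

def evaluate(gates, inputs):
    """Evaluate a circuit description on a list of 0/1 input bits."""
    wires = dict(enumerate(inputs))
    for out, op, a, b in gates:             # only XOR gates in this family
        wires[out] = wires[a] ^ wires[b]
    return wires[max(wires)] if gates else inputs[0]

print(evaluate(parity_circuit(4), [1, 0, 1, 1]))   # prints 1
```

The generator runs in time polynomial (here linear) in n, which is exactly the uniformity condition in (a); a non-uniform family would be free to pick an arbitrary, even uncomputable, circuit for each n.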

b) If there is a 1-dimensional cellular automaton which evolves the problem input to the problem solution (for a decision problem, the solution would be a single bit in a specified cell relative to the cells containing the input, which is a stable state of the CA), in polynomial time in input size, then this corresponds to a circuit which is periodic in 2D in a simple way (one repeat unit per cell per time-unit), and whose state only matters in a quadratically large region relative to the solution time.

This is a very special kind of uniform circuit family, but sufficient to solve all problems in P, since a Turing machine can be easily encoded as a 1D CA. (This is similar to the encodings of Turing machines as circuits mentioned in Gowers’ replies here — in fact, one of them is probably identical.)

September 24, 2010 3:02 pm

(One way to encode a Turing machine as a 1D CA: in each cell, we represent the tape state at one point, the state the Turing machine head would have if it were here now (whose value doesn’t matter if it’s not here), and one bit saying whether the head is here now. Clearly, each such state at time t only depends on its immediate neighborhood states at time t-1, which is all we need for this to work as a CA.)
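The encoding just described can be sketched concretely. Here is a minimal runnable illustration (my own, with a hypothetical one-rule machine, not anything from the post): each CA cell stores a tape symbol, a "head is here" bit, and a machine state, and one synchronous update depending only on immediate neighbours simulates one Turing-machine step.

```python
# Hypothetical example machine: in state 's', rewrite 0 as 1 and move right;
# halt on the blank symbol '_'.
DELTA = {('s', '0'): ('s', '1', +1),     # (state, read) -> (state, write, move)
         ('s', '_'): ('halt', '_', 0)}

def ca_step(cells):
    """cells: list of (symbol, head_here, state); returns the next generation."""
    new = []
    for i, (sym, head, state) in enumerate(cells):
        if head and (state, sym) in DELTA:
            ns, write, move = DELTA[(state, sym)]
            sym = write
            if move == 0:
                state = ns                # head stays put and changes state
            else:
                head = False              # head moves to a neighbour this step
        # does the head arrive from a neighbour?
        for j, need in ((i - 1, +1), (i + 1, -1)):
            if 0 <= j < len(cells):
                nsym, nhead, nstate = cells[j]
                if nhead and (nstate, nsym) in DELTA:
                    ns, _, move = DELTA[(nstate, nsym)]
                    if move == need:
                        head, state = True, ns
        new.append((sym, head, state))
    return new

def run(tape, max_steps=100):
    cells = [(c, i == 0, 's') for i, c in enumerate(tape)]   # head on cell 0
    for _ in range(max_steps):
        cells = ca_step(cells)
        if any(h and s == 'halt' for _, h, s in cells):
            break
    return ''.join(sym for sym, _, _ in cells)

print(run('000_'))   # prints 111_
```

Each CA step reads only a three-cell neighbourhood, which is the locality that makes the corresponding circuit family uniform in the sense of the earlier comment's construction (b).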

September 24, 2010 3:22 pm

Correction: when I said my description (a) was “circular”, what I meant was that as a definition of the concept of P-time, it just reduces the definition on circuit families to the definition on Turing machines, which might not be what you want (as I realized when editing this reply for http://cstheory.stackexchange.com).

• September 24, 2010 4:16 pm

Thanks, John, Tim, and Ryan for the comments. I did ask the question on the CS questions site. One clarification: my question about an even more restricted class based on a stronger form of uniformity was still about a complexity class which practically contains all polynomial algorithms known to us and is restricted or tame just, perhaps, in the complexity needed to describe the algorithm.

September 23, 2010 11:59 am

Requesting Kindle version of the book please.

September 23, 2010 12:59 pm

Kindle is due out soon

September 24, 2010 10:51 am

ty

Any idea how soon? I can check here periodically but I might forget thx.

September 29, 2010 3:59 pm

Prof. Lipton:

Do you have any particular recommendations for software platforms for publishing via a blog and then a book? E.g., getting WordPress and LaTeX to play well together, etc.?

thanks, -Neal