# The Travelling Salesman’s Power

*What would the world be like, if TSP were in P?*

Benjamin Krause is the one person in the world who can say he has composed what P=NP sounds like. He composed the score for the movie Travelling Salesman, which is due to premiere on June 16. The movie is written and directed by Timothy Lanzone, and produced by Preston Clay Reed under the aegis of Fretboard Pictures.

Today Dick and I ask whether our world would really change much if the Travelling Salesman Problem (TSP) were solvable by a polynomial-time algorithm.

At last someone is taking the position that P=NP is a possibility seriously. If nothing else, the film’s brain trust realize that P=NP is the cool direction, the direction with the most excitement, the most worthy of a major motion picture. We doubt there could be a cool movie that builds a story around a lower bound.

As Krause described in a post on his own blog in December 2010:

The film depicts the heated backroom discussions of four mathematicians who have solved the P vs. NP problem and a government agent (a severe, creepy, Big-Brotherish sort of guy) who has come to collect their findings and coerce them into silence.

Last month he linked the film’s trailer. Say what you will, the sound of P=NP is foreboding. The trailer proclaims: “Death, destruction, annihilation, everything—simplified.” We can temper this with a few observations.

## Some Notes

First of all, we are already pretty close to living in a world where P=NP. Although TSP is NP-complete, it is considered among the easiest NP-complete problems to solve in practice, especially the familiar version with cities in the Euclidean plane. Dick’s colleague William Cook maintains a vast page on TSP, including his recent book *In Pursuit of the Traveling Salesman*. (No typo here—our salesman loses an ‘l’ when he visits America.) Although all known NP-complete problems are related by polynomial-time computable *isomorphisms*, these can have quadratic overhead and do not always preserve problem-solving structure. Some NP-complete problems show their hardness in important cases, but David S. Johnson, who began his seminal work on TSP in the 1980’s, was already fond of saying two decades ago that instances with hundreds of cities were by-and-large solvable exactly. Real-world cases solved exactly include national tours such as one through 24,978 cities in Sweden, pictured on Cook’s page.

Second, we are not talking about the possibility of a proverbial n^100-time algorithm, which would show P=NP in theory but of itself be practically useless. Nor do we mean small-exponent algorithms with huge constants, like the cubic-time deciders originally found for certain graph-minor related problems by Neil Robertson and Paul Seymour. We covered this here, and discussed it among others we like to call *galactic algorithms* here.

No, let us get down to Earth and suppose TSP has a *linear-time* algorithm, with a reasonable constant, not only for the Euclidean plane but for general graphs provided they are fairly dense. What then?

## Too Much Information, Too Little Power?

Let us first bait-and-switch by discussing the NP-complete Independent Set problem, because its standard reduction from Three-Satisfiability (3SAT) exemplifies our point better. Given a 3SAT instance with m clauses, the reduction builds a graph with O(m) nodes and O(m) edges, a sparse graph. The constant in the O(m) is small, since the clause gadgets have three nodes each and the connections to truth settings of the variables are single edges. Say this graph has “SAT-power” proportional to its size.
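As a concrete sketch, here is the textbook triangle-gadget form of the 3SAT-to-Independent-Set reduction in Python. (This is the common variant in which complementary literals are joined directly; the variant described above with explicit variable gadgets differs in details, and the function name is ours.)

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Textbook 3SAT -> Independent Set reduction (triangle-gadget variant).

    Literals are nonzero ints (negative = negated).  Each clause becomes a
    triangle of literal-nodes; complementary literals in different clauses
    are joined by a "conflict" edge.  The formula is satisfiable iff the
    graph has an independent set of size len(clauses)."""
    nodes, edges = [], []
    for ci, clause in enumerate(clauses):
        ids = [(ci, lit) for lit in clause]
        nodes.extend(ids)
        edges.extend(combinations(ids, 2))  # the clause triangle
    # Conflict edges between complementary literals.
    for u in nodes:
        for v in nodes:
            if u < v and u[1] == -v[1]:
                edges.append((u, v))
    return nodes, edges, len(clauses)
```

Picking one literal-node per triangle, with no conflict edges inside the set, corresponds exactly to choosing one true literal per clause consistently.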

We can imagine, however, that important instances of Independent Set have average vertex degree higher than this small constant, while still being fairly sparse. In proportion to their size, these graphs may have markedly lower SAT-power. In practice these instances, for relevant sizes of m, might present themselves as being fairly easy. They have a lot more information from the greater number of edges, but it might not be contributing to the hardness.

Now the reductions from 3SAT to TSP, especially Euclidean TSP, are less familiar, and we ascribe this to their being far more “expansive.” Texts usually reduce 3SAT to Hamiltonian Cycle, then present the latter as a special case of TSP, but this does not apply to Euclidean TSP. The NP-completeness of Euclidean TSP took a few years until being shown by Christos Papadimitriou, and a 1981 paper by him with Alon Itai and Jayme Luiz Szwarcfiter advertised a “new, relatively simple, proof.” This proof uses vertex-induced subgraphs of the grid graph in the plane, for which the shortest possible TSP tour and any Hamiltonian cycle have the same length. Despite this simplification, the gadgets involved are large—a diagram of one occupies most of one journal page. Note that it is open whether Euclidean TSP is NP-hard for “solid,” i.e. hole-free, grid graphs. [Figure: one of the many smaller gadgets in their proof.]

The point is that these instances have very low relative SAT-power, and this may be true of common instances of the same size. The idea of classifying NP-complete problems this way emerged in the 1980’s as the “Power Index” of Richard Stearns and Harry Hunt III, which presaged the Exponential Time Hypothesis of Russell Impagliazzo and Ramamohan Paturi, and in work by Alexander Dewdney on the fine-tuned time complexity of reductions between complete problems. Stearns and Hunt’s paper notes that planar Hamiltonian Circuit has power index at most 1/2, and speculates that Euclidean TSP’s may be even lower.

Thus even a linear-time solver for Euclidean TSP might not confer enough power for solving sizable instances of the harder problems. Some instances of over 10,000 cities have been solved exactly, and the same methods work well on many cases with 1,000 cities, but if the SAT-power is only n^(1/4) or n^(1/3), coming from a quartic or cubic-time reduction, then we’re only able to apply it to formulas with ten-or-so variables, and even in the latter case the yield from a million cities is only 100.
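The arithmetic behind these numbers can be checked in a few lines (our own illustration; the exponents are the hypothetical quartic and cubic blowups just discussed, and the function name is ours):

```python
def reachable_formula_size(n_cities, blowup_exponent):
    """If a reduction turns an m-variable formula into n = m^e cities,
    then a solver good up to n cities reaches formulas of size n^(1/e)."""
    return round(n_cities ** (1.0 / blowup_exponent))

print(reachable_formula_size(10_000, 4))     # quartic blowup: 10 variables
print(reachable_formula_size(1_000, 3))      # cubic blowup: 10 variables
print(reachable_formula_size(1_000_000, 3))  # cubic blowup: 100 variables
```

So a solver that handles a million-city tour exactly would still, through a cubic reduction, reach only 100-variable formulas.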

## Here are the Movies

The new film does give a positive answer to a question Dick asked in the first month of this blog:

Where are the movies on P=NP?

We can perhaps already say movies plural: this web article has picked up on John Nash’s 1955 letter proposing “concepts that presage modern cryptography and computational complexity theory” in its blurb on “A Beautiful Mind,” and we’ve always had “Sneakers.”

Here is the “Story” summary on the film’s website, edited by us:

“Travelling Salesman” is an intellectual thriller about four mathematicians hired by the U.S. government to solve the most elusive problem in computer science history — P vs. NP. The four have jointly created a “system” which could be the next major advancement for our civilization or destroy the fabric of humanity.

The solution’s immediate use would exist within computer science. However, its application would soon extend to countless other disciplines. For example, by utilizing the solution to P vs. NP, a hacker can crack advanced encryption codes within seconds–a task that now takes weeks, months, or even years. He could break into La Guardia’s air traffic control or China’s communication grid. But the mathematical algorithm would also serve as the basis for accelerated biological research, curing diseases such as AIDS and cancer.

…As the mathematicians are about to sign documents that will give the US government sole and private ownership of their solution, they wrestle with the moral dilemma of how this volatile discovery will be used. The math is real. The implications are real.

Despite our caveat that a solution to TSP might not be to die for, let alone to kill for, it would certainly be a huge change in our knowledge of the world. The implications could be unlimited.

We certainly hope the movie raises awareness of computer science theory and the real-life importance of its subject matter. As we said, it is coming out in June.

## Open Problems

Can one reduce m-clause 3SAT to problems on dense graphs with o(m) nodes? Is Euclidean TSP hard or easy for solid grid graphs?

[word fixes and clarifications in paragraphs on Independent Set and Euclidean TSP]

Re: “Where are the movies on P=NP?” The movie Pi can be interpreted as being about P=NP. The 216-digit number Max discovers encodes a Turing Machine that solves (say) SAT in some obvious encoding scheme.

Did not know this—thanks! Maybe we should include that film’s score composer.

I made some word-fixes, but left the uncaught malaprop “…talking the position that P=NP is a possibility seriously” alone in the intro, rather than correct it to “taking…seriously.” Dick wrote that and a few other sentences.

See http://arxiv.org/abs/cs/0611082

The worst-case running-time of the Held-Karp algorithm for the TSP, O*(2^n), still hasn’t been beaten and will never be beaten.
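For readers who have not seen it, the Held-Karp bound mentioned here comes from a bitmask dynamic program over subsets of cities; a minimal sketch (variable names are ours):

```python
def held_karp(dist):
    """Held-Karp dynamic program: optimal TSP tour cost in O(2^n * n^2) time.

    dp[(mask, j)] = cheapest cost of a path that starts at city 0, visits
    exactly the cities in bitmask `mask`, and currently ends at city j."""
    n = len(dist)
    dp = {(1, 0): 0}  # just city 0 visited, standing at city 0
    for mask in range(1, 1 << n):
        if not mask & 1:
            continue  # every path starts at city 0
        for j in range(n):
            if not (mask >> j) & 1 or (mask, j) not in dp:
                continue
            for k in range(n):
                if (mask >> k) & 1:
                    continue  # city k already visited
                nmask = mask | (1 << k)
                cand = dp[(mask, j)] + dist[j][k]
                if cand < dp.get((nmask, k), float('inf')):
                    dp[(nmask, k)] = cand
    full = (1 << n) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The 2^n subsets are exactly the source of the O*(2^n) worst case the comment refers to.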

That’s a joke, right?

No.

“by utilizing the solution to P vs. NP, a hacker can crack advanced encryption codes within seconds–a task that now takes weeks, months, or even years.”

Interesting how this Hollywoodism persists even in a movie about mathematicians. The world of movies just cannot stand for codes that take millions of years to crack — or where doing it in months is a breakthrough

In case anyone is curious, there are well solvable cases of TSP.

http://www.cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/SIR000496.pdf

My own view is that the most general case is a limiting case, the noisiest, so there should be a “spectrum” of progressively harder cases. In any case, I often think about what happens when an n-sided polygon gets “sucked in” over some structure. Over simple structures this is easy to visualize, but over more complicated structures the problem becomes one of choosing how the polygon begins to deform around its “inner” points. I have been toying with different ways of trying to reconfigure the structure in some higher dimensional space so that it “smooths out” in some sense. No breakthroughs at this point, but if that is helpful for people to think about, I figured I’d share.

The film *Traveling Salesman* portrays a world in which every person can access the seminal mathematical capabilities that Alfréd Rényi attributed to John von Neumann in a celebrated aphorism. So in essence, *Traveling Salesman* asks: what might/could/should we do with this unbounded creative capability, that in the 20th century was accessible only to genius, but now in the 21st century ordinary citizens can access?

Many biographers of von Neumann (Steve Heims is a prominent example), and many historians too, have simplistically ascribed to him far-right motivations and objectives, as for example in Neil Sheehan’s *A Fiery Peace in a Cold War*. However, John von Neumann’s brother Michael, who was a friend to me in graduate school, once assured me that this simplistic portrayal was deliberately crafted by John von Neumann himself, with a view toward rendering his motivations understandable, and thus administratively and politically simplifying the creation of the vast new 20th century enterprises that his fertile mind had conceived (I did not inquire further of Michael von Neumann on this point; which was a missed opportunity that I have often since regretted).

In any case, nowadays ordinary citizens *DO* largely possess NP-solving capabilities, and this capability has empowered all of us to conceive enterprises on the same global scale as von Neumann. That is why the most interesting question associated to the film *Traveling Salesman* is not “Who conceives the mathematics, and how?” but “Who conceives the enterprises, and why?” It will be vastly disappointing if the protagonists of *Traveling Salesman* timidly choose to delegate these wonderful opportunities, and these grave responsibilities, wholly to government agencies. Because surely, we can do better than *that*, in grasping and actualizing the marvelous opportunities of our century.

I thought the trailer was a joke, but apparently it’s not.

In Lance Fortnow’s book on complexity for the layperson he has an awesome chapter on what the world would be like if P=NP. He portrays it as being a much better place since we can solve all of these problems. Even crypto would be okay; we’d go back to private key.

I think he is right in the long term, but the short term transition might be brutal.

Would even private key systems be safe?

Maybe I’m missing something obvious (likely), but it seems to me that if the size of the keys used by the cryptosystem is significantly shorter than the size of the plain text that gets encrypted, a P=NP algorithm could be used to recover those keys in many cases (perhaps when used in combination with other ideas): the idea is that for many cryptosystems, the keys that turn encrypted text into text that contains the right frequency of letters A through Z appearing in ordinary English text will be unique or close to being unique. Given the encrypted text, then, you would “just” have to search for keys that produce text with this letter frequency.

Of course, if the plain text were, say, a sequence of bank codes, then the letter frequency idea would not work; but it should work with ordinary plain text (unless I’m missing something obvious).

Cryptosystems like one-time-pads would be secure — there, the key is as large as the plain text.

Agree with Ernie — Gasarch’s comment doesn’t make sense. Are we missing something? Why would private-key crypto be safe?

Private-key crypto is safe because it is _theoretically_ uncrackable; if a message is XOR’d with a completely random string of bits shared between the two parties, then by definition the encrypted message carries literally zero information about the original message.

(And as I say this, I see that Ernie has already called out this possibility in the last paragraph of his comment – but I suspect that what’s being referred to as ‘private-key’ here is essentially the one-time pad.)

SteveS — Agreed that one time pads are secure. Usually, “private key crypto” refers to systems like AES, which have a key length which is much shorter than the message length.
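The information-theoretic point in this exchange fits in a few lines: with a truly random key as long as the message, encryption is XOR and every plaintext of that length is equally consistent with the ciphertext, so no algorithm, P=NP or not, can help. (A toy sketch; `otp_encrypt` is our own name.)

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with a random key as long as the message.
    Decryption is the same XOR, since (p ^ k) ^ k == p."""
    assert len(key) == len(plaintext), "key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh random pad, never reused
ct = otp_encrypt(msg, key)
# For ANY candidate plaintext of the same length there exists a key
# producing this ciphertext, which is why ct carries zero information.
assert otp_encrypt(ct, key) == msg
```

By contrast, for a cipher like AES the key is far shorter than a long message, so the letter-frequency search described above becomes a well-posed (if hard) problem.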

The exactly solved TSP cases shown are visually beautiful. Is anything known about the structure of those cases? The “leafiness”? Is some of that due to underlying geographic features like mountain ranges? Are they in fact using actual travel times for the costs rather than just the geographic distance? What would be the structure of a solution of a random TSP instance (Poisson distributed cities, etc.)?

Georgia Tech is pretty much the best place to get TSP info

http://www.tsp.gatech.edu/

I’m surprised that there are even 24,978 cities in Sweden. I’m also surprised at how many of them are in the north. I thought most Swedes lived in the south. I assume that the cities in the north aren’t very big.

Please let me advance the modest proposition that the world depicted in the *Traveling Salesman* trailer, a world of efficient solution of NP-complete problems, would differ from our present world in no substantial respects.

*Ciphers?* The switch to one-time pads (distributed by quantum means?) would be awkward at first, but technologically feasible.

*Optimization?* We might find that our existing materials, devices, drugs, and salesman routes are dismayingly close to optimal (ouch).

*Proofs?* No doubt many mathematical postulates have no short, human-readable proofs (Is chess a win for white/black? Is P != NP?); it would be nice to know this for sure, but hardly surprising or game-changing. As for mathematical postulates that *do* have short proofs, already nowadays the main challenge that students face is to acquire a naturalized and integrative understanding of those proofs, a challenge that would only become *more* acute, not less acute, in the world of *Traveling Salesman*.

*Summary:* Perhaps we all of us are *already* sharing a world that is a reasonably close approximation to the world of *Traveling Salesman*… in which case, isn’t our primary challenge simply to conceive happier & more fulfilling narrative arcs?

I agree with this sentiment. In some sense the boat has already sailed. Most of the problems are solvable by brute force in short enough time that it is really only in a very narrow band of real world problems that a speed up would make a major impact. In a lot of the harder cases the error is already within acceptable margins.

Yes, it was Camus who wrote “~~Truth~~, *cher ami*, is a colossal bore.” Whereas Ken’s and Dick’s insight that P=NP “is the cool direction, the direction with the most excitement, the most worthy of a major motion picture” provides us (IMHO) with a welcome inspiration for creative enterprises … enterprises that are much-needed on our slowly overheating planet of citizens, that stands in such obvious need of abundant, new, meaningful jobs and productive new global-scale enterprises for its young people.

> “… an awesome chapter on what the world would be like if P=NP.”

Nobody has yet been able to distinguish a world where P=NP from one where P!=NP; otherwise they would have won the Millennium Prize. Nevertheless, some believe that those two kinds of world must look different from each other, though they can tell neither P=NP nor P!=NP for the world they live in. Isn’t that contradictory?

I am curious as to what other people think they might do, if they stumbled onto a proof for P=NP? Publish to arXiv, submit to a journal, give to a govt agency, or possibly even file a patent? The way such a proof was released would definitely have an impact on the world. What if you only had 85% of such a proof? Or 25%? How would you feel about releasing it (if it turned out to be right)?

Paul.

I once thought I had a major ingredient of a proof of PERM != DET (by which I mean VNP != VP), based on the ideals formed by the (n-k)x(n-k) permanental minors having oodles of monomials while those for the determinantal minors have none. The associated hardness predicate avoided two of the “Natural Proofs” obstacles but alas failed to be a hardness predicate: there are polynomials with linear-size circuits that also give “oodles”. My reaction after finding it was to hold on to specifics but communicate the general idea.

Would your reaction have changed if it were VNP = VP (from my quick reading, that would imply problems with some crypto systems)?

If you turned out to be right, and then someone else went on to complete the work (and be known for it), how do you think you’d feel now?

Paul.

PHomer, assuming that P=NP feasibly encompasses *practical* problem-solving, then your question has the following Lasker-esque answer: construct an AI and ask it for advice!

Let’s suppose (for example) that our AI development strategy amounts to efficiently solving problems in this class: “Exhibit a Turing machine (TM) of given length that provably passes, in polynomial time, all the Turing tests that can be generated by another TM (the latter TM being finitely specified, yet possibly nondeterministic).”

OK … now let’s construct that machine, and ask for its human-quality advice! :)

This procedure acquires a Hartmanis-esque flavor by reason of the constraint “provably.” Hmmm … we can conceive of TMs that pass Turing tests, but not provably — such undecidably Turing-capable AIs escape even P=NP-based verification capabilities; ought we to worry about this?

These considerations lead us deep into a class of thorny questions that (AFAICT) were first asked by Juris Hartmanis: “How much do we really understand about provable membership in the complexity classes P and NP?”

Yet another (seemingly) tough question is “Even if we perhaps cannot (yet? ever?) construct TMs that pass Turing tests — whether provably or otherwise — can we none-the-less construct TMs that *administer* realistic Turing tests, and *assess* them accurately?”

Solid answers to these tough questions no doubt would improve our chances of solving the tough class of complexity-theoretic separation problems that includes P vs. NP.

So thank you, phomer, for posing a fine question! :)

I’ve always had a concern about the Turing test. To me it seems to test whether the machine is having a human response or not. So, if we built a machine that had a vast amount of intelligence — it could correctly answer any question within the full range of human knowledge — but it displayed no emotional responses, it would still be subject to detection on any question that required an irrational emotional outburst. There was a sci-fi story about backfilling an AI with a person’s history to get around that, but even that had a hole in that the AI logically deduced that it was an AI (where a human would irrationally stick to their gut).

So if I constructed such an AI, and the choice came down to only being ‘right’ or being ‘happy’, it would no doubt pick ‘right’ (and in following its advice, I’d wind up unhappy :-)

Maybe :-)

Paul.

I think in this discussion of Turing tests phomer brings up a good point and highlights a type of contradiction. If to pass the Turing test a machine must match the fallibility of a human, then we certainly can imagine a machine that is intelligent enough to capture all potential responses of a human in some state space and through some subroutine determine the correct response.

However, that is not how a person makes a mistake. A person makes a mistake because of some coevolving function of the human and their environment. So human computation incorporates factors that are outside of their control to a degree that a computer may have difficulty in matching. There are definitely two classes of mistakes (if not more): one is a mistake of individual competence, and the other is a failure to execute an intended action.

In any case, the precision of the modeled mind in a computer may not be able to match the randomness and imprecision of the coupled human-environment system. Certainly it is plausible to think of different ways to get around this impediment. However, is the end state one where we are merely animating the otherwise inanimate? Although we may have no ability to distinguish a sophisticated machine from a person, isn’t the person much more sophisticated than we tend to think?

phomer,

“I am curious as to what other people think they might do, if they stumbled onto a proof for P=NP?”

If… Super Mario were to solve that problem, I think he would be lucky to survive 2 years…

The competition is too powerful, and he would lose every friend, maybe his job; he would be homeless, and a victim of his own success, before the authorities did anything.

What exactly is a “solid” grid graph?

If I understood it right, then I can tell you the TSP has already been solved polynomially for solid grid graphs, in the case they have even “dimensions” (even numbers of rows and columns).

I believe this result can be extended for solid grid graphs with arbitrary dimensions.

This was published in a conference in robotics, and I think the authors didn’t realize what they had done.

I actually just interviewed the writer/Director of this film, Timothy Lanzone, over at the podcast Strongly Connected Components. You can check it out at http://bit.ly/KbOUyt

Thanks for sharing this. Besides the bon-mot about “science movies being the new comic-book movies”, it’s notable where he says (midway through) that Vinay Deolalikar’s proof claim of P != NP was a threat to the movie at the height of production :-).

Ken, there seems to be a slight mistake in the sub-title of this post – “What would the world be like, if TSP were in P?”

I think it should rather be: “What would the world be like, if a polynomial algorithm for TSP had been discovered?”

Indeed, as of yet, no one on this planet knows whether TSP is in P. And as long as no fast algorithm has been discovered, no difference at all between the two possibilities will be noticeable by humans.

Actually, via a noted old theorem of Leonid Levin, there is an already-known algorithm A accepting TSP such that if NP=P then A runs in polynomial time. This uses enumerations of P and NP machines and self-reducibility. So an algorithm has already been “discovered”—you just have to prove its running time! :-)
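The self-reducibility half of this trick can be made concrete: given any decision oracle for TSP (“is there a tour of cost at most B?”), one recovers an actual optimal tour by binary-searching the optimal cost and then testing each edge for deletion. Below is a toy sketch with a brute-force stand-in for the hypothetical polynomial-time oracle; function names are ours.

```python
from itertools import permutations

def tsp_decision(dist, bound):
    """Decision oracle: is there a tour of total cost <= bound?
    Brute force here; only its yes/no answers are used below."""
    n = len(dist)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        if sum(dist[tour[i]][tour[i + 1]] for i in range(n)) <= bound:
            return True
    return False

def optimal_tour_via_oracle(dist):
    """Search-to-decision self-reduction for TSP (nonnegative distances)."""
    n = len(dist)
    inf = sum(map(sum, dist)) + 1  # larger than any tour cost
    # Step 1: binary-search the optimal cost using only yes/no queries.
    lo, hi = 0, inf
    while lo < hi:
        mid = (lo + hi) // 2
        if tsp_decision(dist, mid):
            hi = mid
        else:
            lo = mid + 1
    opt = lo
    # Step 2: try to delete each edge; keep it only if every optimal
    # tour needs it.  The surviving edges form one optimal tour.
    d = [row[:] for row in dist]
    for i in range(n):
        for j in range(i + 1, n):
            saved = d[i][j]
            d[i][j] = d[j][i] = inf
            if not tsp_decision(d, opt):
                d[i][j] = d[j][i] = saved  # edge is essential, restore
    essential = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if d[i][j] < inf]
    return opt, essential
```

Under NP=P the oracle itself would be polynomial, and the wrapper makes only polynomially many queries, which is the point of Levin-style optimality without knowing the algorithm’s analysis.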

So, our ability or inability to prove anything about its running time doesn’t seem to make a huge difference on what the world looks like… :)