What is the role of impossibility proofs in science?

Alan Turing is of course famous for his machine-based notion of what computable means—what we now call, in his honor, Turing Machines. His model is the foundation on which all modern complexity theory rests; it is hard to imagine what theory would be like without his beautiful model. His proof that the Halting Problem is impossible for his machines is compelling precisely because his model of computation is so clearly “right.” Turing did many other things: his code-breaking work helped win the war; his work on AI is still used today; his ideas on game playing, biology, and the Riemann Hypothesis were groundbreaking.

Today I plan to talk about the role of negative results in science, which are results that say “you cannot do that.” I claim that there are very few meaningful negative results; even Turing’s result must be viewed with care.

You may know that Turing took his own life after being treated inhumanely by his own government. Just the other day the British Prime Minister, Gordon Brown, said that the treatment of Alan Turing was “appalling”—the full statement is here. Part of Brown’s statement is:

Turing was a quite brilliant mathematician, most famous for his work on breaking the German Enigma codes. It is no exaggeration to say that, without his outstanding contribution, the history of World War Two could well have been very different. He truly was one of those individuals we can point to whose unique contribution helped to turn the tide of war. The debt of gratitude he is owed makes it all the more horrifying, therefore, that he was treated so inhumanely. In 1952, he was convicted of “gross indecency” – in effect, tried for being gay. His sentence – and he was faced with the miserable choice of this or prison – was chemical castration by a series of injections of female hormones. He took his own life just two years later…

So on behalf of the British government, and all those who live freely thanks to Alan’s work I am very proud to say: we’re sorry, you deserved so much better.

The terrible treatment of one of the greatest mathematicians that ever lived cannot be undone, but we will continue to honor him every time we use his intellectual legacy.

I have a story that I had planned for today about Turing. The story and most of the post was written before Brown’s solemn statement. The story—I believe—shows something interesting about Turing’s character. I hope that the story is still appropriate to present—I think it still is, and I hope that you agree.

There are many stories about Turing—one of my favorites took place right after the start of World War II. At the time, Turing was working at Bletchley Park in England as a code-breaker. This work was fundamental to the war effort and eventually helped to win the war against Germany—especially fighting U-boats in the North Atlantic.

Right after the fall of France in 1940, it seemed that Germany might invade England at any moment. Turing wanted to be ready to fight, but he had no idea how to fire a rifle, so he signed up for the Home Guard. This was a part-time way to join the army and learn the basics of firearms.

When he joined, the form that he had to sign asked the question:

“You understand by signing this form that you could be drafted into the regular army at any moment.”

He signed the form, but he answered the question “No.” An official apparently filed it, without bothering to read it. Turing then went to enough meetings to learn how to fire a rifle. When it became clear that England was not going to be invaded, he stopped going to the meetings.

The Home Guard officer for his locale later decided to call up Turing into the regular army. Little did the officer know that this could never have happened: the secret work Turing was doing was so critical to the war effort that Prime Minister Winston Churchill would have stopped it personally.

Turing went to see the officer anyway. At their meeting, the officer pointed out that Turing had signed a form that allowed Turing to be put directly into the army. Turing smiled and said, “Take a look at my form.” There, where it said, “You understand by signing this form that you could be drafted into the regular army at any moment,” was Turing’s answer “No.” Apparently, Turing was thrown out of the meeting.

Let’s now go on to discuss impossibility results, including the Halting Problem.

Can They Do That?

I love Far Side cartoons. One of my favorites, as I have mentioned before, is a cartoon of a western scene circa the mid-1800s: the cowboys are in a wooden fort shooting at the attacking Indians. The Indians do not have rifles, like the cowboys, but they have bows and arrows and are shooting burning arrows at the cowboys. Some have hit the wooden fort, which is starting to burn. One cowboy says to another, “Are they allowed to do that?”

In my opinion this is the crux of why an impossibility proof (IP) can be misleading. Of course the Indians can shoot burning arrows or any kind of arrow they wish, even if the cowboys’ wooden fort is not prepared for them.

Thus, to me the key issue with any IP is whether the rules of what is or is not allowed are complete. The IP is only as strong as the assumptions about what the “attackers” can do. If the assumptions are too weak, then the IP can be meaningless.

“Impossibilities are merely things of which we have not learned, or which we do not wish to happen.”
Charles Chesnutt

I like the latter part of this quote: “which we do not wish to happen.” I think many IP’s have that flavor.

Burning Arrows

Over the years many IP’s have later been shown to be not so impossible—that is, they were wrong. I would like to point out a few of my personal favorites. They all failed because their assumptions were incorrect: they did not allow for “burning arrows”.

• Halting Problem: One of the great IP’s is the impossibility of deciding whether or not a program will halt. Of course, this is due to Turing, and seems to be a very strong result. Not only is the halting problem impossible, but by Rice’s Theorem—due to Henry Rice—any nontrivial property of a program that depends on the program’s behavior is impossible to decide in general.

Yet, there is a simple observation: most “real” programs are written in such a way that it is often quite easy to see that they halt. If a program uses only for loops, for example, with fixed bounds, then it must halt. For many other classes of programs, too, it is not hard to see that they will halt.
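As a toy illustration of how such easy cases can be detected, here is a purely syntactic check in Python. The `surely_halts` helper is a hypothetical sketch of mine, far cruder than real termination provers: it answers True only when every loop is a `for` over a constant `range`.

```python
import ast

def surely_halts(src: str) -> bool:
    """Toy check: return True only if every loop in `src` is a `for` over
    a literal range(...) with constant arguments, so the code must halt.
    (Hypothetical illustration only: it ignores recursion, comprehensions,
    and much else -- real provers like Terminator are far more subtle.)"""
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.While):
            return False  # a while loop: termination is not obvious
        if isinstance(node, ast.For):
            it = node.iter
            if not (isinstance(it, ast.Call)
                    and isinstance(it.func, ast.Name)
                    and it.func.id == "range"
                    and all(isinstance(a, ast.Constant) for a in it.args)):
                return False  # iterating over something non-constant
    return True

print(surely_halts("for i in range(10):\n    total = i"))  # True
print(surely_halts("while True:\n    pass"))               # False
```

Note that when the check returns False the program may still halt; the point is only that a large, useful class of programs can be certified cheaply.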

So is the halting problem really impossible? Yes it is. However, that does not mean that the halting problem is impossible for programs that arise in practice. For example, Byron Cook of Microsoft has worked on a system called Terminator. This project has had good success in discovering if a program can potentially hang the operating system. See this also for more details.

Is the halting problem impossible: yes in general, but perhaps not in most examples that we care about. Perhaps.

• Trisecting Angles: Pierre Wantzel proved in 1837 that the ancient Greek problem of trisecting a general angle with a straightedge and a compass is impossible. The proof shows that the angle that arises for some trisections is an angle that cannot be constructed with these limited tools. It is a rock-solid IP, but as Underwood Dudley points out in his wonderful book, the proof has not stopped many amateurs from trying to find constructions that circumvent it.
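For concreteness, here is a sketch of the standard argument, specialized to trisecting the constructible angle of $60^\circ$. A trisection would construct $\cos 20^\circ$, and the triple-angle identity

\[
\cos 3\theta = 4\cos^3\theta - 3\cos\theta, \qquad \theta = 20^\circ,\quad \cos 60^\circ = \tfrac{1}{2},
\]

shows that $x = \cos 20^\circ$ satisfies

\[
8x^3 - 6x - 1 = 0.
\]

This cubic has no rational roots, hence is irreducible over $\mathbb{Q}$, so $x$ has degree 3 over $\mathbb{Q}$; but every number constructible by straightedge and compass has degree a power of 2 over $\mathbb{Q}$. Thus $\cos 20^\circ$—and with it the trisection—is out of reach of the allowed tools.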

The problem is solvable with slightly different tools, which are not much more complex than the allowed ruler and compass. If the ruler has two marks on it, then there is a solution. Or given a piece of string, one can trisect any angle. Again Wantzel’s proof is an IP that is correct, but it does not completely close all the doors on the problem. A marked ruler or the piece of string can play the role of the burning arrows.

• Airplanes: William Thomson, Lord Kelvin, is famously quoted as saying, “I can state flatly that heavier than air flying machines are impossible.” However, there is some evidence that he did not say this, and even if he did I doubt that he had a real IP. So I will not count this as a flawed IP.

• Rockets: There was a paper published in the early 1900s on the possibility of a chemical rocket ever getting into orbit. The paper gave a careful and tight IP that showed this was impossible. Of course, today, there are countless satellites in orbit around the Earth, so where was the error in the IP? The paper used careful and correct laws of physics and reasonable assumptions about the amount of energy that could be obtained from chemical reactions.

The mistake was simple: the proof assumed implicitly that the rocket had constant mass. Clearly, as a rocket burns fuel it gets lighter, and this was the mistake that killed the IP. Note, there was a real insight in the flawed IP—the motivation for why rockets usually have many stages. Rocket stages allow mass to be shed at a quicker rate than just losing the mass of the fuel that is burnt. Early rockets used three or even four stages to place their payload into orbit, while today it is possible to get into orbit with two stages—one might argue that the shuttle uses 1.5 stages.
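The varying-mass effect is captured by the Tsiolkovsky rocket equation, and a few lines of Python make the staging insight concrete. The masses and the 300 s specific impulse below are made-up but plausible numbers, not taken from the lost paper:

```python
import math

G0 = 9.81    # m/s^2, standard gravity
ISP = 300.0  # s; a made-up but plausible chemical-engine specific impulse

def delta_v(m0, mf):
    """Tsiolkovsky rocket equation: velocity gained as mass falls from m0
    to mf -- precisely the varying-mass effect the flawed IP ignored."""
    return ISP * G0 * math.log(m0 / mf)

# Both designs: 100 t at launch = 5 t payload + 10 t structure + 85 t fuel.

# Single stage: burn all 85 t of fuel, hauling all 10 t of structure throughout.
single = delta_v(100.0, 15.0)

# Two stages with the same totals: stage 1 = 7.5 t structure + 67.5 t fuel,
#                                  stage 2 = 2.5 t structure + 17.5 t fuel.
dv1 = delta_v(100.0, 32.5)  # burn the stage-1 fuel
dv2 = delta_v(25.0, 7.5)    # drop the 7.5 t stage-1 casing, burn stage 2
staged = dv1 + dv2

print(f"single stage: {single:.0f} m/s")  # ~5583 m/s
print(f"two stages:   {staged:.0f} m/s")  # ~6851 m/s
```

Dropping the spent casing mid-flight raises the final speed by well over a kilometer per second with identical fuel and structure budgets—exactly the insight buried in the flawed IP.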

By the way I have lost the paper citation, and would love if someone can find the paper for me.

Along similar lines, when physicist Robert Goddard published his groundbreaking paper “A Method of Reaching Extreme Altitudes,” the New York Times reacted sharply, heaping severe criticism on his ideas and even attacking his integrity. See here for the relevant portion of the NYT article. The basis of their attack was the same as above: implicitly assuming that a fixed-mass system cannot accelerate in a vacuum. See here and here for more details on this.

After a gap of nearly 50 years, three days before man first set foot on the moon, the NYT eventually came around and rescinded their stance—they published a short correction to their 1920 article.

• Crypto Protocols: Crypto-protocols are one of the great sources of IP’s that are often later shown to be flawed. The reason is simple: the IP rests on assumptions about what the attacker can do. In cryptography the attacker is often well motivated, has large resources, and is often very clever. Thus, the IP’s are “broken” not by direct attacks, but by showing that the IP’s assumptions are flawed.

A famous one is Johan Håstad’s attack on RSA, which used the Chinese Remainder Theorem (CRT)—recall CRT is one of my favorite theorems. He showed that sending the same message via many RSA systems was dangerous: the message could be found without directly breaking RSA. I was involved in another attack that also used the CRT to break crypto-systems such as RSA in the presence of faults. This is joint work with Dan Boneh and Richard DeMillo.
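Håstad’s broadcast attack is simple enough to demonstrate in a few lines of Python. The tiny moduli and the message 42 below are made-up toy values—real RSA moduli are thousands of bits—but the arithmetic is exactly the attack for public exponent e = 3:

```python
def crt(remainders, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for r, n in zip(remainders, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)   # pow(a, -1, n) is the modular inverse
    return x % N

def icbrt(n):
    """Exact integer cube root via binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

e = 3
moduli = [11 * 23, 13 * 19, 17 * 29]          # three users' toy RSA moduli
m = 42                                        # the broadcast message, m < each modulus
ciphertexts = [pow(m, e, n) for n in moduli]  # the same m encrypted three times

# CRT glues the three ciphertexts into m^3 mod (n1*n2*n3); since
# m^3 < n1*n2*n3, that residue IS m^3 over the integers, and a plain
# cube root recovers m -- no factoring required.
recovered = icbrt(crt(ciphertexts, moduli))
print(recovered)  # 42
```

The point is that no modulus is factored and no private key is touched: the assumption that “RSA is only broken by inverting it” did not allow for this particular burning arrow.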

More recent examples are problems with crypto-protocols that are composed with other protocols, and also protocols that encrypt messages that are used later as keys in another system. The latter are called circular security models, and there is active research into their properties.

• Quantum Protocols: Charles Bennett and Gilles Brassard in 1984 suggested how to use quantum mechanics to build a key exchange system that would be impossible to break. They had an IP that was not based on a complexity assumption, but on the correctness of quantum mechanics. This was a brilliant idea then, and is still one of the great insights about practical applications of quantum methods to security.

However, the exact proof that the system, BB84, is secure is not quite right. There are a number of attacks that were missed in the proof. For example, some sources of single photons sometimes put out multiple photons. This allows the photon number splitting attack to be applied. Again these were flaws in the assumptions of what an attacker could do.

Impossibility Results

So what is the role of IP’s? Are they ever useful? I would say that they are useful, and that they can add to our understanding of a problem. At a minimum they show us where to attack the problem in question.

If you prove that no X can solve some problem Y, then the proper view is that I should look carefully at methods that lie outside X. I should not give up. I would look carefully—perhaps more carefully than is usually done—to see if X really captures all the possible attacks.

What troubles me about IP’s is that they often are not very careful about X. They often rely on testimonials, anecdotal evidence, or personal experience to convince one that X is complete.

Open Problems

The open problem is to please view all IP’s with a certain degree of doubt—question, question, and question the assumptions.

I of course am open to P=NP as a possibility. But even a proof that P≠NP, viewed as an IP, should be taken as a direction, not as an absolute statement. I have talked before about the many ways that P≠NP can be true; please think hard about these.

1. September 13, 2009 10:09 am

In the theory of fault-tolerant distributed real-time systems there are a couple of nice impossibility results stating the minimum resources required to build systems tolerating a single arbitrary fault (or k of them).

First, there is the Fischer-Lynch-Paterson result of 1985, arguing that a distributed system without any synchrony assumptions cannot reach global agreement in the presence of a single crash fault (a very benign fault). Yet, there are few safety-critical systems flying on current Boeing or Airbus aircraft designed with any synchronization mechanisms deployed.

Second, the same problem can be solved in a synchronized system with at least four computers exchanging two rounds of messages over three independent channels and NO LESS, as pointed out in the well-known 1980 “Byzantine Generals” paper of Pease-Shostak-Lamport, and follow-ups. Yet there are few systems on transport category aircraft actually deploying the expensive degree of redundancy required by theory.

Why does nobody care, not even certification authorities? Because there are always application-specific assumptions that can be made within any system design based on “engineering judgement” that allow designers to disregard certain “unlikely” failure modes, breaking the assumption of arbitrariness of failures in the impossibility results and allowing the problem to be solved with fewer resources in a different model. Safety, in contrast to security, is only concerned with faults by chance rather than malice, yet there is no way to quantify the risk taken by just disregarding “hard to believe” failure modes. This is nevertheless common industry practice, and it has led to quite a split between academic theory (synchronous, time-triggered systems are the only possible way to go) and what’s done in “real life” (keep everything asynchronous and as decoupled as possible). See papers of Kevin Driscoll, John Rushby, Paul Miner for reviews.

September 13, 2009 10:14 am

The distributed result is rightly famous. But it is based on assumptions that are not always present: for example, allowing randomness, or allowing protocols that fail with very, very low probability, changes the picture.

September 13, 2009 11:16 am

If P != NP is an impossibility proof, then does that make natural proofs / algebrization barriers impossibility of impossibility proof proofs?

September 13, 2009 11:48 am

No Way: The Nature of the Impossible, eds. P.J. Davis & D. Park (Freeman ’87) collects essays on the impossible from many areas: mountaineering to science to philosophy.

September 13, 2009 12:17 pm

Nice article! I was just wondering why you left out Gödel’s theorem. In my opinion, this is the strongest IP ever made.

September 13, 2009 12:59 pm

Very nice question. Yes, it is a good example of an IP that practicing mathematicians seem, we think, never to hit their heads against. Thanks for the comment.

September 18, 2009 3:17 am

Georg Cantor hit his head almost literally, going bonkers after trying for many years to prove the continuum hypothesis. Not exactly cause and effect, of course.

6. September 13, 2009 12:22 pm

In computably axiomatized formal systems that embrace arithmetic, the power X is often measured as the maximum growth rate of functions f: N → N that the system can prove to be total. This in turn can be connected to properties of large known or hypothesized cardinals.

However, in the 1980s I agreed with papers by Debbie Joseph and Paul Young arguing that in this particular X measure one shouldn’t need to go into the stratosphere to prove super-polynomial lower bounds: an elementary growth rate “should be” enough. “Elementary” means some finite stack of exponentials. Thus I stopped looking for a Paris-Harrington-style phenomenon that might obstruct “P != NP”. Of course the Razborov-Rudich obstacle is fully compatible with the Joseph-Young view, and IMHO focuses attention on mathematical primitives that involve doubly-exponential growth—which abound in polynomial ideal theory.

Here’s an “IP” example that I’d like to expand on. Suppose you have a collision-resistant family of hash functions h_n : {0,1}^n → {0,1}^m with m < n. Then your “feasible working logic” L cannot refute false assertions about h_n being 1-to-1, either in the “for infinitely many n” or “for all but finitely many n” senses or maybe in some concrete-n cases. Now can we build on this some complexity-class assertions that L cannot refute? I’ve thought in the past about NP intersect co-NP versus “L-provable NP intersect co-NP” but not gotten far. There’s the problem that proofs are finite concrete objects whereas complexity assertions are asymptotic. (Lurking in the background here is a famous still-open problem of Georg Kreisel on whether PA proving \phi(n) in k steps for all n implies that PA proves (\forall n)\phi(n), though maybe the fact that some simple additions to PA such as a proper-subtraction function make this *false* renders it a non-issue in this context.)

7. September 13, 2009 4:16 pm

Lower bound proofs for algorithms work the same way. We know that it is impossible to create a comparison sorting algorithm with a worst case runtime of o(n log n). Instead of giving up, we continue to look for other ways to sort without using comparisons.
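A concrete escape is counting sort, which never compares two elements, so the Ω(n log n) comparison bound simply does not apply; a minimal Python sketch:

```python
def counting_sort(a, max_val):
    """Sort non-negative ints <= max_val in O(n + max_val) time by
    tallying occurrences -- no element is ever compared to another,
    so the comparison-sorting lower bound says nothing about it."""
    counts = [0] * (max_val + 1)
    for x in a:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([5, 3, 8, 3, 1], 8))  # [1, 3, 3, 5, 8]
```

It beats n log n whenever the key range is modest; radix sort extends the same dodge to larger keys.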

September 13, 2009 6:45 pm

I think all Turing did was prove that there exists some program (or programs) for which we cannot solve the halting problem. He definitely didn’t prove that we cannot solve the halting problem for all programs. I don’t even think solving the halting problem is impossible in general, unless we consider wildly impractical to be equivalent to impossible. I suspect it’s only logically impossible very rarely. That is, you have to deliberately set up a situation to be logically impossible.

September 13, 2009 7:18 pm

Since my days as a grad student I have always felt that quite a few IP results were overstated; over the years I’ve added more. The current list includes:

-IPs on proofs that relativize
-Comparison based lower bounds
-Code obfuscation
-Halting problem
-NP completeness of various problems

September 13, 2009 7:37 pm

I agree. As someone who proved a 0.5n^2 lower bound on the knapsack problem for linear comparisons, I still agree with you. Our result, with David Dobkin, is neat; but it just shows that one needs to either do more comparisons or non-linear ones.

September 14, 2009 6:06 am

As a grad student, the first time I heard about the relativization barrier was along the lines of “do not attempt proofs by direct simulation since those relativize,” and the professor proceeded to go into the details of the relativization IP.

I didn’t know the result before that, but I could immediately tell that the short version of the result was oversold. The proper reading of that result is “feel free to use direct simulation in parts of your proof, but make sure to introduce a step which is not a direct simulation at some point in your argument”. E.g. the proof could simulate portions of the computation until a certain state is reached in which a non-relativizing lemma can be applied.

10. September 13, 2009 10:01 pm

Another pretty famous proof of the impossibility of long-distance flight was provided by Simon Newcomb (who discovered a statistical law now known as “Benford’s law” and worked with Michelson on the measurement of the speed of light) in this book in 1901:

http://www.gutenberg.org/files/4065/4065-h/4065-h.htm#chap21

September 13, 2009 10:47 pm

Nice post! FWIW, I usually cite the impossibility of solving a general quintic by radicals as my favourite IP, at least from classical mathematics.

September 14, 2009 11:01 am

It is interesting that both this and the angle-trisection IP can be derived from Galois theory.

12. September 14, 2009 2:30 am

I think that there is a danger in saying that undecidability only applies to strange systems that are not practical. I agree entirely that one has to be a lot more careful with an impossibility result than a possibility one. This is why the most solid impossibility results seem to apply in horribly abstract settings.

My favourite undecidability result is the tiling problem: whether there is an algorithm that can decide, for any set of shapes, whether they tile the plane. (As an aside, as a practicing mathematician working in tilings, I find that we hit our heads against undecidability/Gödel with depressing regularity.)

Of course for most sets of shapes it is obvious that they cannot tile (a circle) or can tile (a square). The impossibility result does not give a practical problem. What it can do however is show that strange and interesting examples must exist. It is, if you like, a “here be dragons” for the world of tiling. I can propose an algorithm and there must be sets of shapes that break it. In this way I can get an understanding of strange properties that are worth looking for.

The first example of this was the discovery of aperiodic sets of shapes, shapes that could tile the plane but never give periodic tilings. A simple and beautiful example being the Penrose tiling.

Be careful therefore about characterising undecidability as applying only to impractical situations. It is too easy to switch this around and get trapped declaring that anything that is undecidable is therefore not practical. In my view this leads to bad mathematics, as instead of finding interesting questions and then working out if we can solve them, we try to find solutions and then work out why the solutions are interesting (unfortunately this happens all too often anyway).

Of course the discovery of aperiodic shapes led to the refutation of another “impossibility” result that the only possible ordered matter was periodic, with the discovery of quasicrystals.

13. September 14, 2009 4:44 pm

This is a wonderful topic! Here are three more “IP forts” that (IMHO) are at substantial risk of fire nowadays … it’s great fun to live in a decade in which there are so many “flaming arrows” flying around.

———

The first IP fort is the result that “Radiation damage is the main problem which prevents the determination of the structure of a single biological macromolecule at atomic resolution using any kind of microscopy. This is true whether neutrons, electrons or X-rays are used as the illumination.” [Henderson].

The flaming arrow is quantum spin microscopy (per recent experimental demonstrations from IBM), which sidesteps radiation damage by conceiving microscopy as communication over low-energy quantum spin channels.

———

The second IP fort is the result that large-scale quantum systems cannot be efficiently simulated “because the system has too many variables, it cannot be simulated with a normal computer” [Feynman], so that “simulation of quantum systems by classical computers is possible, but in general only very inefficiently.” [Nielsen and Chuang].

The flaming arrow is to simulate noisy and thermal systems, in a gauge in which the thermal noise concentrates the dynamics on low-dimension manifolds. Here the answer to “Are they allowed to do that?” is “yes, definitely” for the kind of quantum systems that arise naturally in biology, chemistry, and materials science.

———

The third IP fort is the result that “nonlinear quantum mechanics implies polynomial-time solution for NP-complete and P problems” [Abrams and Lloyd].

The third flaming arrow is pullback of the symplectic and metric structure of Hilbert-space QM onto a non-linear (and low-dimension) Kählerian state-space—thus avoiding what Bennett et al. call the “linearity trap” and “computational extravagances” of large-dimension nonlinear QM theories. It is interesting that in nature this pullback is dynamically accomplished by the second of the three flaming arrows.

———

At a recent meeting of David Baker’s synthetic biology group, I amused myself by envisioning what the biomedical world might look like when these three forts have finished burning … the resulting brief sketch of a no-fortress world is posted on the Fortnow/GASARCH blog.

— some BibTeX for forts and arrows —

@article{Henderson:95, Author = {R. Henderson}, Journal = {Quarterly Reviews of Biophysics}, Number = 2, Pages = {171–193}, Title = {The potential and limitations of neutrons, electrons and {X}-rays for atomic resolution microscopy of unstained biological molecules}, Volume = 28, Year = 1995}

@article{Feynman:1982hs, Author = {R. P. Feynman}, Journal = {International Journal of Theoretical Physics}, Number = {6–7}, Pages = {467–488}, Title = {Simulating physics with computers}, Volume = 21, Year = 1982}

@book{Nielsen:00, Author = {M. A. Nielsen and I. L. Chuang}, Publisher = {Cambridge University Press}, Title = {Quantum Computation and Quantum Information}, Year = 2000}

@article{Abrams:1998ud, Author = {Abrams, D.S. and Lloyd, S.}, Journal = {Phys. Rev. Lett. (USA)}, Number = {18}, Pages = {3992 – 5}, Title = {Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and P problems}, Volume = {81}, Year = {1998/11/02}}

@unpublished{Bennett:2009kx, Author = {Charles H. Bennett and Debbie Leung and Graeme Smith and John A. Smolin}, Note = {arXiv:0908.3023v1 [quant-ph]}, Title = {Can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems?}, Year = {2009}}

September 14, 2009 11:10 pm

Impossibility results do matter! It’s true that people often read too much into them, but once research efforts have routed around the block, people become accustomed to the impossibility result and fail to appreciate what would be possible if it did not hold.

Sure, Gödel’s theorems do not have much bearing on working mathematicians, and yet they single-handedly destroyed a decades-long research program. Who’s to say what results might have been proven if there did exist a finite, provably consistent axiomatization of mathematics?

The halting problem may not be a problem for most real programs, but that’s because we mostly write programs that we understand. If we could determine whether arbitrary programs halt, we may have been able to answer any number of long-standing problems by writing programs to enumerate possible proofs and checking whether they halt.

People sometimes dismiss NP-completeness results with something along the lines of “NP-complete instances don’t arise in practice”, but in many ways this is a vacuous statement because practice has adapted to get by with only solving those instances that are easy. They fail to appreciate what would be possible in practice if they could solve all instances.

You can dismiss relativization, natural proofs, and algebrization as not precluding a resolution to P vs NP, but those barriers are currently very real. Many major open questions in complexity theory may have been resolved by now or admit simpler proofs if these barriers were not standing in our way.

September 15, 2009 7:00 am

Thanks for a thoughtful comment. Your last point puzzles me, however. Let’s agree that the barriers you list are very real; they make proving P ≠ NP nearly impossible. Okay, why is this not evidence that P=NP is true? Or if not true, at least worth thinking about in a serious way? If I come to a fork in the road and one side is blocked by multiple high barriers, would it not be reasonable to at least try the other fork?

September 15, 2009 7:17 pm

A favorite example: the New York Times ran an article on October 9, 1903 saying:

…it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous effort of mathematicians and mechanicians in from one million to ten million years…

The famous Wright brothers’ flight took place just over two months later.

(PS: You can read the article by searching the NYT archive for the title “Flying Machines which do not Fly”.)

September 16, 2009 6:51 am

I am not a computer scientist (but I do enjoy your blog). Isn’t Shannon’s channel capacity converse a very strong IP? It is one that seems to have many practical applications as well.

September 18, 2009 4:39 pm

Yes! That is what I thought of right away: an “IP” that has had huge practical importance and is used in current designs all the time. How many more like this are out there? Or is something like this just defined away from being an IP? This is the big counter to this line of argument that came to mind.