
No-Go Theorems

March 13, 2013


How far does no-go go?


Simon Kochen is the Henry Burchard Fine Professor of Mathematics at Princeton University. He wrote a PhD thesis on ultraproducts at Princeton in 1959 under Alonzo Church, and supervised the 1985 thesis of Sam Buss on bounded arithmetic and complexity theory. He is also listed as faculty in Philosophy at Princeton, but may be best known for the impact of his work in physics. With the late Ernst Specker, he proved a theorem implying that quantum mechanics cannot be modeled by hidden variables satisfying certain natural assumptions.

Today Ken and I want to discuss no-go theorems: results showing that one cannot prove something via a certain style of argument.

The term “no-go” is actually used more by physicists than by mathematicians: in their world a no-go theorem refers to a physical state or action that is impossible. For example, thermodynamics demonstrates that perpetual-motion machines as historically understood are no-go. Relativity rules out faster-than-light communication, and several further no-go theorems have been inspired by these examples.

Although no-go theorems have negative implications, often they flow from a positive result, a construction. This may have implications beyond the original motivation. The new constructed objects may promote advances, while the no-go aspect provides guidance toward techniques that are not ruled out—that are capable of making progress toward the goal that the no-go theorem guards. Crucial in all this is determining exactly how far the barrier erected by a new no-go theorem extends: what presumptions it makes, and what workarounds may be available.

Something exactly like this situation is going on now in combinatorial optimization. I am quite confused about it, hence it seemed like a perfect idea for Ken and me to try to work it out. Let’s start first by reviewing the whole idea of no-go theorems. In a Part 2 of this post we will turn to the specific case that is being discussed.

Kochen-Specker and SAT

The Kochen-Specker theorem in one form can be boiled down to this fact:

There exist 18 vectors in {\mathbb{R}^4} from which one can form 9 orthogonal bases of {\mathbb{R}^4}, with each vector belonging to exactly two of the bases.

Our friends at Wikipedia give the construction; here we focus on the implications. The no-go conclusion itself comes from this simple fact:

One cannot assign a value 0 or 1 to each of the 18 vectors such that every basis has exactly one vector valued 1: counting over the 9 bases would give an odd total of 1’s, yet each vector lies in two bases, so the total must be even.

If certain local/non-contextual hidden-variable theories of quantum mechanics were true, then Nature would act in a way that entails the existence of just such an impossible assignment. Along with the better-known Bell’s Theorem and several related results, the theorem rules out such theories. However, we have encountered a common mistaken impression that the barrier applies to all “hidden-variable” theories. This is not true; for instance not only is Bohmian mechanics a hidden-variable theory that survives, it is said to have inspired John Bell toward his theorem in the first place.

Thus far the “no-go” sounds philosophical, but it has concrete purely mathematical facets. Cases of the Kochen-Specker theorem like the form above can be encoded by Boolean formulas {\phi}, with the conclusion saying that {\phi} is unsatisfiable. It is possible that such {\phi} might be tricky enough to “fool” certain attempts to solve SAT, thus showing that certain kinds of algorithms—who is talking about quantum theories now?—are no-go. Ken gleaned this idea from a talk by Richard Cleve while he was in Montreal in 2009, which has since developed into this revised paper by Cleve with Peter Høyer, Ben Toner, and John Watrous (which in turn has spawned other work on “quantum games”).
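
To make this concrete, here is a minimal sketch in Python of the brute-force side of such an encoding. A caveat: the incidence structure below is a stand-in with the same combinatorics as the 18-vector configuration (9 four-element “bases,” each “vector” lying in exactly two of them), built from a circulant graph rather than from the actual Kochen-Specker vectors; a real SAT encoding would express the same per-basis exactly-one constraints as clauses.

    from itertools import product

    # Stand-in for the Kochen-Specker incidence structure: 9 "bases" of 4
    # "vectors" each, every vector lying in exactly 2 bases.  The vectors
    # here are the 18 edges of the circulant graph C9(1,2) -- NOT the
    # actual vectors in R^4, just the same counting pattern.
    n_bases = 9
    edges = []
    for b in range(n_bases):
        for step in (1, 2):
            edges.append(frozenset({b, (b + step) % n_bases}))

    bases = [[i for i, e in enumerate(edges) if b in e] for b in range(n_bases)]
    assert len(edges) == 18 and all(len(B) == 4 for B in bases)

    # Brute force over all 2^18 assignments: is there one with exactly one
    # 1 per basis?  The parity argument says no: the per-basis counts would
    # sum to 9 (odd), yet each vector is counted twice (even).
    found = any(
        all(sum(assign[i] for i in B) == 1 for B in bases)
        for assign in product((0, 1), repeat=len(edges))
    )
    print("satisfying assignment exists:", found)  # prints False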

No-Go Theorems

Simply put, a no-go theorem says: “You cannot do X by an argument of type Y.” It is not easy to define exactly what such theorems are, but perhaps a few examples from mathematics and computer science will help.

{\bullet} Unconstructables: Duplicating the cube, trisecting angles, and constructing regular polygons were classic tasks for the ancient Greek geometry of straightedge and compass. It took until the 1800s to prove that all these tasks are impossible in general. But note that some angles such as {90^\circ} can be trisected, and some surprising polygons constructed. Moreover, if you are allowed to “cheat” by putting one mark on the ruler for distance, then the first two tasks and more of the third become possible.

{\bullet } Set Theory Forcing: In set theory we know, thanks to the deep work of Paul Cohen, that no proof of the Axiom of Choice can be carried out in ZF, assuming ZF is consistent. More precisely, if you could prove the Axiom of Choice in ZF, then by Cohen’s relative consistency result ZF itself would be inconsistent, and this would destroy the world of math as we know it.

{\bullet } Oracle Results: In computational complexity theory we know, thanks to the brilliant work of Baker-Gill-Solovay, that any resolution of {\mathsf{P} = \mathsf{NP}} cannot “relativize.” This follows from one of their theorems:

Theorem 1 There are oracles {A,B} such that {\mathsf{P}^{A} = \mathsf{NP}^{A}} while {\mathsf{P}^{B} \neq \mathsf{NP}^{B}}.

Thus if you have proved—you think—that {\mathsf{P}\neq \mathsf{NP}}, or the opposite, then you had better check that your proof does not relativize.

Roughly this means that the proof cannot simply treat Turing machines as black boxes, with meters for their inputs, outputs, and running times. Instead the proof must get into the details of how computations work; you must somehow get your hands dirty inside the machines. So far no one has figured out how to do this in a way that shows machines cannot solve hard problems efficiently.

There are further “barriers” under the monikers natural proofs and algebrization. These are basically more conditional: not a flat no-go but rather

If you can do X by an argument of type Y, then you can also do Z which would be a surprise if not a shock…

No-Go Against Particular Algorithms

If we cannot rule out all possible polynomial-time algorithms for {\mathsf{NP}}-complete problems, perhaps we can erect a barrier against some broad kinds of algorithms. Our notion of “progressive” algorithms has this motivation. The barrier would say that certain kinds of “soft” algorithms cannot deal with a certain “super-hard” kind of hardness.

A natural idea for solving any hard computational problem is to encode it as a linear program of polynomial size, and then invoke the deep result that every linear program of size {S} can be solved in time polynomial in {S}. The great Aristotle would be happy (a toy instance of step 2 is sketched after the syllogism):

  1. My problem is a polynomial-size linear program.
  2. All polynomial-size linear programs are easy.
  3. My problem is easy.
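
Here is the promised toy instance of step 2, a minimal sketch assuming SciPy is available; the particular LP is made up purely for illustration.

    from scipy.optimize import linprog

    # Toy LP: maximize x + y subject to x + 2y <= 4, 3x + y <= 6,
    # x, y >= 0.  linprog minimizes, so we negate the objective.
    res = linprog(c=[-1, -1],
                  A_ub=[[1, 2], [3, 1]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print("optimum:", res.x, "objective:", -res.fun)  # x = 1.6, y = 1.2, value 2.8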

Thus the quest is for polynomial-size encodings of hard problems, like the Traveling Salesman Problem (TSP) or the Hamiltonian Cycle (HC) problem. The obvious ways to do this fail, and many have tried to find the right encoding so they could invoke the above syllogism. This has led over the years to the following kind of no-go theorem, which we covered last fall:

Theorem 2 A problem {Y} cannot have a polynomial-size linear program provided there is another problem {X} such that (i) {Y} is an extension of {X}, and (ii) {X} not only lacks a polynomial-size linear program but is “super-hard.”

As Aristotle might say:

  1. Your problem extends my super-hard problem.
  2. All problems that extend a super-hard problem are super-hard.
  3. Your problem is super-hard.

Exactly what “super-hard” means is ingrained in the details of the proofs, but it applies to the standard TSP polytope {X}, and implies an exponential lower bound on the size of the linear program. The notion of “extension” {Y} is called an extended formulation (EF) and basically means that {X} is a projection of {Y}. This argument can carry extra power because the problem {Y} may often be easier to understand than the problem {X}.
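
To pin the definition down in symbols (the names here are generic, not tied to the TSP): a polyhedron {Q} in variables {(x,y)} is an extended formulation of a polytope {P} when

\displaystyle Q = \{(x,y) : Ex + Fy \le g\} \qquad \text{and} \qquad P = \{x : \exists y \; (x,y) \in Q\}.

The size of the EF is the number of inequalities in {Ex + Fy \le g}, and the no-go theorem lower-bounds this size over all choices of {E}, {F}, and {g}.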

But now comes an issue of whether it has too much power. Suppose {Z} is any “ordinary” hard problem, to which {X} reduces via an ordinary complexity-theoretic reduction function. Now we make a problem {Y} that augments {Z} to include the variables used to define {X}, in such a way that solutions to {Y} map back to solutions to {X}, while {Y} remains equivalent to {Z}. If such a {Y} always counts as an “extension” of {X}, then we get the following syllogism:

  1. Your problem {Z} is hard.
  2. Your problem is equivalent to an extension {Y} of my {X} that is super-hard.
  3. Your problem is super-hard.

However, the no-go theorem does not say that every hard problem is super-hard. Its barrier is only stated for extensions of {X}, but now it sounds like the barrier applies everywhere. Where does it stop?

In our particular case, we are worried about whether we are getting a syllogism like this:

  1. Every extended formulation {Y} of the standard polytope {X} for the TSP must have exponentially many constraints.
  2. Every linear-programming formulation {Z} for TSP yields such a {Y}, with polynomial overhead.
  3. Every linear program for TSP requires exponentially many constraints.

But 3 seems too strong. Because linear programming (LP) is complete for polynomial time, it sounds very close to saying {\mathsf{NP}} has exponential-time lower bounds, which is stronger than what the no-go theorem says. So how far down does the barrier really go, and to what kinds of {Z}’s? That is what we wish to know. Part 2 of this post will investigate this further; for now we close with two other ideas.

Ways Around The EF No-Go?

Every no-go theorem is based on some assumptions. If you play by the assumptions—the rules—then you cannot succeed. But if one chooses to violate the rules, then the no-go theorem says nothing. One way to think about a no-go theorem is precisely this: it shows you where to look for your proof. Stay away from any proof that is covered by the no-go theorem, and by doing this you at least have a chance to succeed.

There are several ways that the EF no-go theorem for TSP could be “cheated on.” Here are two that come to mind. The first is a standard observation, while the second is one that I (Dick) have thought about and believe is new. In any event these two show that the “no” in the EF no-go theorem could be more of a yellow light than a red light.

{\bullet } Big LP’s may be okay: Suppose that you find a way to encode the TSP as an exponential-size LP. This means that the naive way of running the LP solver will fail. But thanks to an insight in the famous ellipsoid-method paper by Martin Grötschel, László Lovász, and Lex Schrijver, it is possible to solve an exponential-size LP in polynomial time, provided one has a separation oracle: a procedure that, given a point {\mathbf{x}}, either certifies that it is feasible or produces a violated constraint.

Theorem 3 Let {\cal L} be an exponential-size LP with a polynomial-time computable separation oracle. Then there is a polynomial-time algorithm that solves {\cal L}.
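
The proof behind Theorem 3 runs the ellipsoid method with the oracle supplying cuts. The sketch below illustrates the same oracle pattern with a simpler cutting-plane loop, which lacks the ellipsoid method’s polynomial-time guarantee; the constraint family and its oracle are invented purely for illustration, and SciPy is assumed.

    import numpy as np
    from scipy.optimize import linprog

    # Maximize sum(x) over { x in [0,1]^n : sum of x_i over S <= 1 for
    # every subset S of size k }.  That is C(n, k) constraints -- far too
    # many to write down -- but separation is easy: the most violated
    # subset consists of the k largest coordinates of x.
    n, k = 12, 4
    c = -np.ones(n)        # linprog minimizes, so negate to maximize
    rows, rhs = [], []     # constraints generated so far

    while True:
        res = linprog(c,
                      A_ub=np.array(rows) if rows else None,
                      b_ub=np.array(rhs) if rows else None,
                      bounds=[(0, 1)] * n)
        x = res.x
        top = np.argsort(x)[-k:]        # separation oracle: k largest coordinates
        if x[top].sum() <= 1 + 1e-9:    # no violated constraint: x is optimal
            break
        row = np.zeros(n)
        row[top] = 1.0                  # add the violated constraint as a cut
        rows.append(row)
        rhs.append(1.0)

    print("optimum:", round(x.sum(), 6), "cuts generated:", len(rows))

For n = 12 and k = 4 there are 495 constraints in all, yet the loop typically certifies the optimum (value n/k = 3) after generating only a small fraction of them.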

{\bullet } Having more than one LP may be okay: Suppose that you find polynomially many polynomial-sized LP’s whose union equals the feasible region you wish to optimize over. Then this would allow you to get around the EF no-go theorem. In geometric terms, suppose that you want to solve an LP over the polytope {X}. You find a set of polynomially many polytopes

\displaystyle Q_{1}, \dots, Q_{m},

so that the union of the projections of the {Q_{i}}’s covers the polytope {X}. Then you can solve an LP over {X} by solving it over each of the {Q_{i}}’s and taking the best answer. This seems to me to possibly avoid the EF no-go.
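
In symbols: assuming each projection lies inside {X} and their union covers it, then for any linear objective {c},

\displaystyle \max_{x \in X} c^{T}x \;=\; \max_{1 \le i \le m}\ \max_{(x,y) \in Q_{i}} c^{T}x,

so {m} polynomial-size LPs would suffice even if no single polynomial-size EF of {X} exists.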

Open Problems

How far does the EF no-go go? As we said above, we are preparing a “Part 2” post with a more detailed look at this question.

Note there is a danger in calling anything part I without having written part II. John McCarthy famously wrote the paper, “Recursive Functions of Symbolic Expressions and their Computation by Machine (Part I),” and alas never wrote part II. Kurt Gödel’s famous paper on undecidability was also a “Part I,” but by his telling he had a long manuscript of the “part II” which he would have published had people felt they needed to see the extra details. Hopefully we are closer to Gödel than McCarthy here, but trying to strike a stick on newly-hewn land is always uncertain.

15 Comments
  1. John Sidles
    March 13, 2013 4:44 pm

    The cryptic no-go postulate: Any proof in ZFC that separates P from NP has to be strong enough to imply the nonexistence of cryptic languages. (Perhaps this would be a suitable TCS StackExchange question?)

  2. March 13, 2013 5:10 pm

    one of the important no-go thms from complexity theory is one by razborov: that monotone circuit arguments based on an approximation-method-like structure can only prove max n^2 lower bounds on circuits. unfortunately it’s in a highly technical paper and it seems nobody has ever simplified it much or included it in a book. it’s one of those papers that is probably cited much more than it is understood. also, the semifamous natural proofs paper is very similar to a no-go theorem & one of the 1st objections that everyone raises to P=?NP proofs these days.

    it would also be interesting to look at a study of no-go theorems that were eventually overturned. the problem with no-go theorems is that the assumptions tend to be exactly those that need to be challenged, and people tend to take them as big roadblocks instead of “don’t go exactly this way” signs.

    as for bell’s thm, it is quite legendary, and it would seem the jury is still out on it. it turns out it’s generally extremely difficult to subject to experimental testing with even just two qubits. also if one looks in papers that test it, eg famous ones by Aspect et al, and CHSH who formulated experimental inequalities, they mention key loopholes. one gaping loophole is the assumption that the hidden variable cannot control the probability that the particle is detected. this has been known for decades and arguably it’s not at all an implausible scenario, but it is mostly swept under the rug and mainstream physicists call it the “conspiracy theory” on the rare occasions they deign to address it…..!

    re hidden variables try this post, solitons, cellular automata, qm & disagreeing with aaronson

    • March 20, 2013 5:20 pm

      In response to the “detector efficiency loophole”:
      A 2001 experiment with ions observed Bell inequality violations without this loophole.

      http://www.nature.com/nature/journal/v409/n6822/abs/409791a0.html

      This experiment did suffer from a different loophole, which is that measurements weren’t done in a space-like separated way. However, for a local hidden variable theory to survive both this and the Aspect experiment, it would at a minimum have to choose different loopholes for different experiments. Epicycles would seem positively elegant by comparison.

      • March 20, 2013 6:40 pm

        AH– hi, enjoy your dialogues with GK immensely & have cited them on my blog.

        yes there are many very good Bell-related experiments since aspect experiments which were already very “tight”. could cite many others. Gisin in particular has done very many good ones.
        imho einsteinian relativity looks like epicycles compared to newtonian mechanics.
        that is the point with new theories– they are extremely subtle and escape the net of all existing experiments, showing up only as anomalies if observed at all.

        ingenious experiments designed to push the boundaries of theory must be devised. bell came very close. but few people have taken the torch and tried to build on his theoretical work that would be designed to “break” qm. have many ideas of my own after studying the subj very intensely for yrs… may write them up on the blog at some pt…. but it requires very openminded experimentalists with some free time, who are very rare…. it is not very cheap or easy to test QM….

      • March 21, 2013 9:24 am

        I think QM has been tested many many times. The periodic table is a test of QM. So is the blackbody radiation curve. Practically everything we observe in semiconductors, atoms, molecules, metals, etc. is in some way a test of QM. Bell inequalities are just one very particular piece which are designed to prove entanglement to someone who knows no other physics.

      • March 21, 2013 9:52 am

    agreed qm is extremely highly tested, now basically more than a century old. so was newtonian mechanics highly tested before einsteinian relativity and quantum mechanics. any new theory, if it exists, must agree with the prior one in some kind of [highly] subtle limit case or boundary conditions in the current theory. and it cannot even be conceived, at first, by virtually all the users of the current theory. these are the teachings of kuhnian paradigm shifts, which transcend individual scientific theories. there are many excellent research directions to pursue pointing at new possibilities in QM. your debate on qm computing feasibility with GK touches on many of them. another one that appeals to me is analysis of relatively simple classical systems that seem to violate nonlocality based on peculiarities of measurement dynamics. would be interested to chat with anyone further who is openminded on the subject. plz reply on my blog for that.

  3. March 13, 2013 10:32 pm

    Thermodynamics, “as historically understood,” is empirically understood, and more, naturally understood in the way Maxwell explicitly stated that the Law of Equal Temperatures (LET) is not a logical truism or theory of identity. This understanding is different from the way we understand probability, or the way Newton understood mass when he was arguing with his editor Cotes about the best way to defeat Descartes’s ideas. In regard to ideas, and verifiable (or falsifiable!) fact or knowledge, what principles do you believe without proof or derivation; it’s just the way it is? In any case, I enjoyed reading the Kochen-Specter (oops, I meant coke-inspector) preprint because they said they weren’t going to use the word probability.

  4. March 14, 2013 1:52 am

    May I have a reference to the 90 degree trisection? Much thanks

    • March 14, 2013 6:18 pm

      Constructing an equilateral triangle and bisecting its {60^\circ} angles does it, since {30^\circ} is one-third of {90^\circ}. There are other known cases, but I think 60 degrees itself cannot be trisected.

  5. o0o
    March 14, 2013 12:20 pm

    Would an acceptable way to think of no-go theorems be: theorems which show there cannot exist a program having some specific type?

  6. Stephen Gismondi
    March 14, 2013 10:04 pm

    Always interesting comments. Thanks for this follow-up discussion related to something you wrote about a few months ago re: Proof of no compact formulation of the TSP polytope. Part II would be good.

    About your second idea for ways around the No Go … For example – if perhaps we could cut-up the TSP polytope into digestible chunks? They could overlap of course … Is that what you were getting at maybe? (Balas, Wolsey and Conforti wrote about similar ideas but not related directly to the TSP, 1998–2008) But I wonder if the TSP polytope is so extremely asymmetric that it might even be defined as a polytope that is unable to be “cut up” this way. What I mean is: Intersecting facet-defining constraints at each extreme, I would imagine, are complicated in a way that no matter how you choose to define and subset these Qs, there’s always one Q that has no compact formulation. I’d see it as a tough job in the full dim space (n^2-3n+1) where we’d examine these constraints – even cut out convex chunks from interior to the core of the TSP polytope to build each Q – cause we don’t care about the core if we build a convex combo of overlapping Qs – to help build some symmetry into an EF of each Q. BUT we still have to deal with the facet-defining constraints of the TSP polytope which are also facet-defining constraints of each Q – that’s the trouble. Just a thought – it’s such a tough polytope! There’s also the issue of the magnitude of coefficients of these constraints, and how they might grow. What do you think?

    Perhaps the no-go theorems are telling us to look at existence proofs, i.e. focus on coNP and a YES decision – like partly modelling an NP-complete problem as a relaxed compact LP e.g. Hamilton tour integer & fractional like Swart, and then, rather than search for a tour, find some iterative LP approach – a sequence of LPs that forces non-Hamiltonian graphs infeasible (if we’re looking to prove P=NP) OR of course showing that no such relaxed formulation can ever exist (and with LP being P-complete as said before in an earlier post … P≠NP.)

    I really do enjoy your blog. Thanks. Steve

