# No-Go Theorems

*How far does no-go go?*

Simon Kochen is the Henry Burchard Fine Professor of Mathematics at Princeton University. He wrote a PhD thesis on ultrafilters at Princeton in 1959 under Alonzo Church, and supervised Sam Buss’s 1985 thesis on bounded arithmetic and complexity theory. He is also listed as faculty in Philosophy at Princeton, but may be best known for the impact of his work in physics. With the late Ernst Specker, he proved a theorem implying that quantum mechanics cannot be modeled under certain natural assumptions.

Today Ken and I want to discuss *no-go theorems*: results that show that one cannot prove something via a certain style of argument.

The term “no-go” is actually used more by physicists than by mathematicians: in their world a no-go theorem refers to a physical state or action that is impossible. For example, thermodynamics demonstrates that perpetual-motion machines as historically understood are no-go. Relativity rules out faster-than-light communication, and this in turn has inspired other no-go theorems.

Although no-go theorems have negative implications, often they flow from a positive result, a construction. This may have implications beyond the original motivation. The new constructed objects may promote advances, while the no-go aspect provides guidance toward techniques that are not ruled out—that are capable of making progress toward the goal that the no-go theorem guards. Crucial in all this is determining exactly how far the barrier erected by a new no-go theorem extends: what presumptions it makes, and what workarounds may be available.

Something exactly like this situation is going on now in combinatorial optimization. I am quite confused about it, hence it seemed like a perfect idea for Ken and me to try to work it out. Let’s start first by reviewing the whole idea of no-go theorems. In a Part 2 of this post we will turn to the specific case that is being discussed.

## Kochen-Specker and SAT

The Kochen-Specker theorem in one form can be boiled down to this fact:

There exist 18 vectors in $\mathbb{R}^4$ from which one can make 9 bases for $\mathbb{R}^4$, using each vector exactly twice, such that each basis is orthogonal.

Our friends at Wikipedia give the construction; here we focus on the implications. The no-go conclusion itself comes from this simple fact:

One cannot assign a value 0 or 1 to each of the 18 vectors such that every basis has exactly one 1: since each vector is duplicated across two bases, the total number of 1’s counted over all bases is even, while 9 bases with exactly one 1 apiece would give the odd total 9.
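The counting argument can be checked by brute force. The sketch below uses a toy incidence structure with the same combinatorics (9 bases of 4 vectors each, every vector in exactly two bases); the indices are illustrative stand-ins, not the actual geometric vectors of the Kochen-Specker construction.

```python
from collections import Counter
from itertools import product

# Toy incidence structure mirroring the Kochen-Specker counting argument:
# 9 "bases" of 4 "vectors" each, drawn from 18 vectors so that every
# vector appears in exactly two bases.  (Abstract indices for
# illustration, not the actual vectors of the construction.)
bases = [
    sorted({2 * i, 2 * i + 1, (2 * (i - 1)) % 18, (2 * (i - 1) + 1) % 18})
    for i in range(9)
]

# Sanity check: each of the 18 vectors occurs in exactly two bases.
counts = Counter(v for b in bases for v in b)
assert all(counts[v] == 2 for v in range(18))

# Try every 0/1 assignment; none gives exactly one 1 per basis, because
# the duplication makes the total count of 1's over bases even, while
# 9 bases with one 1 apiece would give the odd total 9.
satisfying = sum(
    1 for bits in product([0, 1], repeat=18)
    if all(sum(bits[v] for v in b) == 1 for b in bases)
)
print(satisfying)  # 0
```

The exhaustive search over all $2^{18}$ assignments confirms what the parity argument proves in one line.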

If certain *local/non-contextual hidden-variable theories* of quantum mechanics were true, then Nature would act in a way that entails the existence of just such an impossible assignment. Along with the better-known Bell’s Theorem and several related results, the theorem rules out such theories. However, we have encountered a common mistaken impression that the barrier applies to *all* “hidden-variable” theories. This is not true; for instance not only is Bohmian mechanics a hidden-variable theory that survives, it is said to have inspired John Bell toward his theorem in the first place.

Thus far the “no-go” sounds philosophical, but it has concrete purely mathematical facets. Cases of the Kochen-Specker theorem like the form above can be encoded by Boolean formulas $F$, with the conclusion saying that $F$ is unsatisfiable. It is possible that such $F$ might be tricky enough to “fool” certain attempts to solve SAT, thus showing that certain kinds of algorithms—who is talking about quantum theories now?—are no-go. Ken gleaned this idea from a talk by Richard Cleve while he was in Montreal in 2009, which has since developed into this revised paper by Cleve with Peter Høyer, Ben Toner, and John Watrous (which in turn has spawned other work on “quantum games”). Two other papers of interest are:

- “On Searching for Small Kochen-Specker Vector Systems” by Felix Arends, Joël Ouaknine, and Charles Wampler. (Recall Dick met Joël at Oxford last fall.)
- “Bertlmann’s chocolate balls and quantum type cryptography” by Karl Svozil. (Truth-in-blogging note: Ken intended to cover this paper in late 2011—two posts with “quantum chocolate” in the title were meant to lead into it—but time for this soon got displaced by the matrix-product advances.)

## No-Go Theorems

Simply put, a no-go theorem says: “You cannot do *X* by an argument of type *Y*.” It is not easy to define exactly what such theorems are, but perhaps a few examples from mathematics and computer science will help.

**Unconstructables**: Duplicating the cube, trisecting angles, and constructing regular polygons were classic tasks for the ancient Greek geometry of straightedge and compass. It took until the 1800s to prove that all these tasks are impossible in general. But note that some angles such as $90°$ can be trisected, and some surprising polygons constructed. Moreover, if you are allowed to “cheat” by having one mark on the ruler for distance, then the first two tasks and more of the third become possible.
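The standard algebraic reason can be checked numerically. Trisecting $60°$ would require constructing $\cos 20°$, which by the triple-angle identity satisfies $8x^3 - 6x - 1 = 0$; the rational root theorem shows this cubic is irreducible over $\mathbb{Q}$, so $\cos 20°$ is not constructible. A small sketch of both checks:

```python
import math
from fractions import Fraction

# Trisecting 60 degrees would require constructing cos(20°), which
# satisfies 8x^3 - 6x - 1 = 0 (from cos(3t) = 4cos^3(t) - 3cos(t)).
c = math.cos(math.radians(20))
print(abs(8 * c**3 - 6 * c - 1) < 1e-12)  # True

# Rational root theorem: the only possible rational roots are
# +-1, +-1/2, +-1/4, +-1/8.  None works, so the cubic is irreducible
# over Q and cos(20°) is not a constructible number.
candidates = [Fraction(s, d) for s in (1, -1) for d in (1, 2, 4, 8)]
print(any(8 * r**3 - 6 * r - 1 == 0 for r in candidates))  # False

# Trisecting 90 degrees only asks for cos(30°) = sqrt(3)/2,
# which is constructible.
print(abs(math.cos(math.radians(30)) - math.sqrt(3) / 2) < 1e-12)  # True
```

The same kind of degree argument handles duplicating the cube, since $\sqrt[3]{2}$ has degree 3 over $\mathbb{Q}$.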

**Set Theory Forcing**: In set theory we know, thanks to the deep work of Paul Cohen, that no proof of the Axiom of Choice can be carried out in ZF. More precisely, if you could prove the Axiom of Choice in ZF, then it would be terrible: Cohen’s forcing construction gives models of ZF in which Choice fails, so such a proof would imply that ZF is inconsistent, and this would destroy the world of math as we know it.

**Oracle Results**: In computational complexity theory we know, thanks to the brilliant work of Baker-Gill-Solovay, that any resolution of $P$ versus $NP$ cannot “relativize.” This follows from one of their theorems:

**Theorem 1** There are oracles $A$ and $B$ such that $P^A = NP^A$ while $P^B \neq NP^B$.

Thus if you have proved—you think—that $P = NP$, or the opposite, then you had better check that your proof does not relativize.

Roughly this means that the proof cannot simply treat Turing machines as black boxes, with meters for their inputs, outputs, and running times. Instead the proof must get its hands dirty in the details of how computations work. So far no one has figured out how to get inside the machines and show that they cannot solve hard problems efficiently.

There are further “barriers” under the monikers natural proofs and algebrization. These are basically more conditional: not a flat **no-go** but rather:

If you can do X by an argument of type Y, then you can also do Z which would be a surprise if not a shock…

## No-Go Against Particular Algorithms

If we cannot rule out all possible polynomial-time algorithms for $NP$-complete problems, perhaps we can erect a barrier against some broad kinds of algorithms. Our notion of “progressive” algorithms has this motivation. The barrier would say that certain kinds of *“soft”* algorithms cannot deal with a certain *“super-hard”* kind of hardness.

A natural idea for solving any hard computational problem is to encode it as a linear program of polynomial size, and then invoke the deep result that any linear program of size $s$ can be solved in time polynomial in $s$. The great Aristotle would be happy:

- My problem is a polynomial size linear program.
- All polynomial size linear programs are easy.
- My problem is easy.
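For contrast, the syllogism does succeed for some problems. The assignment problem is a polynomial-size LP over the Birkhoff polytope of doubly stochastic matrices, and by the Birkhoff-von Neumann theorem every point of that polytope is a convex combination of permutation matrices, so the LP optimum is attained at a permutation. The instance and sampling below are a hypothetical illustration, not anything from the post:

```python
import itertools
import random

# Sketch: minimizing a linear cost over the Birkhoff polytope solves
# the assignment problem, because (Birkhoff-von Neumann) every doubly
# stochastic matrix is a convex combination of permutation matrices.
# We check that no sampled convex combination beats the best permutation.
n = 4
random.seed(1)
cost = [[random.random() for _ in range(n)] for _ in range(n)]

perms = list(itertools.permutations(range(n)))

def perm_cost(p):
    # cost of the permutation matrix for p under the linear objective
    return sum(cost[i][p[i]] for i in range(n))

best = min(perm_cost(p) for p in perms)

for _ in range(1000):
    w = [random.random() for _ in perms]
    s = sum(w)
    # a point of the Birkhoff polytope, as a convex combination
    value = sum(wi / s * perm_cost(p) for wi, p in zip(w, perms))
    assert value >= best - 1e-9

print("no sampled point of the Birkhoff polytope beats the best permutation")
```

The catch, of course, is that for TSP no analogously compact polytope description is known, which is exactly what the no-go theorem below addresses.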

Thus the quest is for polynomial size encodings of hard problems, like the TSP or HC problem. The obvious ways to do this fail, and many have tried to find the right encoding so they could invoke the above syllogism. This has led over the years to the following kind of no-go theorem, which we covered last fall:

**Theorem 2** A problem $P$ cannot have a polynomial-sized linear program provided there is another problem $Q$ so that (i) $P$ is an extension of $Q$, and (ii) $Q$ not only does not have a polynomial-size linear program, but is “super-hard.”

As Aristotle might say:

- Your problem $P$ extends my super-hard problem $Q$.
- All problems that extend a super-hard problem are super-hard.
- Your problem $P$ is super-hard.

Exactly what “super-hard” means is ingrained in the details of the proofs, but it applies to the standard TSP polytope, and implies an exponential lower bound on the size of the linear program for $P$. The notion of “extension” is called an **extended formulation** (EF) and basically means that $Q$ is a projection of $P$. This argument can carry extra power because often the problem $Q$ may be easier to understand than the problem $P$.

But now comes an issue of whether it has too much power. Suppose $R$ is any “ordinary” hard problem, to which $Q$ reduces via an ordinary complexity-theoretic reduction function $f$. Now we make a problem $R'$ that *augments* $R$ to include the variables used to define $f$, in such a way that solutions to $R'$ map back to solutions to $Q$, while $R'$ remains equivalent to $R$. If $R'$ then always counts as an “extension” of $Q$, then we get the following syllogism:

- Your problem $R$ is hard.
- Your problem is equivalent to an extension $R'$ of my $Q$ that is super-hard.
- Your problem $R$ is super-hard.

However, the no-go theorem does not say that every hard problem is super-hard. Its barrier is stated only for extensions of $Q$, but now it sounds like the barrier applies everywhere. Where does it stop?

In our particular case, we are worried about whether we are getting a syllogism like this:

1. Every extended formulation of the standard polytope $Q$ for the TSP must have exponentially many constraints.
2. Every linear-programming formulation for TSP yields such an extended formulation $P$, with polynomial overhead.
3. Every linear program for TSP requires exponentially many constraints.

But 3. seems too strong. Because linear programming (LP) is complete for polynomial time, it sounds very close to saying that TSP has exponential-time lower bounds. It is stronger than what the no-go theorem says. So how far down does the barrier really go, to what kinds of $P$’s? That is what we wish to know. Part 2 of this post will investigate this further; for now we close with two other ideas.

## Ways Around The EF No-Go?

Every no-go theorem is based on some assumptions. If you play by the assumptions—the rules—then you cannot succeed. But if one chooses to violate the rules, then the no-go theorem says nothing. One way to think about a no-go theorem is precisely this: it shows you where to look for your proof. Stay away from any proof that is covered by the no-go theorem, and by doing this you at least have a chance to succeed.

There are several ways that the EF no-go theorem for TSP could be “cheated on.” Here are two that come to mind. The first is a standard observation, while the second is one that I (Dick) have thought about and believe is new. In any event these two show that the “no” in the EF no-go theorem could be more of a yellow light than a red light.

**Big LP’s may be okay**: Suppose that you find a way to encode the TSP as an exponential-size LP. This means that the naive way of running an LP solver will fail. But thanks to an insight in the famous ellipsoid-method paper by Martin Grötschel, László Lovász, and Lex Schrijver, it is possible to solve an exponential-sized LP in polynomial time, given a **separation oracle**: a procedure that produces a violated constraint whenever a given point is not feasible.

**Theorem 3** Let $L$ be an exponential-size LP with a polynomial-time computable separation oracle. Then there is a polynomial-time algorithm that solves $L$.
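To make the oracle idea concrete, here is a toy sketch for a hypothetical exponential constraint family (not the TSP subtour constraints): for every size-$k$ subset $S$ of coordinates, require $\sum_{i \in S} x_i \le 1$. There are $\binom{n}{k}$ constraints, yet a violated one, if any, can be found by looking only at the $k$ largest coordinates:

```python
import random
from itertools import combinations

# Toy separation oracle for the hypothetical family: for every subset
# S of size k, sum_{i in S} x_i <= 1.  A point violates some constraint
# iff its k largest coordinates sum to more than 1, so separation
# costs only a sort instead of enumerating C(n, k) subsets.
def separation_oracle(x, k):
    top = sorted(range(len(x)), key=lambda i: -x[i])[:k]
    if sum(x[i] for i in top) > 1:
        return sorted(top)  # indices of a violated constraint
    return None             # point satisfies all C(n, k) constraints

def brute_force_violated(x, k):
    return any(sum(x[i] for i in S) > 1
               for S in combinations(range(len(x)), k))

# The fast oracle agrees with enumerating all subsets.
random.seed(0)
for _ in range(200):
    y = [random.random() for _ in range(6)]
    assert (separation_oracle(y, 3) is not None) == brute_force_violated(y, 3)

print(separation_oracle([0.6, 0.1, 0.45, 0.2], 2))  # [0, 2]
```

The Grötschel-Lovász-Schrijver result then runs the ellipsoid method, calling such an oracle at each step, so the exponential constraint count never has to be written down.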

**Having more than one LP may be okay**: Suppose that you find a set of polynomial-sized LP’s whose union equals the LP you wish to solve. Then this would allow you to get around the EF no-go theorem. In geometric terms, suppose that you want to solve an LP over the polytope $Q$. You find polynomially many polytopes

$$Q_1, Q_2, \dots, Q_m$$

so that the union of the projections of the $Q_i$’s covers the polytope $Q$. Then you can solve your LP by solving an LP over each $Q_i$ and taking the best answer. This seems to me to possibly avoid the EF no-go.
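In miniature, the idea rests on the fact that a linear objective over a union of pieces is minimized by minimizing over each piece separately. The sketch below uses hypothetical pieces given by vertex lists (a linear function attains its minimum over a polytope at a vertex):

```python
# Toy version of the "more than one LP" idea.  Each piece Q_i is given
# by its vertex list; the pieces are two hypothetical unit squares
# covering the rectangle Q = [0,2] x [0,1].
Q1 = [(0, 0), (1, 0), (1, 1), (0, 1)]  # [0,1] x [0,1]
Q2 = [(1, 0), (2, 0), (2, 1), (1, 1)]  # [1,2] x [0,1]
Q  = [(0, 0), (2, 0), (2, 1), (0, 1)]  # covered by Q1 and Q2

def lp_min(c, vertices):
    # minimum of c . x over the convex hull of the vertices
    return min(c[0] * x + c[1] * y for (x, y) in vertices)

c = (-3, 1)  # minimize -3x + y
best_over_pieces = min(lp_min(c, Q1), lp_min(c, Q2))
print(best_over_pieces, lp_min(c, Q))  # -6 -6
```

Minimizing over the two pieces and taking the better answer matches minimizing over $Q$ itself; the open question is whether polynomially many polynomial-sized pieces can exist for the TSP polytope.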

## Open Problems

How far does the EF no-go go? As we said above, we are preparing a “Part 2” post with a more detailed look at this question.

Note there is a danger in calling anything part I without having written part II. John McCarthy famously wrote the paper, “Recursive Functions of Symbolic Expressions and their Computation by Machine (Part I),” and alas never wrote part II. Kurt Gödel’s famous paper on undecidability was also a “Part I,” but by his telling he had a long manuscript of the “part II” which he would have published had people felt they needed to see the extra details. Hopefully we are closer to Gödel than McCarthy here, but trying to strike a stick on newly-hewn land is always uncertain.

The cryptic no-go postulate: Any proof in ZFC that separates P from NP has to be strong enough to imply the nonexistence of cryptic languages. (Perhaps this would be a suitable TCS StackExchange question?)

one of the important no-go theorems from complexity theory is one by Razborov: that monotone circuit arguments based on an approximation-method-like structure can only prove at most $n^2$ lower bounds on circuits. unfortunately it is in a highly technical paper, and it seems nobody has ever simplified it much or included it in a book; it is one of those papers that is probably cited much more than it is understood. also, the semi-famous natural proofs paper is very similar to a no-go theorem, and is one of the first objections everyone raises to P=?NP proofs these days.

it would also be interesting to look at a study of no-go theorems that were eventually overturned. the problem with no-go theorems is that the assumptions tend to be exactly those that need to be challenged, and people tend to take them as big roadblocks instead of “don’t go exactly this way” signs.

as for Bell’s theorem, it is quite legendary, and it would seem the jury is still out on it. it turns out to be extremely difficult to subject to experimental testing with even just two qubits. also, if one looks in papers that test it, e.g. famous ones by Aspect et al, and by CHSH who formulated experimental inequalities, they mention key loopholes. one gaping loophole is the assumption that the hidden variable cannot control the probability that the particle is detected. this has been known for decades, and arguably it is not at all an implausible scenario, but it is mostly swept under the rug; mainstream physicists call it the “conspiracy theory” on the rare occasions they deign to address it…..!

re hidden variables try this post, solitons, cellular automata, qm & disagreeing with aaronson

In response to the “detector efficiency loophole”:

A 2001 experiment with ions observed Bell inequality violations without this loophole.

http://www.nature.com/nature/journal/v409/n6822/abs/409791a0.html

This experiment did suffer from a different loophole, which is that measurements weren’t done in a space-like separated way. However, for a local hidden variable theory to survive both this and the Aspect experiment, it would at a minimum have to choose different loopholes for different experiments. Epicycles would seem positively elegant by comparison.

AH– hi, enjoy your dialogues with GK immensely & have cited them on my blog.

yes there are many very good Bell-related experiments since aspect experiments which were already very “tight”. could cite many others. Gisin in particular has done very many good ones.

imho einsteinian relativity looks like epicycles compared to newtonian mechanics.

that is the point with new theories– they are extremely subtle and escape the net of all existing experiments, showing up only as anomalies if observed at all.

ingenious experiments designed to push the boundaries of theory must be devised. bell came very close. but few people have taken the torch and tried to build on his theoretical work that would be designed to “break” qm. have many ideas of my own after studying the subj very intensely for yrs… may write them up on the blog at some pt…. but it requires very openminded experimentalists with some free time, who are very rare…. it is not very cheap or easy to test QM….

I think QM has been tested many many times. The periodic table is a test of QM. So is the blackbody radiation curve. Practically everything we observe in semiconductors, atoms, molecules, metals, etc. is in some way a test of QM. Bell inequalities are just one very particular piece which are designed to prove entanglement to someone who knows no other physics.

agreed qm is extremely highly tested, now basically more than a century old. so was newtonian mechanics highly tested before einsteinian relativity and quantum mechanics. any new theory, if it exists, must agree with the prior one in some kind of [highly] subtle limit case or boundary conditions of the current theory. and it cannot even be conceived, at first, by virtually all the users of the current theory. these are the teachings of kuhnian paradigm shifts, which transcend individual scientific theories. there are many excellent research directions to pursue pointing at new possibilities in QM. your debate on qm computing feasibility with GK touches on many of them. another one that appeals to me is analysis of relatively simple classical systems that seem to violate locality based on peculiarities of measurement dynamics. would be interested to chat further with anyone who is openminded on the subject. plz reply on my blog for that.

Thermodynamics, “as historically understood,” is empirically understood, and more, naturally understood in the way Maxwell explicitly stated that the Law of Equal Temperatures (LET) is not a logical truism or theory of identity. This understanding is different from the way we understand probability, or the way Newton understood mass, when he was arguing with his editor Cotes about the best way to defeat Descartes’s ideas. In regard to ideas, and verifiable (or falsifiable!) fact or knowledge, what principles do you believe without proof or derivation; it’s just the way it is? In any case, I enjoyed reading the Kochen-Specter (oops, I meant coke-inspector) preprint because they said they weren’t going to use the word probability.

May I have a reference to the 90 degree trisection? Much thanks

Constructing an equilateral triangle and bisecting its angles does it. There are other known cases, but I think 60 degrees cannot be done.

Would an acceptable way to think of no-go theorems be: theorems which show there cannot exist a program having some specific type?

Always interesting comments. Thanks for this follow-up discussion related to something you wrote about a few months ago re: Proof of no compact formulation of the TSP polytope. Part II would be good.

About your second idea for ways around the No Go … For example – if perhaps we could cut up the TSP polytope into digestible chunks? They could overlap of course … Is that what you were getting at maybe? (Balas, Wolsey and Conforti wrote about similar ideas but not related directly to the TSP, 1998–2008)

But I wonder if the TSP polytope is so extremely asymmetric that it might even be defined as a polytope that is unable to be “cut up” this way. What I mean is: Intersecting facet-defining constraints at each extreme, I would imagine, are complicated in a way that no matter how you choose to define and subset these Qs, there’s always one Q that has no compact formulation.

I’d see it as a tough job in the full dim space (n^2-3n+1) where we’d examine these constraints – even cut out convex chunks from interior to the core of the TSP polytope to build each Q – cause we don’t care about the core if we build a convex combo of overlapping Qs – to help build some symmetry into an EF of each Q. BUT we still have to deal with the facet-defining constraints of the TSP polytope which are also facet-defining constraints of each Q – that’s the trouble.

Just a thought – it’s such a tough polytope! There’s also the issue of the magnitude of coefficients of these constraints, and how they might grow. What do you think?

Perhaps the no-go theorems are telling us to look at existence proofs, i.e. focus on coNP and a YES decision – like partly modelling an NP-complete problem as a relaxed compact LP, e.g. Hamilton tour integer & fractional like Swart, and then, rather than search for a tour, find some iterative LP approach – a sequence of LPs that forces non-Hamiltonian graphs infeasible (if we’re looking to prove P=NP) OR of course showing that no such relaxed formulation can ever exist (and with LP being P-complete as said before in an earlier post … P≠NP.)

I really do enjoy your blog. Thanks. Steve