# Proofs By Contradiction And Other Dangers

*Some suggestions on how to present proofs, again*

Henry Cohn is an expert in many aspects of theory including cryptography, various aspects of combinatorics, and in particular the matrix multiplication problem. One of his most fascinating ideas is on using group theory to approach fast matrix product—this is joint work with Robert Kleinberg, Balázs Szegedy, and Chris Umans.

Today I plan on talking about a recurrent theme—can an amateur make a contribution to modern mathematics and theory?

Cohn has written a beautiful piece on exactly this topic, and that is what led me to think about it once more. I frequently get earnest emails from people who think they may have solved the P = NP question. They are often amateurs in one sense, but not amateurs in another. A typical profile is that the person is a master programmer with some mathematical training; sometimes the training is informal, and sometimes it was formal but happened years ago.

Here is a paraphrase of a typical message I get:

I have emailed a number of experts with a version of my solution. Most ignored the email. The ones who responded said that they took a quick glance at the paper, decided it was probably wrong, and refused to discuss it further.

I think right now there are multiple claims, private and public, from some sharp people that they have solved **the** problem. Some claim P = NP, others P ≠ NP; all claim they have found a proof. At least one side's proofs must be wrong, else we are all in trouble, since ZF would then be inconsistent. If we insist that something called a "proof" must be a complete argument, then I suspect that all are "wrong"—this quote is relevant:

A mathematician’s reputation rests on the number of bad proofs he has given—Abram Besicovitch.

However, someone could have the correct significant idea for such a proof, or failing that, at least an important idea in a new context that only an outsider to the tight-knit theory community could see.

This has happened many times in science. Let me give a couple of examples where the mainstream specialists thought that the work of an outsider was wrong.

**Wrong, Wrong, Right**

There are cases throughout science of the conventional wisdom being wrong. This makes it especially hard for an amateur to make a contribution, because the conventional understanding is usually a needed support; but it affects professionals too. Here are three relatively recent examples. In each case the person was right, but the initial reaction was negative; and all three were professionals, not amateurs. Thus we all have trouble with ideas that do not fit the conventional thinking, even when they come from a professional rather than an amateur.

**Irrational Numbers**: Roger Apéry proved that ζ(3) is irrational. This was a major surprise, since it had been open for a very long time whether or not this value was rational. I have discussed some of the issues surrounding his wonderful result before—see this. Note there was definitely some doubt surrounding his initial claim; he was right, but that did not make people believers instantly.

**Stomach Ulcers**:

When I was growing up several friends and relatives unfortunately had stomach ulcers. The belief was that stress was the main cause of their problem, and so most patients were told to reduce their stress. They also were treated with bland diets, and still suffered most of their adult life with the ailment.

Dr. Barry Marshall showed that essentially all the previous ideas about this illness were wrong. He demonstrated that almost all stomach ulcers were caused by a bacterium, *Helicobacter pylori*, and that they could be completely cured by antibiotics in most cases. For this terrific work he was awarded the 2005 Nobel Prize in Physiology or Medicine, along with his co-researcher Robin Warren. I love that the prize title has an "or" in it: "Physiology or Medicine."

A great story. Marshall and Warren are heroes who saved countless people from suffering and worse. Yet it took decades for them to get the mainstream medical profession to take their ideas seriously. Note they were outsiders in a physical sense, since they were based in Australia. But they were physicians, trained in medicine, and based at the Royal Perth Hospital.

At one point Marshall infected himself with the bacterium, which is one way to make a dramatic statement. He is quoted as saying "Everyone was against me, but I knew I was right." See this for an interview with him.

What I find most disturbing about this great success story is that it could have saved lives much earlier. The claim Marshall and Warren made was easily testable: doctors only had to give their stomach ulcer patients standard antibiotics. While no drug is perfectly safe, Marshall points out that this would have been quite safe, especially compared to surgery that many underwent. And it would have quickly shown that he and Warren were right—the patients’ symptoms would have disappeared. But the mainstream profession refused to change for decades.

**Prions**:

Infections are caused either by bacteria or by viruses—right? Dr. Stanley Prusiner discovered that there is a third way to cause an "infection." The third method is based on what he named prions. For this work he was awarded the Nobel Prize in Physiology or Medicine in 1997.

Prions are proteins that have folded incorrectly. They work in an incredibly novel way: when they enter a host they cause the existing correctly folded proteins to also become incorrectly folded. This clearly can cascade and eventually many of the proteins in the host are changed. Unfortunately the proteins that are not folded correctly cause terrible damage to the host. Prions appear to be the cause of mad cow disease in cattle and Creutzfeldt-Jakob disease in humans.

Unlike the work on ulcers, while a breakthrough, this discovery has not helped yet to find a cure for these diseases. Indeed there is still some doubt and debate on whether prions are the cause of these diseases. Read this about the issues.

**Danger: Proof By Contradiction**

Any proof that P is not equal to NP is likely to be a proof by contradiction, at least in its initial presentation. Since P ≠ NP is formally a for-all/there-exists (Π₂) statement of arithmetic, there is a principle that such a proof should be convertible to one without contradiction—see for instance this and its references. However, recent advances such as Williams' separation of NEXP from ACC⁰ have used contradiction extensively. See the posts on such proofs here and related issues here.
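For readers who want the quantifier structure spelled out, here is one standard way to render the statement; this is a sketch, with the clock bound |x|^c + c chosen for convenience:

```latex
\mathsf{P} \neq \mathsf{NP} \;\Longleftrightarrow\;
\forall M \,\forall c \,\exists x \;\big[\, M \text{ fails to decide } \mathrm{SAT}
\text{ correctly on input } x \text{ within } |x|^c + c \text{ steps} \,\big]
```

The bracketed predicate is decidable, since checking it only requires simulating M for a bounded number of steps; the leading for-all/there-exists quantifiers are what make the whole sentence Π₂.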

Cohn makes an interesting point about proofs by contradiction. Here is a liberal paraphrase of his point, in the context of a claimed P ≠ NP proof:

- Let's start by assuming that P = NP, and we will eventually show that this leads to a contradiction. *So far, no problem.*
- Then, Cohn points out, there will probably be a long series of definitions and lemmas and theorems. *Still fine.*
- Finally, a contradiction will be reached—something wrong will be concluded. This shows that the assumption was wrong, and thus that P ≠ NP. Great. Become famous, become acclaimed, become rich, and so on. *Great.*

The difficulty is what if the contradiction comes not from the assumption that P = NP, but rather from some error in the proof of one of the lemmas or theorems? *A problem. A serious problem.* Then the proof is invalid, since the contradiction did not follow from the assumption. The whole point of a proof by contradiction is to show that the assumption leads to a falsehood. This is the key: if the contradiction comes from any other source, the whole proof is in trouble.

Cohn points this out, but does not have an answer for how to avoid it. Nor do I. It seems inherent in the nature of reasoning about proofs, especially those that use proof by contradiction.
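The shape Cohn describes can be made concrete in a proof assistant. Here is a schematic sketch in Lean 4, with abstract propositions standing in for the real complexity statements; nothing here is an actual separation argument:

```lean
-- `PeqNP` stands for the assumption P = NP; `Bad` for the eventual
-- absurdity.  `lemma1` abbreviates the long chain of definitions,
-- lemmas, and theorems; `crash` is the final contradiction.
example (PeqNP Bad : Prop)
    (lemma1 : PeqNP → Bad)
    (crash  : Bad → False) :
    ¬ PeqNP :=
  fun h => crash (lemma1 h)
```

A type-checker makes the danger visible: if `lemma1` is not actually provable, the term above never checks, and the conclusion about `PeqNP` evaporates with it. In an informal paper there is no checker, so a buggy lemma can silently play the role the assumption was supposed to play.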

**Danger: Polynomial Time**

One misconception that I believe is at the heart of many of the attempts to prove P = NP is the nature of polynomial time, P. This complexity class is closed under many operations: union, intersection, complement, reductions of various kinds, and much more. However, it is not closed under the taking of **limits**.

Here is what I mean by "taking limits." Consider a sequence of Turing machines that each run in polynomial time: let the sequence be M_1, M_2, and so on. They each define a language: L_1, L_2, and so on. By definition, all of the L_i are in P. The problem is that the limit of these languages is **not always** contained in polynomial time. Loosely speaking, even in cases where there is a well-defined limit, the language

lim_{i→∞} L_i

does not converge to a language in P, in general.

I see this fact implicitly used in many attempts to solve the P = NP problem. If this taking of limits were okay, then there would be lots of simple "proofs" that P = NP. Unfortunately it is false. For an easy counter-example, let L_c be the set of pairs (p, x) so that the Turing machine with program p and input x accepts in time at most |x|^c. Here |x| as usual is the length of x. Each L_c is in P, but their limit is not: the limit asks whether p accepts x within some polynomial number of steps, which is essentially the acceptance problem.

Note that L_c ⊆ L_{c+1}: suppose the machine accepts in time |x|^c; then it certainly still accepts in time |x|^{c+1}. Thus even monotone nested sequences of sets do not converge inside P:

The limiting language leaves polynomial time.
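The counter-example above can be sketched in Python, with a generator standing in for a Turing machine (all names here are toy stand-ins, not real complexity-theory machinery). Each budgeted decider runs in time polynomial in the input length for a fixed c, yet as c grows the family's union approaches the unrestricted acceptance problem:

```python
def bounded_accepts(program, x, c):
    """Decide membership in L_c: does `program` accept input x
    within len(x)**c steps?  Polynomial time for each fixed c."""
    budget = len(x) ** c
    for _, outcome in zip(range(budget), program(x)):
        if outcome == "accept":
            return True
    return False  # budget exhausted, or the program halted without accepting

def slow_accepter(x):
    """A toy 'machine' that accepts every input, but only after
    2**len(x) steps -- more than len(x)**c for any fixed c."""
    for _ in range(2 ** len(x) - 1):
        yield "running"
    yield "accept"

# For a fixed c, long-enough inputs escape L_c (here: 25-step budget, 32 needed) ...
print(bounded_accepts(slow_accepter, "a" * 5, 2))   # False
# ... though each individual input lands in L_c for some larger c.
print(bounded_accepts(slow_accepter, "a" * 5, 3))   # True (125-step budget)
```

No single polynomial bound captures `slow_accepter`, even though every truncation of it is decidable in polynomial time; the "limit" language lies outside every L_c.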

Note this lack of convergence is similar, I believe, to the confusion that occurred in the early days of calculus and analysis. Many famous mathematicians of the time made errors, or “proved” theorems using assumptions about convergence that were either false or at least not justified.

I would guess that at least some major fraction of mistakes in claims about the P = NP question stem from this non-convergence property. I know of at least one well-respected, well-published complexity theorist who fell victim to this very misconception. The limiting argument can be well hidden in a long proof, and quite often is subtle to see.

I hope this helps some of those who are trying to resolve P = NP. Check carefully that you are not making this convergence mistake. And if you are, be of good cheer, since you are in good company, both recently and historically.

**Advice?**

The question is how to make a contribution, and how to get someone to listen to your voice. It is an interesting question. Here is some advice I gave one person recently:

You should try to write a one-page or so description of the key idea(s). This page should not say anything about P = NP's importance, its background, or its history. We all know that. It should concentrate on the new idea that makes your proof work. A very helpful remark would be: "I avoid the convergence problem by…" Or: "I avoid the usual barriers because…" As a skilled programmer, approach the proof the same way you write quality code. Define the key notions carefully; state lemmas and theorems exactly. They should be stated so that anyone reading them will understand them. Define the constants, the sets, the notions precisely. Do not use a formal language, but strive to make everything crystal clear.

My experience is that if this is done, you may actually discover your own error. But if you still feel it is correct I think we can find people to look at the page.

**Open Problems**

Any suggestions on how to help with proofs by contradiction? This is the question raised by Cohn. I would like some suggestions on how to avoid the problem he pointed out.

Two more examples:

(1) The rejection of continental drift, in part because the person proposing it was not a geologist: http://en.wikipedia.org/wiki/Continental_drift#Rejection_of_Wegener.27s_theory

(2) I have been told that many palaeontologists were initially unwilling to believe Alvarez’s evidence that the dinosaur extinction was caused by an asteroid impact, in part because Alvarez was an outsider (a geologist).

Incidentally, in a recent interview with Marshall he commented that even today some doctors believe that ulcers are caused by stress and, accordingly, don't prescribe antibiotics. It's a rather sad state of affairs.

(3) Ignaz Semmelweis, the 19th-century doctor who went mad because his colleagues across the continent wouldn't accept his amply supported procedure of disinfecting hands before delivering babies, which would have avoided untold suffering.

I think a lot of times people write proofs by contradiction (of simpler propositions than P≠NP!) unnecessarily. It can often make an argument more readable to turn it around and write it directly instead of arguing ad absurdum.

Indeed, that was on my mind while expanding the first paragraph of Dick’s proof-by-contradiction section. The example I had in mind is, we can phrase proofs-by-diagonalization as proofs-by-contradiction. But they really aren’t—they are constructions of an object that lies outside a specified range. This is so even without going as far as the issues Dick raised here and before.

My query is: Should we expect the constructive translation of proofs of class-separation statements to be felt overtly? At the research-intuitive level, not trying to be formal, I’m asking whether extensive use of for-contradiction assumptions is likely to be helpful, or a diversion from a straightforward witness-style “Book” proof that we should be seeking from the get-go…

I think that this is a pretty good goal in general — proofs by contradiction hide the very thing that is causing the contradiction. For instance, it could be that a much smaller assumption could provide the contradiction. Once you can determine the minimal assumption that will provide the contradiction, you really get at the essence of the problem.

Coming at it a different way, it’s not necessarily the case that everything can be turned around — just because something exists doesn’t mean that it has a simple description, or even one that requires only a finite number of statements to describe.

I think that we’re luckiest when we can get a constructive proof — these are the most revealing, in a way: “Witness this object that I present for your viewing pleasure! Reproduce it with your own tools if you prefer!”

My two favorite examples are Robinson’s demonstration of a small series of cuts sufficient for the Banach-Tarski paradox, and Jones’ small polynomials that are universal diophantine. 100% demonstrated, unconditionally satisfying. Knowing that these examples exist, but not having them in hand is quite a bit different from knowing that they can be written down in a few lines/pages.


A good technique to shield oneself from the dangers of spurious contradictions may be to try hard to turn the proof by contradiction into a direct proof. This is not easy, even for relatively simple proofs, but may produce a more robust proof, or alternatively help one discover the flaw.

It is actually not a black-and-white situation here; there are several levels of boldness in the contradiction. For example, you may turn a proof of "if P = NP then … absurd" into a proof that "this algorithm's problem is in NP but not in P, for this reason." The proof that it is strictly outside P may rely on a contradiction, but the overall proof is still "more constructive," in the sense that you have extracted algorithmic content from the previous proof.

Amateurs do have a hard time getting taken seriously, and to help with this I’ve started the quantropy.org website, which is a repository for academic papers to which anyone can submit a paper. My hope is that those who submit papers can get feedback on what is wrong with the paper and what they could do to improve it. I believe that there are people willing to give this advice, but that trying to get advice by emailing the best known people in a field isn’t likely to work.

Stephen,

I will look at this site. Thanks for doing this.

This is a great idea! It would be brilliant if you could make it sufficiently well known that university departments/lecturers referred unsolicited papers to the website by default. It seems a shame that currently there’s a huge hurdle between doing research on a problem and getting published if you’re not within academia, but I’m sure there are plenty of people who would happily read through amateur papers and give feedback.

Stephen,

I would love to add my paper there.

It seems the website is still “under construction”– or is it down right now?

Thanks,

Hi Javaid,

It should be working ok, if you go to http://quantropy.org . It hasn’t got many papers yet, and of course there are plenty of things I’m planning to improve about the site (one of which is that http://www.quantropy.org doesn’t point to the site – maybe that was the problem)

But anyone can already submit to arxiv.org? Seems to me your website might just end up being a place where people who publish end up with a bad reputation; or just kind of a “honeypot” for crackpot papers.

Well arxiv.org has the endorsement system, which gets back to ’emailing the best known people in a field’. More importantly though, I think that there are a lot more amateurs out there looking more for feedback on their ideas than for fame. I don’t think that arxiv.org is set up to provide feedback, but my hope is that quantropy.org will be able to do so. Also, anyone reading the abstract will be able to read the feedback, and so decide whether the paper is worth reading.

I’m having trouble understanding the paragraph that starts with “The difficulty is what if the contradiction comes not from the assumption that P = NP, but rather from some error in the proof of one of the lemmas or theorems?”

Isn’t it true generally that if there is an error in the proof of one of your lemmas or theorems that your result may be wrong?

How is PBC more dangerous this way? Is it because, with a proof not-by-contradiction, an erroneous lemma is only dangerous if it is wrong in some particular way, whereas with a PBC any incorrect lemma leads to a contradiction?

Also, where can I read more about P ≠ NP being Π₂?

This was my thought as well (ja).

Terry Tao had some great thoughts on rewriting proofs by contradiction to be clearer. Can try to find the post if google fails.

Here are those links:

http://terrytao.wordpress.com/2009/11/05/the-no-self-defeating-object-argument/

http://terrytao.wordpress.com/2010/10/18/the-no-self-defeating-object-argument-revisited/

http://terrytao.wordpress.com/2010/10/18/the-no-self-defeating-object-argument-revisited/ perhaps?

I feel like if people have confidence in their proof of P v NP, it shouldn’t be hard to get someone to listen. (If they don’t have confidence, they shouldn’t be asking for others’ time.) Why not offer $1000 to a reviewer, since you have $1mm coming in a couple years?

Incidentally, I hope that sometime in my lifetime, computer proof systems become sophisticated enough that ordinary math papers can be comprehensibly written using them, allowing people to catch their mistakes early and greatly streamlining the process of review.

In many cases $1000 would not be enough to compensate the reviewer for (a) the time spent and (b) the risk that his counterpart is a crackpot. By no means all correspondents fall into the latter category, but those who do tend to get into vehement disagreements about being cheated.

The legitimate correspondent also is more likely looking for feedback than an acknowledgment of proof, and therefore has no serious expectation of $1MM in the future. That makes a $1000 cost per mistake in the proof seem a fairly high hurdle.

+1000!

Noah, in the discussion there, provides an argument that contradicts simulations. I do not blindly trust my abstract construction, and therefore try to test it when possible with simulations. His argument contradicts my simulations and published simulations (see note here).

At this point, he represents the most credible voice among the professionals familiar with the approach, who in my opinion could at least point out the literature to dig into, or explain the correct state of the problem (which I think he has failed to provide). I have a feeling that the intention instead is to put a BIG RED FLAG on me, instead of on the approach.

There is little point in top experts looking at an amateur's solution, because quite possibly it has an error that a grad student or postdoc could find; and if not, then the work becomes interesting for them to check themselves. Therefore, students and postdocs are a natural interface between amateurs and professionals.

Dear Dick, thanks for the place where people can attempt to explain their intentions, and expose crazy ideas (some of them may be “crazy enough to be genius”).

Open question: can someone point me to the literature that resolves this issue? Or has my digging through the literature in the dark led me to the correct conclusion?

I also wonder how many people work at the intersection of real algebraic geometry and computational complexity; meaning, if Noah dismisses me, can I find another person to talk to, or must I do it all by myself?

Thanks.

Hi Dick,

It’s a bit surprising to see the follow-up on your blog, but OK, it might be interesting to see what happens. Please notice that the proof is slightly different, thanks to a professional who pointed out a flaw in the version I sent you a couple of months ago.

The idea is to show that Karp reductions give NPI power, assuming the permanent problem is hard on average. In other words, P=NP if P<NP.

It avoids the relativisation barrier because the Karp reduction would not exist relative to an oracle making P unequal to NP. It avoids the natural proofs barrier because that barrier assumes one-way functions exist, and thus does not apply to P=NP proofs.

The strategy is to create two bipartite problems, both in the same K-class*, so that a polynomial reduction from BP1 to BP2 must exist, and to construct these problems so that, while reducing from BP1 to BP2, the positivity of a subinstance in NPI must frequently match the positivity of a subinstance in P.

First bipartite problem (BP1) includes:

a) an instance of an NPI problem (we want to find out if it's positive)

b) an instance of the permanent problem, chosen at (pseudo)random

An instance of BP1 is positive iff both subinstances a) and b) are positive, otherwise negative. BP1 is complete for the permanent problem.

Second bipartite problem (BP2) includes:

c) an instance of a problem in P, with the additional constraint that there must exist one and only one valid certificate.

d) a series of poly(n) instances of the permanent problem, chosen with the following procedure: the certificate for c) is used to produce a series of seeds (every bit of the certificate is taken, then one in every two, then one in every three, etc.), then a given pseudo-random generator takes these seeds to produce a series of numbers, and these numbers refer to the x-th instances of the permanent problem. The idea of this ugly procedure is to ensure that the instance is as difficult as if it were randomly chosen.

Second subinstance d) is positive if half or more of the instances of the permanent problem are positive, otherwise negative.

An instance of BP2 is positive iff both subinstances c) and d) are positive, otherwise negative. BP2 is complete for the permanent problem.

The idea is then to show that while reducing from BP1 to BP2 a) must match c), otherwise either b) or d) are special instances easier to solve than the average case.

The strategy is to list all a) + b) => c) + d) reductions according to whether each subinstance is positive or negative. Scrutinizing the 2^4=16 cases demonstrates that:

-6 cases can be excluded because BP1 is positive and BP2 is not, or vice-versa.

-6 share the property that when c) is positive or negative, then a) is positive or negative, respectively.

-2 cases include a) positive and c) negative, so that b) could be decided in NPI time.

-2 cases include a) negative and c) positive, so that d) could be decided in NPI time.

The last 4 cases would violate the assumption that the instances of the permanent problem, namely b) and d), are harder to solve on average than NPI in worst case. These 4 cases then cannot occur too often, which demonstrates that a) and c) must agree most of the time.

The last step is to take the majority of the answers produced by this procedure using a series of b) chosen at (pseudo)random. There seems to be no need for true randomness, but this is an issue I'm not sure of. Any help welcome.

* Here I'm using a definition that Dick Lipton provided in a previous post.

Historically, a pretty strong case can be made that belief-driven searches for deep proofs and/or transformational technologies have failed about as often as they have succeeded … thus showing that STEM faiths can blind us as easily as they illuminate.

Here are four examples of belief-associated "aspect blindness" (as Wittgenstein called it) occurring in STEM contexts.

(1) The pioneering plasma physicist Lev Artsimovich was fond of asserting "Fusion power will be there when society needs it." Partly in consequence of this STEM faith, which was widespread in the 1960s–80s, and partly in consequence of the need for new energy sources, the technological challenges associated with dynamical plasma instability were severely underestimated for many decades. Peter Kapitza's 1979 Nobel Address, "Plasma and the controlled thermonuclear reaction," gives a very readable overview. Von Neumann's hopes for weather control—"The stable we will predict, the unstable we will control," a point of view whose deficiencies Freeman Dyson has discussed—were similarly over-optimistic.

(2) Both on his weblog and on MathOverflow, Tim Gowers has asserted "I don't myself believe that there is some fundamental way in which humans are better at mathematics than computers," and he has asked for examples of (in essence) non-computational mathematical creativity. Without venturing an opinion myself, Claude Shannon's article "Von Neumann's contributions to automata theory" (1958) contains a striking passage. It seems to me that in that passage, Shannon and von Neumann are (in effect) arguing against Gowers' (and Turing's?) thesis that math-is-computation, and in favor of a Chomskyan (Gödelian? Thurstonian?) view in which the deep language of mathematics—and thus presumably the deepest truths of mathematics—is inaccessible to our conscious minds, and thus very difficult to realize by any artificial intelligence.

Hmmmm … how do our “primary language” brain centers regard proofs-by-contradiction? It may well be a long time before we know how best to answer questions like this!

(3) The central dogma of molecular biology—from roughly 1958 to 2000—was that information flowed from gene to protein, never in the reverse direction; see for example Crick's 1970 review article "The central dogma of molecular biology." Nowadays we appreciate that the central dogma is oversimplified to the point of being just plain wrong; see for example Johnson and Desplain's review "Stochastic mechanisms of cell fate specification" (2010) and/or Hubner and Spector's review "Chromatin dynamics" (2009). As with plasma systems, gene expression has turned out to be vastly more dynamical, and vastly less information-driven, than was appreciated by Watson and Crick's generation.

(4) Similar to both plasma systems and biological systems, we are gaining an appreciation that quantum systems too are vastly more dynamical, and thus (relatively) less information-driven, than was appreciated in previous decades. Here the central dogma of quantum mechanics, until recently, was expressed by version 1 (2002) and version 2 (2004) of ARDA's Quantum Information Science and Technology (QIST) Roadmap, that dogma being that quantum systems are described largely by their symmetries, and/or their conservation laws, and/or their flow of information, which in aggregate are sufficiently well understood that a reasonable QIST goal for 2012 was thought to be "systems on the order of 50 physical qubits [that] exercise multiple logical qubits through the full range of operations required for fault-tolerant QC, in order to perform a simple instance of a relevant quantum algorithm." Obviously we're not very close to achieving this objective, in large measure because quantum noise mechanisms have turned out to be far subtler, and far more ubiquitous, than was first appreciated.

If there is a lesson here, it is perhaps that we humans have a very great love for systems whose dynamics is elegantly determined by symmetries, conservation laws, and the deductive flow of information … and yet we don’t live in that world. Rather, we live in a world in which fusion power, human-type AI, gene-driven biological understanding, and nontrivial quantum computation—and a host of enterprises associated to these broad programs—have proven to be, if not infeasible, immensely more difficult of achievement than was initially appreciated.

This is very good news for 21st century researchers … it means there is plenty of work to do, plenty of urgent challenges to be met, and plenty of scope for new ideas as to how to meet these challenges.

Perhaps the various speakers and attendees at next week's TEDx/Caltech, "Feynman's Vision: the Next 50 Years," will provide us with some imaginative new roadmaps!

As much as I agree with all your points above, and especially that "we don't live in that world [of elegant dynamics]," I strongly disagree with your conclusion. You are actually proposing that we keep on with our obviously ineffectual approaches to research, which amount to emptying the ocean with a teaspoon.

Of course this provides unending employment for STEM people, but not much in terms of results; fusion and AI are indeed prime examples, and Quantum Computing is predictably the next "casualty." And… where would you get the budgets?

Even if risky, would it not be more fruitful to try to disentangle the innards of "the deep language of mathematics," in order to better assist the workings of our intellect with automated processes, in mathematics and elsewhere?

This is somehow an AI goal but it is not pursued very consistently to say the least:

If AI has made little obvious progress it may be because we are too busy trying to produce useful systems before we know how they should work.

— Marcel Schoppers, Nov 28, 1983

Applies as well to fusion and (soon) QC…

This was a very interesting topic; especially the advice at the end is useful.

If you want your proof to be taken seriously, take print-outs (hard copies), walk down to the nearest post office, affix stamps, and send them by air-mail. (If you are rich, you can send them by courier or FedEx.) ONLY then are you going to get ANY attention.

If you e-mail your PDF file to experts, it is NOT going to get attention, your email will be deleted right away; and in many cases, it will not get past their spam filters.

I think the most serious problem about proof by contradiction is that, even if you succeed in proving the opposite case is false, there’s no guarantee that this very case is true.

For example, consider the paradox “This statement is false”. If you suppose it’s true, it’s false, and vice versa. But using proof by contradiction, we can prove it’s true. The proof is simple:

1. Suppose it’s false.

2. It says it’s false, so it’s true.

3. It cannot be both true and false. A contradiction.

So it cannot be false. Then it must be true. QED

This is ridiculous, because using the same method we can prove it's false. But if we haven't noticed this fact, we may just conclude that it's true.

So there should be some consistency among the axioms you use in proof by contradiction. And you should prove this consistency before using PBC. But I don’t quite know how to formulate this consistency, especially when axioms/theorems from different fields such as complexity theory and number theory, etc., are used. How can one make sure the axioms/theorems from different fields are consistent? This problem has troubled me for a whole semester…

People tried to do this in the beginning of the 20th century. Unfortunately the project fell through when Gödel’s incompleteness theorems showed that this is impossible.

However, assuming that ZFC (the most common foundation for most areas of mathematics) is consistent, even if we can never prove this, it does not suffer from the problem you mention: it has the law of excluded middle built in, and it is explicitly designed to disallow directly paradoxical statements such as the one you mention. (It was a redesign of "naive" set theory, a previous theory which did suffer from such paradoxes and thus was inconsistent.)

Thank you, R.J. Lipton, for a most entertaining and informative blog.

Most of all, thank you for being the kind of person who truly cares about others.

Here:

http://donblazys.com/

you will find three different versions of my “Proof of Beal’s Conjecture”.

As you can see by the "letters and articles", it has been verified and confirmed by many very competent mathematicians, so its correctness is not in question.

It has also been carefully refereed by the Journal of the London Mathematical Society (University of Leeds). Unfortunately, they ran out of available publication space, but it should be noted that this is a top-notch math journal, so had the referee found even the slightest flaw, then he would not have given it any support whatsoever, and the editors would not have recommended that I submit it to another good journal.

I then submitted my proof to the Journal of the American Mathematical Society (Princeton University) only to be told that they often decline to publish papers whose correctness is not in question! (I saved and still have every one of those e-mails.)

The Immaculate Heart Community (many retired educators with impeccable credentials) has also verified and confirmed my proof, and has even suggested that the American Mathematical Society is absolutely unjustified in its actions and behavior.

Please Professor Lipton, do you have any ideas on how this situation might be rectified?

Don.

Your proof is seriously flawed. Many competent mathematicians have tried to point this out to you, but you fail to listen. To save everyone time: Don's proof assumes that 1^(0/0) is both defined and determinate, which is not true.

Sorry “Mark”,

My proof is quite correct, and it is your reasoning that is "seriously flawed". The expression 0/0 is indeterminate; it is not disallowed the way expressions such as 2/0 are. Thus, 1^(0/0) = 1, just as 1^n = 1 for any number n.

Don.

JA says:

This passage was hard to grasp for me too … until I recalled one of GASARCH's posts on Computational Complexity. GASARCH's post is titled "Social Process and Proofs of Theorems and Programs" and it focuses upon the same question as today's post, namely "How does a theorem get to be believed to be true?"

GASARCH's title refers to a 1979 article, "Social processes and proofs of theorems and programs," by De Millo, Lipton, and Perlis. Yes, Dick is one of the authors. De Millo, Lipton, and Perlis challenge us with the following idea …

The confluent message, then, of Henry Cohn's (beautiful and sensible) essay "Advice for amateur mathematicians" that Dick and Ken reference, and of the 1979 De Millo, Lipton, and Perlis article, and of the weblog essays by GASARCH and by Dick and Ken, is these three principles:

(1) A good mathematical article conveys a clear message.

(2) Associated to its message is a “proof that makes us wiser.”

(3) Proofs make us wiser when they are short and constructive.

Conversely, oftentimes when reading a lengthy proof-by-contradiction it is difficult to understand the message, and therefore difficult to feel wiser at the end of the proof. Such proofs provide no natural answer to the question "At what step in following the proof did I begin to feel wiser?", and this leaves us with a feeling of frustration, even when the proof is correct.

It is even more difficult to answer the question “At what step in the proof-by-contradiction might there be a mistake?”, and for amateurs especially, this is the gravest risk.

Steve Uurtamo’s post above had a passage that I thought was truly wonderful:

Here Steve has concisely and beautifully expressed what seems to be the consensus opinion of Cohn, Lipton, Regan, GASARCH, De Millo, Perlis, and perhaps most practicing mathematicians. For which, thank you, Steve!

I would like to see, for every great unsolved mathematical problem, a book or paper that explains to amateurs the existing approaches to that problem. A big and great example is Paulo Ribenboim, Fermat's Last Theorem for Amateurs (Springer-Verlag, 2000, ISBN 978-0-387-98508-4). It is very well written, even easy to read, yet it also covers advanced topics and gives very good references for further reading. I wish it would become a standard, for the P != NP problem as well as for other problems!

Hmmmm … as a followup to the above, since Cohn, Lipton, Regan, GASARCH, De Millo, Perlis, et al. are expressing a Great Truth ("proofs-by-contradiction are deprecated, constructive proofs are preferred"), this means there must be examples out there of nonconstructive proofs that are terrific.

One classic example is Shannon's (wholly nonconstructive) proof of the Coding Theorem (1948). Even today, the LDPC codes and the "Turbo" codes that approach the Shannon limit are not easy to construct. They are good examples of the many algorithms that we all use daily, having immense practical and economic importance, that effectively are in P even though they are not provably in P. This usefully keeps us reminded of how mysterious the complexity class P really is … as Ketan Mulmuley is fond of pointing out.

We conclude that nonconstructive proofs sometimes focus our attention upon mysteries that are enduring, important, and beautiful … we should cherish these methods, not deprecate them!

This is totally tangential, but I was a little amused by your description of "Prions … work in an incredibly novel way": presumably they have been working like that for millions of years; it is our understanding which is novel, not their behavior.

Alan,

Of course. We try to make this some approximation of coherent English, but that is not always possible.

I couldn’t find an “interesting point about proofs by contradiction” in the (otherwise wonderful) article that sentence linked to. Probably this was the one intended:

http://research.microsoft.com/en-us/um/people/cohn/Thoughts/contradiction.html

All of the essays on Henry Cohn's website Random Mathematical Thoughts are worth reading … elsewhere I have posted about how the reasoning of Cohn's exceedingly well-written essay "Why symplectic geometry is the natural setting for classical mechanics" applies as well to quantum systems as to classical systems.

Dear Dick,

I wonder why you give the convergence/non-convergence issue any importance. It is easy to construct a language L such that L is in NP iff L is not in NP. I'm sure you know how to do this via the Kleene-Rosser Paradox.

Your wishful thinking of a consistent ZFC continues to ignore the only conjecture which is the eventual fate of P vs. NP, namely: P=NP iff P!=NP. The proofs at http://kamouna.wordpress.com have been under review for 3 years; the last journal has taken a complete year so far.

So, it might well be that many proofs of P=NP and P!=NP are equally correct. But the proof of P=NP iff P!=NP is the convergence you are looking for, dear Dick.

Rafee Kamouna

Dear Ken,

Proofs by diagonalization are proofs by contradiction. Instead of saying they are constructions of an object outside a specified range, I suppose they are a construction of a missing element in an infinite sequence. You establish your argument by saying it is missing because, if it is not missing, I shall give you a contradiction.

In the paper "A Turing Machine that Diagonalizes Against Itself", you encode a contradiction into both the diagonal element and its complement.

Although the paper proves L in NP iff L not in NP, it disproves your statement and confirms that diagonalization arguments (not proofs, to be precise) are all in all proofs by contradiction.

cf. the Baker-Gill-Solovay theorem; full harmony, you are quite sure.

I enjoyed your comment,

Best,

Rafee Kamouna.

Interestingly, if every sequence of languages in P converged to a language in P, not only would many proofs of P neq NP go through (as claimed in the post), but one could also prove quite easily that P=NP: consider the sequence of languages where L_i consists of all satisfiable formulas of length at most i. The limit of this sequence is SAT, so under our (false) assumption of convergence, SAT is in P.
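To make the example concrete, here is a toy sketch in Python. The encoding (formulas as Boolean expressions over three variables) is an assumption of this illustration, not part of the comment; the point is that each L_i is a finite set, hence trivially decidable in P by table lookup, while the union of all the L_i is exactly SAT.

```python
from itertools import product

def satisfiable(formula: str, variables=("a", "b", "c")) -> bool:
    """Brute-force SAT check: try every truth assignment."""
    return any(
        eval(formula, {v: bit for v, bit in zip(variables, bits)})
        for bits in product([False, True], repeat=len(variables))
    )

def L(i: int, formulas) -> set:
    """L_i: all satisfiable formulas of length at most i.
    Each L_i is a FINITE set, so it is decided in P by table lookup."""
    return {f for f in formulas if len(f) <= i and satisfiable(f)}

# A tiny pool of candidate formulas standing in for "all formulas".
pool = ["a", "not a", "a and not a", "a or b", "(a or b) and not a"]
tables = [L(i, pool) for i in range(0, 30, 5)]
# The tables grow monotonically, and their union over all i is SAT
# (restricted to the pool); this is why pointwise convergence of
# P-languages to a P-language would (falsely) put SAT in P.
```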

My proof is that SAT is both NP-complete and (NOT) NP-complete. I don't bother about whether SAT is in P or not. Furthermore, a language is constructed such that L is in NP iff L is not in NP.

Dear Josh,

Can you tell me the importance of convergence after these results?

Kamouna, you rock!

I think that the contribution of amateurs and semi-amateurs would be more substantial if they focused on easier problems instead of super-tasks.

There is no channel to communicate between amateurs and professionals.

There is a presumption that someone is working on the problem for the recognition and fame.

I know my limits, and know that writing a technically correct version will take an infinite amount of time. I also know that any claim should show why the quartic polynomial I'm using is not representable as a sum of squares. As far as I know, no one has shown it, and there are numerical works consistent with the claim. Moreover, there is no reason (if you think about it) why some of the cases should be exceptional. What is bothering me is that there almost exists a solution, and no one pays attention to it. Therefore, I promise to remove all my references to the P vs NP question from the web when someone incorporates the main idea in a technically correct way in some paper. I'm ready to share all my experience with the problem, and still remove all the references; I have enough things to work on.

Once again, there is no channel to communicate, and the intention of professionals is to shut the conversation down; and once again, you practically have the P vs NP solution, and I know that whatever I write works against me.

There is a way out of your predicament.

The last picture is a very good example of an amateur-professional communication channel. It would be good to have one like this with mathematicians 😉.

I don't understand why NP != P is a Π₂ statement.

I can see that a fixed-polynomial circuit lower bound would be a Π₂ statement: "for all circuit families (C) of size n^d, there is a language in NP that (C) cannot compute."

But a super-polynomial circuit lower bound seems to be a Σ₂ statement: "there is a language L in NP such that for all d and for all (C) of size n^d, (C) cannot compute L."

I wrote the above in terms of P/poly instead of P, because the P analog of the above Π₂ statement is trivially true by the deterministic time hierarchy theorem.

What am I missing?

The statement is roughly: for all programs x and all k, there exists an input y such that the Turing machine M_x on input y does not get SAT right in n^k time.

This is a Π₂ sentence. By the way, there could be other encodings that are better; that is an open problem.
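Written out (a sketch; the exact encoding varies, and n denotes |y|), the sentence is:

```latex
\mathrm{P} \neq \mathrm{NP} \;\equiv\;
\forall x \,\forall k \,\exists y:\;
\neg\bigl( M_x \text{ halts on } y \text{ within } n^{k} + k \text{ steps}
\text{ and outputs } \mathrm{SAT}(y) \bigr)
```

The inner predicate can be checked by directly simulating M_x on y for n^k + k steps and brute-forcing SAT(y), so it is decidable; with only a ∀∃ prefix over a decidable predicate, the sentence is Π₂. The Σ₂ form in the question arises from quantifying over languages; quantifying instead over explicitly clocked machines removes the leading existential.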

Regarding the open question: the answer is obvious; work in a system where the assumptions and bounds and whatever else can be checked automatically. Indeed, this is something I've thought about, and it is being worked on, I think, in programs like Mizar: http://mizar.org/

Clearly, the problem with this approach is that it's not easy to use, and much time must be spent in the setup. And, I guess, it's perhaps not a "natural" way for proofs to be constructed; I'm unsure about that, as I'm not a professional.

I've documented my thoughts on a system like this here – http://www.mirios.com.au:8081/index.php?title=Systematic_Visualised_Proof_Development_%28SVPD%29 – and would be interested in hearing from anyone who thinks this is valuable to work on. My personal opinion is that it is, but first and foremost it must be *usable* and also *useful*. These two requirements are in a bit of a conflict, though, because the system would have to be formal to be useful (i.e., to detect points of failure in a proof) and not-so-formal to be easy to use. Nevertheless, I do still think that this situation is clearly the ideal one, and whether or not it can be realised is up for debate (and worth review, in my mind).

It is striking that the Mizar checking process is a disprover: each claimed inference is negated, adjoined to the premises associated with that inference, and the Mizar checking engine searches for a contradiction, drawing upon Mizar's (vast! and growing!) library of prior inferences.

Thus, at least internally, for Mizar all of mathematics is a linked set of proofs-by-contradiction.

According to many authors, and according to the De Millo, Lipton, and Perlis article "Social processes and proofs of theorems and programs" (1979) in particular, this is *not* the way that humans think about mathematics, and perhaps this is one reason that Mizar-style proof-validation systems have not become popular.

So maybe Claude Shannon and John von Neumann were right: "We are still totally unaware of the nature of the primary language for mental calculation."

And yet, on the other hand, it seems plausible too that a claimed inference that cannot in principle be checked by a Mizar-style proof-by-contradiction algorithm is not a valid inference.

So it seems that mathematics can be understood in multiple compatible ways … Bill Thurston and Saunders Mac Lane have both argued for this point-of-view.

Well, I think there is a primary "language"; it's just not formal. The problem is wanting formal constraints but not wanting them when they're "not necessary". The hard part, obviously, is weighing that up. I tend to think that perhaps formality can be "added" while still letting the proof develop. But I have no experience with professional proofs.

Taken from Section 5, “Halting is a problem”, by Scott Aaronson @ https://stellar.mit.edu/S/course/6/sp08/6.080/courseMaterial/topics/topic1/lectureNotes/lec4/lec4.pdf

>We argue by contradiction. Let P be a Turing machine that solves the halting problem. In other words, given an input machine M, P(M) accepts if M(0) halts, and rejects if M(0) instead runs forever. Here P(M) means P run with an encoding of M on its input tape, and M(0) means M run with all 0’s on its input tape. Then we can easily modify P to produce a new Turing machine Q, such that Q(M) runs forever if M(M) halts, or halts if M(M) runs forever. Then the question becomes: what happens with Q(Q)? If Q(Q) halts, then Q(Q) runs forever, and if Q(Q) runs forever, then Q(Q) halts. The only possible conclusion is that the machine P can’t have existed in the first place.
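The quoted diagonal construction can be sketched in code. This is an illustration only: `halts` stands in for the hypothetical decider P, which (as the proof shows) cannot actually be implemented, so here it simply raises.

```python
def halts(machine, arg) -> bool:
    """The hypothetical decider P for the halting problem.
    No real implementation can exist; this stub just raises."""
    raise NotImplementedError("no halting decider exists")

def Q(M):
    """The machine Q from the proof: do the opposite of what P
    predicts M does when run on itself."""
    if halts(M, M):      # P says M(M) halts...
        while True:      # ...so Q deliberately runs forever,
            pass
    else:                # P says M(M) runs forever...
        return           # ...so Q halts immediately.

# Now consider Q(Q): if halts(Q, Q) returned True, Q(Q) would loop
# forever, contradicting P's answer; if it returned False, Q(Q) would
# halt, again contradicting P. Either way P answers wrongly on some
# input, so no such P can exist in the first place.
```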

This is Scott's transcription of Turing's proof that the Halting Problem is undecidable. Assuming that this is a reasonable and accurate transcription of Turing's proof, as given in "On computable numbers, with an application to the Entscheidungsproblem", then I think it's reasonable to ask:

1) How does this prove that the halting problem is undecidable?

2) While the case of "Q(…)" is considered, it would seem to me that the case of "P(Q(…))" is not properly considered, nor is it shown why "P(…)", where P is at the "root", is in some strong sense "guaranteed" to be impossible and therefore cannot "exist".

3) Unless one can properly rule out the possibility of "P(…)" under all possible circumstances and permutations, it would seem to me that the conclusion of the proof cannot follow.

Regardless of whether or not this is a legitimate case of an “invalid” proof (i.e., possibly a mistake in transcribing the original, though I am of the opinion that it is faithful to the proof given by Turing), I believe that there is a bigger problem that you didn’t touch on:

Once an incorrect "proof" has been accepted as valid, it is virtually impossible to invalidate that "proof". Your stomach-ulcers example does not capture how absolutely overpowering the "conventional and accepted wisdom" was prior to this discovery:

> “Yet it took decades for them to get the mainstream medical profession to take their ideas seriously.”

Decades, and ultimately one of the researchers being forced to infect himself, because the other doctors' belief in the accepted dogma was so strong that none of them would perform a simple, trivial test… a test that had virtually no downside if the hypothesis was wrong, but nearly limitless upside if it turned out to be correct.

What if Scott's transcription of Turing's proof is "reasonably accurate and faithful"? Without getting into the details of why or how, what would the consequences be for P vs NP if Turing's proof turned out to be invalid? How many people do you think have made a serious attempt to show that it was invalid? Do you honestly think that you "would have heard about it"? IMHO, no, you wouldn't. Who would want to spend that much effort and put their reputation on the line defending such a controversial idea? We'd like to think we'd welcome such results, but history tells us something far, far different. It's the rare individual who is willing to see it through all the way to the end.

I recommend reading Michael Sipser's "Introduction to the Theory of Computation". It helped me a lot to understand the area, because he doesn't assume a lot of prior knowledge and takes care to explain how the proofs work. No kidding, he starts by introducing sets. He not only gives the proof of the undecidability of the halting problem but also shows how the proof is related to the diagonalization method. You should also note that an invalid proof doesn't show the falsity of the statement: the halting problem may still be undecidable even if a given proof is invalid.

I definitely fall into the category "IT specialist with some formal math education". But I still benefit from the fact that my first hard math teacher was a logician, so I had to learn the fundamentals of correct proofs. Timothy Gowers wrote, "If a mathematician produces an argument that is strangely unconvincing, then the best way to see whether it is correct is to ask him or her to explain it more formally and in greater detail. This will usually either expose a mistake or make it clearer why the arguments work." [in "The Princeton Companion to Mathematics"] I like that statement very much. It explains why an amateur needs to "walk like an Egyptian" to convince the experts of the field. The professionals cannot be blamed for it, because the process of creating mathematical knowledge requires the capability to explain one's ideas more formally.

kune,

About Mike Sipser: he was in my automata class years ago at Berkeley. I believe I can say that I taught him something he knows.

I can't comment on Sipser's book as I do not have a copy.

If it's not too much trouble, perhaps you could check and see if the book's description of the Halting Problem follows the same pattern. Basically: H can, by definition, "decide" the halting problem. Through some construction, a second program is created that uses H to do "exactly the opposite of H" (called Q in Aaronson's description). This second program, Q, is used in a proof by contradiction to show that H cannot exist, typically along the lines of "Q(Q(…))".

However, and most importantly, the proof never considers the case of "H(Q(…))" or "H(Q(Q(…)))" before 'proving' that H cannot exist. And really, there is exactly one case that needs to be considered: "H(…)". Anything other than H as the "top-level" program is irrelevant to a halting proof. So while "Q(Q(…))" might be undecidable, the halting problem is undecidable if and only if "H(Q(Q(…)))" is undecidable.

Sipser's book is definitely cool! He manages to solve problems with high-level descriptions while convincing you the proofs are reasonable. A 'proof idea' is provided before each proof to give the sketch and make the proof easy to follow. It greatly satisfies my principle for choosing a book: "clarity, continuity, conciseness".

For the halting problem, at least two methods can be used. One is reducing from A_{TM} = {\langle M,w \rangle | M accepts w}, which is undecidable (by diagonalization or the Recursion Theorem). The other is directly using the Recursion Theorem.
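The first method (reducing from A_TM) can be sketched as follows. This is an illustration, not Sipser's text: `halt_decider` and `simulate` are hypothetical stand-ins for a halting-problem decider and a universal simulator.

```python
def decide_A_TM(M, w, halt_decider, simulate):
    """Decide whether machine M accepts input w, GIVEN a hypothetical
    decider for the halting problem. Since A_TM is undecidable, no
    such halt_decider can exist; that is the reduction's point."""
    if not halt_decider(M, w):
        return False          # M loops forever on w, so it does not accept w.
    return simulate(M, w)     # Safe: the simulation is guaranteed to finish.
```

For instance, plugging in toy stand-ins (`lambda M, w: True` for the decider and an accepting simulator) shows the control flow; the contradiction is that no genuine `halt_decider` can be supplied.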

And the exercises in the book are also very valuable, and at a proper level of difficulty. I finished most of them up to Chapter 7, learned a lot, and felt really excited. But a few problems, such as "A language decided by a TM in o(n \log n) time is regular," got me stuck… Does anyone have hints for this problem?

Johne, you might want to take a look at Freek (pronounced “Phrake”) Wiedijk’s web page Formalizing 100 Theorems.

Starting from a (somewhat arbitrary) list of 100 "greatest" theorems, Wiedijk establishes that (as of 1999) 85 of these theorems have been formally verified … including Gödel's first incompleteness theorem (for example).

What is really remarkable (IMHO) is that so far, there have been *no* discrepancies discovered.

So although we don’t (at present) have a clear understanding of the verification processes that expert mathematicians use in assessing whether a proof is “right” or not—except we know that human theorem-checking processes are *not* similar to computer theorem-checking processes—Wiedijk’s review suggests that proof verification by human experts is very reliable.

Dear Dr. Lipton,

There is a paper at arxiv4.library.cornell.edu/pdf/1009.0884v1 which may be of interest to you.

The author is trying to explain that the conception of the iterative set introduced by Gödel was exactly the logic that made Gödel foresee P = NP. Although the conception of the iterative set is not new, it was not very well defined by Gödel either. The author defines a new "data structure" of bijection. The novelty is that one set is a set of classes and the other is a set of objects; there exists a one-to-one relation between the two sets, and there also exist iterative class-member relations. Based on this new data structure, one can process reducibility in polynomial time. That is, reducibility is not only a notion; it is the computable inverse function that we are looking for.

An amateur's weakest point in tackling a monster like P vs NP is lack of humility.

Even the tiniest-looking problems can bring down the fittest fighters. Wait until you have been defeated a hundred times before you make a fool of yourself with P vs NP.

Personally, I don't care much for "proof"; if anyone needs to form a text in order to make sense of their (most often rambling) argument, they are wrong by default. I've looked into this P=NP? problem at times but find the problem as such hard to understand; what is the big fuss, really? Unless the problem can be presented in a way where an application of the proof can be applied, it really isn't much use to anyone. Most mathematical "problems" follow this line; the issue is more focused on writing a paper that will stand peer review instead of on whether it works.

I have understood that the P=NP problem is found in classic Minesweeper (according to Ian Stewart), and this gives us a way to apply the evidence of P=NP (assuming it is true) if we ever design a machine that can solve Minesweeper under given conditions. Let's say someone out there has such a machine; would they still need to prove that the problem is solved?

The answer to this question is yes (even though it shouldn't be); a problem like P=NP will not under any circumstances be allowed to be solved by some kid loitering about in his parents' basement. Unless the "proof" is presented by an acknowledged mathematician, it will simply be discarded no matter how it is presented. There are enough inventions and discoveries that have been stolen throughout history for this fact to be self-evident to anyone who cares to look at the world we are living in.

A good point. A little humility would greatly reduce the number of P vs NP papers on the internet.

Can you please clarify what the definition of the limit of a sequence of [formal] languages is? http://en.wikipedia.org/wiki/Formal_language#Operations_on_languages doesn't list such an operation.
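For what it's worth, one common pointwise formalization is the following (an assumption here; the post's own definition may differ):

```latex
L = \lim_{i \to \infty} L_i
\quad\Longleftrightarrow\quad
\forall x \;\exists N \;\forall i \ge N:\;
\bigl( x \in L_i \;\leftrightarrow\; x \in L \bigr)
```

That is, every string's membership eventually stabilizes and agrees with L.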

It would be nice if we could evaluate, for a given conjecture, the probability that mankind will eventually solve it. For the moment we only have "practically one-way" functions, i.e., functions that haven't yet been inverted in polynomial time, and NP-complete problems that haven't yet been solved in polynomial time. If it could be shown that the probability of succeeding is zero, we could try to solve other conjectures without feeling anxious about the security of secret codes.