How to Check a Proof
What to do about claims of hard theorems?
Shinichi Mochizuki has claimed a proof of the famous ABC conjecture since 2012. It is still unclear whether or not the claimed proof is correct. We covered it then and have mentioned it a few times since, but have not delved in to check it. Anyway, it is probably far above our ability to understand in any finite amount of time.
Today I want to talk about how to check proofs like that of the ABC conjecture.
The issue is simple:
Someone writes up a paper that “proves” that X is true, where X is some hard open problem. How do we check that X is proved?
The proof in question is almost always long and complex. So the checking is not a simple matter. In some cases the proof might even use nonstandard methods and make it even harder to understand. That is exactly the case with Mochizuki’s proof—see here for some comments.
Possible Approaches
Let’s further assume that the claimed proof resolves X, which is the P vs. NP problem. What should we do? There are some possible answers:
 Ignore: I have many colleagues who will not even open the paper to glance at it. Ken and I get a fair number of these, but I do at least open the file and take a quick look. I will send a message to the author—it usually is a single author—about some issue if I see one right away.
 Show Me The Beef: I firmly believe that a proof of an open problem should have at least one simple-to-state new trick or insight that we all missed. I would suggest that the author must be able to articulate this new idea: if they cannot, then we can safely refuse to read it. I have worked some on the famous Jacobian Problem. At one time an author claimed they had a proof and that it was just “a careful induction.” No. I never looked at it because of the lack of “beef,” and in a few weeks the proof fell apart.
 Money: Several people have suggested—perhaps not seriously—that anyone claiming a proof must be ready to post a “bond” of some money. If someone finds an error, they get the bond money. If no one does, or even better if the proof is correct, then the money can be donated to one of our conferences.
 Hire: I have seen this idea just recently. The author posts a request for someone to work on their paper as a type of consultant. They are paid a fair hourly rate and help find the error.

Timeout: An author who posts a false proof gets a timeout. They are not allowed to post another paper or submit a paper on X for some fixed time period. Some of the top journals like the Journal of the ACM already have a long timeout in place. The rationale behind this is that very often, when an error is found in such a paper, the author quickly “fixes” the issue and reclaims the result. In Stanislaw Ulam’s wonderful book Adventures of a Mathematician he talks about false proofs. Here “he” refers to an amateur who often joined Ulam at his habitual coffeehouse:
Every once in a while he would get up and join our table to gossip or kibitz. Then he would add, “The bigger my proof, the smaller the hole. The longer and larger the proof, the smaller the hole.”

Knock Heads Together: Oxford University hosted in December 2015 a workshop to examine Mochizuki’s claimed proof, including contact by Skype with Mochizuki himself. A report by Brian Conrad on the MathBabe blog makes for engaging reading—we could quote extensively from its concluding section 6. This shorter news report cited feelings of greater understanding and promise but lack of definite progress on verifying the proof, noting:
…[N]o one wants to be the guy that spent years working to understand a proof, only to find that it was not really a proof after all.

 Share The Credit: Building on the last point, perhaps proper credit can be given to someone who does spend a great deal of time working on trying to understand a long proof. If they find an unfixable error, then maybe they can publish that as a paper—especially if the error is nontrivial and not just a simple one. If they show that the proof is indeed correct, could they be rewarded with some type of coauthorship? Maybe a new type of authorship:
P Does Not Equal NP: A Proof Via NonLinear Fourier Methods
Alice Azure with Bob Blue
Here the “with” signals that Alice is the main author and Bob was simply a helper. Recall a maxim sometimes credited to President Harry Truman: “It is amazing what you can accomplish if you do not care who gets the credit.”
Open Problems
What do you think about ways to check proofs? Any better ideas?
Good idea. But non-alphabetical author order in some cases could be another way to signal shares of credit in coauthorship.
I really like the share-the-credit idea, particularly for amateurs. Someone might understand part of the new insight, but get lost in the complexities of formalizing it.
What about automated proofs? The ideas could be formalized in a language like Coq, where checking the underlying ‘translation’ of each subpart could be less taxing than having to understand the whole proof at once.
Paul.
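Paul’s suggestion is already practical at small scale. As a toy illustration (this example is mine, not Paul’s, and uses Lean rather than Coq): once the file below compiles, the proof kernel has verified every step, so a referee needs to trust only the statement, not the argument.

```lean
-- A toy machine-checked proof (illustrative only).
-- If this file compiles, Lean's kernel has verified the proof,
-- so a reader need only check the statement, not the details.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```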
Don’t ignore it and, as you said, at least take a quick look at it. Once computer-aided proof checking is ACHIEVED, then people can submit their claims to this Oracle.
S N
There may be an added complication when attempting to correlate ‘proven’ with ‘true’. To illustrate:
1. Assume that we have proven the formula [(Ax)R(x)] in a first-order language L by some finitely enumerable set D of finitary rules that assign, to each well-formed proposition of L, either the value ‘provable’ or the value ‘unprovable’.
(Loosely speaking, the set D would correspond classically to the axioms, axiom schemata and rules of inference of both L and its underlying logic FOL.)
2. Do we, then, interpret [(Ax)R(x)] as:
(i) There is an algorithm (witness) T such that, for any given number n, T will deterministically assign to the interpreted proposition R(n) the value ‘true’?
Or as:
(ii) Given any number n, there is an algorithm (witness) T(n) that will deterministically assign to the interpreted proposition R(n) the value ‘true’?
3. Since Turing has demonstrated that Cantor’s diagonal argument establishes the existence of well-defined, but uncomputable, real numbers (e.g. Chaitin’s family of Omega constants), it can be shown (paper cited below) that whilst (i) => (ii), the converse is not true.
Obviously, any ambiguity in the intended meaning of the universally quantified L-formula [(Ax)R(x)] under interpretation—in its intended domain M—would also be reflected in the intended meaning of the existentially quantified L-formula [(Ex)R(x)] under interpretation in M, since the latter formula is formally defined as [~(Ax)~R(x)].
4. One way of avoiding such ambiguity would be to view the logic of a language as assignment, rather than entailment, of truth-values to the propositions of a language, thus:
A finite set Lambda of rules is a well-defined Logic of a formal mathematical language L if, and only if, Lambda constructively assigns (witnesses) unique truth-values:
(a) Of provability/unprovability to the formulas of L; and
(b) Of truth/falsity to the sentences of the Theory T(U) which is defined semantically by the Lambda-interpretation of L over some intended structure U.
5. If we apply the above yardstick to the first-order Peano Arithmetic PA, we find that PA admits two, essentially different, logical assignments of the truth-values ‘true/false’ to the PA-formulas under two, distinctly different, Tarskian interpretations.
(a) The first is an ‘algorithmically verifiable’ logic, which defines the classical, non-finitary, standard Tarskian interpretation SI of PA over the domain N of the natural numbers.
It is the one where a PA-provable formula [(Ax)R(x)] interprets as:
Given any number n, there is an algorithm (witness) T(n) that will deterministically assign to the interpreted English proposition R(n) the value ‘true’.
As usual, the axioms of PA interpret as algorithmically verifiable truths under SI, and the rules of inference preserve algorithmically verifiable ‘truth’ under SI.
Classically, SI defines the standard model of PA over the domain of the natural numbers.
However, since there is no method of assigning unique truth-values finitarily to quantified PA-formulas under SI, the logic that defines the standard model of PA is not well-defined, and cannot therefore be said to finitarily model PA (although Gentzen has shown that SI can be treated as non-finitarily defining a model of PA).
(b) The second is an ‘algorithmically computable’ logic, which defines a finitary Tarskian interpretation FI of PA over the domain N of the natural numbers.
It is the one where a PA-provable formula [(Ax)R(x)] interprets as:
There is an algorithm (witness) T such that, for any given number n, T will deterministically assign to the interpreted proposition R(n) the value ‘true’.
In this case too, the axioms of PA interpret as algorithmically computable truths under FI, and the rules of inference preserve algorithmically computable ‘truth’ under FI.
Classically, FI too therefore defines a model of PA over the domain of the natural numbers.
Since there is, however, always a method of assigning unique truth-values finitarily to quantified PA-formulas under FI, this is a well-defined logic that finitarily models Arithmetic.
6. The detailed arguments are given in the following paper which appears in the December 2016 issue of ‘Cognitive Systems Research’:
‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis’
The paper shows that—contrary to conventional argumentation—PA has a well-defined logic since the set of axioms and rules of inference of PA+FOL constructively assign unique truth-values:
(a) Of provability/unprovability to the formulas of PA; and
(b) Of computably realizable truth/falsity to the sentences of Dedekind’s Peano Arithmetic which is defined semantically by the computably realizable interpretation of PA over the structure of the natural numbers.
7. The added complication, then, is that even if “someone writes up a paper that ‘proves’ that X is true, where X is some hard open problem” of a language L that has a well-defined interpretation over a domain M, and X is an L-formula of the form [(Ex)R(x)], any proof of X would only tell us that there is no algorithm that, for any given x, would negate the interpretation R(x) of the L-formula [R(x)] over M.
It would not allow us to conclude further—if that is the intended claim—that there is some k for which R(k) is a true proposition in M. For that we would need both a first-order construction of [k] in L, and a proof of the L-formula [R(k)].
Regards,
Bhup
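The contrast between (i) and (ii) above can be caricatured in a few lines of code. This is only an informal sketch with made-up names (`uniform_decider`, `per_instance_witness`), not Bhup’s formal definitions: (i) demands one algorithm that settles R(n) for every n, while (ii) only demands that, for each fixed n, some program outputs the right value; the hard-coded constant program always qualifies.

```python
# Informal sketch of the (i)-vs-(ii) distinction; all names are illustrative.

# (i) "algorithmically computable": a SINGLE program decides R(n) for all n.
def uniform_decider(n: int) -> bool:
    # one uniform procedure; here R(n) is "n is even", purely for illustration
    return n % 2 == 0

# (ii) "algorithmically verifiable": for EACH fixed n, SOME program outputs
# the truth value of R(n); a hard-coded constant program suffices.
def per_instance_witness(truth_value: bool):
    return lambda: truth_value

# (i) => (ii): specialize the uniform decider at each n.
witnesses = {n: per_instance_witness(uniform_decider(n)) for n in range(5)}
values = [witnesses[n]() for n in range(5)]
```

The converse direction fails in general, which is the point of item 3: a family of per-instance witnesses need not assemble into one computable decider.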
Is reading this comment any easier than reading an NP=?P proof?
Just focus on methods. If the method is useful, it will get checked, rewritten, reused and, eventually, taught. If the method is not useful, there is nothing lost if we declare the problem still unsolved. Can anyone think of a counterexample to this last claim? [honest question]
The “Ignore”, “Show Me The Beef”, and “Timeout” approaches are reasonable. After all, we are talking about a real problem here, and those approaches do have merit for dealing with this problem in practice. The “Money” and “Hire” approaches are too specific and cannot be implemented in practice to cope with the problem, not even as a remedy for the downsides of the “Ignore” approach. No objections to the “Knock Heads Together” approach, especially for credible major claims like Shinichi Mochizuki’s inter-universal Teichmüller theory. Regarding the “Share The Credit” approach, the “Acknowledgments” section of a paper is the place where significant contributions from non-coauthors are expected to be mentioned. I guess Laszlo Babai will also use this section to praise the contribution of Harald Helfgott. Hence my advice: don’t fix it if it ain’t broken.
Let me stress again that we are talking about a real problem here. And the problem is not just papers by unknown nobodies; an unchecked paper by a renowned researcher claiming a major result might be even riskier. Vladimir Voevodsky gives a shocking account of this here: http://www.math.ias.edu/vladimir/files/2014_IAS.pdf. The most disturbing episode is his wrong paper from 1989, for which a counterexample was published by Carlos Simpson in 1998; Voevodsky finally accepted in 2013 that the counterexample was correct.
And let us also look at Carlos Simpson in this episode. Should he have challenged the authors of the wrong paper, like Justin Archer does here: https://justincarcher.wordpress.com/2016/10/05/problemscakecuttingazizmackenzie2016/ ? And if an outside observer comes along, should he try to determine who is right or wrong? And then, what is the next step? I think the goal should always be to clarify with the original authors whether there is a real gap, or whether you just misunderstood something. But life may not be that simple.
Regarding credit sharing, I have never understood why science papers are so different in this respect from movies. We could also have a problem proposed by, approach suggested by, proof found by, verified by, written by etc. instead of just listing some authors.
I agree with you completely. But you seem not to know this world of human beings. Any law or rule has this nature. It will never change in the ‘right’ way as you think.
Even for a movie, the marketing material will list the director as sort of the author of the film, sometimes the producer might share the credit for authorship. The credit section at the end of the movie is comparable to the acknowledgement section of a paper. The difference between an author and a major contributor is that an author should have a right to block the publication of a paper, if he doesn’t agree with its content. Or at least an author should have the right to demand that his name be removed from the list of authors, if the paper is to be published nevertheless. I have seen this happening in the past, and I also have seen cases where people only learned after publication that they were listed as authors (they were not amused).
Originally proofs were used to convince people that a theorem is true. Now it seems that people write proofs to convince people that their proof is true.
Mochizuki had a contrarian style, working in near isolation for many years, and that has increased the problem here. This also reminds me of the Deolalikar proof; a collective cyber peer review seemed to work well in that case. Another aspect is the massive number of pages in new complex proofs. Another case study is Perelman’s work on the Poincaré conjecture, which took experts several years to verify; it looks like Perelman had the result at an early point but had to do further major work to communicate it.
I find it funny to compare how Edward Nelson retracted his claim that PA is inconsistent after Terence Tao explained to him a serious flaw in his proof (https://golem.ph.utexas.edu/category/2011/09/the_inconsistency_of_arithmeti.html#c039590), to how Shinichi Mochizuki “updated his papers” after Vesselin Dimitrov (quoted by Terence Tao: https://quomodocumque.wordpress.com/2012/09/03/mochizukionabc/#comment10863) located a robust family of counterexamples to the main Diophantine inequality claimed in IUTT-IV. At least the “Acknowledgments” section was also updated to include the following: “as well as Vesselin Dimitrov and Akshay Venkatesh for useful comments concerning the analytic number theory aspects of the present paper”. (From a reviewer’s perspective, this is a point where you wish that the author(s) would first establish some solid, uncontroversial intermediate results, so that they wouldn’t feel pressured to answer any holes or inconsistencies pointed out by “casual” readers with significant updates to their proof.)
The Nelson-Mochizuki comparison is funny, because if PA were inconsistent then the ABC conjecture would be true. But maybe every theory is inconsistent, with a contradiction that’s way too hard for us to find. Thus we might have both P=NP and P!=NP. Along this line, NP-completeness would be the device which protects our mental sanity…
FYI/PS: more links on Mochizuki, a fascinating, “in play” case study. As for retracted claims, I do not know the best way to handle them, but web technology is certainly helping some, such that people can get updates quickly and dialogue with each other publicly. The only complaint might be that it tends to be decentralized/scattered at times.
Maybe math is working exactly as it should, and these episodes remind us of the inherently social aspects of math (see Davis & Hersh, “The Mathematical Experience”), whereas it has the appearance of mere logic/formalism. That is the clothes, but underneath there’s a human! RJL/KWR don’t mention AI/theorem proving/verification technology, but that’s a dark horse that is increasingly active/viable.
A clever method of spreading the news about a new proof is the one that Babai used recently. Give a lecture, on video or in person, and people will get interested in your work. It is also a great way to better explain the dark corners of your theory.
About the ignore case: it would be more appropriate for reviewers to specify the reasons for rejection of a paper. If a reviewer responded that he cannot handle a paper, this would be acceptable and something that any author should respect. But a review that says “I cannot understand it so it must be wrong” does feel as an appropriate answer.
Sorry, a small typo in my comment. In the last sentence I wanted to write that it does not feel appropriate to receive a review that says “I cannot understand your proof so it must be wrong”.
Forgive me for going off topic, but since you mentioned the Jacobian conjecture, I wanted to remark that it seems to me that, judging from this survey, the Jacobian conjecture would be proven by the following trivial addition to known results:
According to the survey (Theorem 1.6), Bass, Connell, and Wright proved that the Jacobian conjecture is true for all polynomial functions if it is true for all polynomial functions of the form F(x) = x + H(x), where the components of H are all either zero or homogeneous of degree three. It seems to me that, more generally, a function of the form , where each is zero or homogeneous of degree and is a constant, can be proven to satisfy the Jacobian conjecture as follows:
Let be the Jacobian of at . Suppose, for sake of contradiction, that has the described form and that is regular everywhere, but that there are nevertheless distinct so that . Then the function satisfies , is of the form (with homogeneous of degree ), and its Jacobian satisfies , and is hence also regular everywhere. Define , and consider the function defined by . Then , and hence . On the other hand, . Hence , and therefore is singular at , and therefore we have a contradiction.
This argument isn’t new: It’s just Wang’s proof of the Jacobian conjecture for functions of degree , which may also be found in the survey (Proposition 1.5), with instead of .
Allow me to point out a small typo:
“Show Me The Beef: I firmly believe that a proof of an open problem should have at least one simple [missing] to state new trick or insight that we all missed.”
Did you mean “… one simple idea/statement ….” here?
http://academia.stackexchange.com/q/18491
You teachers should dedicate a whole semester to understanding the proof, with the project of debunking it. The pupils who graduate will be those who find the mistake. Or, in case the proof is correct, they’ll get the degree iff they manage to simplify it so as to make it understandable by their teacher.
Given a number x and a set S of n positive integers, MINIMUM is the problem of deciding whether x is the minimum of S. We can easily obtain an upper bound of n comparisons: find the minimum of the set and check whether the result is equal to x. Is this the best we can do? Yes, since we can obtain a lower bound of (n – 1) comparisons for the problem of determining the minimum, plus one more obligatory comparison for checking whether that minimum is equal to x. A representation of a set S of n positive integers is a Boolean circuit C such that C accepts the binary representation of an integer i if and only if i is in S. Given a positive integer x and a Boolean circuit C, we define SUCCINCT-MINIMUM as the problem of deciding whether x is the minimum integer accepted by C. For certain kinds of SUCCINCT-MINIMUM instances, the input (x, C) is exponentially more succinct than the cardinality of the set S that C represents. Since we prove that SUCCINCT-MINIMUM is at least as hard as MINIMUM with respect to the cardinality of S, we could not decide every instance of SUCCINCT-MINIMUM in polynomial time. If some instance (x, C) is not in SUCCINCT-MINIMUM, then there would exist a positive integer y such that y < x and C accepts y. Since we can evaluate whether C accepts y in polynomial time, and y is polynomially bounded by x, we can confirm that SUCCINCT-MINIMUM is in coNP. If any single coNP problem cannot be solved in polynomial time, then P is not equal to coNP. Certainly, P = NP implies P = coNP because P is closed under complement, and therefore we can conclude P is not equal to NP.
You could read the details in:
http://vixra.org/pdf/1704.0335v1.pdf
Certainly, you could check each step of the P versus NP proof above in an easy way!!!
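Whatever one makes of the succinct part of the argument, the n-comparison procedure for plain MINIMUM that the comment starts from is easy to make concrete (a sketch; the function name is mine, not from the paper):

```python
def is_minimum(x: int, S: list[int]) -> bool:
    """Decide whether x is the minimum of the nonempty list S, using
    n - 1 order comparisons to find the minimum plus one final
    equality comparison, i.e. n comparisons in total."""
    m = S[0]
    for v in S[1:]:      # n - 1 order comparisons
        if v < m:
            m = v
    return m == x        # one final equality comparison
```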