What to do about claims of hard theorems?

Shinichi Mochizuki has claimed a proof of the famous ABC conjecture since 2012. It is still unclear whether or not the claimed proof is correct. We covered it then and have mentioned it a few times since, but have not delved in to check it. In any case, it is probably well beyond our ability to understand in any finite amount of time.

Today I want to talk about how to check proofs like that of the ABC conjecture.

The issue is simple:

Someone writes up a paper that “proves” that X is true, where X is some hard open problem. How do we check that X is proved?

The proof in question is almost always long and complex, so the checking is not a simple matter. In some cases the proof might even use nonstandard methods, making it even harder to understand. That is exactly the case with Mochizuki’s proof—see here for some comments.

## Possible Approaches

Let’s further assume that the claimed proof resolves X which is the P vs. NP problem. What should we do? There are some possible answers:

• Ignore: I have many colleagues who will not even open the paper to glance at it. Ken and I get a fair number of these, but I do at least open the file and take a quick look. I will send a message to the author—it usually is a single author—about some issue if I see one right away.

• Show Me The Beef: I firmly believe that a proof of an open problem should have at least one simple-to-state new trick or insight that we all missed. I would suggest that the author must be able to articulate this new idea: if they cannot, then we can safely refuse to read it. I have worked some on the famous Jacobian Problem. At one time an author claimed they had a proof and it was just “a careful induction.” No. I never looked at it because of the lack of “beef,” and in a few weeks the proof fell apart.

• Money: Several people have suggested—perhaps not seriously—that anyone claiming a proof must be ready to post a “bond” of some money. If someone finds an error, they get the bond money. If no one does, or even better if the proof is correct, then the money can be donated to one of our conferences.

• Hire: I have seen this idea just recently. The author posts a request for someone to work on their paper as a type of consultant. They are paid a fair hourly rate and help find the error.

• Timeout: An author who posts a false proof gets a timeout. They are not allowed to post another paper or submit a paper on X for some fixed time period. Some of the top journals like the Journal of the ACM already have a long timeout in place. The rationale behind this is that very often, when an error is found in such a paper, the author quickly “fixes” the issue and re-claims the result. In Stanislaw Ulam’s wonderful book Adventures of a Mathematician he talks about false proofs; here “he” refers to an amateur who often joined Ulam at his habitual coffeehouse:

Every once in a while he would get up and join our table to gossip or kibitz … Then he would add, “The bigger my proof, the smaller the hole. The longer and larger the proof, the smaller the hole.”

• Knock Heads Together: Oxford University hosted in December 2015 a workshop to examine Mochizuki’s claimed proof, including contact by Skype with Mochizuki himself. A report by Brian Conrad on the MathBabe blog makes for engaging reading—we could quote extensively from its concluding section 6. This shorter news report cited feelings of greater understanding and promise but lack of definite progress on verifying the proof, noting:

…[N]o one wants to be the guy that spent years working to understand a proof, only to find that it was not really a proof after all.

• Share The Credit: Building on the last point, perhaps proper credit can be given to someone who does spend a great deal of time working on trying to understand a long proof. If they find an unfixable error, then maybe they can publish that as a paper—especially if the error is nontrivial and not just a simple one. If they show that the proof is indeed correct, could they be rewarded with some type of co-authorship? Maybe a new type of authorship:

P Does Not Equal NP: A Proof Via Non-Linear Fourier Methods
Alice Azure with Bob Blue

Here the “with” signals that Alice is the main author and Bob was simply a helper. Recall a maxim sometimes credited to President Harry Truman: “It is amazing what you can accomplish if you do not care who gets the credit.”

## Open Problems

What do you think about ways to check proofs? Any better ideas?

1. January 25, 2017 7:26 pm

Good idea. But non-alphabetical author order could, in some cases, be another way to express this in the co-authorship.

2. January 25, 2017 9:10 pm

I really like the share-the-credit idea, particularly for amateurs. Someone might understand a part of the new insight, but get lost in the complexities of formalizing it.

What about automated proofs? The ideas could be re-formalized in a language like Coq, where the underlying ‘translation’ could be less taxing for each sub-part than having to understand the whole proof?

Paul.
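Paul’s suggestion can be made concrete with a toy example. Here is a hypothetical machine-checked lemma in Lean (the statement and names are mine, purely illustrative, and obviously unrelated to any open problem):

```lean
-- A toy machine-checked lemma: the proof assistant verifies each step,
-- so a reader can trust a certified sub-part without re-checking it by hand.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

In the same spirit, each sub-part of a long proof could be certified independently, so that trust in the whole reduces to trust in the proof checker’s kernel.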

3. January 25, 2017 10:14 pm

Don’t ignore and, as you said, at least take a quick look at it. Once computer-aided proof checking is ACHIEVED, then people can submit their claims to this Oracle.

S N

4. January 26, 2017 1:48 am

There may be an added complication when attempting to correlate ‘proven’ with ‘true’. To illustrate:

1. Assume that we have proven the formula [(Ax)R(x)] in a first-order language L by some finitely enumerable set D of finitary rules that assign, to each well-formed proposition of L, either the value ‘provable’ or the value ‘unprovable’.

(Loosely speaking, the set D would correspond classically to the axioms, axiom schemata and rules of inference of both L and its underlying logic FOL.)

2. Do we, then, interpret [(Ax)R(x)] as:

(i) There is an algorithm (witness) T such that, for any given number n, T will deterministically assign to the interpreted proposition R(n) the value ‘true’?

Or as:

(ii) Given any number n, there is an algorithm (witness) T(n) that will deterministically assign to the interpreted proposition R(n) the value ‘true’?

3. Since Turing has demonstrated that Cantor’s diagonal argument establishes the existence of well-defined, but uncomputable, real numbers (e.g., Chaitin’s family of Omega constants), it can be shown (paper cited below) that whilst (i) => (ii), the converse is not true.

Obviously, any ambiguity in the intended meaning of the universally quantified L-formula [(Ax)R(x)] under interpretation—in its intended domain M—would also be reflected in the intended meaning of the existentially quantified L-formula [(Ex)R(x)] under interpretation in M, since the latter formula is formally defined as [~(Ax)~R(x)].

4. One way of avoiding such ambiguity would be to view the logic of a language as assignment, rather than entailment, of truth-values to the propositions of a language, thus:

A finite set Lambda of rules is a well-defined Logic of a formal mathematical language L if, and only if, Lambda constructively assigns (witnesses) unique truth-values:

(a) Of provability/unprovability to the formulas of L; and

(b) Of truth/falsity to the sentences of the Theory T(U) which is defined semantically by the Lambda-interpretation of L over some intended structure U.

5. If we apply the above yardstick to the first order Peano Arithmetic PA, we find that PA admits two, essentially different, logical assignments of the truth-values ‘true/false’ to the PA-formulas under two, distinctly different, Tarskian interpretations.

(a) The first is an ‘algorithmically verifiable’ logic, which defines the classical, non-finitary, standard Tarskian interpretation SI of PA over the domain N of the natural numbers.

It is the one where a PA-provable formula [(Ax)R(x)] interprets as:

Given any number n, there is an algorithm (witness) T(n) that will deterministically assign to the interpreted English proposition R(n) the value ‘true’.

As usual, the axioms of PA interpret as algorithmically verifiable truths under SI, and the rules of inference preserve algorithmically verifiable ‘truth’ under SI.

Classically, SI defines the standard model of PA over the domain of the natural numbers.

However, since there is no method of assigning unique truth-values finitarily to quantified PA formulas under SI, the logic that defines the standard model of PA is not well-defined, and cannot therefore be said to finitarily model PA (although Gentzen has shown that SI can be treated as non-finitarily defining a model of PA).

(b) The second is an ‘algorithmically computable’ logic, which defines a finitary Tarskian interpretation FI of PA over the domain N of the natural numbers.

It is the one where a PA-provable formula [(Ax)R(x)] interprets as:

There is an algorithm (witness) T such that, for any given number n, T will deterministically assign to the interpreted proposition R(n) the value ‘true’.

In this case too, the axioms of PA interpret as algorithmically computable truths under FI, and the rules of inference preserve algorithmically computable ‘truth’ under FI.

Classically, FI too therefore defines a model of PA over the domain of the natural numbers.

Since there is, however, always a method of assigning unique truth-values finitarily to quantified PA formulas under FI, this is a well-defined logic that finitarily models Arithmetic.

6. The detailed arguments are given in the following paper which appears in the December 2016 issue of ‘Cognitive Systems Research’:

‘The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Gödelian Thesis’

The paper shows that—contrary to conventional argumentation—PA has a well-defined logic since the set of axioms and rules of inference of PA+FOL constructively assign unique truth-values:

(a) Of provability/unprovability to the formulas of PA; and

(b) Of computably realizable truth/falsity to the sentences of Dedekind’s Peano Arithmetic which is defined semantically by the computably realizable interpretation of PA over the structure of the natural numbers.

7. The added complication, then, is that even if “someone writes up a paper that ‘proves’ that X is true, where X is some hard open problem” of a language L that has a well-defined interpretation over a domain M, and X is an L-formula of the form [(Ex)R(x)], any proof of X would only tell us that there is no algorithm that, for any given x, would negate the interpretation R(x) of the L-formula [R(x)] over M.

It would not allow us to conclude further—if that is the intended claim—that there is some k for which R(k) is a true proposition in M. For that we would need both a first-order construction of [k] in L, and a proof of the L-formula [R(k)].

Regards,

Bhup


5. January 26, 2017 2:42 am

Just focus on methods. If the method is useful, it will get checked, rewritten, reused and, eventually, taught. If the method is not useful, there is nothing lost if we declare the problem still unsolved. Can anyone think of a counterexample to this last claim? [honest question]

6. January 26, 2017 4:58 am

The “Ignore”, “Show me the beef”, and “Timeout” approaches are reasonable. After all, we are talking about a real problem here, and those approaches do have merit for dealing with this problem in practice. The “Money” and “Hire” approaches are too specific and cannot be implemented in practice to cope with the problem, not even as a remedy for the downsides of the “Ignore” approach. No objections to the “Knock Heads Together” approach, especially for credible major claims like Shinichi Mochizuki’s inter-universal Teichmüller theory. Regarding the “Share The Credit” approach, the “Acknowledgments” section of a paper is the place where it is expected that significant contributions from non-co-authors are mentioned. I guess Laszlo Babai will also use this section to praise the contribution of Harald Helfgott. Hence my advice: don’t fix it, if it ain’t broken.

Let me stress again that we are talking about a real problem here. And the problem is not just papers by unknown nobodies; an unchecked paper on a major result by a renowned researcher might be even riskier. Vladimir Voevodsky gives a shocking account of this fact here: http://www.math.ias.edu/vladimir/files/2014_IAS.pdf The most disturbing episode is his wrong paper from 1989, for which a counterexample was published by Carlos Simpson in 1998; Voevodsky finally accepted in 2013 that the counterexample was correct.

And let us also look at Carlos Simpson in this episode. Should he have challenged the authors of the wrong paper? Like Justin Archer does here https://justincarcher.wordpress.com/2016/10/05/problems-cake-cutting-aziz-mackenzie-2016/ ? And if an outside observer comes along, should he try to determine who is right or wrong here? And then, what is the next step? I think the goal should always be to clarify with the original authors whether there is a real gap, or whether you just misunderstood something. But life may not be that simple.

7. January 26, 2017 12:20 pm

Regarding credit sharing, I have never understood why science papers are so different in this respect from movies. We could also have a problem proposed by, approach suggested by, proof found by, verified by, written by etc. instead of just listing some authors.

• January 26, 2017 12:42 pm

I agree with you completely. But you seem not to know this world of human beings. Every law and rule has this nature. It will never change in the ‘right’ way that you envision.

• January 26, 2017 1:15 pm

Even for a movie, the marketing material will list the director as sort of the author of the film, sometimes the producer might share the credit for authorship. The credit section at the end of the movie is comparable to the acknowledgement section of a paper. The difference between an author and a major contributor is that an author should have a right to block the publication of a paper, if he doesn’t agree with its content. Or at least an author should have the right to demand that his name be removed from the list of authors, if the paper is to be published nevertheless. I have seen this happening in the past, and I also have seen cases where people only learned after publication that they were listed as authors (they were not amused).

8. January 26, 2017 10:47 pm

Originally proofs were used to convince people that a theorem is true. Now it seems that people write proofs to convince people that their proof is true.

9. January 28, 2017 12:38 pm

Mochizuki had a contrarian style, working in near isolation for many years, and that has increased the problem here. This also reminds me of the Deolalikar proof; a collective cyber peer review seemed to work well in that case. Another aspect is the massive number of pages in new complex proofs. Yet another case study is Perelman’s work on the Poincaré conjecture, which took several years for experts to verify; it looks like Perelman had the result at an early point but had to do further major work to communicate it.

• February 2, 2017 4:28 am

I find it funny to compare how Edward Nelson retracted his claim that PA is inconsistent after Terence Tao explained to him a serious flaw in his proof (https://golem.ph.utexas.edu/category/2011/09/the_inconsistency_of_arithmeti.html#c039590), to how Shinichi Mochizuki “updated his papers” after Vesselin Dimitrov (quoted by Terence Tao: https://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/#comment-10863) located a robust family of counterexamples to the main Diophantine inequality claimed in IUTT-IV. At least the “Acknowledgments” section was also updated to include the following: “as well as Vesselin Dimitrov and Akshay Venkatesh for useful comments concerning the analytic number theory aspects of the present paper”. (From a reviewer perspective, this is a point where you wish that the author(s) would first establish some solid uncontroversial intermediate results, such that they wouldn’t feel pressured to answer with significant updates to any holes or inconsistencies in their proof pointed out by “casual” readers.)

• February 5, 2017 6:45 am

The Nelson-Mochizuki comparison is funny, because if PA were inconsistent then the ABC conjecture would be true. But maybe every theory is inconsistent, with a contradiction that’s way too hard for us to find. Thus we might have both P=NP and P!=NP. Along this line, NP-completeness would be the device which protects our mental sanity…

• June 19, 2018 10:08 am

Serge’s comment that if PA were inconsistent then the ABC conjecture would be true is fallacious. If PA were inconsistent then the ABC conjecture would be a theorem of PA but that would not imply that ABC is true, because if PA were inconsistent then theorems of PA would not necessarily be true.

• February 4, 2017 10:48 am

FYI/PS: more links on Mochizuki, a fascinating, “in play” case study. As for retracted claims, I do not know the best way to handle them, but web technology is certainly helping some, such that people can get updates quickly and dialogue with each other publicly. The only complaint might be that it tends to be decentralized/scattered at times.

Maybe math is working exactly as it should, and these episodes remind us of the inherently social aspects of math (see Davis & Hersh, “The Mathematical Experience”), whereas it has the appearance of mere logic/formalism. That is the clothes, but underneath, there’s a human! RJL/KWR don’t mention AI/theorem proving/verification technology, but that’s a dark horse that is increasingly active/viable.

10. January 28, 2017 4:34 pm

A clever method of spreading the news about a new proof is the one that Babai used recently: give a lecture, on video or in person, and people will get interested in your work. It is also a great way to better explain the dark corners of your theory.
About the ignore case: it would be more appropriate for reviewers to specify the reasons for rejection of a paper. If a reviewer responded that he cannot handle a paper, this would be acceptable and something that any author should respect. But a review that says “I cannot understand it so it must be wrong” it does feel as an appropriate answer.

• January 31, 2017 5:14 pm

Sorry, a small typo in my comment. In the last sentence I wanted to write that it does not feel appropriate to receive a review that says “I cannot understand your proof so it must be wrong”.

11. January 30, 2017 11:51 pm

Forgive me for going off topic, but since you mentioned the Jacobian conjecture, I wanted to remark that it seems to me that, judging from this survey, the Jacobian conjecture would be proven by the following trivial addition to known results:
According to the survey (Theorem 1.6), Bass, Connell, and Wright proved that the Jacobian conjecture is true for all polynomial functions if it is true for all polynomial functions of the form $F(X_1,\dots,X_n) = (H_1+X_1,\dots,H_n+X_n)$, where the $H_i$ are all either zero or homogeneous of degree three. It seems to me that, more generally, a function of the form $F(X) = F_3(X) + F_1(X) + C$ where each $F_j$ is zero or homogeneous of degree $j$ and $C$ is a constant, can be proven to satisfy the Jacobian conjecture as follows:
Let $JF(X)$ be the Jacobian of $F$ at $X$. Suppose, for sake of contradiction, that $F$ has the described form and that $JF(X)$ is regular everywhere, but that there are nevertheless distinct $a,b$ so that $F(a) = F(b)$. Then the function $G(X) := F(X+b)-F(b)$ satisfies $G(0) = 0 = G(a-b)$, is of the form $G(X) = G_3(X) + G_1(X)$ (with $G_i$ homogeneous of degree $i$), and its Jacobian $JG$ satisfies $JG(X-b) = JF(X)$, and is hence also regular everywhere. Define $c := a-b$, and consider the function $g:\mathbb{R} \longrightarrow \mathbb{C}^n$ defined by $g(t) = G(tc)$. Then $g'(t) = 3t^2G_3(c) + G_1(c)$, and hence $g'(\frac{1}{\sqrt{3}}) = G_3(c) + G_1(c) = G(c) = 0$. On the other hand, $g'(t) = JG(tc) \cdot c$. Hence $JG(\frac{1}{\sqrt{3}}c) \cdot c = g'(\frac{1}{\sqrt{3}}) = 0$, and therefore $JG$ is singular at $\frac{1}{\sqrt{3}}c$, and therefore we have a contradiction.
This argument isn’t new: It’s just Wang’s proof of the Jacobian conjecture for functions of degree $\leq 2$, which may also be found in the survey (Proposition 1.5), with $t = \frac{1}{\sqrt{3}}$ instead of $t = \frac{1}{2}$.
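The key computation in this comment, $g'(t) = 3t^2G_3(c) + G_1(c)$ and hence $g'(\frac{1}{\sqrt{3}}) = G(c)$, can be checked symbolically. Below is a sketch using sympy, with a made-up concrete choice of $G_3$ and $G_1$ (homogeneous of degrees 3 and 1 respectively; the instance is hypothetical, chosen only to exercise the identity):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# Hypothetical n = 2 instance: G3 homogeneous of degree 3, G1 of degree 1.
def G3(x, y):
    return sp.Matrix([x**3 + x*y**2, x**2*y])

def G1(x, y):
    return sp.Matrix([2*x - y, x + y])

def G(x, y):
    return G3(x, y) + G1(x, y)

# g(t) = G(t*c); homogeneity gives g(t) = t^3 G3(c) + t G1(c)
g = G(t*c1, t*c2)
g_prime = sp.diff(g, t)

# Check g'(t) = 3 t^2 G3(c) + G1(c)
assert sp.simplify(g_prime - (3*t**2*G3(c1, c2) + G1(c1, c2))) == sp.zeros(2, 1)

# Check g'(1/sqrt(3)) = G3(c) + G1(c) = G(c)
assert sp.simplify(g_prime.subs(t, 1/sp.sqrt(3)) - G(c1, c2)) == sp.zeros(2, 1)
```

The choice $t = \frac{1}{\sqrt{3}}$ is exactly what makes the coefficient $3t^2$ collapse to 1, matching the degree-1 term.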

12. February 3, 2017 10:06 pm

Allow me to point out a small typo:

“Show Me The Beef: I firmly believe that a proof of an open problem should have at least one simple [missing] to state new trick or insight that we all missed.”
Did you mean “… one simple idea/statement ….” here?

14. February 5, 2017 6:11 am

You teachers should dedicate a whole semester to understanding the proof, with the project of debunking it. The pupils who graduate will be those who find the mistake. Or, in case the proof is correct, they’ll get the degree iff they manage to simplify it so as to make it understandable by their teacher.

15. April 26, 2017 8:08 am

Given a number x and a set S of n positive integers, MINIMUM is the problem of deciding whether x is the minimum of S. We can easily obtain an upper bound of n comparisons: find the minimum in the set and check whether the result is equal to x. Is this the best we can do? Yes, since we can obtain a lower bound of (n – 1) comparisons for the problem of determining the minimum, plus another obligatory comparison for checking whether that minimum is equal to x. A representation of a set S of n positive integers is a Boolean circuit C such that C accepts the binary representation of a bit integer i if and only if i is in S. Given a positive integer x and a Boolean circuit C, we define SUCCINCT-MINIMUM as the problem of deciding whether x is the minimum bit integer that C accepts as input. For certain kinds of SUCCINCT-MINIMUM instances, the input (x, C) is exponentially more succinct than the cardinality of the set S that C represents. Since we prove that SUCCINCT-MINIMUM is at least as hard as MINIMUM with respect to the cardinality of S, we could not decide every instance of SUCCINCT-MINIMUM in polynomial time. If some instance (x, C) is not in SUCCINCT-MINIMUM, then there would exist a positive integer y such that y < x and C accepts the bit integer y. Since we can evaluate whether C accepts the bit integer y in polynomial time, and y is polynomially bounded by x, we can confirm that SUCCINCT-MINIMUM is in coNP. If any single coNP problem cannot be solved in polynomial time, then P is not equal to coNP. Certainly, P = NP implies P = coNP because P is closed under complement, and therefore we can conclude that P is not equal to NP.
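The n-comparison upper bound for MINIMUM described above can be sketched as follows (the function name and list representation are my own):

```python
def is_minimum(x, S):
    """Decide whether x is the minimum of the n elements of S.

    Uses n - 1 comparisons to find the minimum, plus one final
    equality comparison, for n comparisons in total.
    """
    it = iter(S)
    m = next(it)
    for v in it:      # n - 1 comparisons to determine the minimum
        if v < m:
            m = v
    return m == x     # the one obligatory extra comparison

assert is_minimum(3, [3, 5, 9])
assert not is_minimum(5, [3, 5, 9])
```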

You could read the details in:

16. April 26, 2017 10:30 am

Certainly, you could check each step of the P versus NP proof above in an easy way!!!

17. April 27, 2017 10:47 am

P versus NP is considered one of the great open problems of science. It consists in answering the following question: Is P equal to NP? The question was first mentioned in a letter written by John Nash to the National Security Agency in 1955. Since that date, all efforts to find a proof of this huge problem have failed.
I show a solution to that problem as follows:
Given a number x and a set S of n positive integers, MINIMUM is the problem of deciding whether x is the minimum of S. We can easily obtain an upper bound of n comparisons: find the minimum in the set and check whether the result is equal to x. Is this the best we can do? Yes, since we can obtain a lower bound of (n – 1) comparisons for the problem of determining the minimum, plus another obligatory comparison for checking whether that minimum is equal to x. A representation of a set S of n positive integers is a Boolean circuit C such that C accepts the binary representation of a bit integer i if and only if i is in S. Given a positive integer x and a Boolean circuit C, we define SUCCINCT-MINIMUM as the problem of deciding whether x is the minimum bit integer that C accepts as input. For certain kinds of SUCCINCT-MINIMUM instances, the input (x, C) is exponentially more succinct than the cardinality of the set S that C represents. Since we prove that SUCCINCT-MINIMUM is at least as hard as MINIMUM with respect to the cardinality of S, we could not decide every instance of SUCCINCT-MINIMUM in polynomial time. If some instance (x, C) is not in SUCCINCT-MINIMUM, then there would exist a positive integer y such that y < x and C accepts the bit integer y. Since we can evaluate whether C accepts the bit integer y in polynomial time, and y is polynomially bounded by x, we can confirm that SUCCINCT-MINIMUM is in coNP. If any single coNP problem cannot be solved in polynomial time, then P is not equal to coNP. Certainly, P = NP implies P = coNP because P is closed under complement, and therefore we can conclude that P is not equal to NP.

https://hal.archives-ouvertes.fr/hal-01509423/document

18. April 27, 2017 10:53 am

https://hal.archives-ouvertes.fr/hal-01509423/document

19. June 16, 2017 8:25 am

Hi!!! I tried to prove: P=NP⇒∃k∈N:TIME(n^{k/2})=NTIME(n^{k/2}). However, I intended to demonstrate that ∀k∈N:TIME(n^{k/2})≠NTIME(n^{k/2}). In this way, I showed that P≠NP. I think my proof could be easy to understand. I submitted it to a journal and I received the following answer:

“Mistakes exist in the classes definition and the main argument. Please note also that many reviewers nowadays refuse reading papers about claims of P vs NP. We suggest that you first speak to some complexity experts (e.g. in your references) about your “proof” and/or publicize your “claims” via a web site.”

So that is what I did. I put the proof on a preprint website, and you can access it here:

https://hal.archives-ouvertes.fr/hal-01533051/document

I’m really interested in improving the proof. I think those mistakes can be fixed. Let me know at the following email if you are interested in improving this paper together and participating as a co-author:

frank.vega@yahoo.com

Best….