# Missing Mate in Ten

*Can we have overlooked short solutions to major problems?*


Efim Geller was a Soviet chess grandmaster, author, and teacher. Between 1953 and 1973 he reached the late stages of contention for the world championship many times but was stopped short of a match for the title. The Italian-American grandmaster Fabiano Caruana was similarly stopped last week in the World Chess Federation Candidates Tournament in Moscow. He was beaten in the last round by Sergey Karjakin of Russia, who will challenge world champion Magnus Carlsen of Norway in a title match that is scheduled for November 11–30 in New York City.

Today we salute a famous move by Geller that was missed by an entire team of analysts preparing for the world championship in 1955, and ask how often similar things happen in mathematics and theory.

The 1955 Interzonal Tournament in Gothenburg, Sweden, included three players from Argentina: Miguel Najdorf, Oscar Panno, and Hermann Pilnik. In the fourteenth round they all had Black against the Soviets Paul Keres, Geller, and Boris Spassky, respectively. The Argentines all played a Sicilian Defense variation named for Najdorf and sprung a pawn sacrifice on move 9 that they knew would induce the Soviets to counter-sacrifice a Knight, leading to the following position after Black’s 12th move in all three games:

*Position after Black’s 12th move (source: Chessgames.com)*

As related by former US Champion Lubomir Kavalek, who contested four Interzonals between 1967 and 1987, Najdorf indecorously walked up to Geller after Panno had left his chair and declared,

“Your game is lost. We analyzed it all.”

Unfazed, Geller thought for thirty more minutes and improvised 13. Bb5!!, a shocking second sacrifice that the Argentines had not considered. The Bishop cannot be taken right away because White threatens to castle with check and soon mate. The unsuspected point is that after Black’s defensive Knight moves to the central post e5 and is challenged by White’s other Bishop moving to g3, the other Black Knight on b8 cannot reinforce it from c6 or d7 because the rogue Bishop can take it. The Bishop also X-rays the back-row square e8 which Black’s Queen could use.

Whereas Najdorf’s speech was *in-delicto*, no rule prevented Keres and Spassky from walking over and noticing and “cribbing” Geller’s move. It is not known if they got it that way—some say Keres already knew the move—but both played it after twenty-plus more minutes of reflection. Panno perished ten moves later and the other Argentines were equally dead after failing to find the lone reply that lets Black live. Though there is stronger indication that Keres noted the draw-saving reply 13…Rh7! then or shortly afterward, the first time it ever was played on a board was in the next Interzonal three years later, by the hand of Bobby Fischer.

## The Easiest Riemann Proof?

Bernhard Riemann’s famous hypothesis has been in the news a lot recently. The second half of November saw one proof claim in Nigeria and another by Louis de Branges, who acts as a periodic function in this regard but has scored some other hits. We just covered some other news about the primes.

Then last week this paper came to our attention. It is titled, “A Direct Proof for Riemann Hypothesis Based on Jacobi Functional Equation and Schwarz Reflection Principle” by Xiang Liu, Ekaterina Rybachuk, and Fasheng Liu. The paper is *short*. My reaction from a quick look was,

It’s like saying that in an opening that champions have played for decades they all missed a mate in ten.

I must admit that I’ve spent more time thinking of a real missing mate-in-ten case in chess than probing the paper. The above is the closest famous case I could think of, and it wasn’t like the Argentines had been analyzing for the 157 years that Riemann has been open—they had only been doing it during the weeks of the tournament. Without taking time to find errors in the paper, let’s ask some existential questions:

Is it even possible to have missed so short a proof? What things like that have happened in mathematical history?

And even more existential, *what is the easiest possible kind of proof of Riemann that we might not know about?* This is a vastly different question from assessing Shinichi Mochizuki’s claimed proof of the ABC Conjecture. No one would be surprised at Riemann yielding to such complexity. Reader comments are welcome and invited.

## P = NP Status and Gearshifts

Dick had already been intending to make an update post on what is going on with the famous problem. We know that most, if not almost all, of our colleagues believe on clear principles that P ≠ NP. One of us, Dick, has repeatedly argued that it is not so clear that they are different. Recently Donald Knuth has voiced some opinions along those lines.

Major math problems do get solved from time to time. Rarely, however, do the solutions go “from zero to a hundred.” That is, there often are partial or intermediate results, like gearshifts as a car accelerates. For example, the famous Fermat’s Last Theorem was proved for many prime exponents before Andrew Wiles proved it in all cases, and Wiles built on promising advances by Ken Ribet and others. *En passant*, we congratulate Sir Andrew on winning this year’s Abel Prize.

The recent breakthrough on the Twin Prime Problem by Yitang Zhang is another brilliant example of partial progress. His step from a prime-gap bound that was nearly logarithmic down to a constant (initially a huge one) was unexpected, yet it came from mostly-known techniques pushed pedal-to-the-metal.

One might expect a similar situation with P and NP. If they really are not equal, then perhaps we would first be able to prove that SAT requires super-linear time; then prove a higher bound; and finally prove that P and NP are not equal. Yet this seems not to be happening.

## Results and Partial Challenges

There are two kinds of “results” to report on about P versus NP. We just recently again mentioned Gerhard Woeginger’s page with a clearinghouse of over a hundred proof attempts.

On the P = NP side, the usual idea is one that has been tried for decades. Take an NP-complete problem, such as TSP, and supply an algorithm that purports to solve it in polynomial time. Often the “algorithm” uses Linear Programming as a subroutine, but some do use other methods.

There is the issue that certain proved *barriers* may prevent large classes of algorithms, such as those built on linear-programming relaxations, from possibly succeeding. So we can say at least that a proof might have an intermediate stage of saying why certain barriers do not apply. Otherwise, however, a proof of P = NP by algorithm is bound to be pretty direct. Plausibly it would have one new and pivotal algorithmic idea, one that might of itself furnish an explanation of why it was missed.
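For scale, here is the naive exponential baseline that any claimed polynomial-time algorithm must beat. This is only a minimal brute-force sketch of SAT checking (using the usual signed-integer clause encoding), not any proposed algorithm from the clearinghouse:

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Search all 2^n assignments. A clause is a list of nonzero ints:
    literal i means variable x_i, literal -i means NOT x_i."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return list(bits)  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```

The loop is Θ(2^n); replacing it by something polynomial on every instance is exactly what the P = NP side must achieve.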

On the P ≠ NP side, however, there are several concrete intermediate challenges on which one should be able to demonstrate progress to support one’s belief.

A third of a century ago, Wolfgang Paul, Nick Pippenger, Endre Szemerédi, and William Trotter showed that for the standard multitape Turing machine model, NTIME(n) ≠ DTIME(n).

This proves a sense in which guessing is more powerful than no guessing. Yet even a result like NTIME(n^2) ≠ DTIME(n^2) appears hopeless. Nor have we succeeded in transferring the result to other natural machine models, such as Turing machines with one or more planar tapes.

How about proving that SAT cannot be done simultaneously in time O(n^c) and space S(n), for particular fixed c and reasonable space functions S(n)? There do exist a few results of this kind; can they be extended? And why can we not prove that SAT is not in linear time? This too seems hopeless today.

A related example: can we prove that SAT needs Boolean circuits of size at least cn for constants c beyond the small ones currently known, let alone super-linear circuits? Can we prove that some natural problems cannot be solved by quadratic or nearly-linear sized circuits of logarithmic depth?
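By contrast, the classical Shannon counting argument shows that *most* Boolean functions need enormous circuits, while saying nothing about explicit functions like SAT; that gap is the whole difficulty. A toy sketch of the count (the constant 3 for gate types and the crude wiring bound are assumptions of this estimate, not sharp values):

```python
def circuit_count_bound(n, s):
    # Crude overcount of circuits with s binary gates on n inputs:
    # each gate picks one of 3 gate types and 2 predecessors among
    # the n inputs, the s gates, and 2 constants.
    return (3 * (n + s + 2) ** 2) ** s

def size_needed_for_most(n):
    # Smallest s at which the overcount reaches 2^(2^n), the number
    # of n-bit Boolean functions; most functions need at least this s.
    functions = 2 ** (2 ** n)
    s = 1
    while circuit_count_bound(n, s) < functions:
        s += 1
    return s

print(size_needed_for_most(4))
```

Carrying the same estimate out asymptotically gives Shannon’s bound of roughly 2^n/n gates for most functions, yet no such bound is known for any explicit NP problem.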

## Open Problems

Is it worth making a clearinghouse—even more than a survey—of attempts on these intermediate challenges?

What are some possible kinds of mathematical proof elements that we might be missing?

Claiming that a proof cannot possibly be correct because it is too short and the problem is too difficult is essentially equivalent to claiming that P is equal to NP. But if one believes that P is not equal to NP, there is no problem with the notion that a difficult problem can have a simple solution.

Nice comparison. Though I always remember the game where Ivanchuk missed a mate in one move against Anand (black to move after 29. Kf1 http://www.chessgames.com/perl/chessgame?gid=1018510). I still think that a grandmaster missing a single move mate is far more scandalous, considering that Ivanchuk lost on time.

the “gear shift” analogy is nice/ great, have mused on something similar and yes P vs NP seems to have no such “gear shift” like constructions or intermediate results. (many other examples in math; another example is hamiltons formulation of the poincare problem highly central to perelmans attack). my thinking after many years is that kolmogorov complexity might explain why the problem is so hard: its similar to asking about “compressed/ optimal circuits” for a particular problem. those optimal circuits must be found by TMs and therefore kolmogorov complexity, chock full of undecidable problems, seems to be the pivotal crux so to speak. further thoughts on this in a paper by fortnow analyzing lots of kolmogorov complexity connections to complexity theory.

another direction probably underexplored: empirical attacks on P vs NP. for example there are known formulations using graph complexity that would be amenable to empirical searches for small n.

for long time have been hoping that automated thm proving could be formulated to attack hard problems. have long thought that some missing link is near at hand. we can beat top grandmaster in Go with AI, think ATP approaches to very hard problems are very near at hand. imagine computers finding incomprehensible proofs that are later deconstructed by humans into more human readable format. similar ideas are being pursued by Gowers.

my long-contemplated idea, and more recently focused on is that there is some zen-like koan in the proof of infinity of primes. there are machine based versions of this but they seem highly unnecessarily complex. why are they so complex? maybe a simpler version, maybe somehow radically different approach, of the infinity of primes can be found and maybe it would be a signpost to an improved technique in ATP. invite anyone else interested in this to comment on my blog & would like to begin a longterm collaborative attack on this problem.
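For reference, Euclid’s argument itself stays short even in executable form, which is the contrast with the complexity of current machine-checked versions. A minimal sketch:

```python
def euclid_witness(primes):
    """Given any finite list of primes, return a prime outside it:
    the least prime factor of (product of the list) + 1."""
    m = 1
    for p in primes:
        m *= p
    m += 1
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d  # least prime factor of m, hence prime
        d += 1
    return m  # m itself is prime

print(euclid_witness([2, 3, 5]))             # 31, since 2*3*5 + 1 = 31 is prime
print(euclid_witness([2, 3, 5, 7, 11, 13]))  # 59, since 30031 = 59 * 509
```

The returned factor cannot be in the input list, since the product plus one leaves remainder 1 on division by each listed prime; that one line of reasoning is the whole proof.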

btw heres a math question “right down this alley” and my own answer citing a semifamous proof by erdos of bertrands postulate, a nice historical example.

http://math.stackexchange.com/questions/1111621/examples-of-open-problems-solved-through-short-proof

ah, some quick further thought, here is one possible recent “gear shift” candidate for P vs NP: the new fine grained complexity results that relate it to O(n^2) limits on edit distance (backurs/ indyk) look quite promising and possibly a foundation that can be built on; there already seems to be a lot of fine grained results also involving O(n^3) problems by williams et al, some of this already covered in your blog and here also

https://vzn1.wordpress.com/2015/07/10/backurs-indyk-connect-strong-exponential-time-hypothesis-to-edit-distance-other-cs-time-space-hiearchy-continuum-news/
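For concreteness, the quadratic-time algorithm at issue is the textbook dynamic program for edit distance; Backurs and Indyk show that a strongly subquadratic algorithm for it would refute the Strong Exponential Time Hypothesis. A minimal sketch:

```python
def edit_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic program, row by row."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution or match
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The double loop is where the quadratic barrier lives; the fine-grained results say that shaving it to O(n^{2-ε}) is as hard as beating brute-force search for SAT.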

actually coincidentally right along these lines here are hot-off-the-press results by Gao/ Impagliazzo relating to fine grained complexity and the newly starring orthogonal vectors problem, also covered by your blog

http://eccc.hpi-web.de/report/2016/053/
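The orthogonal vectors problem mentioned there is easy to state: given two sets of n Boolean vectors in d dimensions, decide whether some pair has dot product zero. The naive algorithm is quadratic in n, and the fine-grained results concern whether that can be beaten. A minimal sketch:

```python
def has_orthogonal_pair(A, B):
    """Naive O(|A| * |B| * d) scan for a pair with dot product 0."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1, 0, 1) . (0, 1, 0) = 0
```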

and there are recent major advances in arithmetic complexity that seem to highly impinge on NP vs P/Poly areas (re the uniform vs nonuniform distinction). could (nonuniform) NP vs P/Poly be resolved even before uniform P vs NP?

https://vzn1.wordpress.com/2015/05/01/algebraic-circuit-complexity-valiants-vpvnp-recent-advances-breakthroughs/

How did that paper come to your attention last week? Why did you bring it to our attention?

Maybe I should rather ask myself why I decided to read it. The paper is short indeed, and the willful (I mean it) error was introduced in a way which didn’t hurt the readability of the paper. So reading the paper was easy and pleasurable.

The error is (intentionally) introduced when after formula (3.8), a similar looking formula (3.9) is obtained from lemma 4. But while formula (3.8) has very little restrictions on the parameters, formula (3.9) is only valid for certain parameter ranges (the violated one is even explicitly stated in lemma 4), and this violation of the parameter ranges is immediately exploited by applying lemma 5 to get a fake proof of the Riemann hypothesis.

It’s indeed hard to believe that the authors didn’t notice the error (they really almost explicitly say what kind of mistake they are making). Maybe the article is some kind of Sokal hoax. The journal in question isn’t known for its thorough review process, apparently it once accepted a computer-generated nonsense paper, which wasn’t published only because the hoaxer didn’t pay the $500 “processing fee”.

I agree that (3.9) doesn’t follow from (3.8).

What _does_ seem to follow from (3.8) is

A = \int_0^{\infty} \left( \sinh(\sigma y/t) - \sinh((1-\sigma) y/t) \right) \sin y \, dy

and I think that this does actually imply (with a suitable generalization of Lemma 5) that \sigma = 1/2.

The Riemann Zeta function is universal (see https://en.wikipedia.org/wiki/Zeta_function_universality). In computer science, when something is universal, it is in general unpredictable through finite methods. So if we were to extend this notion to complex analysis, we could conclude that the roots of the Riemann Zeta function are unpredictable through finite methods.

So does this mean that RH is impossible to prove?

Why would it? It would just mean it is unprovable by finite methods. It seems that if RH is false, it is not undecidable, and can be proved false in ZFC. It is still possible that it is undecidable in ZFC if it is true, though apparently this is thought unlikely. See

http://mathoverflow.net/questions/79685/can-the-riemann-hypothesis-be-undecidable

for more discussion.

a few more stray thoughts/ musings building on the theme of this post that has occupied my own thinking much over the years. is P vs NP hard because of “intrinsic difficulty” or, like alphago, what would happen if a few elite scientists focused on it, with a major budget? the alphago team accomplished a ~10yr advance in less than a year with only about ~15 employees. ofc (my sense on this after yrs studying the community) most in computer science would not classify P vs NP as a similar problem that could yield a breakthru in near term, but on the other hand, thats just speculation. but anyway the alphago breakthru is worth some close/ serious study in how to go about actually achieving a breakthru, from theory to practice and an actual team/ project. there is a “random walk” aspect to science to some degree, its largely about scientists working on what they think can yield results, and to some degree avoiding the hard stuff that is not incremental. and even publicly funded research is very results oriented and generally not channelled toward esp “high risk” areas.

my feeling after decades of looking into it & studying similar problems/ breakthrus is that strong advances on P vs NP understanding could be made with (1) a very particular/ carefully chosen approach (2) a crack team (3) budget (4) state of the art technology such as machine learning, big data, supercomputers etc… am aware that some scientists might scoff at this idea, but again grandmaster Go play this year would have been scoffed at merely a year ago.

gasarch has made some neat analysis of this problem by polling computer scientists about their opinion. what if one went deeper? ask/ interview computer scientists a “wish list” / “dream scenario” how they would go about finding the solution, possibly with unlimited budget. what would they say? would there be any consensus in any areas?

A question, tangentially related to this topic: Could you comment on the importance (or perceived hardness) of the Collatz Conjecture as compared to the Riemann Hypothesis? In particular, according to Wikipedia, Erdos has made this odd comment about the Collatz Conjecture, saying that “Mathematics is not ready” for it. I wonder if this can be taken to mean that it is farther out of reach than the RH?
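For readers unfamiliar with it, the Collatz map is trivial to state and compute, which is part of what makes Erdős’s remark striking. A minimal sketch (this of course verifies nothing about the conjecture):

```python
def collatz_steps(n):
    """Steps for n to reach 1 under n -> n/2 (even), n -> 3n+1 (odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The conjecture says this loop terminates for every positive integer.
print(collatz_steps(27))  # 111 steps despite the tiny starting value
```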

> Is it worth making a clearinghouse—even more than a survey—of attempts on these intermediate challenges?

Cannot stress this enough: Yes.

P vs NP is related to the topology of quartic positive polynomials that are perturbations; more specifically, perturbations of zeros. It is possible that the solution here is simple. Even if not, it will greatly advance engineering and real-life applications, extending the sum-of-squares technique by supplementing it with generated positive polynomials to obtain the global minimum of quartic polynomials that are tightly related to generalized PCA. I mean the most general case.

Great post. It saliently shows the discrepancy between what we believe (solving NP-complete problems takes exponential time) and what we cannot even show (that SAT takes at least super-linear time). I was reading a recent interview with Scott Aaronson about why we should not be surprised at this sad state of affairs: after all, it took 350 years to turn the Fermat conjecture into a theorem. But in this case there had been lots of partial progress that pointed in this direction. An enabling step (the Taniyama-Weil conjecture) was formulated 60 years ago. Cases for specific exponents were also proved along the way. Numerical evidence began to accumulate massively in favor of the conjecture’s veracity long before Wiles’ theorem.

In the case of P vs NP, there has not been anything close to the same partial progress. Sure, we have proved that if we deny NOT gates to Boolean circuits, they must be exponentially large to detect cliques in graphs. But the size of such monotone circuits must grow exponentially even for some problems in P, so I do not think it counts as a special case toward proving P != NP. Proving, for example, that a SAT formula with say n=15 variables takes at least 10,000 gates would be real progress in that direction (sure, it would not prove anything asymptotic, but at least it would show that circuits must be “big” by a practical measure). I reckon Scott has proposed some numerical, brute-force experiments along those lines for the Permanent/Determinant problem.

Thanks! I once tried some ideas along similar lines described here.