## Open Problems That Might Be Easy

*A speculation on the length of proofs of open problems*


Nick Patterson is one of the smartest people I have ever known.

Today I would like to talk about something he once said to me and how it relates to solving open problems.

Nick now works at the Broad Institute on genomic problems—especially large data sets. For decades, he worked as a cryptographer for the British, and then the U.S. He also spent many years with Renaissance Technologies, Jim Simons’s investment hedge fund.

Years ago I consulted regularly at IDA—a think tank for the NSA that was based in Princeton. I cannot tell you what we did, nor what we did not. But I can say we worked on interesting problems in the area of communication. One of the top scientists there was Nick. He was a quite strong chess player, and at tea would often play the other top player at speed chess. I once asked Nick how he would fare against the then world champion Anatoly Karpov. He answered that it would be:

Sack, sack, mate.

I never really believed this. I always thought that he was strong enough to hold out a bit, but perhaps Nick was right. Perhaps it would be over quickly, with Karpov finding a simple, short demonstration that would wipe Nick out.

But given how smart Nick was, I have often wondered whether he would be right when we translate his statement into mathematics:

Can open problems have simple solutions?

Could some open problems fall to: “We know […] and by induction […] we are done”?

## Some Short Solutions

Leonhard Euler thought long and hard about possible generalizations of Fermat’s Last Theorem. He was the first to fashion a nearly-correct proof that no cube could be a nontrivial sum of fewer than three cubes. He conjectured that no 4th power could be a nontrivial sum of fewer than four 4th powers, no 5th power a sum of fewer than five 5th powers, and so on. These cases were open for nearly two centuries, but in 1966, Leon Lander and Thomas Parkin used a computer to find

27^5 + 84^5 + 110^5 + 133^5 = 144^5.

It still took 20 more years for the other shoe to drop on the n = 4 case:

2682440^4 + 15365639^4 + 18796760^4 = 20615673^4.

This was the smallest member of an infinite family of solutions found by Noam Elkies—himself a master chess player—though not the smallest overall, which is

95800^4 + 217519^4 + 414560^4 = 422481^4.
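These counterexamples to Euler's conjecture—Lander and Parkin's fifth-power identity and the two fourth-power identities—are easy to verify with exact integer arithmetic. A quick check in Python:

```python
# Verify the classic counterexamples to Euler's sum-of-powers conjecture.

# Lander and Parkin (1966): four fifth powers summing to a fifth power.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# Elkies's member of an infinite family of fourth-power solutions.
assert 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4

# The smallest fourth-power counterexample (three fourth powers).
assert 95800**4 + 217519**4 + 414560**4 == 422481**4

print("all identities check out")
```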

The shortest solution to a problem in complexity theory that was important and open for several years was:

aba^{-1}b^{-1}

Well this needs a little context. If either a or b is the identity, then this *group commutator* is the identity. Else, in a large enough permutation group like S_5, you can rig it so that aba^{-1}b^{-1} is never the identity. Thus if you have a piece of code that produces a if some property A holds and the identity otherwise, and likewise code that gives b iff B holds, then the combined code computes the AND gate A ∧ B.

Since it is easy to code NOT by composing with a fixed permutation, and since AND and NOT form a complete basis, the upshot is that all O(log n)-depth Boolean formulas can be computed by codes of polynomial size. Permutations of 5 elements allow codes in the form of *width-5 branching programs*, so what tumbles out is the celebrated theorem of David Barrington that the complexity class NC^1 has constant-width branching programs. GLL’s old post on it played up the simplicity, but like many mate-in-*n* chess problems it was not simple to see in advance.
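Barrington's commutator trick can be demonstrated concretely. The sketch below—a toy illustration, not the full construction—brute-force searches for two 5-cycles in S_5 whose commutator is again a 5-cycle, then uses them to simulate an AND gate: the commutator is non-identity exactly when both inputs are true.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

IDENTITY = tuple(range(5))

def is_five_cycle(p):
    # p is a single 5-cycle iff the orbit of point 0 has size 5
    x, orbit = 0, set()
    for _ in range(5):
        x = p[x]
        orbit.add(x)
    return len(orbit) == 5

def commutator(a, b):
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

five_cycles = [p for p in permutations(range(5)) if is_five_cycle(p)]

# Find a pair of 5-cycles whose commutator is itself a 5-cycle.
a, b = next((p, q) for p in five_cycles for q in five_cycles
            if is_five_cycle(commutator(p, q)))

def and_gadget(x, y):
    """Commutator is non-identity exactly when both inputs are true."""
    pa = a if x else IDENTITY
    pb = b if y else IDENTITY
    return commutator(pa, pb) != IDENTITY

print([and_gadget(x, y) for x in (False, True) for y in (False, True)])
# [False, False, False, True]
```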

More recently, a seemingly complicated conjecture by Richard Stanley and Herbert Wilf about permutations was proved in an 8-page paper by Adam Marcus and Gábor Tardos. For any sequence of k distinct integers define its *pattern* to be the unique permutation of {1, ..., k} that has the same relative ordering. For instance, the pattern of 3, 8, 5 is 1, 3, 2. Given any length-k pattern π and n, define S_n(π) to be the set of permutations of {1, ..., n} for which *no* length-k subsequence (not necessarily consecutive) has pattern π. Whereas the total number of permutations grows as n!, Stanley and Wilf conjectured that the maximum growth of |S_n(π)| is only c^n for some constant c depending on π, for any fixed π. Marcus and Tardos polished off a stronger conjecture by Zoltán Füredi and Péter Hajnal about 0-1 matrices in basically 1.5 pages whose short lemmas read like sack, sack, mate.
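For tiny n the pattern-avoidance counts in the Stanley–Wilf conjecture can be explored by brute force. A well-known instance: permutations avoiding the pattern 1, 2, 3 (no increasing subsequence of length 3) are counted by the Catalan numbers. A quick sketch, using hypothetical helper names:

```python
from itertools import permutations, combinations

def pattern(seq):
    """The permutation of 1..k with the same relative order as seq."""
    ranks = sorted(seq)
    return tuple(ranks.index(x) + 1 for x in seq)

def avoids(perm, pat):
    """True if no subsequence of perm (taken in order) has pattern pat."""
    k = len(pat)
    return all(pattern(sub) != pat for sub in combinations(perm, k))

# Count permutations of {1,...,n} avoiding the pattern (1, 2, 3).
counts = [sum(avoids(p, (1, 2, 3)) for p in permutations(range(1, n + 1)))
          for n in range(1, 7)]
print(counts)  # the Catalan numbers: [1, 2, 5, 14, 42, 132]
```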

We found the last example highlighted in a StackExchange thread on open problems solved with short proofs. Another we have seen mentioned elsewhere and covered in a post is Roger Apéry’s proof that ζ(3) is irrational, which centered on deriving and then analyzing the new identity

ζ(3) = (5/2) · Σ_{n≥1} (−1)^{n−1} / (n^3 · C(2n, n)).
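Apéry's series, ζ(3) = (5/2)·Σ_{n≥1} (−1)^{n−1}/(n³·C(2n, n)) with C(2n, n) the central binomial coefficient, converges rapidly and is easy to check numerically; the reference value for ζ(3) below is assumed correct to about 16 digits:

```python
from math import comb

# Partial sum of Apery's fast-converging series for zeta(3);
# terms shrink roughly like 4^(-n), so 60 terms far exceed double precision.
s = sum((-1) ** (n - 1) / (n ** 3 * comb(2 * n, n)) for n in range(1, 60))
apery = 2.5 * s

ZETA3 = 1.2020569031595942  # assumed reference value of zeta(3)
assert abs(apery - ZETA3) < 1e-12
print(apery)
```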

The Euler examples could be ascribed mostly to “computer-guided good fortune” but the other three clearly involve a new idea attached to a crisp new object: an equation or group or matrix construction. We wonder whether and when the possibility of such objects can be “sniffed out.”

## Problems Ripe For Short Solutions?

I sometimes wonder if we have missed a simple proof—no, short is a better term—a short proof that would resolve one of our favorite open problems. Here are a few that Ken and I suggest may fall into this category.

Péter Frankl’s union-closed sets conjecture: every finite union-closed family of sets, other than the one consisting of just the empty set, has an element that belongs to at least half of the sets. Surely it just needs an equation or formula for the right kind of “potential function”? We noted connections to other forms in a followup post.

Freeman Dyson’s conjecture that the reversal of the decimal digits of a power of 2 is never a power of 5. It stands out even among number-theory statements that are “overwhelmingly probable” in random models, and there ought to be some principle for boxing away potential counterexamples along the lines of how certain numbers with fast rational approximations can be proved to be transcendental. (Then again, even a number as simple as π + e has not yet been proved transcendental.)

Problems involving deterministic finite automata (DFAs) are often easy to solve or show to be decidable. But here is one from Jeff Shallit that is not: given a DFA, is it decidable whether it accepts some string that is the binary representation of a prime number?
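Shallit's question asks for a decision procedure; the obvious search is only a semi-algorithm. The sketch below (with a made-up example DFA) enumerates accepted strings in length order and tests each for primality—it halts if a prime is accepted, but in general could run forever:

```python
from itertools import product

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def accepts(delta, start, final, s):
    q = start
    for ch in s:
        q = delta[q][ch]
    return q in final

def prime_search(delta, start, final, max_len=20):
    """Semi-decision: look for an accepted binary string encoding a prime."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            if s[0] == "1" and accepts(delta, start, final, s) and is_prime(int(s, 2)):
                return s
    return None

# Toy DFA over {0,1}: accepts exactly the strings ending in '1' (odd numbers).
delta = {0: {"0": 0, "1": 1}, 1: {"0": 0, "1": 1}}
print(prime_search(delta, start=0, final={1}))  # finds "11", i.e. the prime 3
```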

Even simpler is the question, given binary strings x and y of length n, of finding the smallest DFA that accepts x but not y (or vice-versa). Can we put tight bounds on the size of this DFA in terms of n?
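For tiny instances the smallest separating DFA can be found by exhaustive search over transition tables; since we may pick the accepting states freely, it suffices for the two runs to end in different states. A toy sketch (real instances blow up quickly):

```python
from itertools import product

def run(delta, s):
    q = 0  # state 0 is the start state
    for ch in s:
        q = delta[(q, ch)]
    return q

def separates(delta, x, y):
    # With free choice of accepting states, separation is equivalent
    # to the two runs ending in different states.
    return run(delta, x) != run(delta, y)

def smallest_separating_dfa(x, y):
    """Brute force over all k-state transition tables, alphabet {0,1}."""
    for k in range(1, 5):
        states = range(k)
        for table in product(states, repeat=2 * k):
            delta = {(q, ch): table[2 * q + int(ch)]
                     for q in states for ch in "01"}
            if separates(delta, x, y):
                return k
    return None

print(smallest_separating_dfa("0011", "0101"))  # 2
```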

Can a Boolean circuit of size s computing a function f(x_1, ..., x_n) be converted into a circuit of size O(s) with gates each computing the XOR of f(x) with f(x ⊕ e_i), where e_i flips the i-th input bit? This would be a Boolean analogue of the famous Derivative Lemma of Walter Baur and Volker Strassen, which we have discussed here and here.

## Harder Ones?

By contrast, the 3n+1 conjecture of Lothar Collatz does not strike us as a candidate. John Conway proved that related forms are undecidable, and we suspect that it will need deep new knowledge of how the multiplicative properties of integers duck and weave under the additive structure.

Regarding the Skolem problem we are on the fence: in a sense it involves DFAs, which should be easy, but even for linear recurrences of small order the connections to deep approximation problems in number theory become clear.

Here are four others—what do you think?

*Can polynomials modulo 6 compute the majority function?* Of course we mean polynomials of low degree, and we allow some flex on the meaning of “compute” provided it relates to polynomial-sized constant-depth circuits.

*Can we at least prove that SAT cannot be computed in linear time on a Turing machine?* Note that this does not follow from the proof that NTIME(n) ≠ DTIME(n), as we discussed here. For all those of you who are sure not only that P ≠ NP but also that the Exponential Time Hypothesis (ETH) is true, how can we not prove this “trivial lower bound”?

*Can whether an n-node graph has a triangle be decided in O(n^2) time?* Or at least in better than matrix-multiplication time? Ken knew a visitor to Buffalo who admitted spending a year on this with little to show for it.

*Can we solve the lonely runner problem?* Or at least improve the number of runners for which it is known to be true?
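The triangle question above has an obvious cubic baseline, and the matrix-multiplication bound comes from the fact that a triangle exists iff A² and A share a nonzero entry (the speedup then comes from fast matrix multiplication; the sketch below uses naive multiplication only to illustrate the criterion):

```python
def has_triangle_cubic(adj):
    """O(n^3) brute force over vertex triples of an undirected graph."""
    n = len(adj)
    return any(adj[i][j] and adj[j][k] and adj[k][i]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

def has_triangle_matmul(adj):
    """Triangle exists iff (A^2)[i][j] > 0 for some edge (i, j)."""
    n = len(adj)
    a2 = [[sum(adj[i][k] * adj[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return any(adj[i][j] and a2[i][j] for i in range(n) for j in range(n))

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # a 3-cycle
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]      # a path, no triangle
assert has_triangle_cubic(triangle) and has_triangle_matmul(triangle)
assert not has_triangle_cubic(path) and not has_triangle_matmul(path)
print("ok")
```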

## Open Problems

How can we possibly forecast how open an open problem is?

## How Joe Traub Beat the Street

*An insight into the computation of financial information*


Joseph Traub passed away just a week ago, on August 24th. He is best known for his computer science leadership positions at CMU, Columbia, CSTB, and the *Journal of Complexity*—they all start with “C.” CSTB is the Computer Science and Telecommunications Board of the National Academies of Sciences, Engineering, and Medicine. At each of these he was the head, and for all except Carnegie-Mellon he was the first head—the founder.

Today Ken and I wish to highlight one technical result by Traub and his co-workers that you may not know about.

Read more…

## Cancellation is a Pain

*How to avoid the pain of estimating tough sums*


Andrew Granville is a number theorist, who has written—besides his own terrific research—some beautiful expository papers, especially on analytic number theory.

Today Ken and I wish to talk about his survey paper earlier this year on the size of gaps between consecutive primes.

Read more…

## Cryptography And Quicksand

*A basic question about cryptography that we pretend is not there.*

Victor Shoup is one of the top experts in cryptography. He is well known for many things, including a soon-to-be-released book joint with Dan Boneh on, what else, cryptography, and his implementations of many of the basic functions of cryptography.

Today I want to talk about my recent visit to the Simons Institute in Berkeley where I heard Victor give a special lecture.

Read more…

## Changing Definitions

*How important is “backward compatibility” in math and CS?*


Henri Lebesgue came the closest of anyone we know to changing the value of a mathematical quantity. Of course he did not do this—it was not like defining π to be 3. What he did was change the accepted definition of *integral* so that the integral from 0 to 1 of the characteristic function of the rational numbers became a definite 0. It remains 0 even when integrating over all of the reals.

Today we talk about changing definitions in mathematics and computer programming and ask when it is important to give up continuity with past practice. Read more…

## Four Weddings And A Puzzle

*An unusual voting problem?*

“Four Weddings” is a reality-based TV show that appears in America on the cable channel TLC. Yes, a TV show: not a researcher, not someone who has recently solved a long-standing open problem. Just a TV show.

Today I want to discuss a curious math puzzle that underlies this show.

The show raises an interesting puzzle about voting schemes:

Read more…

## Playing Chess With the Devil

*A new kind of ‘liar’ puzzle using Freestyle chess*


Raymond Smullyan is probably the world’s greatest expert on the logic of lying and the logic of chess. He is still writing books well into his 10th decade. Last year he published a new textbook, *A Beginner’s Guide to Mathematical Logic*, and in 2013 a new puzzle book named for Kurt Gödel. His 1979 book *The Chess Mysteries of Sherlock Holmes* introduced *retrograde analysis*—taking back moves from positions to tell how they could possibly have arisen—to a wide public.

Today Ken and I wish to talk about whether we can ever play perfect chess—or at least better chess than any one chess program—by combining output from multiple programs that sometimes might “lie.” Read more…