
Open Problems That Might Be Easy

September 3, 2015


A speculation on the length of proofs of open problems

Nick Patterson (Broad Institute source)

Nick Patterson is one of the smartest people I have ever known.

Today I would like to talk about something he once said to me and how it relates to solving open problems.

Nick now works at the Broad Institute on genomic problems—especially large data sets. For decades, he worked as a cryptographer for the British, and then the U.S. He also spent many years with Renaissance Technologies, Jim Simons’s investment hedge fund.

Years ago I consulted regularly at IDA—a think tank for the NSA that was based in Princeton. I cannot tell you what we did, nor what we did not. But I can say we worked on interesting problems in the area of communication. One of the top scientists there was Nick. He was a quite strong chess player, and at tea would often play the other top player at speed chess. I once asked Nick how he would fare against the then world champion Anatoly Karpov. He answered that it would be:

Sack, sack, mate.

I never really believed this. I always thought that he was strong enough to hold out a bit, but perhaps Nick was right. Perhaps it would be over quickly. That Karpov would find a simple, short demonstration that would wipe Nick out.

But given how smart Nick was, I often wondered whether he would be right if we translated his statement to mathematics:

Can open problems have simple solutions?

Could some open problems fall to: “We know […] and by induction […] we are done”?

Some Short Solutions

Leonhard Euler thought long and hard about possible generalizations of Fermat’s Last Theorem. He was the first to fashion a nearly-correct proof that no cube could be a nontrivial sum of fewer than three cubes. He conjectured that no 4th-power could be a nontrivial sum of fewer than four 4th-powers, no 5th-power a sum of fewer than five 5th-powers, and so on. These cases were open for nearly two centuries, but in 1966, Leon Lander and Thomas Parkin used a computer to find

\displaystyle  27^5 + 84^5 + 110^5 + 133^5 = 144^5.

It still took 20 more years for the other shoe to drop on the {n=4} case:

\displaystyle  2682440^4 + 15365639^4 + 18796760^4 = 20615673^4.

This was the smallest member of an infinite family of solutions found by Noam Elkies—himself a master chess player—though not the smallest overall, which is

\displaystyle {95800^4 + 217519^4 + 414560^4 = 422481^4}.
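All three identities are easy to confirm with exact integer arithmetic. Here is a minimal sanity check in Python, nothing more:

# Verify the three counterexamples to Euler's conjecture quoted above,
# using Python's exact (arbitrary-precision) integers.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5                 # Lander and Parkin, 1966
assert 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4     # Elkies
assert 95800**4 + 217519**4 + 414560**4 == 422481**4             # smallest n = 4 solution
print("All three identities check out.")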


The shortest solution to a problem in complexity theory that was important and open for several years was:

\displaystyle  c = a b a^{-1} b^{-1}.

Well this needs a little context. If either {a} or {b} is the identity, then this group commutator {c} is the identity. Otherwise, in a rich enough permutation group such as {S_5}, you can choose {a} and {b} so that {c} is not the identity. Thus if you have a piece of code {P} such that {P(x) = a} if some property {A(x)} holds and {P(x) = 1} otherwise, and {Q} likewise gives {Q(x) = b} iff {B(x)} holds, then the code {PQP^{-1}Q^{-1}} computes the AND gate {C(x) = A(x) \wedge B(x)}.

Since it is easy to code {\neg A(x)} by the code {a^{-1} P}, and since AND and NOT form a complete basis, the upshot is that all {O(\log n)}-depth Boolean formulas can be computed by codes of size {n^{O(1)}}. Permutations of 5 elements allow codes in the form of width-5 branching programs, so what tumbles out is the celebrated theorem of David Barrington that the complexity class {\mathsf{NC^1}} has constant-width branching programs. GLL’s old post on it played up the simplicity, but like many mate-in-n chess problems it was not simple to see in advance.
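To make the commutator trick concrete, here is a small Python sketch of our own—only the algebraic core, not Barrington’s full construction—with one illustrative choice of two 5-cycles (many choices work) whose commutator acts exactly like an AND gate in the sense just described:

# Permutations of {0,...,4} as tuples: p[i] is the image of i.
def compose(p, q):
    """Return the permutation 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

identity = (0, 1, 2, 3, 4)
a = (1, 2, 3, 4, 0)      # the 5-cycle (0 1 2 3 4)
b = (2, 0, 4, 1, 3)      # another 5-cycle, chosen so that a and b do not commute

c = compose(compose(a, b), compose(inverse(a), inverse(b)))
assert c != identity      # the commutator a b a^{-1} b^{-1}; here it is itself a 5-cycle

def and_gadget(bit_A, bit_B):
    """Evaluate P Q P^{-1} Q^{-1}, where P outputs a iff A holds, Q outputs b iff B holds."""
    P = a if bit_A else identity
    Q = b if bit_B else identity
    return compose(compose(P, Q), compose(inverse(P), inverse(Q)))

for bit_A in (0, 1):
    for bit_B in (0, 1):
        # The product is the non-identity element c exactly when A AND B is true.
        assert (and_gadget(bit_A, bit_B) != identity) == bool(bit_A and bit_B)
print("Commutator gadget computes AND; c =", c)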


More recently, a seemingly complicated conjecture by Richard Stanley and Herbert Wilf about permutations was proved in an 8-page paper by Adam Marcus and Gábor Tardos. For any sequence of {k} distinct integers define its pattern to be the unique permutation of {1,\dots,k} that has the same relative ordering. For instance, the pattern of {(110,27,133,144,84)} is {(3,1,4,5,2)}. Given any length-{k} pattern {p} and {n > k} define {R_n} to be the set of permutations of {1,\dots,n} for which no length-{k} subsequence (not necessarily consecutive) has pattern {p}. Whereas {|S_n|} grows as {2^{\Theta(n\log n)}}, Stanley and Wilf conjectured that the maximum growth for {|R_n|} is {2^{O(n)}}, for any fixed {p}. Marcus and Tardos polished off a stronger conjecture by Zoltán Füredi and Péter Hajnal about matrices in basically 1.5 pages whose short lemmas read like sack, sack, mate.
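To see the definitions in action, here is a short brute-force sketch (purely illustrative, and feasible only for small {n}): it computes the pattern of a sequence and counts the permutations of {1,\dots,n} avoiding a fixed pattern. For any single length-3 pattern the counts are the Catalan numbers, which grow like {4^n}—comfortably within the {2^{O(n)}} bound of the conjecture.

from itertools import permutations, combinations

def pattern(seq):
    """The unique permutation of 1..k with the same relative ordering as seq."""
    ranks = {v: i + 1 for i, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

assert pattern((110, 27, 133, 144, 84)) == (3, 1, 4, 5, 2)

def avoids(perm, p):
    """True if no (not necessarily consecutive) subsequence of perm has pattern p."""
    k = len(p)
    return all(pattern([perm[i] for i in idx]) != p
               for idx in combinations(range(len(perm)), k))

p = (1, 2, 3)      # the pattern to avoid; any length-3 pattern gives the same counts
for n in range(1, 8):
    count = sum(avoids(perm, p) for perm in permutations(range(1, n + 1)))
    print(n, count)     # Catalan numbers: 1, 2, 5, 14, 42, 132, 429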

We found the last example highlighted in a StackExchange thread on open problems solved with short proofs. Another we have seen mentioned elsewhere and covered in a post is Roger Apéry’s proof that {\zeta(3)} is irrational, which centered on deriving and then analyzing the new identity

\displaystyle  \zeta(3) = \frac{5}{2}\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^3 {2n \choose n}}.
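The series converges quickly; as a rough numerical check (nothing more than that), a few lines of Python show the first ten terms already matching a direct estimate of {\zeta(3)} to many decimal places:

from math import comb

# Crude direct estimate of zeta(3) by summing 1/n^3.
zeta3 = sum(1 / n**3 for n in range(1, 200000))

s = 0.0
for n in range(1, 11):
    s += (-1) ** (n - 1) / (n**3 * comb(2 * n, n))
print(2.5 * s, zeta3)    # the two estimates agree to many decimal places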

The Euler examples could be ascribed mostly to “computer-guided good fortune” but the other three clearly involve a new idea attached to a crisp new object: an equation or group or matrix construction. We wonder whether and when the possibility of such objects can be “sniffed out.”

Problems Ripe For Short Solutions?

I sometimes wonder if we have missed a simple proof—no, short is a better term—a short proof that would resolve one of our favorite open problems. Here are a few that Ken and I suggest may fall into this category.

{\bullet} Péter Frankl’s conjecture. Surely it just needs an equation or formula for the right kind of “potential function”? We noted connections to other forms in a followup post.

{\bullet} Freeman Dyson’s conjecture. It stands out even among number-theory statements that are “overwhelmingly probable” in random models, and there ought to be some principle for boxing away potential counterexamples along lines of how certain numbers with fast approximations can be proved to be transcendental. (Then again {\zeta(3)} hasn’t yet been proved transcendental.)

{\bullet} Problems involving deterministic finite automata (DFAs) are often easy to solve or show to be decidable. But here is one from Jeff Shallit that is not: Given a DFA, does it accept a string that is the binary representation of a prime number—is this decidable?

{\bullet} Even simpler is the question, given binary strings {x} and {y} of length {n}, of finding the smallest DFA {M} that accepts {x} but not {y} (or vice-versa). Can we put tight bounds on the size of {M} in terms of {n}? A brute-force sketch for tiny cases appears just after this list.

{\bullet} Can a Boolean circuit {C} of size {s} computing a function {f(x_1,\dots,x_n)} be converted into a circuit {C'} of size {O(s)} with {n} gates {g_i} each computing the XOR of {f(x_1,\dots,x_i,\dots,x_n)} with {f(x_1,\dots,\neg x_i,\dots,x_n)}? This would be a Boolean analogue of the famous Derivative Lemma of Walter Baur and Volker Strassen, which we have discussed here and here.
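Here is the promised brute-force sketch for the separating-DFA question (ours, and feasible only for very short strings): it enumerates all DFAs with {k = 1, 2, 3, \dots} states over the alphabet {\{0,1\}} until one accepts {x} but not {y}.

from itertools import product

def accepts(delta, accepting, s):
    state = 0                        # fix the start state to be 0
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

def smallest_separating_dfa(x, y):
    k = 1
    while True:
        states = range(k)
        # all transition functions delta: states x {0,1} -> states
        for transitions in product(states, repeat=2 * k):
            delta = {(q, c): transitions[2 * q + c] for q in states for c in (0, 1)}
            for acc_bits in product((False, True), repeat=k):
                accepting = {q for q in states if acc_bits[q]}
                if accepts(delta, accepting, x) and not accepts(delta, accepting, y):
                    return k, delta, accepting
        k += 1

x = (0, 1, 0, 1)
y = (0, 1, 1, 0)
k, delta, accepting = smallest_separating_dfa(x, y)
print("Smallest DFA accepting x but not y has", k, "states")

For this illustrative pair it stops at {k = 2}: a machine that simply remembers the last bit read already separates the two strings. (The sketch checks only the direction "accepts {x} but not {y}"; the symmetric check is identical.)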

Harder Ones?

By contrast, the 3n+1 conjecture of Lothar Collatz does not strike us as a candidate. John Conway proved that related forms are undecidable, and we suspect that it will need deep new knowledge of how intuitively multiplicative properties of integers duck-and-weave under the additive structure.

Regarding the Skolem problem we are on the fence: in a sense it involves DFAs, which should make it easy, but even for small {n} the connections to deep approximation problems in number theory become clear.

Here are four others—what do you think?:

{\bullet} Can polynomials modulo {6} compute the majority function? Of course we mean polynomials of low degree and we allow some flex on the meaning of “compute” provided it relates to polynomial-sized constant-depth circuits.

{\bullet} Can we at least prove that SAT cannot be computed in linear time on a Turing Machine? Note that this does not follow from the proof of {\mathsf{NLIN \neq DLIN}} as we discussed here. For all of you who are sure not only that {\mathsf{P \neq NP}} but also that ETH is true: how can we not prove this “trivial” lower bound?

{\bullet} Can we decide whether an {n}-node graph has a triangle in {O(n^2)} time? Or at least in better than matrix-multiplication time? Ken knew a visitor to Buffalo who admitted spending a year on this with little to show for it. A simple matrix-multiplication baseline appears just after this list.

{\bullet} Can we solve the lonely runner problem? Or at least improve the number of runners for which it is known to be true?
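For the triangle question, here is the standard baseline, sketched in Python with NumPy as our own illustration: squaring the adjacency matrix detects a triangle in matrix-multiplication time, and the open question is whether {O(n^2)}, or anything beating matrix multiplication, is possible.

import numpy as np

def has_triangle(A):
    A = np.asarray(A, dtype=np.int64)
    paths2 = A @ A                       # paths2[i][j] = number of length-2 walks i -> j
    # A triangle exists iff some edge (i, j) also has a length-2 walk from i to j.
    return bool(np.any((paths2 > 0) & (A > 0)))

# Example: a triangle on nodes 0, 1, 2 plus a pendant edge 2-3.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
print(has_triangle(A))    # True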

Open Problems

How can we possibly forecast how open an open problem is?

How Joe Traub Beat the Street

August 31, 2015


An insight into the computation of financial information

Joe Traub (Columbia memorial source)

Joseph Traub passed away just a week ago, on August 24th. He is best known for his computer science leadership positions at CMU, Columbia, CSTB, the Journal of Complexity—they all start with “C.” CSTB is the Computer Science and Telecommunications Board of the National Academies of Sciences, Engineering, and Medicine. At each of these he was the head, and for all except Carnegie Mellon he was the first head—the founder.

Today Ken and I wish to highlight one technical result by Traub and his co-workers that you may not know about.
Read more…

Cancellation is a Pain

August 27, 2015


How to avoid the pain of estimating tough sums

Andrew Granville (cricketing source)

Andrew Granville is a number theorist who has written—besides his own terrific research—some beautiful expository papers, especially on analytic number theory.

Today Ken and I wish to talk about his survey paper earlier this year on the size of gaps between consecutive primes.
Read more…

Cryptography And Quicksand

August 14, 2015


A basic question about cryptography that we pretend is not there.


Victor Shoup is one of the top experts in cryptography. He is well known for many things, including the implementation of many of the basic functions of cryptography and a soon-to-be-released book, joint with Dan Boneh, on, what else, cryptography.

Today I want to talk about my recent visit to the Simons Institute in Berkeley where I heard Victor give a special lecture.
Read more…

Changing Definitions

August 10, 2015


How important is “backward compatibility” in math and CS?

Henri Lebesgue (David Darling source)

Henri Lebesgue came the closest of anyone we know to changing the value of a mathematical quantity. Of course he did not do this—it was not like defining π to be 3. What he did was change the accepted definition of integral so that the integral from {a} to {b} of the characteristic function of the rational numbers became a definite {0}. It remains {0} even when integrating over all of {\mathbb{R}}.

Today we talk about changing definitions in mathematics and computer programming and ask when it is important to give up continuity with past practice. Read more…

Four Weddings And A Puzzle

August 2, 2015


An unusual voting problem?

“Four Weddings” is a reality-based TV show that appears in America on the cable channel TLC. Yes, a TV show: not a researcher, not someone who has recently solved a long-standing open problem. Just a TV show.

Today I want to discuss a curious math puzzle that underlies this show.

The show raises an interesting puzzle about voting schemes:
Read more…

Playing Chess With the Devil

July 28, 2015


A new kind of ‘liar’ puzzle using Freestyle chess

Raymond Smullyan, by permission of Vierrae (Katerina Suvorova), source

Raymond Smullyan is probably the world’s greatest expert on the logic of lying and the logic of chess. He is still writing books well into his 10th decade. Last year he published a new textbook, A Beginner’s Guide to Mathematical Logic, and in 2013 a new puzzle book named for Kurt Gödel. His 1979 book The Chess Mysteries of Sherlock Holmes introduced retrograde analysis—taking back moves from positions to tell how they could possibly have arisen—to a wide public.

Today Ken and I wish to talk about whether we can ever play perfect chess—or at least better chess than any one chess program—by combining output from multiple programs that sometimes might “lie.” Read more…
