# The Winner Is…

*Our selections for best results of the year*

Terry Gilliam directed the movie *The Zero Theorem*. This science-fiction film came out just this past year, 2013, but it completes a dystopian trilogy with Gilliam’s amazing 1985 film *Brazil* and his 1995 film *12 Monkeys*. The new film’s lead character, named Qohen Leth, is working on a formula to determine whether life holds meaning.

Today Ken and I want to announce the GLL winners for best results of 2013. No, Qohen Leth is not among the winners.

The name Qohen Leth connects to Qoheleth, who narrates the book Ecclesiastes in the Bible, and Brian Cohen of Monty Python’s *Life of Brian*. Ecclesiastes famously talks about whether life has meaning or “all is vanity,” while Monty Python also made a movie called *The Meaning of Life*. We take no position on whether there is a Zero Theorem in Gilliam’s sense, and we do believe that life holds meaning. But we leave that discussion to another day.

## Best Results?

What makes a result a “best result”? This is a hard question, one that we cannot answer. But let’s try anyway.

We believe mathematical results, usually theorems, can be divided into two categories: those that solve existing open problems and those that pose new problems. The former are easy to appreciate, since many people have already tried and failed; the latter are difficult to appreciate, since by their very nature they lead us into new territory. Freeman Dyson talked about the distinction as “frogs” vs. “birds.” Frogs work down in the muck, where it is difficult to navigate, easy to get stuck, easy to get lost. Birds soar above in the clouds, free to create new vistas, but can miss details and clues that are not visible from above. We have discussed this before—see here.

Ken and I prefer to think of the former, the solvers of open problems, as people who try to set new records. We all can appreciate a runner who breaks the world record for the mile (currently 3:43.13, by Hicham El Guerrouj); the basketball player who scores more points in a game than anyone ever (Wilt Chamberlain with 100 in 1962; Kobe Bryant had 81 in 2006); the chess player who wins more games in a row against grandmasters (Bobby Fischer, 20 games in 1970–71). These achievements are relatively easy to appreciate.

We think of the latter as creators of new types of records—that is, creators of new types of games, which have new records. These by their very nature are hard to judge initially. Who would have guessed that basketball, created in 1891 by James Naismith, would eventually become one of the most popular games in the world? He set out not to revolutionize the sports world, but just to find:

> [A] vigorous indoor game to keep his students occupied and at proper levels of fitness during the long New England winters.

Basketball was clunky at first, with peach baskets from which balls needed to be retrieved manually. It took a while to realize that with hoop and netting the scoring is clear and the game has a beautiful flow.

The above is by way of an explanation of why we have selected “best results” that are all of the record-setting type, solving an open problem, rather than creating a new field or a new class of problems.

## The Winners Are

We took all of your input, added our own ideas, and came up with a list of winners. We apologize to everyone we have left out, but we only have so many slots. So, drum roll please…

**Best Result Overall**: *Yitang Zhang*. For his work on gaps between primes. Who doesn’t like a paper whose abstract is this straightforward?

> It is proved that $\liminf_{n\to\infty}\,(p_{n+1}-p_n) < 7\times 10^{7}$, where $p_n$ is the $n$-th prime.

The remaining winners are likewise terrific, and we selected only results from computer theory:

**Best Result In Cryptography**: *Antoine Joux*. For his paper on the discrete logarithm, and others we covered last spring. His publications page has some new work on cryptosystems that use bilinear *pairing* constructions, which also look highly interesting.
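The discrete-logarithm problem itself is easy to state: given $g$, $h$, and a modulus, find $x$ with $g^x \equiv h$. As an illustration only, here is a minimal sketch of the generic baby-step giant-step method in Python; it is our own toy example of the problem, and it bears no resemblance to the specialized small-characteristic techniques in Joux’s paper:

```python
from math import isqrt

def discrete_log(g, h, p):
    """Baby-step giant-step: find x with g**x == h (mod p), or None.

    Generic O(sqrt(p)) method -- illustrative only, nothing like the
    specialized small-characteristic algorithms in Joux's work.
    """
    n = isqrt(p) + 1
    # Baby steps: remember the exponent j of each power g^j, j = 0..n-1.
    table = {}
    e = 1
    for j in range(n):
        table.setdefault(e, j)
        e = e * g % p
    # Giant steps: search for h * (g^(-n))^i among the baby steps;
    # a hit at (i, j) means g^(i*n + j) == h (mod p).
    factor = pow(g, -n, p)  # modular inverse, Python 3.8+
    gamma = h % p
    for i in range(n):
        if gamma in table:
            return i * n + table[gamma]
        gamma = gamma * factor % p
    return None

x = discrete_log(2, 3, 101)  # solve 2^x = 3 (mod 101)
```

Even this toy version shows why parameter sizes matter: the generic method scales like $\sqrt{p}$, and Joux’s results show that for certain small-characteristic fields the true cost is dramatically lower than anyone expected.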

**Best Result In Complexity Theory**: *Ankit Gupta, Pritish Kamath, Neeraj Kayal, and Ramprasad Saptharishi*. For their work on algebraic circuits of bounded depth. Their main results are stated in a conditional manner that looks complicated at first sight, but their construction has some unconditional consequences. Consider the following extremes for computing the determinant $\mathrm{Det}_n$ of $n \times n$ matrices, which is a polynomial of degree $n$ in $n^2$ variables:

- $\mathrm{Det}_n$ is computable by $\mathrm{poly}(n)$-sized arithmetical circuits that carry out Gaussian elimination, but their depth depends on $n$.
- $\mathrm{Det}_n$ is computable by depth-2 arithmetical circuits, namely a sum of products of $n$ entries each, but there are $n!$ such products, so the circuits have size $2^{\Theta(n\log n)}$.

What they show is that adding $1$ to the depth—making a sum-of-products-of-sums circuit—greatly reduces the size:

- $\mathrm{Det}_n$ is computable by depth-3 arithmetical circuits of size $2^{\tilde{O}(\sqrt{n})}$.

Here $\tilde{O}(\sqrt{n})$ as usual means $\sqrt{n}$ times some polynomial in $\log n$, and as usual with us in posts that give an overview, please see their papers for full details.
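For concreteness, the depth-2 extreme is just the Leibniz expansion of the determinant, a single sum of $n!$ signed products of $n$ entries each:

```latex
\mathrm{Det}_n(X) \;=\; \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} x_{i,\sigma(i)}
```

A $\Sigma\Pi$ circuit that reads this off directly has $n! \approx 2^{\Theta(n\log n)}$ product gates; the surprise is that allowing one more layer of sums collapses the exponent from roughly $n\log n$ to roughly $\sqrt{n}$, up to logarithmic factors.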

**Best Result By Close Friends**: *Atri Rudra and Mary Wootters*. For their paper on list-decoding.

The last category is special, and we hope everyone gets that we are having some fun. We do think their work is great, but they are friends, so we want to be upfront that we *might* be slightly biased. Just a bit.
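For readers new to the area: list decoding relaxes unique decoding by asking for *all* codewords within a given Hamming radius of the received word, not just the single nearest one. A brute-force toy sketch in Python, using a made-up tiny repetition-style code purely for illustration (this is not the Rudra–Wootters construction):

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def encode(msg, gen):
    """Encode a binary message with generator matrix `gen`, mod 2."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*gen))

# A tiny [6,2] binary code with minimum distance d = 3:
# each codeword interleaves 3 copies of the 2 message bits.
GEN = [(1, 0, 1, 0, 1, 0),
       (0, 1, 0, 1, 0, 1)]
CODE = [encode(m, GEN) for m in product((0, 1), repeat=2)]

def list_decode(received, radius):
    """Return every codeword within `radius` of `received` (brute force)."""
    return [c for c in CODE if hamming(c, received) <= radius]

# Unique decoding corrects only floor((d-1)/2) = 1 error here;
# list decoding at a larger radius returns several candidates.
out = list_decode((1, 1, 1, 0, 0, 0), 3)
```

Real list-decoding results are about doing this efficiently for large, high-rate codes near capacity; the brute-force search above only conveys the combinatorial definition.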

## Open Problems

We hope you like the list. I think that every year there are lots of great results, and I look forward to seeing even more next year. The amount of important mathematics and theory is well known to be growing faster than inflation—at least it seems that way.

Regarding the complexity theory work for determinants: is a lower bound of $2^{\Omega(\sqrt{n})}$ known?

Actually I like Atri Rudra’s work. I think it is very important from a big-picture view w.r.t. the complexity of current number-theoretic cryptographic schemes. Comparing with Joux’s work and seeking a parallel with it, the region of list decoding that Rudra considers may be important for nailing down the ultimate complexity of decoding DL systems, and for the $P\neq NP$ problem.

which work are you ref’ing to? do you mean LD, list decodable codes? what is the connection with P=?NP?

The region of list decoding that Atri Rudra studies is shown to be important and falls between poly-time and NP-complete problems. The complexity is not known, and as they mention, decoding in that region is equivalent to solving DL. They show this region most likely has a poly-size list, but the decoding complexity is unknown. As expected, if it turns out to take more than poly time to locate even a single word in the list, we are done separating P from NP.

On the other hand, I still do not understand why the determinant complexity work is important. The hard part (the permanent part) is left as a conjecture.

thx for info… am always interested in new P=?NP angles. Looked for recent papers on his web site & didn’t see this [doesn’t seem to have recent papers there]. Here is a nice survey he wrote on algorithmic coding theory; is it in there somewhere?

It is not explicit. The DLog relation is explicit. Look at Daqing Wan and Qi Cheng’s papers. They have some nice results using relatively easy techniques.

BTW your last year prediction of Mochizuki’s proof being verified by now did not hold up! Any other predictions for the new year?


Could you do a full post on the recent results in algebraic circuit complexity? As an outsider it seems exciting but it’s hard to get to the frontier from scratch.

Please read Jiang’s book, pp. 170–200, theory of the prime table.

GPY, TAO, ZHANG, MAYNARD PAPERS ARE WRONG