# Avi Wins The Knuth Prize

*Wigderson wins the well-deserved Knuth Prize*

From Avi’s 60Fest at IAS

Avi Wigderson is this year’s selection for the ACM/IEEE Donald E. Knuth Prize.

Today Ken and I wish to congratulate Avi and say a bit more.

We thank first the Knuth Prize committee, chaired by Avrim Blum, for their excellent selection. It must be very difficult each year to select one person for this honor. There are so many strong candidates. Of course the Knuth Prize is awarded to individuals for their overall impact in the field. Who could deny that Avi has had vast impact on the whole field?

Here is a nice comment from the “in theory” blog by Luca Trevisan.

> Avi has worked on all aspects of computational complexity theory, and he has had a transformative influence on the way theoretical computer science relates to pure mathematics. I will not repeat what I wrote about his work on the occasion of his 60th birthday here and here. Long-term readers of in theory will remember that I consider him as one of the saints of computational complexity.

Hmmm. Saints of computing—we find that a pretty interesting comment.

## The Three-Chain Race

Since the prize was created in 1996, it has been awarded to:

- 1996 Andrew Yao
- 1997 Leslie Valiant
- 1999 László Lovász
- 2000 Jeffrey Ullman
- 2002 Christos Papadimitriou
- 2003 Miklós Ajtai
- 2005 Mihalis Yannakakis
- 2007 Nancy Lynch
- 2008 Volker Strassen
- 2010 David S. Johnson
- 2011 Ravi Kannan
- 2012 Leonid Levin
- 2013 Gary Miller
- 2014 Richard J. Lipton
- 2015 László Babai
- 2016 Noam Nisan
- 2017 Oded Goldreich
- 2018 Johan Håstad
- 2019 Avi Wigderson

I am now cheering for a number of potential future Knuth Prize winners. The reason is that just now Jeff Ullman and I have become tied for the longest advisor-to-advisee chain all with the Prize. We both have a chain of length two: Ullman to Yannakakis is his, and mine now runs to Avi himself, who was my student at Princeton. The next prize winner could tip the scale toward one of us or the other, or could create another chain of length two. We will see.

## Lower Bounds and Randomness

We thought we would present some cool story about Avi. We have already given some good ones—the Chinese Remainder Theorem story is a favorite. Or the brilliant first theorem he proved while a graduate student: the approximation method for coloring three-colorable $n$-vertex graphs in polynomial time with $O(\sqrt{n})$ colors, when nothing better than a trivial number of colors had been known. That was really beautiful. We could zig-zag through some of his great papers and surveys. We could go on and on.
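For readers who have not seen it, the idea behind that coloring result is simple enough to sketch in code. The following is our illustrative reconstruction, not Avi's original presentation (the helper name `wigderson_color` is ours): in a 3-colorable graph, the neighborhood of any vertex is bipartite, so high-degree vertices' neighborhoods can be retired two fresh colors at a time, and whatever remains has low degree and succumbs to greedy coloring.

```python
import math
from collections import deque

def wigderson_color(n, edges):
    """Properly color a 3-colorable graph on vertices 0..n-1 with O(sqrt(n))
    colors. A sketch: it assumes the input really is 3-colorable (otherwise
    the 2-coloring step below can produce an improper answer)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = [None] * n
    alive = set(range(n))              # not-yet-colored vertices
    threshold = max(1, math.isqrt(n))
    next_color = 0

    def two_color(verts):
        # In a 3-colorable graph the neighborhood of any vertex is bipartite,
        # so BFS 2-colors it with two fresh colors.
        nonlocal next_color
        for s in verts:
            if color[s] is None:
                color[s] = next_color
                queue = deque([s])
                while queue:
                    u = queue.popleft()
                    for w in adj[u] & verts:
                        if color[w] is None:
                            color[w] = 2 * next_color + 1 - color[u]
                            queue.append(w)
        next_color += 2

    # Phase 1: while some vertex has >= sqrt(n) uncolored neighbors,
    # retire its whole neighborhood using two fresh colors.
    while True:
        v = next((u for u in alive if len(adj[u] & alive) >= threshold), None)
        if v is None:
            break
        two_color(adj[v] & alive)
        alive -= adj[v]

    # Phase 2: the surviving graph has max degree < sqrt(n); greedy-color it
    # with a fresh palette, which cannot clash with phase-1 colors.
    for v in alive:
        used = {color[w] for w in adj[v] if color[w] is not None}
        c = next_color
        while c in used:
            c += 1
        color[v] = c
    return color
```

On the complete tripartite graph $K_{3,3,3}$, for example, this yields a proper coloring with three colors; in general phase 1 spends at most about $2\sqrt{n}$ colors and phase 2 roughly $\sqrt{n}$ more.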

What Ken and I find most emblematic is that Avi has been at the forefront of lower bounds in complexity throughout the four decades since I knew him as a student. He and Noam Nisan delineated the “Hardness Versus Randomness” tradeoff at the end of the 1980s, pulling together work that began with Adi Shamir and Andy Yao at the beginning of that decade and was furthered by others including Russell Impagliazzo, with whom Avi later had two famous papers. The main theorem of the former paper connects upper and lower bounds in iconic fashion:

**Theorem 1** *If there are problems in exponential time ($\mathsf{E} = \mathsf{DTIME}(2^{O(n)})$) having no $2^{\epsilon n}$-sized circuits for some fixed $\epsilon > 0$, then $\mathsf{BPP} = \mathsf{P}$.*

Just like the flip side of a coin, this work has also put Avi at the forefront of our understanding of *randomness*. That was the subject of an interview in 2013 with the Polish magazine *Polityka* titled, “How to Rule Randomness Mathematically.” Quoting him in it (Ken’s translation):

> The concept of secrecy on the Internet almost makes no sense without random bits. If you use a procedure, one that I know, I can impersonate you in a perfect way. I can become your master. Secrets must be generated in a random way. Of course, we do not have to do it ourselves—our laptop or phone does the work for us.

The interview goes on to marvel yet also warn how all of this is still founded on properties of prime numbers and the putative hardness of factoring. It looks to implementations of more advanced crypto whether “we will be forced to do it” or not. In all of this, Avi truly represents dedication to *foundations*. He backs that up with tireless advocacy and organizational work for people in our field—extending to the 2018 IAS workshop we covered last June.

## A Symbolic Gift

As a symbol of our affection we dedicate to Avi a simple riff on lower bounds. We start with arguably the oldest lower bound, except that the vocabulary to realize it was lacking 2,000 years ago—and Ken and I weren’t conscious of it until recently. This is that $\pi(x)$—the number of primes $\leq x$—satisfies

$$\pi(x) \;\geq\; \log\log x$$

for $x \geq 2$, with logs to base 2. To show this, argue as in Euclid’s famous proof: *Suppose that*

$$p_1 < p_2 < \cdots < p_n$$

are the first $n$ primes. The Euclid argument proves that the next prime, say $q = p_{n+1}$, must satisfy

$$q \;\leq\; p_1 p_2 \cdots p_n + 1.$$

The property $p_n \leq 2^{2^{n-1}}$ is maintained by induction, because we have $2^0 + 2^1 + \cdots + 2^{n-1} = 2^n - 1$ and thus

$$p_{n+1} \;\leq\; 2^{2^0} \cdot 2^{2^1} \cdots 2^{2^{n-1}} + 1 \;=\; 2^{2^n - 1} + 1 \;\leq\; 2^{2^n}.$$

*So if $n = \lfloor \log\log x \rfloor + 1$, giving $2^{2^{n-1}} \leq x$ in a worst-case manner, then $p_n \leq x$, so $\pi(x) \geq n$. Thus $\pi(x) \geq \log\log x$.*
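The bound $\pi(x) \geq \log\log x$ (logs base 2) is easy to confirm numerically; the sieve sketch below is our own, not part of the post:

```python
import math

# Sieve of Eratosthenes up to X.
X = 10_000
sieve = [True] * (X + 1)
sieve[0] = sieve[1] = False
for p in range(2, math.isqrt(X) + 1):
    if sieve[p]:
        sieve[p * p :: p] = [False] * len(sieve[p * p :: p])

# Tabulate pi(x) as a running count of primes <= x.
pi = [0] * (X + 1)
for x in range(2, X + 1):
    pi[x] = pi[x - 1] + sieve[x]

# The ancient bound holds for every x >= 2 ...
assert all(pi[x] >= math.log2(math.log2(x)) for x in range(2, X + 1))
# ... but it is very weak: at x = 10^4 it certifies only ~3.7 primes,
# while in fact pi(10^4) = 1229.
```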

Now we note from here an exponentially better lower bound that is almost as easy to prove: Take $x \geq 2$ again. Now if $m \leq x$, then it must be possible to write

$$m \;=\; s \cdot t^2$$

with $t \leq \sqrt{x}$ and $s = p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k}$ with each $e_i \leq 1$, where $k = \pi(x)$. This is because $t^2$, the largest square dividing $m$, can be at most $x$, and then $s = m/t^2$ has no squared prime dividing it. How many different such numbers can we make? There are at most $2^{\pi(x)}$ for the possible exponents, multiplied by the initial $\sqrt{x}$ choices of $t$. This means

$$x \;\leq\; 2^{\pi(x)} \sqrt{x},$$

so

$$2^{\pi(x)} \;\geq\; \sqrt{x}.$$

Taking the base-2 log of both sides gives

$$\pi(x) \;\geq\; \tfrac{1}{2} \log x.$$

Again this works for all $x \geq 2$. This lower bound is much better, but it is still far from the truth that $\pi(x) \sim \frac{x}{\ln x}$. The involvement of $\sqrt{x}$ in the derivation, however, harks us back to Avi’s graph-coloring example. So in that spirit let us ask:
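The decomposition $m = s \cdot t^2$ driving this counting argument can itself be exercised in code; the helper `squarefree_part` below is our own sketch:

```python
import math

def squarefree_part(m):
    """Write m = s * t**2 with s squarefree; return (s, t)."""
    s, t, d = m, 1, 2
    while d * d <= s:
        while s % (d * d) == 0:   # pull out square factors d^2
            s //= d * d
            t *= d
        d += 1
    return s, t

x = 20_000
pairs = {squarefree_part(m) for m in range(1, x + 1)}
assert len(pairs) == x                            # m -> (s, t) is injective
assert all(t <= math.isqrt(x) for _, t in pairs)  # since t**2 <= m <= x

# Each s is a product of distinct primes <= x, so there are at most
# 2**pi(x) choices of s and sqrt(x) choices of t.  Hence
# x <= 2**pi(x) * sqrt(x), i.e., pi(x) >= (1/2) * log2(x).
pi_x = sum(1 for p in range(2, x + 1)
           if all(p % d for d in range(2, math.isqrt(p) + 1)))
assert pi_x >= math.log2(x) / 2
```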

> What is the easiest way to prove $\pi(x) \geq c\,\frac{x}{\log x}$ for some constant $c > 0$? Is there a really easy way?

## Open Problems

We congratulate Avi once more. The same Polish interview notes at the end that the Gödel and Nevanlinna Prizes “did not do any evident damage to his personal charm,” nor does his 1956 birthday diminish the energy “to bound down stairs … several breakneck steps at a time.” We are sure the new prize will have no effect on either.


Of course, the Knuth Prize to Avi is incredibly well deserved!

On your open question: There is an elementary lower bound of Omega(x/log x) that I have seen attributed to Chebyshev. It comes from looking at the factorization of N=(2n choose n).

Each prime dividing N contributes to pi(2n).

For every prime p that divides N, one can easily count the index of p in the prime factorization of N as follows:

Consider the numerator of N first: The index of p in the prime factorization of (2n)! is [2n/p]+[2n/p^2] + … + [2n/p^k] where p^k is the largest power of p that is at most 2n.

Now consider the denominator (n!)^2: There the index of p is

2([n/p]+[n/p^2] + … + [n/p^k]) where the last term may be 0.

Therefore the index of p in the prime factorization of N is:

([2n/p]-2[n/p])+([2n/p^2]-2[n/p^2]) + … + ([2n/p^k]-2[n/p^k]).

It is easy to see that each term is at most 1 so the index is at most k.

Therefore the largest prime power dividing N is at most 2n.

It follows that pi(2n) is at least log_(2n) N.

Now – keeping things purely elementary – we have that N=(2n choose n) is the largest of the 2n+1 binomial coefficients (2n choose i) whose sum is 2^(2n). Hence

N is at least 2^(2n)/(2n+1) which is larger than 2^(2n)/(2n)^2.

Therefore pi(2n) is more than (2n)/log_2 (2n) – 2. []

Neat—thanks, Paul. There still might be room on the quality/ease-of-proof tradeoff to insert a point between $\frac{1}{2}\log x$ and $\Omega(\frac{x}{\log x})$.
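Paul's Legendre-formula argument is also easy to confirm numerically; here is a quick sketch of ours (the helper name `index_in_binomial` is our own):

```python
import math

def index_in_binomial(p, n):
    """Exponent of prime p in C(2n, n), via Legendre's formula:
    sum over j of ([2n/p^j] - 2*[n/p^j])."""
    e, pj = 0, p
    while pj <= 2 * n:
        e += (2 * n) // pj - 2 * (n // pj)
        pj *= p
    return e

n = 500
primes = [p for p in range(2, 2 * n + 1)
          if all(p % d for d in range(2, math.isqrt(p) + 1))]

# These exponents really do factor N = C(2n, n) ...
assert math.prod(p ** index_in_binomial(p, n)
                 for p in primes) == math.comb(2 * n, n)
# ... and each term of Legendre's sum is 0 or 1, so every prime power
# dividing N is at most 2n:
assert all(p ** index_in_binomial(p, n) <= 2 * n for p in primes)
# Combined with N >= 2^(2n)/(2n+1), this gives pi(2n) > 2n/log2(2n) - 2:
assert len(primes) > 2 * n / math.log2(2 * n) - 2
```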