
Winner Of 2018 Knuth Prize Is:

August 16, 2018

A great choice

Cropped from 2016 KTH grant news source

Johan Håstad is the winner of the 2018 Donald E. Knuth Prize. We were going to keep you in suspense, but one of our “blog invariants” is to lead with the name(s) of those who are featured. He is most deserving. And he has proved many wonderful theorems.

Today the entire GLL editorial staff would like to thank the committee for their great selection.

They were Allan Borodin, Alan Frieze, Avrim Blum, Shafi Goldwasser, Noam Nisan, and Shanghua Teng (chair). It must be hard to select a Knuth Prize winner, because there are so many strong candidates. So many. This year’s choice is an excellent one.

The Knuth Prize is “for outstanding contributions to the foundations of computer science.” The description used to mention that educational contributions such as textbooks and students could be considered as part of the impact. Usually one thinks of “education” as being for aspiring students or for the public outside of us researchers. But it strikes us that Johan has been pre-eminent for educating many currently within the field on what to pursue.

En passant, we are glad to announce that Harvey Friedman has today posted a full draft paper, “Tangible Mathematical Incompleteness of ZFC,” which we previewed. We will be giving it further coverage with his continued expository assistance.

Being Best Possible

We draw our feeling on Johan’s role from the first two paragraphs about his “transformative techniques” in the long-form Knuth Prize citation. It first describes his famous 1986 “Switching Lemma” for lower bounds on the parity function. The first super-polynomial lower bounds on constant-depth Boolean circuits (of unbounded fan-in) had been shown in 1981 by Merrick Furst, James Saxe, and Mike Sipser. Andy Yao in 1985 obtained exponential size lower bounds that were strong enough to give oracles separating the polynomial hierarchy from polynomial space. But Johan sharpened the exponent using what has remained the best technique—even this 2018 extension tells us so.

His lemma concerns how a random assignment to many of the variables of a DNF causes it to become much simpler. See here for a modern explanation of the details. His original proof was a great achievement through its handling of the probabilistic method. The details were quite delicate because of the need to control certain technical issues. In any probabilistic proof one must be careful not to destroy independence, since keeping certain events independent is vital to make such proofs work. Johan’s original proof worked hard at this. More recent proofs have been able to skirt some of the vicissitudes, but they all prove the following pretty result:

Theorem 1 Any Boolean circuit of depth {k+1} for parity requires size at least

\displaystyle  2^{hn^{1/k}}

for some positive constant {h}.
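To make the random-restriction setup concrete, here is a toy Python sketch of a restriction acting on a DNF. This illustrates only the mechanism that the Switching Lemma quantifies, not Håstad's argument; the representation of terms as variable-to-bit maps and the parameter choices are our own assumptions.

```python
import random

def random_restriction(n, p, rng):
    """Each of the n variables is left free ('*') with probability p,
    and otherwise fixed to 0 or 1 uniformly at random."""
    return {v: ('*' if rng.random() < p else rng.randint(0, 1))
            for v in range(n)}

def restrict_dnf(terms, rho):
    """Apply restriction rho to a DNF given as a list of terms, each term
    a dict {var: required_bit}. Returns 'ONE' if some term is fully
    satisfied (the DNF collapses to the constant 1); otherwise returns
    the surviving, shrunken terms."""
    new_terms = []
    for term in terms:
        shrunk = {}
        killed = False
        for v, b in term.items():
            if rho[v] == '*':
                shrunk[v] = b          # literal survives, still free
            elif rho[v] != b:
                killed = True          # literal falsified: term drops out
                break
            # rho[v] == b: literal satisfied, simply disappears
        if killed:
            continue
        if not shrunk:
            return 'ONE'               # every literal satisfied: DNF is 1
        new_terms.append(shrunk)
    return new_terms

# Demo: a random width-6 DNF on 60 variables, hit by a restriction that
# leaves few variables free. Typically the result is a constant or a
# handful of very short terms -- the collapse the lemma makes precise.
rng = random.Random(2018)
n, w = 60, 6
dnf = [{v: rng.randint(0, 1) for v in rng.sample(range(n), w)}
       for _ in range(40)]
out = restrict_dnf(dnf, random_restriction(n, 1.0 / (2 * w), rng))
print(out if out == 'ONE' else max((len(t) for t in out), default=0))
```

The lemma's content is that, for suitable p, the restricted DNF can with high probability be expressed as a shallow decision tree (hence also as a narrow CNF), which is what allows the depth reduction in the parity lower bound.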

The citation’s next paragraph describes Johan’s relation to the PCP Theorem. Now he is absent from that Wikipedia page and from a popular illustrated history. But he refined it to plutonium. This did not come suddenly with the 1996 paper, “Clique is hard to approximate within {n^{1-\epsilon}}” and its Gödel Prize-winning 1997 successor, “Some Optimal Inapproximability Results” mentioned in the citation. I (Ken adding to what Dick wrote) recollect associating ‘Håstad’ to the importance of “free bits” earlier in the 1990s and the influence of his FOCS 1993 paper, “The Shrinkage Exponent is 2.” This memory may include a conflation of the influence on Håstad of Ran Raz’s STOC 1995 paper on the parallel repetition theorem. In any event, the power of finding best-possible results shines through in the citation:

As complexity-theoretical breakthroughs, Håstad constructed almost optimal PCPs, where optimality holds with respect to parameters such as amortized free-bit complexity and total number of queries. He then established optimal “approximation resistance” of various constraint satisfaction problems—namely that one cannot do better in terms of worst-case performance ratio than the basic input-oblivious algorithm that simply picks a random assignment to the variables. These PCPs led to optimal inapproximability results for MaxClique, MaxLin2, and Max3SAT as well as to the best known hardness results for approximating other optimization problems such as MaxCut. His proofs introduced a treasure trove of ideas—in particular the Fourier analytic techniques—that has influenced nearly all subsequent work in hardness of approximation.
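The “basic input-oblivious algorithm” in the citation is easy to see in action for Max3SAT: a uniformly random assignment satisfies each clause on three distinct variables with probability exactly {7/8}, and Håstad proved that no polynomial-time algorithm beats this ratio unless P = NP. Here is a small exact-expectation check (the clause encoding as signed 1-indexed integers is our own convention, as in DIMACS format):

```python
from itertools import product
from fractions import Fraction

def expected_satisfied(clauses, n):
    """Exact expected number of clauses satisfied by a uniformly random
    assignment, computed by enumerating all 2^n assignments (toy sizes
    only). A clause is a list of signed literals: +v means x_v, -v means
    NOT x_v, with variables 1-indexed so the sign is meaningful."""
    total = Fraction(0)
    for bits in product([0, 1], repeat=n):
        total += sum(
            any((bits[abs(l) - 1] == 1) == (l > 0) for l in clause)
            for clause in clauses
        )
    return total / 2 ** n

# Three 3-clauses, each on distinct variables: each is satisfied with
# probability 7/8, so expectation is 3 * 7/8 = 21/8 by linearity.
clauses = [[1, 2, 3], [-1, 2, -4], [1, -3, 4]]
print(expected_satisfied(clauses, 4))  # → 21/8
```

This guarantee needs no look at the input beyond counting clauses, which is what makes Håstad's matching {7/8 + \epsilon} hardness result an “approximation resistance” statement.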

The citation’s third paragraph notes his role in proving the polynomial-time equivalence of one-way functions and pseudorandom generators. The history is that Russell Impagliazzo, Leonid Levin, and Mike Luby proved this first for nonuniform circuits—I remember a seminar talk by Russ at Cornell in late 1986 or 1987 when this was still in process. Then Johan for STOC 1990 saw how to push it to work in the uniform case.

Open Problems

What more is known about the constant {h} in the lower bounds for parity, as {k} grows? We find {h \geq 0.143781} (for all {k}) here, while a simple upper bound asymptotic to {1} is easy to see, as done here.
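For a feel of what the bound {2^{hn^{1/k}}} gives numerically with the quoted constant, here is a quick evaluation. This is only a sanity check on the formula as stated above; the constant {h = 0.143781} is taken from the post's cited lower bound, which holds for all {k}.

```python
# Evaluate the depth-(k+1) parity size lower bound 2^{h * n^{1/k}}
# using the constant h >= 0.143781 quoted in the post.
H = 0.143781

def parity_lower_bound(n, k):
    """Numerical value of the lower bound 2^{H * n^(1/k)}."""
    return 2.0 ** (H * n ** (1.0 / k))

# The bound decays quickly as depth grows: for n = 10^6 variables,
# depth 3 (k = 2) gives an astronomically larger bound than depth 4.
for k in (2, 3, 4):
    print(k, parity_lower_bound(10 ** 6, k))
```

The printout makes vivid how much sharper small-depth bounds are, and why any improvement to {h} (or a matching upper bound) matters most at moderate {k}.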

Johan recently pushed the lower bounds on parity in a different direction, improving the maximum {\epsilon} such that circuits of depth {k} and size {S} can achieve agreement {\frac{1}{2} + \epsilon} with the parity function. What further usefulness in complexity theory might pushing this have? In any event we all thank him for his beautiful work and congratulate him on winning this year’s Knuth Prize.

[some word changes]

One Comment
  1. August 20, 2018 12:32 pm

    Johan is great! Congratulations!
