# Congratulations, Noam


*The winner of the 2016 ACM-IEEE Knuth Prize*


Noam Nisan has been one of the leaders in computational complexity and algorithms for many years. He has just been named the winner of the 2016 Donald E. Knuth Prize.

Today we congratulate Noam and highlight some of his recent contributions to algorithmic economics and complexity.

I (Dick) think that Noam missed out on proper recognition in the past. I am thrilled that he is finally recognized for his brilliant work. I do differ with the impression given by ACM’s headline. His game theory work is clearly important, but his early work on pseudorandom generators was of first order. And personally I wonder if that almost alone would be enough to argue that Noam is one of the great leaders in complexity theory—let alone the work on communication complexity and interactive proofs.

I (Ken) had forgotten that he was the ‘N’ in the LFKN paper with Carsten Lund, Lance Fortnow, and Howard Karloff proving that interactive proofs can simulate the polynomial hierarchy, which presaged the proof of $\mathsf{IP} = \mathsf{PSPACE}$. The first sentence of the ACM article does list these three complexity areas before mentioning game theory. There are others: Noam co-authored with Nathan Linial and Yishay Mansour a seminal 1993 paper that promoted Fourier analysis to study Boolean circuits, and with Mario Szegedy a 1994 paper on the polynomial method that impacted a host of topics including Boolean function sensitivity, decision trees, and quantum query lower bounds.

## Walking Around and Back

In what Lance once termed “walking away” from complexity, Noam in the mid-1990s became interested in algorithmic economics. Really we should say it is the interface between complexity and economics. We can say that theory—in particular mathematical game theory—frames much of this interface, but the business end of it is *social*. The genius we note in his seminal paper with Amir Ronen, titled “Algorithmic Mechanism Design,” is in mapping the social elements to structures in distributed computation and communication theory (routing and protocols as well as communication complexity) that were already developed. This paper was one of three jointly awarded the 2012 Gödel Prize. Regarding it, the Knuth citation says:

A mechanism is an algorithm or protocol that is explicitly designed so that rational participants, motivated purely by their self-interest, will achieve the designer’s goals. This is of paramount importance in the age of the Internet, with many applications from auctions to network routing protocols. Nisan has designed some of the most effective mechanisms by providing the right incentives to the players. He has also shown that in a variety of environments there is a tradeoff between economic efficiency and algorithmic efficiency.

The last part of this addresses a question of more general import to us in theory:

How much impact can complexity lower bounds and conditional hardness relationships have in the real world?

The impact is helped along when the lower bounds come from communication complexity. Bounds on the number of bits that must be communicated to achieve a common goal (with high probability) are generally more concrete and easier to establish than those in the closed system of classical complexity theory, whose internal nature makes the most central bound questions asymptotic.

Noam long ago co-authored the textbook *Communication Complexity* with Eyal Kushilevitz, and also co-edited the 2007 text *Algorithmic Game Theory* with Tim Roughgarden, Éva Tardos, and Georgia Tech’s own Vijay Vazirani. The whole package of great breadth as well as depth in foundational areas, imbued with computer science education, ably fits the Knuth Prize profile. But we are going to make a forward-looking point by exploring how some of his recent joint papers reach a synthesis that includes some implications for complexity.

## Four Recent Papers

“Public Projects, Boolean Functions, and the Borders of Border’s Theorem,” with Parikshit Gopalan and Tim Roughgarden, in the 2015 ACM Conference on Economics and Computation. Border’s Theorem is an instance where the feasible space of an exponential-sized linear program has a natural projection onto the space of a polynomial-sized one. The linear programs express allocations of $n$-many bid-for goods to bidders according to their expressed and private valuations of those goods. The paper shows complexity obstacles to extending this nice exponential-to-polynomial property to other economic situations.

The overall subject can be read about in two sets of lecture notes by Roughgarden. We adapt an example from section 2 of this lecture to give the flavor: Suppose you have 2 items for sale and one prospective buyer such that for each item there is a 50-50 chance the buyer is willing to pay $1 for it, but you lose $1 for each item you fail to sell (call it shelving cost). If you price each item at $1, you will expect to make one sale, reaping $1 but also losing $1 on the unsold item, for zero expected net revenue. That’s no better than if you gave each away for $0. If you set the price in-between, say $0.50, you will expect a net loss—because the buyer’s probability distribution of value is discrete not smooth. But if you bundle them at 2-for-$1, you will expect to sell the bundle three-fourths of the time, for expected net revenue ($1 + $1 + $1 – $2)/4 = $0.25, which is positive.

Our change from Roughgarden’s prices and values of $1,$2 is meant to convey that problems about (random) Boolean functions and set systems are lurking here. Suppose we have $n$ items; what is the optimal bundling strategy? Grouping them into pairs at $1-for-2 expects to net the same $0.125 per item as above, whereas one big bundle at $2-for-4 does worse. But grouping $1-for-3 expects to reap $1 seven-eighths of the time and lose $3 one-eighth of the time, for $4/8 = $0.50 net per group of 3 items, which gives a slightly better $0.167 per item. Is this optimal? We could write a big linear program to tell, and likewise in situations with other prices, distributions of buyer values, and conditions.
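For concreteness, the bundling arithmetic above can be checked by enumerating the buyer’s possible value vectors. This is an illustrative sketch of ours, not from the paper, under the model stated above: each item is independently worth $1 to the buyer with probability 1/2, the buyer takes a bundle exactly when its total value meets the asking price, and every unsold item costs the seller $1 in shelving cost.

```python
from itertools import product
from fractions import Fraction

def expected_net(k: int, price: int) -> Fraction:
    """Expected net revenue for one bundle of k items at the given price.

    Each item is worth $1 to the buyer with probability 1/2 (independently);
    the buyer takes the bundle iff its total value >= price, and each of the
    k items costs $1 in shelving if the bundle goes unsold.
    """
    total = Fraction(0)
    for values in product((0, 1), repeat=k):
        if sum(values) >= price:
            total += price          # bundle sells at the asking price
        else:
            total -= k              # all k items go unsold
    return total / 2**k

# Per-item net revenue for the bundlings discussed in the text.
for k, price in [(1, 1), (2, 1), (3, 1)]:
    print(f"${price}-for-{k}: {expected_net(k, price) / k} per item")
```

Running it reproduces the $0, $1/8, and $1/6 per-item figures from the discussion.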

In several of these situations, they show that a Border-like theorem would put a $\#\mathsf{P}$-hard problem into the polynomial hierarchy and hence, by Toda’s theorem, collapse the hierarchy. A technical tool for later results in the paper is the expectation of a Boolean function $f$ over a random assignment $x$, and for each $i$ the expectation of $f(x)(-1)^{x_i}$, which constitute the zeroth and first-degree coefficients of the Fourier transform of $f$. Certain $\#\mathsf{P}$-hard problems about vectors of these coefficients are related to the economics problems, so smaller LPs for the latter would trigger the collapse.
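For small $n$, these zeroth- and first-degree coefficients can be computed by brute force. Here is a sketch of ours (not from the paper), using the standard convention that $f$ maps $\{0,1\}^n$ to $\{-1,+1\}$, with 3-bit majority as an illustrative example:

```python
from itertools import product

def low_degree_fourier(f, n):
    """Brute-force the degree-0 and degree-1 Fourier coefficients of a
    Boolean function f: {0,1}^n -> {-1,+1}.

    hat_f(empty) = E_x[f(x)]  and  hat_f({i}) = E_x[f(x) * (-1)^{x_i}],
    with x uniform over {0,1}^n.
    """
    N = 2 ** n
    inputs = list(product((0, 1), repeat=n))
    const = sum(f(x) for x in inputs) / N
    level1 = [sum(f(x) * (-1) ** x[i] for x in inputs) / N for i in range(n)]
    return const, level1

# Example: majority of 3 bits, written in the {-1,+1} output convention.
maj = lambda x: 1 if sum(x) >= 2 else -1
c0, c1 = low_degree_fourier(maj, 3)
```

For majority the constant coefficient is 0 and, by symmetry, the three degree-1 coefficients are equal.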

“Networks of Complements,” with Moshe Babaioff and Liad Blumrosen, in ICALP 2016. Imagine a situation where buyers will only buy pairs of items in bundles. We can make an undirected graph whose nodes are the items and each edge is weighted by the price a buyer is willing to pay for that pair. Graph theory supplies tools to analyze the resulting markets, while behavior for special kinds of graphs may yield insights about the structure of problems involving them.

“A Stable Marriage Requires Communication,” with Yannai Gonczarowski, Rafail Ostrovsky, and Will Rosenbaum, in SODA 2015. The input consists of $2n$ permutations of $\{1,\dots,n\}$: a preference order for each of $n$ women over the $n$ men, and one for each of $n$ men over the women. A bijection $f$ from women to men is *unstable* if there exist $i,j$ with $f(i) \neq j$ such that woman $i$ prefers man $j$ to her own husband $f(i)$ and man $j$ likewise prefers woman $i$ to his own wife $f^{-1}(j)$; otherwise it is *stable*. There always exist “marriage functions” that are stable, but the problem is to find one.
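The instability condition translates directly into code. Here is a small checker of ours, a sketch under the assumption that preferences are given as rank matrices (lower rank means more preferred):

```python
def is_stable(matching, women_rank, men_rank):
    """Check stability: matching[i] is woman i's husband (a man id).

    women_rank[i][j] = woman i's rank of man j (lower = preferred);
    men_rank[j][i]   = man j's rank of woman i. The matching is unstable
    iff some woman i and man j, not married to each other, each prefer
    the other to their own spouse.
    """
    n = len(matching)
    wife = {matching[i]: i for i in range(n)}   # invert the bijection
    for i in range(n):
        for j in range(n):
            if matching[i] == j:
                continue
            i_prefers_j = women_rank[i][j] < women_rank[i][matching[i]]
            j_prefers_i = men_rank[j][i] < men_rank[j][wife[j]]
            if i_prefers_j and j_prefers_i:
                return False                    # (i, j) is a blocking pair
    return True
```

With two women and two men who all rank person 0 of the other side first, the identity matching is stable while the swapped matching is not.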

This is a classic example of a problem whose best-known algorithms run in time order-of $n^2$ in the worst case, but order-of $n \log n$ time in the average case for preference lists generated uniformly at random. This random model may not reflect the “practical case,” however, so the worst-case time merits scrutiny for possible improvement, perhaps under some helpful conditions. One avenue of improvement is not to require reading the whole input, whose size when the permutations are listed out is already order-of $n^2$ (ignoring a log factor by treating elements of $\{1,\dots,n\}$ as unit size), but rather to allow random access to selected bits. These queries can be comparisons or more general. The paper finds a powerful new reduction from the communication complexity of set disjointness (if Alice and Bob each hold an $n$-bit binary string, verify there is no index where both strings have a 1) into preference-based problems, proving that the number of queries must be $\Omega(n^2)$ even for randomized algorithms and even with certain extra helpful conditions.
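The classic worst-case $O(n^2)$ algorithm here is Gale–Shapley deferred acceptance. A minimal sketch (the list-of-preferences input convention is our choice):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred-acceptance algorithm (Gale-Shapley, 1962).

    men_prefs[m] is man m's preference list of women, best first;
    women_prefs[w] likewise over men. Returns a stable matching as a
    dict mapping each woman to her husband, in O(n^2) time.
    """
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list (lower is better)
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_choice = [0] * n          # next woman each man will propose to
    husband = {}                   # woman -> current fiance
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in husband:
            husband[w] = m
        elif rank[w][m] < rank[w][husband[w]]:
            free_men.append(husband[w])   # w trades up; old fiance is free
            husband[w] = m
        else:
            free_men.append(m)            # w rejects m; he stays free
    return husband

# 3 men and 3 women with simple (hypothetical) preference lists.
men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
match = gale_shapley(men, women)
```

Each man proposes at most $n$ times, giving the $O(n^2)$ bound; the lower bound discussed above says this cannot be beaten even with random access to the input.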

“Smooth Boolean Functions are Easy: Efficient Algorithms for Low-Sensitivity Functions,” with Parikshit, Rocco Servedio, Kunal Talwar, and Avi Wigderson, in the 2016 Innovations conference.

The *sensitivity* $s(f)$ of an $n$-variable Boolean function $f$ is the maximum over inputs $x$ of the number of $y$ adjacent to $x$ such that $f(y) \neq f(x)$. Here adjacent means $x$ and $y$ differ in one bit. The functions are *smooth* if $s(f) = O(\log n)$, or variously, if $s(f) = (\log n)^{O(1)}$. One way to achieve $s(f) \leq d$ is for $f$ to be computed by a depth-$d$ decision tree, since along each branch determined by an input $x$ there are at most $d$ bits on which the branch’s assigned value depends. Noam, first solo and then in the 1994 paper with Szegedy, conjectured a kind of converse: there is a universal constant $c$ such that every $f$ has a decision tree of depth $s(f)^c$.
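For small $n$, sensitivity can be computed directly from this definition. A brute-force sketch (the example functions are ours):

```python
from itertools import product

def sensitivity(f, n):
    """Brute-force sensitivity: the maximum, over all inputs x in {0,1}^n,
    of the number of coordinates whose flip changes the value f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n)
        )
        best = max(best, flips)
    return best

# AND of 3 bits is fully sensitive at the all-ones input; parity is fully
# sensitive at every input; a constant function has sensitivity 0.
and3 = lambda x: int(all(x))
xor3 = lambda x: sum(x) % 2
```

Both examples hit the maximum possible sensitivity $n$, so they are not smooth in the sense above.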

A decision-tree upper bound is the strongest of these kinds: a depth-$d$ decision tree converts into a Boolean formula of depth $O(d)$, and every formula is a circuit. Although belief in Noam’s conjecture has grown, no one had determined good upper bounds even for circuits and formulas. The paper gives circuits of size $n^{O(s)}$ and formulas of depth $O(s \log n)$ with somewhat laxer size. Size $n^{O(s)}$ is scarcely the same as a polynomial in $n$, but it’s a start, and when $s$ is polylog, the size is quasipolynomial and the depth is polylog. This most-recent paper will hopefully jump-start work on proving the conjecture and further fundamental work on Boolean functions.

Of course we have given only a cursory overview, intended to attract you to examine the papers in greater detail; we have skipped some of their major points, which readers are welcome to raise in the comments.

## Open Problems

We congratulate Noam warmly on the prize and his research achievements.

We can announce one new thing: LEMONADE. No, it is not Beyoncé’s latest album, but rather a new podcast series by Dick and Kathryn on positive approaches to problematic issues in academia, culture, and life in general.

