Two more tragic losses coming before a greater tragedy

 Composite of crops from src1, src2

Michael Cohen and Vladimir Voevodsky were at different stages of their careers. Cohen was a graduate student at MIT and was visiting the Simons Institute in Berkeley. He passed away suddenly a week ago Monday, on a day he was scheduled to give a talk. Voevodsky won a Fields Medal in 2002 and was a professor at the Institute for Advanced Study in Princeton. He passed away Saturday, also unexpectedly.

Today we join those grieving both losses.

We are writing this amid the greater horror in Las Vegas. Dick and I offer our condolences and more, but the condolences that two of us can give seem to fade—they do not “scale up.” Hence we feel that the best we can do is talk about Cohen’s and Voevodsky’s roles in our scientific communities and some of what they contributed. That is a gesture of peace and serenity. It may not overcome the darkness, but some such gesture seems needed so that we all might overcome it.

## Cohen’s Work

Michael Cohen had already worked with a wide variety of people in over twenty joint papers. He also had two papers all by himself: one at SODA 2016 titled “Nearly Tight Oblivious Subspace Embeddings by Trace Inequalities,” and one at FOCS 2016 titled “Ramanujan Graphs in Polynomial Time.”

A common theme through much of this work was wizardry with special kinds of matrices. They included Laplacian matrices in which every column sums to ${0}$ and only the diagonal entries can be positive. You can get one from a directed graph by negating the entries of its adjacency matrix and putting the in-degrees on the diagonal. One can further demand that the rows sum to zero, which happens for our graph if each node’s in-degree equals its out-degree. This is automatic for undirected graphs. As noted in this paper:

While these recent algorithmic approaches have been very successful at obtaining algorithms running in close to linear time for undirected graphs, the directed case has conspicuously lagged its undirected counterpart. With a small number of exceptions involving graphs with particularly nice properties and a line of research in using Laplacian system solvers inside interior point methods for linear programming […], the results in this line of research have centered almost entirely on the spectral theory of undirected graphs.

The paper, titled “Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More,” was joint with Jonathan Kelner, John Peebles, and Adrian Vladu of MIT, Aaron Sidford of Stanford, and Richard Peng of Georgia Tech, and also came out at FOCS 2016. In the case of symmetric matrices ${A}$, needing only that ${A[i,i] \geq \sum_{j\neq i} |A[i,j]|}$, he was part of a bigger team including Peng and Gary Miller of CMU that found the best-known time for solving ${Ax = b}$. That paper came out at STOC 2014.
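The directed-Laplacian recipe above is simple enough to sketch in code. The following is our own illustration of the definition, not code from any of these papers:

```python
# Our sketch of the directed-Laplacian construction described above:
# negate the adjacency matrix and put in-degrees on the diagonal,
# so that every column sums to 0.

def directed_laplacian(adj):
    """adj[i][j] = 1 if there is an edge i -> j; no self-loops."""
    n = len(adj)
    L = [[-adj[i][j] for j in range(n)] for i in range(n)]
    for j in range(n):
        indeg = sum(adj[i][j] for i in range(n))
        L[j][j] = indeg
    return L

# Directed 3-cycle: 0 -> 1 -> 2 -> 0.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
L = directed_laplacian(adj)

# Every column sums to 0 by construction.
assert all(sum(L[i][j] for i in range(3)) == 0 for j in range(3))
# Here every node's in-degree equals its out-degree, so the rows
# sum to 0 as well, meeting the stronger demand mentioned above.
assert all(sum(row) == 0 for row in L)
```

For this Eulerian digraph the result is the same matrix one would get from the corresponding undirected cycle construction, which is why the undirected case is automatic.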

Thus from early on he was working with a great many people in the community. This has been noted in tribute posts by Scott Aaronson, by Sébastien Bubeck, by Luca Trevisan, by former colleagues at Microsoft Research where Cohen spent this past summer, and by Lance Fortnow. The post by Scott includes communications from Cohen’s parents and information about memorials and donations.

## Constructing Ramanujan Graphs

We’ll talk about Cohen’s paper on Ramanujan graphs in a train of thought that will lead into aspects of Voevodsky’s work. Of course we know Srinivasa Ramanujan was a brilliant Indian mathematician who also died tragically young.

In mathematics we sometimes prove the existence of objects without knowing how to construct them. Sometimes we can prove that a random object works. This is often helpful, but one downside arises when we want different people, given the same problem parameters, to obtain the same object. Randomized algorithms do not usually have a single output that is arrived at with high probability. What we really want is an algorithm that constructs the object.

This has been the story for a long time with expander graphs. They were proved to exist long ago via the probabilistic method. The zig-zag product was a watershed in constructing some kinds of them. The goal is to get these objects constructively with the same parameters or close to them.

A Ramanujan graph is a particular kind of expander with a maximum dose of the spectral-gap condition for expansion. The adjacency matrix ${A}$ of a ${d}$-regular graph has ${d}$ as its largest eigenvalue. It cannot have an eigenvalue less than ${-d}$, and ${-d}$ itself is an eigenvalue if and only if the graph is bipartite (for connected graphs). The graph is Ramanujan if all other eigenvalues have absolute value at most ${2\sqrt{d - 1}}$. This pins the next-largest eigenvalue down to roughly the square root of ${d}$, and by the Alon–Boppana theorem this is asymptotically the largest possible spectral gap.
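As an illustration of the definition (ours, not Cohen’s algorithm), here is a check of the Ramanujan condition on small graphs whose spectra are known in closed form:

```python
import math

# Check the Ramanujan bound |lambda| <= 2*sqrt(d-1) against known spectra.

def is_ramanujan(d, eigenvalues):
    """eigenvalues: the full adjacency spectrum of a connected d-regular graph.
    The trivial eigenvalues d (and -d, if the graph is bipartite) are exempt
    from the bound."""
    bound = 2 * math.sqrt(d - 1)
    nontrivial = [e for e in eigenvalues if abs(abs(e) - d) > 1e-9]
    return all(abs(e) <= bound + 1e-9 for e in nontrivial)

# The complete graph K_4 is 3-regular with spectrum {3, -1, -1, -1}:
# |-1| = 1 <= 2*sqrt(2), so it is Ramanujan.
assert is_ramanujan(3, [3, -1, -1, -1])

# The Petersen graph is 3-regular with spectrum {3, 1 (x5), -2 (x4)}:
# |-2| = 2 <= 2*sqrt(2) ~ 2.83, so it is Ramanujan as well.
assert is_ramanujan(3, [3] + [1] * 5 + [-2] * 4)
```

The small tolerances guard against floating-point error; the exemption for ${\pm d}$ matches the trivial and bipartite eigenvalues noted above.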

Again, a randomly chosen ${d}$-regular ${n}$-node graph is believed, and observed numerically, to be Ramanujan with substantial probability; what Joel Friedman proved is that with high probability every nontrivial eigenvalue is at most ${2\sqrt{d-1} + \epsilon}$. Adam Marcus, Dan Spielman, and Nikhil Srivastava (MSS) proved in 2013 that Ramanujan graphs exist for all ${n}$ and ${d}$ even when required to be bipartite. But can we build one for any ${n}$ and ${d}$? This was not known to be possible in deterministic ${n^{O(1)}}$ time until Cohen’s paper. The main advance was to take a beautiful concept of MSS, trees of degree-${d}$ polynomials with interlacing roots, and improve it so that the requisite trees have polynomial rather than exponential maximum branch length, which governs the running time of the algorithm. The paper well rewards further reading.

What this does is put bipartite Ramanujan graphs onto the list of structures that we can apprehend and use in deterministic polynomial-time algorithms. Thus Cohen added his name to the honor roll of those constructing good expanders and making random objects concrete.

## Constructivity and Voevodsky’s Work

Voevodsky’s work is set against a backdrop where mathematicians do the following over and over again. They start by knowing how to build certain kinds of algebraic structures on, say, differential manifolds or curves. They then want to carry this structure over to more general settings.

Voevodsky won his Fields Medal for this kind of work. He showed how to carry over topological ideas of homotopy from differential manifolds to algebraic manifolds—that is, manifolds defined as zero sets of systems of polynomials. We discussed homotopy and its computational relevance in our own terms here. To quote his 2002 Fields Medal review by Christophe Soulé:

It is quite extraordinary that such a homotopy theory of algebraic manifolds exists at all. In the fifties and sixties, interesting invariants of differentiable manifolds were introduced using algebraic topology. But very few mathematicians anticipated that these “soft” methods would ever be successful for algebraic manifolds. It seems now that any notion in algebraic topology will find a partner in algebraic geometry.

Voevodsky’s medal was also for his proof of a noted conjecture by John Milnor: that a structure of algebraic groups he built on a field ${F}$ of characteristic other than 2, with the algebra taken mod 2, would be isomorphic to the étale cohomology of ${F}$ with coefficients mod 2. Voevodsky overcame difficulty with tools from algebraic ${K}$-theory by developing and systematizing prior ideas of motivic cohomology that, as the review says, “turned out to be more computable.” He later proved the general conjecture for odd prime moduli as well, drawing on work by others in the meantime.

In the most ambitious cases of such “carry-overs,” however, mathematicians are able to prove that the objects needed for the structure exist, but not concretely. It’s not just that the objects cannot be apprehended; often the proofs themselves are not amenable to being algorithmically checked.

To remedy this, Voevodsky delved deeper into constructive mathematics, which aims not to limit knowledge but rather to streamline and solidify it. He built up homotopy type theory (HoTT), which we talked about here. His ideas were programmed in the software system Coq, which grew out of Thierry Coquand’s “calculus of constructions” in partnership with Gérard Huet. Thus he was led to consider the foundations of mathematics as deeply as David Hilbert did a century ago.

## Voevodsky and Foundations

The term “foundations,” which lives in the names of conferences such as FOCS and MFCS, tends to be spoken as an umbrella term for “theory.” We have argued that it ought to mean continued and concerted attention to the core problems in our field like ${\mathsf{P = NP}}$—notwithstanding that many of them are “like” ${\mathsf{P = NP}}$ in the sense of not having budged for decades. But when Voevodsky talked about foundations, he really meant the foundations: how do we know the whole edifice we have built out of proofs—all kinds of proofs—won’t collapse?

We have blogged about Ed Nelson’s attempts to show that Peano Arithmetic is inconsistent. Voevodsky took this possibility seriously. In memorials to Voevodsky on the HoTT Google Group, André Joyal contributed the following:

My first contact with Vladimir and his ideas was at a meeting in Oberwolfach in 2011. He gave a series of talks on constructive mathematics and homotopy theory, framed as a tutorial with the proof assistant Coq. His notion of a contractible object and of an equivalence were striking. I had a hard time understanding his ideas, because they were described very formally. He apparently distrusted informal expressions of mathematical ideas. One evening, he expressed the opinion that Peano arithmetic was inconsistent! He later came to distrust the applications of his ideas to homotopy theory!

Voevodsky indeed gave a talk at IAS titled, “What If Current Foundations of Mathematics Are Inconsistent?” Very controversially, it tried to turn the usual understanding of Kurt Gödel’s Second Incompleteness Theorem on its head, as a vehicle for possibly proving the inconsistency of certain classical first-order theories. Yet he concluded:

In constructive type theory, even if there are inconsistencies, one can still construct reliable proofs using the following “workflow”:

• A problem is formalized.

• A solution is constructed using all kinds of abstract concepts. This is the creative part.

• An algorithm which verifies “reliability” is applied to the constructed solution (e.g., a proof). If this algorithm terminates then we know we have a good solution to the original problem. If not, then we may have to start looking for another solution.
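A toy instance of this workflow in a modern proof assistant (we use Lean for illustration; Voevodsky’s own work used Coq): the statement is the formalized problem, the proof term is the constructed solution, and the kernel’s type checker is the verifying algorithm.

```lean
-- Formalize a (tiny) problem as a statement, construct a solution as a
-- proof term, and let the kernel's type checker play the verifier.

-- The formalized problem: a concrete arithmetic identity.
theorem tiny : 2 + 3 = 5 := rfl

-- A slightly more abstract solution, built from a core library lemma:
theorem right_identity (n : Nat) : n + 0 = n := Nat.add_zero n
```

The point of the workflow is that the verification step is mechanical: even if a grand theory were inconsistent, a proof the checker accepts still certifies the concrete statement it proves.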

Work on this effort will continue. The IAS announcement notes that a memorial workshop is being planned and that more information will be available soon. Update 10/7: The IAS has posted a full obituary, and there are also obituaries in today’s New York Times and Washington Post.

## Open Problems

Again we express our condolences to their families, loved ones, and colleagues, and the same to everyone affected by the horror in Las Vegas.

This is the 750th post on this blog. We had been holding two other ideas for marking this milestone while busy ourselves with papers and much else these past two weeks. Those will still come out in upcoming weeks.