# The World Turned Upside Down

*Some CS reflections for our 700th post*


Lin-Manuel Miranda is both the composer and lyricist of the phenomenal Broadway musical *Hamilton*. A segment of Act I covers the friendship between Alexander Hamilton and Gilbert du Motier, the Marquis de Lafayette. This presages the French co-operation in the 1781 Battle of Yorktown, after which the British forces played the ballad “The World Turned Upside Down” as they surrendered. The musical’s track by the same name has different words and melodies.

Today we discuss some aspects of computing that seem turned upside down from when we first learned and taught them.

Yesterday was halfway between our Fourth of July and France’s Bastille Day, and was also the last day of Miranda performing the lead on-stage with the original *Hamilton* company. They are making recordings of yesterday’s two performances, to be aired at least in part later this year. A month ago, Miranda wrote an op-ed in the New York Times against the illegal (in New York) but prevalent use of “bots” to snap up tickets the moment they become available for later marked-up resale.

This is also the 700th post on this blog. It took until 1920 for a Broadway show of any kind to reach 700 performances. The *Playbill* list of “Long Runs on Broadway” includes any show with 800 or more performances. That mark is within our reach, and our ticket prices will remain eminently reasonable.

## What Has Changed?

This list is just what strikes us now—far from exhaustive—and we invite our readers to add opinions about examples in comments.

### NP-Complete Means Easy

Forty-five years ago, Dick Karp showed how the difficulty of SAT, as represented by NP-completeness, spreads to other natural problems. As the number of complete problems from many areas of science and operations research soared into the thousands by the 1979 publication of Garey and Johnson's *Computers and Intractability*, people came to regard NP-completeness as tantamount to intractability.

Today the flow is in the other direction—as expressed for instance in this talk by Moshe Vardi. Dick Karp himself has been among many in the vanguard—I remember his talk on practical solvability of Hitting-Set problems at the 2008 LiptonFest, and here is a relevant paper. We now reduce problems *to* SAT in order to *solve* them. SAT solvers that work in many cases are big business. In some whole areas the SAT encodings of major problems are well-behaved, as we remarked about rank aggregation and voting theory in the third section of this post from last October. The solvers can even tackle huge problems. Marijn Heule, Oliver Kullmann, and Victor Marek proved that every 2-coloring of the interval {1, 2, ..., 7825} contains a monochromatic Pythagorean triple, via a proof of about 200 terabytes in uncompressed length.
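As an illustration of reducing *to* SAT, here is a toy Python sketch of the kind of encoding behind the Pythagorean-triples result: one Boolean variable per number in {1..n}, truth values standing for the two colors, and two clauses per triple. The brute-force search is only a stand-in for a real solver and is hopeless beyond tiny n; Heule, Kullmann, and Marek used cube-and-conquer on industrial solvers.

```python
from itertools import product

def pythagorean_triples(n):
    """All triples (a, b, c) with a^2 + b^2 = c^2 and a < b < c <= n."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a + 1, n + 1)
            for c in range(b + 1, n + 1)
            if a * a + b * b == c * c]

def triples_to_cnf(n):
    """Encode '2-color {1..n} with no monochromatic Pythagorean triple'
    as CNF.  Variable i is True if number i is red, False if blue.
    Each triple (a, b, c) forbids all-red and all-blue."""
    cnf = []
    for a, b, c in pythagorean_triples(n):
        cnf.append([-a, -b, -c])  # not all red
        cnf.append([a, b, c])     # not all blue
    return cnf

def brute_force_sat(cnf, n):
    """Exhaustive search -- fine for tiny n, hopeless at n = 7825."""
    for bits in product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in cnf):
            return assign
    return None
```

For n = 17 the encoding has just five triples and ten clauses, and a valid 2-coloring exists; the first failure point, 7825, is far beyond exhaustive search.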

### Quadratic Time Means Hard

Quadratic time is notionally on the low end of polynomial time, and “polynomial time” has long been used as a synonym for “easy.” But as the amount of data we can and need to handle has mushroomed, the difference in scaling between *quasi-linear* and *quadratic* is felt more and more. This difference has even been argued for cryptographic security. A particular definition of quasi-linear time is time n·(log n)^{O(1)}, as named by Claus Schnorr for his theorem on quasi-linear completeness of SAT; see also this.
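A back-of-the-envelope comparison shows why the gap is felt at modern data sizes; assuming a nominal 10^9 simple operations per second:

```python
import math

def predicted_seconds(ops, ops_per_second=1e9):
    """Wall-clock estimate at a nominal rate of simple operations."""
    return ops / ops_per_second

n = 10**8  # a hundred million items -- routine for modern data sets
quasi_linear_ops = n * math.log2(n)  # ~ sorting
quadratic_ops = n * n                # ~ all-pairs comparison

print(predicted_seconds(quasi_linear_ops))  # a few seconds
print(predicted_seconds(quadratic_ops))     # ~10^7 seconds: months
```

At this scale the quasi-linear pass finishes before you notice, while the quadratic one outlasts your patience by orders of magnitude.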

In genomics the quadratic time of algorithms for full edit-distance measures is felt enough to warrant approximate methods, as we covered in our memorial a year ago for Alberto Apostolico. This also puts meaning behind theoretical evidence that quadratic time for edit distance cannot be improved.
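The quadratic cost comes from the classic dynamic program, which fills an m-by-n table; a minimal sketch:

```python
def edit_distance(s, t):
    """Classic Levenshtein DP: O(len(s) * len(t)) time -- quadratic
    when the strings are comparably long, as genomic sequences are.
    Keeps only two table rows, so space is linear."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n]
```

On two sequences of a hundred million bases each, the 10^16 table cells are exactly the quadratic wall the approximate methods are built to avoid.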

These two items seem to contradict each other, but they point up a difference in scale between data and logical control. Often a thousand data points are nothing. A formula with a thousand clauses can say a lot.

### AI Winter Into Spring

My first doctoral student had been working on neural networks before I became his advisor in 1991, and I remember the feeling of their being under a cloud. The so-called AI Winter traced in part to lower bounds shown against certain shallow neural nets in the 1969 book *Perceptrons* by Marvin Minsky and Seymour Papert. We discussed complexity aspects of this in our memorial of Minsky last January.

Since then what has emerged is that composing a bunch of these nets, as in a convolutional neural network (CNN), is both feasible and algorithmically effective. The recent breakthrough on playing Go is just a headline among many emerging applications of CNNs and larger systems. We are not saying neural nets and deep learning are the be-all or anything more than a “cartoon” of the brain, but rather noting them among many reasons that AI and machine learning are resurgent.

### Data-Parallelism and Functional Languages Are Revenant

The same AI-winter article on Wikipedia mentions the collapse of Lisp-dedicated systems in 1987, and more widely, many companies devoted to data-parallel architectures “left nothing but their logos on coffee mugs” as a colleague once put it. Subsequently I perceived signs of stagnation in functional languages in the late 1990s and early 00s. This lent a ghostly air to John Backus’s famous 1978 Turing Award lecture, “Can Programming Be Liberated From the von Neumann Style?”

Unlike the revenant in last year’s award-winning movie of that name, this one has come back with a different body: not a large-scale dedicated machine system, but rather the pan-spectral pervasion we call the *Cloud*. A great lecture we heard by Mike Franklin on AMPLab activities highlighted the role of programs written in the functional language Scala running on the Apache Spark framework.

A common thread in all these items is the combined efficacy and scalability of algorithmic primitives whose abstract forms characterize quasi-linear time: sorting, parallel prefix sum (as one of several forms of map-reduce), convolution, streaming count-sketching, and the like.
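As one example of these primitives, here is the Blelloch exclusive prefix scan, simulated sequentially in Python; each inner loop is fully parallel in the abstract, giving O(n) work and O(log n) depth:

```python
def exclusive_scan(xs):
    """Blelloch up-sweep/down-sweep exclusive prefix sum.  Simulated
    sequentially here; on real parallel hardware each inner loop runs
    as one parallel step, for O(n) work and O(log n) depth."""
    a = list(xs)
    n = len(a)
    assert n and n & (n - 1) == 0, "power-of-two length for simplicity"
    # Up-sweep: build a reduction tree in place.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    # Down-sweep: push partial sums back down the tree.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a
```

For example, `exclusive_scan([1, 2, 3, 4])` yields `[0, 1, 3, 6]`: each position holds the sum of everything strictly before it.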

## Open Problems

We considered mentioning some subjects that have seen changes such as digital privacy and block ciphers, but maybe these are not so “upside-down.” Doubtless we are missing many more. What developments in computing have carried shock on the order of the discovery that neutrinos have mass in particle physics? We invite your suggestions and opinions.

Here also is a web folder of photos from Dick’s wedding and honeymoon.


Regarding “the collapse of Lisp-dedicated systems in 1987” and functional programming, Clojure should also not go unmentioned. It is a very successful (though not as much as Scala) functional programming language on the JVM that relies heavily on the ideas and syntax of ye olde LISP.

In general, it is nice to see not only how aspects of functional programming have become widespread in modern OO languages like Java and C#, but also how more and more pure functional languages like Scala, Clojure, and F# are used in big projects. People have come to realize that immutable data structures and other benefits of functional programming are invaluable in parallel computing.

It seems that teaching and researching functional languages as an alternative to the “von Neumann style” finally pays off.

If SAT were really easy in the worst case, one could not do public-key cryptography. Any idea how many variables and clauses are needed to encode, say, the problem of factoring an integer of “typical” size in crypto applications?

It’s quasilinear since verifying x = pq by multiplication is quasilinear: the size of the encoding tracks the time of the verification. I don’t know whether such an efficient translation has been done concretely. The other operative consideration is whether formulas coming from a particular application area are generally “well-behaved”—as those from factoring may not be.
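To make that concrete, here is a rough count for a naive schoolbook-multiplier encoding; the constants (five gates per full adder, four clauses per gate) are order-of-magnitude guesses, not a tuned encoding, and the quasi-linear encodings alluded to above would be much smaller:

```python
def schoolbook_factoring_cnf_size(b):
    """Back-of-the-envelope size of a CNF asserting p * q = N for
    b-bit p and q, via a schoolbook multiplier circuit translated to
    clauses by the Tseitin transformation.  Constants are rough."""
    and_gates = b * b             # one AND per partial-product bit
    adder_gates = 5 * b * b       # ~5 gates per full adder, ~b^2 adders
    gates = and_gates + adder_gates
    variables = 2 * b + gates     # the bits of p and q, plus one per gate
    clauses = 4 * gates           # Tseitin: at most ~4 clauses per gate
    return variables, clauses

# A 2048-bit RSA modulus has ~1024-bit factors:
v, c = schoolbook_factoring_cnf_size(1024)
print(v, c)  # several million variables, tens of millions of clauses
```

So even the crudest encoding of a cryptographic-size factoring instance fits in memory; the formulas are just not “well-behaved” for today’s solvers.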

What about http://toughsat.appspot.com/ or http://cgi.cs.indiana.edu/~sabry/cnf.html ?

“verifying x = pq by multiplication is quasilinear” where is this proven? Are you saying at most quasilinear?


Working in geometry processing for computer graphics, if something takes quadratic time in the number of polygons, we usually consider it a bug, because the first thing that will happen is someone putting in a model with 10 million polygons and having to cancel the operation that isn’t finishing. Even if the constant factor on the n^2 is 1 nanosecond, that’s over a day to wait.

Neil–

T = n^2 * 10^-9 = (10^6)^2 * 10^-9 = 1000 secs.

10 million is 10^7. T = (10^7)^2 * 10^-9 = 100000 seconds, which is longer than a day.
