
The World Turned Upside Down

July 10, 2016

Some CS reflections for our 700th post

Lin-Manuel Miranda is seen in New York, New York on Tuesday, September 2, 2015 (photo: MacArthur Fellowship source).

Lin-Manuel Miranda is both the composer and lyricist of the phenomenal Broadway musical Hamilton. A segment of Act I covers the friendship between Alexander Hamilton and Gilbert du Motier, the Marquis de Lafayette. This presages the French co-operation in the 1781 Battle of Yorktown, after which the British forces played the ballad “The World Turned Upside Down” as they surrendered. The musical’s track by the same name has different words and melodies.

Today we discuss some aspects of computing that seem turned upside down from when we first learned and taught them.

Yesterday was halfway between our Fourth of July and France’s Bastille Day, and was also the last day of Miranda performing the lead on-stage with the original Hamilton company. They are making recordings of yesterday’s two performances, to be aired at least in part later this year. A month ago, Miranda wrote an op-ed in the New York Times against the illegal (in New York) but prevalent use of “bots” to snap up tickets the moment they become available for later marked-up resale.

This is also the 700th post on this blog. It took until 1920 for a Broadway show of any kind to reach 700 performances. The Playbill list of “Long Runs on Broadway” includes any show with 800 or more performances. That mark is within our reach, and our ticket prices will remain eminently reasonable.

What Has Changed?

This list is just what strikes us now—far from exhaustive—and we invite our readers to add opinions about examples in comments.

NP-Complete Means Easy

Forty-five years ago, Dick Karp showed how the difficulty of SAT represented by NP-completeness spreads to other natural problems. As the number of complete problems from many areas of science and operations research soared into the thousands by the 1979 publication of the book Computers and Intractability, people regarded NP-completeness as tantamount to intractability.

Today the flow is in the other direction—as expressed for instance in this talk by Moshe Vardi. Dick Karp himself has been among many in the vanguard—I remember his talk on practical solvability of Hitting-Set problems at the 2008 LiptonFest and here is a relevant paper. We now reduce problems to SAT in order to solve them. SAT-solvers that work in many cases are big business. In some whole areas the SAT-encodings of major problems are well-behaved, as we remarked about rank-aggregation and voting theory in the third section of this post from last October. The solvers can even tackle huge problems. Marijn Heule, Oliver Kullmann, and Victor Marek proved that every 2-coloring of the interval {\{1,2,\dots,7825\}} has a monochromatic Pythagorean triple, in a proof of over 200 terabytes in uncompressed length.
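To make the "reduce to SAT and solve" direction concrete, here is a toy sketch of the DPLL backtracking search that underlies real solvers. Clauses are lists of signed integers in the common DIMACS convention; production solvers add clause learning, watched literals, and restarts on top of this skeleton.

```python
# Toy DPLL SAT solver: a clause is a list of nonzero ints,
# where k means "variable k is true" and -k means "variable k is false".

def simplify(clauses, lit):
    """Assign lit = True: drop satisfied clauses, shrink the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                       # clause satisfied, drop it
        reduced = [x for x in c if x != -lit]
        if not reduced:
            return None                    # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    if clauses is None:
        return None                        # propagated into a conflict
    if not clauses:                        # no clauses left: satisfiable
        return dict((abs(l), l > 0) for l in assignment)
    # Unit propagation: a one-literal clause forces its literal.
    for c in clauses:
        if len(c) == 1:
            return dpll(simplify(clauses, c[0]), assignment + (c[0],))
    # Otherwise branch on the first literal of the first clause.
    lit = clauses[0][0]
    return (dpll(simplify(clauses, lit), assignment + (lit,))
            or dpll(simplify(clauses, -lit), assignment + (-lit,)))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
```

On well-behaved industrial encodings, the learning and propagation machinery layered over this search is what lets modern solvers handle instances with millions of clauses.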

Quadratic Time Means Hard

Quadratic time is notionally on the low end of polynomial time, and “polynomial time” has long been used as a synonym for “easy.” But as the amount of data we can and need to handle has mushroomed, the difference in scaling between quasi-linear and quadratic is more and more felt. This difference has even been argued for cryptographic security. A particular definition of quasi-linear time is {n(\log n)^{O(1)}} time as named by Claus Schnorr for his theorem on quasi-linear completeness of SAT; see also this.
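A back-of-the-envelope calculation makes the scaling gap vivid; the one-operation-per-nanosecond rate below is just an assumption chosen for scale, not a claim about any particular machine.

```python
import math

n = 10**9                        # a billion data items
ops_per_sec = 10**9              # assume one basic operation per nanosecond

quasilinear = n * math.log2(n)   # roughly n log n operations
quadratic = n * n                # roughly n^2 operations

# At this rate, n log n finishes in well under a minute,
# while n^2 takes on the order of decades.
print(f"n log n: about {quasilinear / ops_per_sec:.0f} seconds")
print(f"n^2:     about {quadratic / ops_per_sec / (365 * 86400):.0f} years")
```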

In genomics the quadratic time of algorithms for full edit-distance measures is felt enough to warrant approximate methods, as we covered in our memorial a year ago for Alberto Apostolico. This also puts meaning behind theoretical evidence that {O(n^2)} time for edit distance cannot be improved.
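The quadratic cost comes from the classic dynamic program, which fills a table of size {(m+1)\times(n+1)}; the conditional lower bounds say, in essence, that this table cannot be avoided. A compact version using two rolling rows:

```python
def edit_distance(a, b):
    """Levenshtein distance via the O(len(a) * len(b)) dynamic program."""
    prev = list(range(len(b) + 1))        # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,              # delete ca
                curr[j - 1] + 1,          # insert cb
                prev[j - 1] + (ca != cb), # substitute (free if characters match)
            ))
        prev = curr
    return prev[-1]
```

For two genome-scale strings of length {10^8} this inner loop runs {10^{16}} times, which is exactly why practitioners reach for approximate and heuristic alignment methods.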

These two items seem to contradict each other, but point up a difference in scale between data and logical control. Often a thousand data points are nothing. A formula with a thousand clauses can say a lot.

AI Winter Into Spring

My first doctoral student had been working on neural networks before I became his advisor in 1991, and I remember the feeling of their being under a cloud. The so-called AI Winter traced in part to lower bounds shown against certain shallow neural nets in the 1969 book Perceptrons by Marvin Minsky and Seymour Papert. We discussed complexity aspects of this in our memorial of Minsky last January.

Since then what has emerged is that composing a bunch of these nets, as in a convolutional neural network (CNN), is both feasible and algorithmically effective. The recent breakthrough on playing Go is just a headline among many emerging applications of CNNs and larger systems. We are not saying neural nets and deep learning are the be-all or anything more than a “cartoon” of the brain, but rather noting them among many reasons that AI and machine learning are resurgent.

Data-Parallelism and Functional Languages Are Revenant

The same AI-winter article on Wikipedia mentions the collapse of Lisp-dedicated systems in 1987, and more widely, many companies devoted to data-parallel architectures “left nothing but their logos on coffee mugs” as a colleague once put it. Subsequently I perceived signs of stagnation in functional languages in the late 1990s and early 00s. This lent a ghostly air to John Backus’s famous 1978 Turing Award lecture, “Can Programming Be Liberated From the von Neumann Style?”

Unlike the revenant in last year’s award-winning movie of that name, this one has come back with a different body. Not a large-scale dedicated machine system, but rather the pan-spectral pervasion we call the Cloud. A great lecture we heard by Mike Franklin on AMPLab activities highlighted the role of programs written in the functional language Scala running on the Apache Spark framework.

A common thread in all these items is the combined efficacy and scalability of algorithmic primitives whose abstract forms characterize quasi-linear time: sorting, parallel prefix sum (as one of several forms of map-reduce), convolution, streaming count-sketching, and the like.
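As one illustration, parallel prefix sum can be organized into {O(\log n)} data-parallel rounds. Here is a sketch of the Hillis–Steele doubling scheme, simulated sequentially in plain Python; on a parallel machine each round is a single step across all elements.

```python
def prefix_sums(xs):
    """Inclusive prefix sums by doubling: rounds combine elements at
    distance 1, 2, 4, ..., so O(log n) rounds suffice.  Sequentially this
    does O(n log n) work, but every round is embarrassingly parallel."""
    out = list(xs)
    d = 1
    while d < len(out):
        # Each position i >= d adds in the value from distance d behind it;
        # the comprehension reads only the previous round's values.
        out = [out[i] + (out[i - d] if i >= d else 0)
               for i in range(len(out))]
        d *= 2
    return out
```

The same round structure underlies the scan primitives in GPU libraries and map-reduce frameworks, which is why it scales so well in practice.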

Open Problems

We considered mentioning some subjects that have seen changes such as digital privacy and block ciphers, but maybe these are not so “upside-down.” Doubtless we are missing many more. What developments in computing have carried shock on the order of the discovery that neutrinos have mass in particle physics? We invite your suggestions and opinions.

Here also is a web folder of photos from Dick’s wedding and honeymoon.

[added link to Vardi on SAT solving]

11 Comments
  1. Christopher permalink
    July 11, 2016 2:20 am

    Regarding “the collapse of Lisp-dedicated systems in 1987” and functional programming, Clojure should also not be left unmentioned. It is a very successful (though not as much as Scala) functional programming language based on the JVM that heavily relies on the ideas and syntax of ye olde LISP.

    In general, it is nice to see how not only aspects of functional programming have become widespread in modern OO languages like Java and C# but also how more and more pure functional languages like Scala, Clojure and F# are used in big projects. People have come to realize that immutable data structures and other benefits of functional programming are invaluable in parallel computing.

    It seems that teaching and researching functional languages as an alternative to the “von Neumann style” finally pays off.

  2. Pascal permalink
    July 11, 2016 3:06 am

    If SAT was really easy in the worst case, one could not do public-key cryptography. Any idea how many variables and clauses are needed to encode, say, the problem of factoring an integer of “typical” size in crypto applications?

    • July 11, 2016 10:30 am

      It’s quasilinear since verifying x = pq by multiplication is quasilinear. I don’t know if such an efficient translation has been done concretely. The other operative consideration is formulas coming from a particular application area being generally “well-behaved”—as those from factoring may not be.

  3. July 13, 2016 1:12 am

    Congrats on your posting milestone! Come visit me please, if you can, and follow if you like what you read; I’m hoping to get my 1500th follower this month!

  4. July 13, 2016 8:43 am

    Working in geometry processing for computer graphics, if something takes quadratic time in the number of polygons, we usually considered it a bug, because the first thing that’s going to happen is someone will put in a model with 10 million polygons and have to cancel the operation that isn’t finishing. Even if the constant factor on the n^2 is 1 nanosecond, that’s over a day to wait.

    • paradigmnnf permalink
      July 14, 2016 10:45 am

      T = O(n^2) = 10^12 * 10^-9 = 1000 secs.

      • July 14, 2016 9:30 pm

        10 million is 10^7. T = (10^7)^2 * 10^-9 = 100000 seconds, which is longer than a day.

  5. András Salamon permalink
    July 16, 2016 4:01 pm

    You meant Oliver Kullmann, not Kullback.


Trackback: Princeton Bank Consortium « Pink Iguana
