Levin’s Great Discoveries
Levin’s discovery of universal problems and more
Leonid Levin is one of the creators of the central question of computer science: the P versus NP question. He published this work back in 1972–1973.
Today I want to discuss Levin’s original paper.
The reason I thought it would be interesting to look at his original paper is that I believe many have not actually studied it. I could be wrong, but my informal poll, with a tiny sample, shows that this is the case. While there are beautiful modern treatments of everything contained in his paper, I think we can learn quite a bit from seeing the original masterpiece.
A book treatment of a topic, like the P versus NP question, is likely to be polished, completely clear, filled with details, and easy to follow. But the original papers often yield insights into how hard it was to get the ideas right. I have another prized reason for introducing his work for discussion today, but that will come in a future post.
The Original Paper
In his famous paper he studied universal problems. The paper is from June 7, 1972, although it appeared in 1973. It has many beautiful ideas, several key definitions, six important examples, and not a single proof. I tried to get a copy of the paper in English in TeX, but the best I could do is to get a scanned version. Thanks to Farbod Shokrieh for helping to find even this version. It is from a paper by the eminent Boris Trakhtenbrot. Note that Levin’s paper has words changed with a strikeout.
The following are the six problems that Levin lists as his examples of universal search problems.
Problem 1. A list determines a finite set and a covering of that set by 500-element subsets. Find a subcovering having a prescribed cardinality (determine whether such a subcovering exists).
Problem 2. A table generates a partial Boolean function. Find a disjunctive normal form of the prescribed dimensions realizing that function in its domain (determine whether such a DNF exists).
Problem 3. Determine whether a given formula of the propositional calculus is deducible or refutable (or, equivalently, whether a given Boolean formula is equal to a constant).
Problem 4. Two graphs are given. Find a homomorphism of one onto the other (determine whether such a homomorphism exists).
Problem 5. Two graphs are given. Find an isomorphism of one into the other (onto part thereof).
Problem 6. Consider matrices composed of integers from 1 to 100 and a certain stipulation as to which integers can be vertically adjacent and which can be horizontally adjacent. When the outermost integers are given, continue over the entire matrix, observing the given stipulation.
Here are some comments on these problems.
- Problem 1 is: Given sets $S_1, \dots, S_m$, each a set of 500 elements, find a subcollection of $k$ of them whose union is all of $S_1 \cup \cdots \cup S_m$.
He makes no comment on why he made the sets have cardinality 500.
- Problem 2 is: Let $(x_i, b_i)$ be pairs for $i = 1, \dots, m$ such that $x_i$ is an $n$-bit number and $b_i$ is a single bit. Is there a Boolean function $f$ with a DNF of size $s$ so that $f(x_i) = b_i$ for all $i$? If so, find one.
- Problem 3 is what we would call formula SAT: is a Boolean formula satisfiable, or not? Of course we usually are strict in distinguishing between the problem and its complement. But this is essentially SAT.
- Problems 4 and 5 are graph homomorphism and subgraph isomorphism, respectively.
- Problem 6 is quite interesting; it is essentially a finite tiling problem. Again Levin does not explain why there are 100 types of tiles. I assume that he had worked out that these were enough to encode a universal Turing machine.
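To make the search flavor of these problems concrete, here is a brute-force sketch of the search version of Problem 1 in Python. This is my own illustration, not Levin's formulation: the function name is hypothetical, and I drop the 500-element restriction for readability.

```python
# Brute-force sketch of Problem 1 (subcovering of prescribed cardinality).
# "find_subcover" is an illustrative name; the 500-element restriction on
# the sets is dropped here for simplicity.
from itertools import combinations

def find_subcover(sets, k):
    """Return indices of k sets whose union covers the whole union, or None."""
    universe = set().union(*sets)
    for combo in combinations(range(len(sets)), k):
        if set().union(*(sets[i] for i in combo)) == universe:
            return list(combo)  # a witness for the search problem
    return None  # answers the decision problem in the negative

# Example: four sets covering {1,...,5}; two of them already suffice.
cover = [{1, 2, 3}, {3, 4}, {4, 5}, {2, 4}]
print(find_subcover(cover, 2))  # prints [0, 2]
```

Note that the same routine answers both of Levin's phrasings at once: it either produces the subcovering or certifies that none of the prescribed cardinality exists.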
Levin’s Theorems and Gems
Levin’s main theorem is:
Theorem: If there exists a search (quasi-search) problem unsolvable in time less than $f(n)$ for an argument of length comparable with $n$, then Problems 1–6 also have this property.
This essentially shows that all the problems are the same in their deterministic computational cost—up to a polynomial. It is curious that he states the theorem in the negative: if any are hard, then all are hard. Perhaps that is a translation issue. Indeed the eminent Trakhtenbrot says: “Although Levin’s 1973 paper is laconic (as are Levin’s publications in general), the formulations are absolutely crisp in the Russian original. Unfortunately, the English translation is awkward and contains misrepresentations and confusing formulations.”
Levin does not give a conventional proof. Instead, he witnesses his ability to prove it by giving a gem of a definition. Edited down from Trakhtenbrot’s version and rearranged, it reads:
Search problem $A$ reduces to search problem $B$ if there are three polynomial-time algorithms $f$, $g$, and $h$ such that $A(x, w) \Rightarrow B(f(x), g(w))$ and $B(f(x), z) \Rightarrow A(x, h(z))$, where $A(x, w)$ means that $w$ is a solution of instance $x$ of $A$.
Here $f$ is what we know as a Karp-reduction from the language $A$ to the corresponding language $B$, and $g$ merely gives the transformed witness in a typical Karp-reduction proof. The distinctive part of Levin’s definition is the $h$ function. This says that any solution to the target search problem can be transformed back to get a solution to the source problem $A$. Some sources relax the definition to functions $g$ and $h$ that depend on $x$, but this does not change the essence. This is now called a Levin-reduction. These slides by Tal Kramer and Gillat Kol put Levin in the context of the whole 20th-century history of NP-completeness.
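To see the three algorithms in action, here is a toy Levin-reduction in Python, from vertex-cover search to independent-set search. This is a standard textbook pairing, not an example from Levin's paper; all names are mine, and it uses the relaxed form in which the witness maps may see the instance.

```python
# Toy Levin-reduction from vertex-cover search to independent-set search.
# It rests on the fact that C is a vertex cover of a graph exactly when
# V \ C is an independent set of that graph.

def f(instance):
    """Instance map: (vertices, edges, k) -> (vertices, edges, n - k)."""
    V, E, k = instance
    return (V, E, len(V) - k)

def g(instance, cover):
    """Forward witness map: a vertex cover C yields the independent set V \\ C."""
    V, _, _ = instance
    return V - cover

def h(instance, indep):
    """Backward witness map: any independent set I yields the vertex cover V \\ I."""
    V, _, _ = instance
    return V - indep

# Example: a path on vertices 0-1-2; {1} is a vertex cover of size 1.
inst = (frozenset({0, 1, 2}), {(0, 1), (1, 2)}, 1)
assert f(inst)[2] == 2                               # target asks for size n - k = 2
assert g(inst, frozenset({1})) == frozenset({0, 2})  # {0, 2} is independent
assert h(inst, frozenset({0, 2})) == frozenset({1})  # and maps back to a cover
```

The point of `h` is exactly Levin's: solving the target instance hands you, for free, a solution of the source instance.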
Levin simply gave, without proof, a statement equivalent to saying that the above six problems are complete under Levin reductions.
Lemma: Problems 1–6 are universal search problems.
Thus his great counterpart to Cook’s Theorem was “just a lemma.” His reductions give the extra information that $A$ is equivalent to the sub-problem of $B$ defined by the range of $f$. If $f$ is onto $B$, then $B$ is equivalent to $A$ in turn.
Levin’s paper has one further gem, one that Trakhtenbrot claims was missed because of the excitement of the rest of the paper. This is Levin’s insight, his theorem 2:
Theorem: For any search problem, there exists an algorithm that solves it in time that is optimal up to multiplication by a constant and the addition of a number polynomial in the length of the input.
Levin gave no indication of how to do this, and Trakhtenbrot points out that Vladimir Sazonov eventually worked out a proof.
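The standard proof idea is a dovetailing search that interleaves all programs, giving the $i$-th a roughly $2^{-i}$ share of the running time and verifying every answer before accepting it. Here is a much-simplified Python sketch of that idea; as a hypothetical simplification it dovetails over a fixed finite list of candidate solvers rather than an enumeration of all programs, and every name in it is mine.

```python
# Simplified sketch of the dovetailing behind Levin's optimal-search theorem.
# Candidate i receives a 2^(-i) share of each time budget, and every answer
# is verified by `check` before being returned, so an incorrect candidate
# can never fool the search.

def levin_search(candidates, check, x, max_budget=1 << 20):
    """Dovetail over candidate solvers until a verified answer appears."""
    budget = 1
    while budget <= max_budget:
        for i, solve in enumerate(candidates):
            steps = budget >> i      # candidate i gets budget / 2^i steps
            if steps == 0:
                break
            w = solve(x, steps)      # each solver takes (input, step limit)
            if w is not None and check(x, w):
                return w             # verified solution
        budget *= 2                  # double the total budget and retry
    return None

# Toy usage: search for a nontrivial divisor of 91.
def useless(x, steps):
    return None

def trial_division(x, steps):
    for d in range(2, min(x, 2 + steps)):
        if x % d == 0:
            return d
    return None

print(levin_search([useless, trial_division], lambda x, w: x % w == 0, 91))  # prints 7
```

Because the budgets double, the total time spent is within a constant factor of what the best candidate alone would need, which is the shape of Levin's optimality claim.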
Some Quotes By Levin
Besides his original work on P versus NP, Levin has done many other wonderful things. His work on average-case complexity is very important, as are his many other papers on various areas of complexity.
His homepage includes his list of publications, but also a number of essays and position papers. To start, he is not a fan of quantum computing. In person he has told me so, many times. Here is a quote of his:
Archimedes made a great discovery that digital representation of numbers is exponentially more efficient than analog ones (sand pile sizes). Many subsequent analog devices yielded unimpressive results. It is not clear why QC’s should be an exception.
By QC’s he refers, of course, to quantum computers. This does not sound positive to my ears.
Here is another quote from a paper on incompleteness theorems:
Gödel’s Incompleteness Theorem leaves open a way around it, vaguely perceived for a long time but not clearly identified. (Thus, Gödel believed informal arguments can answer any math question.)
I would like to discuss this paper in the future, since it is related to a problem that a few of us are thinking about right now.
To me, one of the continuing mysteries of mathematics and theory is why results are often discovered independently at almost the same time. This seems to happen repeatedly, here with Cook and Levin, and I still have no completely satisfying explanation of why it happens.