Math and Theory Proof Methods: A Comparison
A comparison of methods used in mathematics and theory for solving problems
John Rhodes is a world expert on semigroups and related mathematical objects. He is the “Rhodes” in the famous Krohn-Rhodes Theorem. Rhodes has recently worked on the P=NP question, and even held a conference at Berkeley on his approach—it is based on ideas related to semigroup theory.
Today I want to talk about methods mathematicians use to attack hard problems. Some of these methods are used often by theorists too, while others do not seem to be used as much as they could be.
One of the powerful tools that mathematicians use is related to the early work of Rhodes. But first I want to relate two stories about him. In the early 1960’s Rhodes was a graduate student at MIT and was working with another graduate student at Harvard, Kenneth Krohn. Together they discovered a theorem that became the Krohn-Rhodes Theorem. There is a long story here, but for now let me just say that they got Ph.D.’s for identical work in 1962. Identical. The theses were the same—a pretty rare event.
When I was at Berkeley in the late 1970’s I was walking to lunch one day with Dick Karp. As we crossed campus Dick pointed out someone in the distance, and said, “Do you know who he is?” I answered that I did not recognize him. Dick added in a low voice,
That is Rhodes; he is in real estate.
I have never, before or since, heard Karp say something with such awe. I believe that Dick respected Rhodes as a mathematician, but he was filled with wonder at Rhodes the heavily leveraged real estate investor.
Approaches Used By Mathematics and Theory
There are a number of approaches to problems that are used by both mathematicians and by theorists.
Restrict The Solutions: In mathematics often if a problem is too difficult to solve, then progress can be made by restricting the solution space. Thus, rather than looking for the general solution to an equation, sometimes only linear solutions are considered. Another example is to consider only highly symmetric solutions.
In circuit complexity theory this is a common method too. Proving general lower bounds is so far a dream for any “natural” problem. Progress can be made, however, by restricting the class of circuits—boolean or arithmetic. For instance we have lower bounds when the circuits are restricted to be:
- Constant depth
- Monotone
- Non-commutative
Even with these restrictions the results can be quite deep and difficult, but at least we can prove something.
For monotone circuits, the “we” refers to Alexander Razborov, whose famous paper proves that the clique problem cannot be solved by any polynomial-size monotone circuit. This does not rule out that P=NP, but it does give an important insight into the structure of this famous NP-complete problem.
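To fix ideas, a monotone circuit uses only AND and OR gates (no NOT), so flipping any input from 0 to 1 can never decrease the output. A minimal sketch checking this property exhaustively for a small example circuit of my own choosing, the majority-of-three function:

```python
from itertools import product

def monotone_circuit(x):
    """A small circuit built only from AND and OR gates:
    (x0 AND x1) OR (x1 AND x2) OR (x0 AND x2), the 2-out-of-3 majority."""
    x0, x1, x2 = x
    return (x0 and x1) or (x1 and x2) or (x0 and x2)

# Monotonicity: raising a single input from 0 to 1 never lowers the output.
for x in product([0, 1], repeat=3):
    for i in range(3):
        if x[i] == 0:
            y = x[:i] + (1,) + x[i + 1:]
            assert monotone_circuit(y) >= monotone_circuit(x)
print("majority-of-3 is monotone")
```

Razborov's theorem says that no family of such circuits of polynomial size can decide clique; the point of the sketch is only the structural restriction being imposed.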
For a relatively recent paper on constant depth arithmetic circuits take a look at the paper of Maurice Jansen and Ken Regan. They use a discrete uncertainty principle to prove their lower bound.
And for the work on non-commutative circuits see the paper of Pavel Hrubeš, Avi Wigderson, and Amir Yehudayoff.
Generalize The Problem: In mathematics generalizing a problem can often make it more tractable. This sounds paradoxical, but it is not. By generalizing the problem we are asking a different question, and that question may be quite a bit easier to solve. The hope, of course, is that the generalized problem will yield some insight into the original problem.
A simple example from mathematics is the famous 3x+1 problem, also called the Collatz conjecture after Lothar Collatz. It has been open for over 70 years. Various generalizations have been studied, and some of these have been solved, but the original problem stands open. Paul Erdős is quoted as saying:
Mathematics is not yet ready for such problems.
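For concreteness, the 3x+1 map sends an even n to n/2 and an odd n to 3n+1; the conjecture asserts that every starting value eventually reaches 1. A minimal sketch of the iteration—the unproven claim is exactly that this loop always terminates:

```python
def collatz_steps(n):
    """Count iterations of the 3x+1 map until n reaches 1.

    The Collatz conjecture asserts this loop terminates for every
    n >= 1; that termination is precisely what is unproven.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 27 is a famously slow starter: it takes 111 steps to reach 1.
print(collatz_steps(27))  # → 111
```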
In computational complexity we have used this method too. One example is the use of oracles. The famous results of Theodore Baker, John Gill, and Robert Solovay, showing that P=NP can be true or false relative to different oracles, certainly give an important insight. As you know I am not a fan of oracle results, but these results are important. Of course, the real question is: when we generalize a problem, how much does the generalization help us understand the original one?
Approaches Unique To Mathematics?
There are a number of approaches to problems that are used often in mathematics but are not used by complexity theorists, at least not as often as I think they should be.
Classification: In mathematics a powerful method is to try to build a theory that classifies all objects of some type. The most famous such program is probably the classification of all finite simple groups. This tremendous result allows many problems to be solved that would otherwise be impossible. An example is the famous conjecture that every finite simple group can be generated by two elements.
Another classification-type theorem is the Krohn-Rhodes Theorem. It states that any finite automaton can be decomposed into “simple” automata. The hope at the time was that this decomposition theorem would help resolve many questions. I believe that this expectation was, at best, way too optimistic.
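The flavor of the decomposition can be shown on a toy example—this is only a sketch of the cascade idea, not the theorem itself, and the encoding is my own. A mod-4 counter factors into a loop-free cascade of two mod-2 counters, where the second stage reads both the input and the state of the first:

```python
def cascade_mod4(bits):
    """Simulate a mod-4 counter as a cascade of two mod-2 counters.

    q1 is the low bit and reacts to the input pulse alone; q2 flips
    only when a pulse arrives while q1 == 1, i.e., q2's input is the
    pair (pulse, q1) -- information flows one way, as in a cascade.
    """
    q1 = q2 = 0
    for b in bits:
        if b:              # a count pulse arrives
            if q1 == 1:
                q2 ^= 1    # second stage sees (pulse, old q1)
            q1 ^= 1        # first stage sees the pulse alone
    return 2 * q2 + q1     # the pair (q2, q1) encodes the count mod 4

print(cascade_mod4([1, 1, 1, 1, 1, 1]))  # six pulses → 6 mod 4 == 2
```

The theorem says every finite automaton admits such a cascade decomposition into “simple” pieces (group-like and reset components); the counter is merely the smallest nontrivial instance.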
It is unclear if the classification method makes sense for complexity theory. There are many other classification theorems in mathematics that are very powerful, but there are many areas where such theorems seem to be hopeless. For example, I think trying to classify the models of set theory is completely impossible. Not just impossible—completely impossible.
However, it would be neat if there were useful classification theorems in complexity theory. What if we could prove a theorem like:
Theorem: If the boolean function f is computable in linear time by a Turing Machine, then f can be decomposed into …
I have no idea what this would look like, but it is interesting to think about it. One issue, however, is that every computable function f has a “padded” version f* that is linear-time computable, and it is not clear what decomposable structure would apply to f* but not f.
There are some examples of classification theorems in complexity that have been quite useful. In the study of SAT there are the pretty results of Thomas Schaefer, which are called his Dichotomy Theorem. Roughly, his theorem classifies the types of SAT problems that are easy and the types that are NP-complete.
Les Valiant and Jin-Yi Cai have proved some quite beautiful results on the boundary between counting problems that are in P and those that are #P-complete. Jin-Yi has several papers based on ideas of Valiant, but they extend that work greatly. The work of Cai is also joint with numerous others, including Xi Chen, Sangxia Huang, Michael Kowalczyk, Pinyan Lu, and Mingji Xia.
Make the Objects Really Big: One tool that mathematicians use to better understand a problem, and one we rarely seem to use, is to replace “finite” by various notions of “infinite.” The study of finite groups is extremely difficult: one reason is that the interplay among small primes can be very complex and subtle. Studying infinite groups instead is not easy, but there are some cases where the theory does become much simpler. For example, studying Lie groups is not easy, but it does avoid much of the “accidental” number issues that arise in finite groups. One way to see this is that their classification occurred first and is simpler than the classification of finite simple groups.
Continuous: Often replacing finite objects by continuous ones can yield a very powerful method. The entire field of analytic number theory is based on the ability of continuous functions to be used to solve discrete problems. Johann Dirichlet’s famous Theorem on primes in arithmetic progressions was one of the first of these great results:
Theorem: If a and d are relatively prime, then the sequence

a, a + d, a + 2d, a + 3d, …

contains an infinite number of primes.
The original proof of this used the structure of the following functions over the complex numbers s, where χ is a Dirichlet character:

L(s, χ) = χ(1)/1^s + χ(2)/2^s + χ(3)/3^s + ⋯

Dirichlet had to prove that they were not zero at s = 1.
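A quick empirical look at the theorem for the modulus d = 4 (a sketch only, with a naive trial-division primality test):

```python
def is_prime(n):
    """Naive trial division, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_in_progression(a, d, bound):
    """Primes of the form a + k*d below bound (gcd(a, d) = 1 assumed)."""
    return [n for n in range(a, bound, d) if is_prime(n)]

# Dirichlet guarantees both classes 1 mod 4 and 3 mod 4 contain
# infinitely many primes; below 100 they hold 11 and 13 respectively.
print(len(primes_in_progression(1, 4, 100)))  # → 11
print(len(primes_in_progression(3, 4, 100)))  # → 13
```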
In complexity theory we do sometimes use continuous methods, but perhaps not as much as we could.
Model Theoretic: One of my favorite methods that has been used very successfully in mathematics is the use of model-theoretic methods. My personal favorite is the Ax-Grothendieck Theorem, due to James Ax and Alexander Grothendieck:
Theorem: Let P be a polynomial map from ℂ^n to ℂ^n. If P is injective, then it is also surjective.
The miracle of the proof is that the theorem is obvious for any map from a finite set to itself: if it is one-to-one, then it is onto. The great thing about model theory is that essentially the same argument works for the very non-finite set ℂ^n.
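The finite counting fact at the heart of the proof can be checked directly. Here is a sketch over the finite field with 5 elements, using the example map x ↦ x³ (my choice; cubing happens to be a bijection mod 5 since gcd(3, 4) = 1):

```python
def injective_and_surjective(f, domain):
    """For a map f on a finite set, report (injective, surjective).
    By the pigeonhole principle the two answers always agree --
    the observation that powers the Ax-Grothendieck proof."""
    domain = list(domain)
    image = {f(x) for x in domain}
    injective = len(image) == len(domain)
    surjective = image == set(domain)
    return injective, surjective

p = 5
cube = lambda x: (x ** 3) % p   # a polynomial map on the field F_5
print(injective_and_surjective(cube, range(p)))  # → (True, True)
```

Model theory transports exactly this one-to-one-implies-onto argument from finite fields, through algebraically closed fields of positive characteristic, to ℂ.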
I wonder if we could use model theoretic methods to solve some of our major problems.
Are there other methods that mathematics uses that we could try to exploit? And do you have any suggestions to add to the ones I have already listed here?