
Types of Papers

June 6, 2011


More to our fields than proofs

Imre Lakatos was a famous philosopher, who wrote a brilliant book called Proofs and Refutations.

Today I wish to talk about types of papers.

Lakatos’ book opposes the formalist view that a mathematical theorem is a one-shot entity whose meaning resides only in the symbols of its statement and proof. It presents mathematics as a transactional process, one by which the concepts and statements themselves are selected according to human appreciation of growth and progress. The book itself is a dialogue formed around the famous formula

V - E + F = 2

for planar graphs, due to Leonhard Euler. Lakatos is also known for focusing attention not on individual advances but on the idea of a research programme, advocating that the mathematical community’s collective first order of business should be to identify which programmes are ‘progressing’ and which are ‘degenerating.’

An Example? New Collatz Claim

Ken received e-mail today with an example of mathematics as a social process: a serious claimed proof of the Collatz conjecture by Gerhard Opfer, a professor in the Faculty of Mathematics, University of Hamburg, Germany. The Collatz conjecture states that every natural number is reachable by a path from {1} in the graph with edges from {n} to {2n} and (if it exists and is odd) to {(n-1)/3}. Here is his paper, which was submitted on May 25 to the well-known journal Mathematics of Computation.
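For concreteness, here is a minimal Python sketch (ours, not anything from Opfer’s paper) that spot-checks the conjecture for small numbers. Reaching {n} from {1} along the graph’s edges is the same as iterating the usual Collatz map forward from {n}: halve an even number, send an odd number {n} to {3n+1}, and ask whether {1} appears.

    # Minimal sketch: spot-check the Collatz conjecture up to a bound.
    def reaches_one(n, max_steps=10**6):
        """Iterate the Collatz map from n; True if 1 is reached in budget."""
        for _ in range(max_steps):
            if n == 1:
                return True
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return False  # undecided within the step budget, not a refutation

    # Check every starting value up to 100,000.
    assert all(reaches_one(n) for n in range(1, 100001))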

Some doubts and arguments over the proof in two discussion forums are linked from this June 1 announcement in the blog of John Cook. Cook includes an update noting the doubts and saying:

Perhaps the gap can be filled in, or perhaps an idea in the paper can be of use somewhere else.

The latter clause especially accords with Lakatos’ thinking. The Collatz conjecture itself, however, is a definite statement and not by itself part of a programme—so Lakatos’ ideas of how refutations shape changes in concepts seem not to apply. The statement is so simple that it can be represented by a Turing machine with a binary alphabet and only seven-or-so states, the question being whether the machine halts for all inputs. We have not yet had time to give the new paper considered judgment.

Thus we return to the originally-drafted post, on types of papers and what they say about research programmes and the health of the field in general.

Paper Types

This is on my mind since I am currently working on the PC for this year’s FOCS chaired by R.O. Sorry, I meant to say: Recall that I am currently working on the program committee (PC) for this year’s Foundations of Computer Science (FOCS) symposium, chaired by Rafail Ostrovsky (R.O.).

As usual, the call for papers for FOCS had a long list of areas of papers that are relevant to the conference. Most conferences use such a topic list to control what kinds of papers are submitted. A brilliant paper on a method to cure the common cold would probably not be accepted—or even be understood. So authors follow the topic list pretty closely.

Thus the call for the conference divides papers by topics, not by type. I have been wondering if we should also label papers with their type. You might ask, if you are still reading, what do I mean by type?

There are several types of papers, besides the obvious division into clear and unclear. This latter division I have always thought was more one of ownership: a paper is clear if I wrote it and unclear if someone else did. This has happened to me even within joint papers: my pass seemed crystal clear, but my co-authors’ next pass I could not follow. No, by type I mean something else. I mean a classification of papers based on what they do, not on what they do it to. For example, a paper that solves a well-known open problem is one type. There are many others that I have been able to identify—see the next section for a list of some of them.

Examples of Types

Here is my preliminary list of types:

{\bullet } Open Problem: This is one of the most exciting types of papers. Even if the problem is not a Clay problem or on Stephen Smale’s list, it is neat to be able to say: We solve the open problem of X and show that {\dots} is a theorem. Such a paper is easy to pigeon-hole: if the open problem was well known, then the paper is easily seen to be important. Even if the open problem was known only to some, they will be excited—perhaps upset—but usually excited that the problem is solved.

{\bullet } Open Problem—Modified: This adds a restriction on the class of objects and proves a result. Prime examples are: make circuits monotone or non-commutative, make the algorithm have to handle only random inputs, and so on. This is very common, since many of our open problems seem to be extremely hard. Most of the results on lower bounds, for instance, are forced to use this method. There is nothing wrong with this type; one could argue that it is natural and reasonable when faced with impossible problems to try and attack modified or weaker versions of them. The long-term goal is to shed light on the real open problem, and perhaps yield insights that will allow us to eventually crack it.

{\bullet } Old Result—Enhanced: This takes a known result and improves the result on some dimension. If the previous result was an algorithm, then the new result may make it go faster, or use less or no randomness, or reduce its space usage. A canonical example is: The previous work of X showed how to solve {\dots} on graphs in time {O(|V|^2)}, which we improve to {O(|E|)}. All types of results can be enhanced: bounds can be improved, restrictions can be removed, and so on.

{\bullet } Old Result—New Correct Proof: This type of result takes a published result and fixes the proof. According to Lakatos there are two ways a proof can fail to be correct. It can contain a local error: the lemma or claim is true, but its proof was deficient. Or it can contain a global error: the lemma or claim was just plain wrong, and the old result then needs major rework. This type of result is very important to the field, since leaving incorrect results around can damage future research. Also, many times such results uncover an assumption that seemed “obvious,” but was not. That discovery can be quite important and helpful in our understanding of what is happening.

No offense, but some areas of theory are more susceptible to this phenomenon; in particular, cryptography seems to contain relatively more results of this type. I believe that some of this is due to the notions used in security being more subtle than those used in other areas.

{\bullet } Old Result—New Proof: This type of paper re-proves an old result by a new method. These papers can be the most important of all, since they enrich the field with new tools. These tools can then be used, perhaps in the future, to solve other problems. Often the first proof of a result is clumsy and unclear, and it can be filled with technical details. Finding a new proof, especially a proof that is shorter and clearer, is potentially very valuable to the field. A canonical example is Irit Dinur’s new proof of the PCP Theorem, which even merited its own survey in the AMS Bulletin by Jaikumar Radhakrishnan and Madhu Sudan.

{\bullet } New Real Problem: These results try to model a practical problem and make it into a formal one. If this is the first result of its kind, the result may be relatively easy, but the problem can be quite important. Sometimes the model is interesting for its own sake—even if the model does not exactly capture the real-world problem.

{\bullet } New Problem: This may or may not be driven by a practical problem, but may capture an important question. Even without a real-world justification, the problem can be very important. A good problem can lead us to results and directions that have not been looked at before. I think this is hard to do, but when it does happen the payoff can be immense.

{\bullet } New Technique: Such papers can be extremely important. Sometimes the application that drove the authors to create the new technique is not nearly as interesting as the technique itself. This happens frequently—I am not sure why—but it just does.

{\bullet } Computational Proof: This type uses actual computations to show that something is true. We get relatively few of these, but we do see some in the theory area. One of the most famous is the original proof of the Four Color Theorem by Kenneth Appel and Wolfgang Haken. This gives us a chance to mention that there has been recent progress on improving the proof so that it is practically human-checkable, including this new proof and better algorithm by Neil Robertson, Daniel Sanders, Paul Seymour, and Georgia Tech’s very own Robin Thomas.

We also acknowledge and thank commenters on this blog’s April 24 item on the 4CT, who have continued noting the progress and discussing other ideas that emanate from this theorem right up through this year. See the comments at the bottom, including some neat videos. We also note this January 2005 article by Keith Devlin saying the last doubts about the 4CT were finally removed, almost thirty years after the original paper. Although there is nothing more formal than a computer proof, Imre Lakatos would have felt vindicated to see this process.

Open Problems

What are some other types of papers? Do you like this classification?

Is the Collatz proof correct? How can we help the social process of reviewing such claims?

[fixed Collatz statement with extra words "and is odd".]

14 Comments
  1. June 6, 2011 1:30 pm

    Hat-tip with thanks to the e-mailer, who wishes to remain anonymous. Post is mostly by Dick, but he is on a plane and I am not.

  2. Mario permalink
    June 6, 2011 5:54 pm

    Very interesting. Please keep on posting about that claimed proof, as you did with the P=NP claim some time ago – that was very fun to read/follow.

    Thanks!

    • vloodin permalink
      June 12, 2011 3:55 pm

      The proof is incorrect, though it is written with precision. The key point – Theorem 4.17 – is glossed over, referring to the discussion after Lemma 4.13. In this discussion it is claimed that the “annihilation graph” is a tree, which it would be, if it were connected. The key issue is proving that there is only one component, which is not done at all. This seems to be the only – but crucial – mistake in the paper.

      Essentially, each connected component of the “annihilation graph” gives a linearly independent member of the intersection of kernels of the linear operators they consider. The two linear operators encode the graph on natural numbers mentioned above in the blog. The kernel condition for the first operator encodes the edges from 2n to n; the kernel condition for the second operator encodes the edges from n to (n-1)/3 (again, every connected component of the subgraph with edges of a given type corresponds to a linearly independent component of the kernel of the corresponding operator); and the intersection of the two kernels encodes the full graph. The key issue is to prove that the full graph is connected, and this key point was not dealt with in the paper. Apparently, this is due to a rough oversight of the fact that a directed graph in which every non-“root” vertex has incoming degree one need not be a tree – it could be a forest with more than one infinite component, which brings us back to the connectedness issue we started with in the first place.
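      A toy example makes this last point concrete (a sketch of mine, not anything from the paper). On the natural numbers, take the directed edges n -> n+2: every vertex other than the two “roots” 0 and 1 has in-degree exactly one, yet the graph is not a tree but a forest with two infinite components, the evens and the odds. A finite truncation in Python:

          # Toy counterexample: in-degree one off the roots does not force a tree.
          N = 1000
          edges = [(n, n + 2) for n in range(N)]
          nodes = set(range(N + 2))

          # Every non-root vertex has in-degree exactly one.
          in_deg = {v: 0 for v in nodes}
          for _, v in edges:
              in_deg[v] += 1
          assert all(in_deg[v] == 1 for v in nodes if v not in (0, 1))

          # Count weakly connected components with union-find.
          parent = {v: v for v in nodes}
          def find(v):
              while parent[v] != v:
                  parent[v] = parent[parent[v]]
                  v = parent[v]
              return v
          for u, v in edges:
              parent[find(u)] = find(v)
          print(len({find(v) for v in nodes}))  # prints 2, not 1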

  3. June 6, 2011 6:05 pm

    Thank you! Proofs and Refutations is my favorite book. I think there is a very important fact in this book: not every error is a fatal error.

  4. June 6, 2011 8:10 pm

    For my procedure and experimental data (see arxiv.org, author: Lizhi Du), I want to mention the standard testbed at http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/. On this web site, there are 9 files for Hamilton cycles. Our procedure can handle all 9 files, very easily and very fast. For each file, I can quickly get a Hamilton cycle which is different from the web owner’s, because each one has more than one Hamilton cycle. Also, for each file, I can quickly get a Hamilton path from vertex 1 to vertex n.
    Can I upload my results here?

  5. June 7, 2011 5:29 am

    For engineers, an especially wonderful class of articles—very congenial to Lakatos’ philosophy—might be called “Proofs That Suggest New Reasons to Doubt”. Such proofs are described by two of my favorite quotations (it was fun to discover that Lakatos’ Proofs and Refutations includes both of them):

    “The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts.” (Henry George Forder, The Foundations of Euclidean Geometry, 1927, page viii)

    “It is one of the chief merits of proofs that they instill a certain skepticism about the result proved.” (Bertrand Russell, Principles of Mathematics, 1903, page 360)

    Kolmogorov’s celebrated proof that no Turing machine can generate random numbers provides a very good example. As von Neumann put it:

    “Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.”

    And yet many of the most intriguing and delightful elements of modern mathematics have been conceived by the enthusiastic embrace of von Neumann-style sinfulness.

    Along similar lines, and even more broadly, wonderful new possibilities for mathematical sin are associated with our modern understanding of sampling and simulation, in which typical theorems read like:

    “We can rule out a polynomial-time classical algorithm to sample the output distribution of a quantum computer, under the sole assumption that the polynomial hierarchy is infinite.”

    To engineers, such theorems are sign-posts that direct us to a “red-light” district of mathematics where we are beckoned to enthusiastically embrace (what might be called) the disciplines of pseudo-randomness, pseudo-sampling, and pseudo-simulation—mathematical practices that are fun for a deliciously sinful reason: they achieve mathematical effects that formal logic says are impossible. What could be more magical, and more fun, than that? :)

    That is why (if we are lucky) Lakatos’ Proofs and Refutations describes not only the past practices of mathematics, but the future practices as well.

  6. June 7, 2011 10:25 am

    The statement of the Collatz Conjecture is slightly wrong; as stated, the Collatz graph has an edge from 7 to 2. In the original formulation, there is an edge between n and (n-1)/3 only if n is even. I don’t know if the accidentally-posed new problem with these extra edges is any easier than the original.

    The Collatz Conjecture seems to attract amateurs (with the usual leavening of outright cranks) with uncanny effectiveness; it’s a pleasure to see that professionals are still paying attention.

    • June 7, 2011 10:40 am

      Ah, touché! I was trying to pack the statement into one sentence along the lines of Opfer’s paper, but my parentheses need to say “(if it exists and is odd)”. Which I will do there, and note at the bottom as usual.

      Whether the extra edges make a difference is a ticklish question…

      • June 7, 2011 10:53 am

        The extra edges could only make the problem easier. I will be surprised if it turns out that nobody has considered the question.

  7. June 7, 2011 4:34 pm

    It’s not completely distinct from your new proof or new technique categories, but I think that the development of theory deserves a category of its own. Sometimes it is misleading to say that a new theory is giving new proofs — it’s more like the old proofs but generalized and coherently reorganized. (Of course, this process often leads to solutions of further problems.) Because this isn’t really the kind of mathematics I do, I don’t have a good supply of examples — the one in the back of my mind is the development of K-theory.

    • June 8, 2011 2:35 am

      Theory builders vs problem solvers papers again?

      One example that comes to my mind is Schreiber’s work on differential cohomology in a cohesive infinity-topos (or, to most people: jargon in a jargon infinity-jargon). A lot of what it boils down to (and I’m under-selling it here) is recognising that the formalism of differential forms, curvature and so forth from geometry can be formulated in an abstract way such that all the results follow from a few abstract axioms. Most of the examples were known before in more pedestrian disguises, but the unification provides a powerful viewpoint that wasn’t there before.

Trackbacks

  1. Seventh Linkfest
  2. Quantum Refutations and Reproofs « Gödel’s Lost Letter and P=NP
  3. What is the proper role of verification in quantum sampling, simulation, and extended-Church-Turing (E-C-T) testing? | Question and Answer
