Halmos, Non-Standard Analysis and Names
The importance of names and notation
Paul Halmos was a mathematician who did seminal work in many areas of mathematics. He was also one of the greatest writers of mathematics who ever lived. His books and essays are all gems: from his book on set theory to his book on linear algebra, they are all wonderful. Halmos claims to have invented “iff”, and while he did not invent the end-of-proof sign ∎, he introduced it into mathematics. I, like many, have mixed feelings about “iff”, but the end-of-proof symbol is something else.
“The symbol is definitely not my invention; it appeared in popular magazines (not mathematical ones) before I adopted it, but, once again, I seem to have introduced it into mathematics. It is the symbol that sometimes looks like [an empty square], and is used to indicate an end, usually the end of a proof. It is most frequently called the ‘tombstone’, but at least one generous author referred to it as the ‘halmos’.” –Paul Halmos.
I have no personal story to share about Halmos–I never had the honor of meeting him. However, I do think that, if you have not yet, you might look at his essays on the art of writing mathematics. I try to follow some of his advice, but he is definitely in a class by himself. One of his ideas he called the “circular method”. Roughly: if your paper has sections 1, …, n, write section 1 and revise it; then write section 2 and revise both 1 and 2; and so on. At the last step you write section n and revise all n sections.
While it is a process that takes quadratic time, rather than the “optimal” linear time, the sections tend to flow together much better.
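The quadratic cost is easy to make precise. Here is a toy cost model (my own sketch, with made-up function names, not anything from Halmos): at step i you draft section i and then revise sections 1 through i, so step i costs about i passes over a section.

```python
# Toy cost model for the "circular" writing plan (a sketch, not Halmos's own):
# at step i you draft section i and revise sections 1..i,
# so step i costs i passes over a section.

def circular_passes(n):
    # total passes = 1 + 2 + ... + n = n(n+1)/2, i.e. quadratic in n
    return sum(range(1, n + 1))

def linear_passes(n):
    # the "optimal" plan: draft each section once, never revisit
    return n

print(circular_passes(10))  # prints 55
print(linear_passes(10))    # prints 10
```

The point of the model is only that the extra revisiting is a constant-factor-of-n overhead per section, not that anyone should count passes while writing.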
One story that you may like concerns the notion of non-standard analysis. This is an area created by Abraham Robinson to make precise the notions of infinitesimals and infinities that Newton and others used in developing calculus. We have long since changed over from this to the ε and δ precision of Karl Weierstrass. Sometimes the burden of keeping track of many ε’s and δ’s can become overwhelming. See Terry Tao’s post for a modern view of this.
Robinson used non-standard analysis to partially solve a long-standing open problem–I believe it is still unsolved today. The problem asks whether every bounded linear operator T on a complex Hilbert space has an invariant subspace: is there always a non-trivial closed subspace S so that T maps S into S? For finite-dimensional spaces this is true, and follows from basic matrix theory. It is based on the fact that over the complex numbers every matrix has eigenvalues.
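The finite-dimensional case can be checked concretely. The sketch below is my own toy example (not from the post): in two dimensions, an eigenvector v of a matrix A spans a one-dimensional invariant subspace, since A maps span{v} back into span{v}.

```python
# Toy illustration (my own example): an eigenvector spans an invariant subspace.

def matvec(A, v):
    # multiply matrix A by vector v
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2.0, 1.0],
     [1.0, 2.0]]   # characteristic polynomial (x-1)(x-3): eigenvalues 1 and 3
v = [1.0, 1.0]     # eigenvector for eigenvalue 3

Av = matvec(A, v)
lam = Av[0] / v[0]  # recover the eigenvalue from the first coordinate
# A*v is a scalar multiple of v, so A maps span{v} into span{v}
assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
print(lam)  # prints 3.0
```

For an n-by-n complex matrix the same argument works because the characteristic polynomial always has a root over the complex numbers; it is exactly this step that has no easy analogue in infinite dimensions.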
What Robinson and his co-author Allen Bernstein proved was that a large class of operators have invariant subspaces. Their proof used non-standard analysis, which caused quite a stir at the time. First, it made a major contribution to a long-standing open problem. Second, it used a totally new proof technology–non-standard analysis. Very roughly, they showed that the operators they considered behaved like finite-dimensional operators. Of course this was not exactly correct, but in non-standard analysis very large objects are still “finite” in a suitable sense. This allowed them to prove their theorem, since the finite-dimensional case is true.
Note the analogy to my last post on the quantum method. Their proof used non-standard analysis, yet the statement of the theorem was classical. There was no mention of infinitesimals or infinite numbers in the statement of the theorem. This is exactly what the quantum method does today.
Halmos, an expert on Hilbert spaces, was determined to show that the use of non-standard analysis was not essential. Relatively quickly, but not overnight, he found a classical proof of the same theorem. He was happy about this, since he could avoid learning Robinson’s theory. However, non-standard analysis has proved its worth: since then there have been a number of additional proofs that were first discovered by non-standard methods.
Names and Notation
I thought that today I would have a short post on “names and notation.” I am working on a more technical post that will be done shortly.
Halmos said he often had a nightmare that someone would write, in their paper, an expression in which ε tends to infinity.
This is, of course, perfectly correct mathematics, but the conventions that we have on symbols make it hard on the eyes. It is not nice to have ε go to infinity, nor to have N tend to 0.
Notation and names are important, more than we often imagine. In our paper on “Turing Machines with Advice” we introduced to complexity theory the now standard notation P/poly and the more general advice-class notation. Actually, while the results of that famous paper are joint, the notation is solely due to Dick Karp. He was working on a draft of the final paper and thought of this wonderful notation. I wish we had a dollar for every use of the notation; even a reference every time it is used would be fine. Oh well.
Karp has a gift for naming things. The P=NP question itself is hard to beat: it is short, pretty, informative–just right. Would the question be nearly as cool if it were called “Hypothesis K” or something else? When our new building was built at Princeton University, the chair of the department, Bob Sedgewick, had “P=NP?” encoded into the bricks on one side of the building: there are bricks that, by their shape, encode the phrase “P=NP?” in ASCII. I doubt this would have happened if the problem were not stated in just five symbols.
One of my advisors when I was in graduate school was a logician named Don Loveland. Don had worked on a type of theorem-proving system that he called Linear Resolution. Another researcher, at almost the same time, had worked on the same idea, but he called his version “Ancestral Filter Form”. Loveland’s name stuck, and that certainly did not hurt his ability to get recognition.
Another perfect example concerns one of my favorite problems, what is now called the Junta Problem. I believe that the problem is originally due to Avrim Blum. The problem is: suppose that f is a boolean function on n bits that depends only on k of them. The problem is to determine the set of k relevant bits from only random samples, i.e. you get to see
x, f(x)
for many randomly selected x. Typically k is much smaller than n; the main range that we are interested in is k = O(log n). I plan a detailed post on this problem. There is a trivial bound of roughly n^k. This has been improved for arbitrary functions, and even better results are known for symmetric functions.
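The trivial bound comes from brute force over subsets of coordinates. Here is a minimal sketch of that approach (my own illustration; the function names are made up): enumerate all size-k subsets and keep one that is consistent with every sample. For determinism, the toy check below feeds in all 2^n inputs rather than random ones.

```python
from itertools import combinations, product

def find_junta(samples, n, k):
    """Brute force over all ~n^k size-k coordinate subsets: return the first
    subset S such that the label is a function of x restricted to S on every
    sample, or None if no such subset exists."""
    for S in combinations(range(n), k):
        table = {}   # maps x restricted to S -> observed label
        ok = True
        for x, y in samples:
            key = tuple(x[i] for i in S)
            if table.setdefault(key, y) != y:
                ok = False   # same restriction, two different labels
                break
        if ok:
            return S
    return None

# Toy check: a hypothetical f that depends only on bits 0 and 2 (an AND).
f = lambda x: x[0] & x[2]
samples = [(x, f(x)) for x in product([0, 1], repeat=5)]
print(find_junta(samples, 5, 2))  # prints (0, 2)
```

Each subset is checked in time linear in the number of samples, so the whole sketch runs in roughly n^k times the sample size, matching the trivial bound above.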
The name “the Junta Problem” is terrific. It describes the problem well and is fun. It should be called “Blum’s Problem”, but that is not to be.
My advisor Loveland used to give me advice about naming the concepts in your papers: either pick a really good name or a terrible one for a new concept. The idea behind a terrible one is that then perhaps the community would name the concept after you. Perhaps if Karp had not used P=NP, the conjecture would now be the Karp Hypothesis or the Karp Conjecture. I kind of like P=NP. It just sounds right.
The open problem for you to consider is to pick good names for your new ideas. Although, as Don points out, perhaps you should pick terrible ones.