The importance of names and notation

Paul Halmos was a mathematician who did seminal work in many areas of mathematics. He was also one of the greatest writers of mathematics who ever lived. His books and essays are all gems: from his book on set theory to his book on linear algebra, they are all wonderful. Halmos claimed to have invented “iff”, and he definitely introduced the end-of-proof sign ${\square}$. Like many, I have mixed feelings about “iff”, but the end-of-proof symbol is something else.

“The symbol ${\square}$ is definitely not my invention; it appeared in popular magazines (not mathematical ones) before I adopted it, but, once again, I seem to have introduced it into mathematics. It is the symbol that sometimes looks like [an empty square], and is used to indicate an end, usually the end of a proof. It is most frequently called the ‘tombstone’, but at least one generous author referred to it as the ‘halmos’.” – Paul Halmos

I have no personal story to share about Halmos–I never had the honor of meeting him. However, if you have not yet, you might look at his essays on the art of writing mathematics. I try to follow some of his advice, but he is definitely in a class by himself. One of his ideas he called the “circular method”: roughly, if your paper has ${n}$ sections, write section ${\S 1}$ and revise it; then write section ${\S 2}$ and revise both ${\S 1}$ and ${\S 2}$; and so on. At the last step you write section

$\displaystyle {\S n}$ and revise ${\S 1 \text{ and } \S 2 \text{ and } \dots \text { and } \S n}$

While this process takes quadratic time, rather than the “optimal” linear time, the sections tend to flow together much better.
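To spell out the quadratic count: pass ${i}$ of the circular method writes section ${\S i}$ and then revises sections ${\S 1}$ through ${\S i}$, so the total number of section-passes is

$\displaystyle \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \Theta(n^{2}),$

versus the ${n}$ passes of a single linear sweep through the paper.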

One story that you may like concerns non-standard analysis. This is an area created by Abraham Robinson to make precise the notions of infinitesimals and infinities that Newton and others used in developing calculus. We have long since changed over from these to the ${\epsilon}$ and ${\delta}$ precision of Karl Weierstrass. Sometimes the burden of keeping track of many ${\epsilon}$’s and ${\delta}$’s can become overwhelming. See Terry Tao’s post for a modern view of this.

Robinson used non-standard analysis to partially solve a long-standing open problem–I believe it is still unsolved today. The problem asks whether every bounded linear operator ${A}$ on a complex Hilbert space has an invariant subspace: is there always a non-trivial closed subspace ${S}$ so that ${A}$ maps ${S}$ to ${S}$? For finite-dimensional spaces this is true, and follows from basic matrix theory: over the complex numbers every matrix has an eigenvalue, and the span of a corresponding eigenvector is an invariant subspace.

What Robinson and his co-author Allen Bernstein proved was that a large class of operators have invariant subspaces. Their proof used non-standard analysis, which caused quite a stir at the time. First, it made a major contribution to a long-standing open problem. Second, it used a totally new proof technology–non-standard analysis. Very roughly, they showed that the operators they considered behave like finite-dimensional operators. Of course this is not exactly correct, but in non-standard analysis very large objects are still “finite” in a suitable sense. This allowed them to prove their theorem, since the finite-dimensional case is true.

Note the analogy to my last post on the quantum method. Their proof used non-standard analysis, yet the statement of the theorem was classical. There is no mention of infinitesimals or infinite numbers in the statement of the theorem. This is exactly what the quantum method does today.

Halmos, an expert on Hilbert spaces, was determined to show that the use of non-standard analysis was not essential. Relatively quickly, but not overnight, he found a classical ${\epsilon}$ and ${\delta}$ proof of the same theorem. He was happy about this, since he could avoid learning Robinson’s theory. However, non-standard analysis has since proved its worth: a number of additional results were first discovered by non-standard methods.

Names and Notation

I thought that today I would have a short post on “names and notation.” I am working on a more technical post that will be done shortly.

Halmos said he often had a nightmare that someone would write this in their paper:

$\displaystyle \lim_{\epsilon \rightarrow \infty} n_{\epsilon} = 0.$

This is, of course, perfectly correct mathematics, but the conventions that we have on symbols make it hard on the eyes. It is not nice to have ${\epsilon}$ go to ${\infty}$ nor have ${n_{\epsilon}}$ tend to ${0}$.

Notation and names are important, more than we often imagine. In our paper on “Turing Machines with Advice” we introduced to complexity theory the now-standard notation ${{\cal C}/poly}$ and the more general ${{\cal C}/f(n)}$. Actually, while the results of that famous paper are joint, the notation is solely due to Dick Karp. He was working on a draft of the final paper and thought of this wonderful notation. I wish we had a dollar for every use of the notation–even a reference every time it is used would be fine. Oh well.

Karp has a gift for naming things. The P=NP question itself is hard to beat: it is short, pretty, informative–just right. Would the question be nearly as cool if it were “Hypothesis K” or something else? When our new building was built at Princeton University, the chair of the department, Bob Sedgewick, had “P=NP?” encoded into the bricks on one side of the building: the bricks, by their shape, spell out the phrase in ASCII. I doubt this would have happened if the problem were not stated in just five symbols.

One of my advisors when I was in graduate school was a logician named Don Loveland. Don had worked on a type of theorem-proving system that he called Linear Resolution. Another researcher had worked on the same idea at almost the same time, but called his version “Ancestral Filter Form”. Loveland’s name stuck, and certainly did not hurt his ability to get recognition.

Another perfect example concerns one of my favorite problems, what is now called the Junta Problem. I believe the problem is originally due to Avrim Blum. Suppose that ${f:\{0,1\}^{n} \rightarrow \{0,1\}}$ is a boolean function that depends on only ${k \ll n}$ bits. The problem is to determine ${f}$ from random samples alone, i.e. you get to see

$\displaystyle (x,f(x))$

for many randomly selected ${x}$. Typically ${k}$ is much smaller than ${n}$; the main range of interest is ${k = \Theta(\log n)}$. I plan a detailed post on this problem. There is a trivial bound of ${n^{k+O(1)}}$, which has been improved for arbitrary functions, and even better results are known for symmetric functions.
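To make the trivial ${n^{k+O(1)}}$ bound concrete, here is a minimal sketch of the brute-force approach (the function name and interface are my own, for illustration only): try every ${k}$-subset of coordinates and return the first one on which the samples are consistent.

```python
from itertools import combinations, product

def learn_junta(samples, n, k):
    """Brute-force junta learning: for each of the C(n, k) subsets of
    coordinates, check whether the samples are consistent with some
    function depending only on those coordinates.  Total time is
    roughly n^k times the number of samples."""
    for subset in combinations(range(n), k):
        table = {}  # restriction of x to subset -> observed f(x)
        consistent = True
        for x, y in samples:
            key = tuple(x[i] for i in subset)
            if table.get(key, y) != y:  # two samples disagree on this subset
                consistent = False
                break
            table[key] = y
        if consistent:
            return subset, table  # a k-junta explaining all the samples
    return None

# Demo: f depends only on bits 0 and 2 (their XOR), with n = 4, k = 2.
samples = [(x, x[0] ^ x[2]) for x in product([0, 1], repeat=4)]
found_subset, found_table = learn_junta(samples, 4, 2)
```

On these exhaustive samples the first consistent subset found is ${(0,2)}$; with only a few random samples several subsets may look consistent, which is exactly why the sample complexity of the problem is interesting.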

The name “the Junta Problem” is terrific. It describes the problem well and is fun. It should be called “Blum’s problem”, but that is not to be.

Open Problems

My advisor Loveland used to give me advice about naming concepts in papers: either pick a really good name for a new concept, or a terrible one. The idea behind a terrible one is that perhaps the community would then name the concept after you. Perhaps if Karp had not used P=NP, the conjecture would now be the Karp Hypothesis or the Karp Conjecture. I kind of like P=NP. It just sounds right.

The open problem for you to consider is to pick good names for your new ideas. Although, as Don points out, perhaps you should pick terrible ones ${\dots}$

1. March 30, 2009 11:15 am

According to Fortnow, “NP” is not the best name for the complexity class.

http://weblog.fortnow.com/2006/07/naming-complexity-classes.html

A lot of my non-TCS friends tend to forget what NP stands for and think that it means non-polynomial time.

2. March 30, 2009 8:16 pm

Dear Dr. Lipton:

I agree with you that Halmos’ writings are powerful and influential.

On the other hand, Halmos was too much of the old school and derided computers (and computer science) at every opportunity. One of his (in)famous statements was “Computers are important – but not for mathematics”.

You can find a tirade against Halmos (and Atiyah in the same article) by another charming (if garrulous) mathematician, Zeilberger. I heartily recommend his entries to the readers of this blog.

3. April 1, 2009 3:05 pm

Dear Dr. Lipton:

I thought I had previously submitted this comment, but it may not have gotten through. Here’s another try:

Halmos and I overlapped at Indiana University, although we were in different departments. I too am an admirer of Halmos’ superb writing ability, on display in, e.g., “Selecta: Expository Writing” [Halmos (1983)] and in his fascinating “I Want to be a Mathematician” [Halmos (1988a)]. Halmos was also interested in photography [Halmos (1988b)] and in education. On page 258 of Halmos (1988a) he wrote:

“Some say that the only possible effect of the Moore method is to produce research mathematicians, but I don’t agree. The Moore method is, I am convinced, the right way to teach anything and everything. It produces students who can understand and use what they have learned. It does, to be sure, instill the research attitude in the student – the attitude of questioning everything and wanting to learn answers actively – but that’s a good thing in every human endeavor, not only in mathematical research. There is an old Chinese proverb that I learned from Moore himself: ‘I hear, I forget; I see, I remember. I do, I understand.’”

See also my post “Re: Teaching Problem Solving – Moore Method” [Hake (2002)].

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
Honorary Member, Curmudgeon Lodge of Deventer, The Netherlands.

http://www.physics.indiana.edu/~hake/

http://www.physics.indiana.edu/~sdi/

http://HakesEdStuff.blogspot.com/

REFERENCES
Hake, R.R. 2002. “Re: Teaching Problem Solving – Moore Method,” online at http://tinyurl.com/3droym . Post of 22 Oct 2002 10:02:56-0700 to Phys-L, PhysLrnR, & Math-Learn. To access the archives of PhysLnR one needs to subscribe, but that takes only a few minutes by clicking on http://listserv.boisestate.edu/archives/physlrnr.html and then clicking on “Join or leave the list (or change settings).” If you’re busy, then subscribe using the “NOMAIL” option under “Miscellaneous.” Then, as a subscriber, you may access the archives and/or post messages at any time, while receiving NO MAIL from the list!

Halmos, P.R. 1983. “Selecta: Expository Writing” Springer Verlag. From the forward: “Paul Halmos has been famous for several decades as a past master in all forms of mathematical exposition: articles, books, lectures.”

Halmos, P.R. 1988a. “I Want to be a Mathematician: An Automathography in Three Parts,” Mathematical Association of America.

Halmos, P.R. 1988b. “I Have a Photographic Memory,” Mathematical Association of America. Contains a nice photo of Magnus R. Hestenes. Halmos’s caption reads: “Magnus was a part of the Chicago crowd when I first met him, but he has been at UCLA for a long time. I associate his name with the calculus of variations, the old Bliss school.” Magnus was also co-producer of David Hestenes, creator of that well-known geometric-calculus variant [Hestenes (2001)].

Hestenes, D. 2001. “Oersted Medal Lecture: Reforming the Mathematical Language of Physics,” AAPT Announcer 31 (4): 64: online as a pdf at http://modelingnts.la.asu.edu/ / “Overview of GC” where “/” means “click on.” See also “Spacetime Physics with Geometric Algebra” loc. cit.

4. April 20, 2009 1:22 pm

I also never met Halmos but was connected to him through several paths by professors and collaborators. Two comments to add to Dick Lipton’s:

1. Halmos understood the power of notation to conceal what is really going on. He is most likely the “PRH Anonymous” who authored the following verse (published in a 1957 Summer School on Logic):

If you think that your paper is vacuous
Use the first order functional calculus.
It then becomes logic and as if by magic
The obvious is hailed as miraculous.

2. Years before Dana Scott’s Boolean Valued Models were used to simplify forcing arguments in set theory independence proofs, Halmos became convinced that the algebraic formulation of predicate calculus was the right way to unify both the notation and the machinery of predicate logic. His 1956 exposition of algebraic logic in Volume 53 of The American Mathematical Monthly famously concludes: “In view of the algebraic characterization of syntactic completeness [the Godel incompleteness theorem] can be rephrased thus: not every Peano algebra is a simple polyadic algebra… The conclusion is that the crowning glory of modern logic (algebraic or not) is the assertion: mathematics is not necessarily simple.”