# Halmos, Non-Standard Analysis and Names

*The importance of names and notation*

Paul Halmos was a mathematician who did seminal work in many areas of mathematics. He was also one of the greatest writers of mathematics who ever lived. His books and essays are all gems: from his book on set theory to his book on linear algebra, they are all wonderful. Halmos claimed to have invented “iff”, and he definitely introduced the end-of-proof sign ∎ into mathematics. Like many, I have mixed feelings about “iff”, but the end-of-proof symbol is something else.

“The symbol is definitely not my invention; it appeared in popular magazines (not mathematical ones) before I adopted it, but, once again, I seem to have introduced it into mathematics. It is the symbol that sometimes looks like [an empty square], and is used to indicate an end, usually the end of a proof. It is most frequently called the ‘tombstone’, but at least one generous author referred to it as the ‘halmos’.”

Paul Halmos.

I have no personal story to share about Halmos; I never had the honor of meeting him. However, I do think that, if you have not yet, you might look at his essays on the art of writing mathematics. I try to follow some of his advice, but he is definitely in a class by himself. One of his ideas he called the “circular method”. Roughly: if your paper has n sections, write section 1 and revise it; then write section 2 and revise both sections 1 and 2; and so on. At the last step you write section n and revise sections 1 through n.

While this process takes quadratic time, rather than the “optimal” linear time, the sections tend to flow together much better.
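As a toy illustration (my sketch, not Halmos's), the total number of revision passes under the circular method is 1 + 2 + ⋯ + n = n(n+1)/2, which is why it is quadratic:

```python
def circular_method_revisions(n):
    """Total revision passes when, after writing section i, you revise sections 1..i."""
    return sum(i for i in range(1, n + 1))  # equals n * (n + 1) // 2

# A ten-section paper gets 55 revision passes; a single linear pass gives 10.
```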

One story that you may like concerns the notion of *non-standard* analysis. This is an area created by Abraham Robinson to make precise the notions of infinitesimals and infinities that Newton and others used in developing calculus. We long ago changed over from this to the ε and δ precision of Karl Weierstrass. Sometimes the burden of keeping track of many ε's and δ's can become overwhelming. See Terry Tao’s post for a modern view of this.

Robinson used non-standard analysis to partially solve a long-standing open problem, which I believe is still unsolved today. The problem asks whether every bounded linear operator T on a complex Hilbert space has an invariant subspace: is there always a non-trivial closed subspace M so that T maps M into M? For finite-dimensional spaces this is true, and follows from basic matrix theory. It is based on the fact that over the complex numbers every matrix has an eigenvalue.
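The finite-dimensional argument is concrete: any eigenvector v of a complex matrix T spans a one-dimensional invariant subspace, since Tv = λv stays in span{v}. A quick numerical check of this (my sketch using NumPy; the particular matrix is arbitrary):

```python
import numpy as np

# Every complex square matrix has an eigenvalue (fundamental theorem of algebra).
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(T)

v = eigenvectors[:, 0]   # an eigenvector of T
w = T @ v                # applying T keeps us inside span{v} ...
assert np.allclose(w, eigenvalues[0] * v)  # ... because T v = lambda v
```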

What Robinson and his co-author Allen Bernstein proved was that a large class of operators have invariant subspaces. Their proof used non-standard analysis, which caused quite a stir at the time. First, it made a major contribution to a long-standing open problem. Second, it used a totally new proof technology: non-standard analysis. Very roughly, they showed that the operators they considered behave like finite-dimensional operators. Of course this is not exactly correct, but in non-standard analysis very large objects are still “finite” in a suitable sense. This allowed them to prove their theorem, since the finite-dimensional case is true.

Note the analogy to my last post on the quantum method. Their proof used non-standard analysis, yet the statement of the theorem was classical: there was no mention of infinitesimals or infinite numbers in the statement of the theorem. This is exactly what the quantum method does today.

Halmos, an expert on Hilbert spaces, was determined to show that the use of non-standard analysis was not essential. Relatively quickly, but not overnight, he found a classical proof of the same theorem. He was happy about this, since he could avoid learning Robinson’s theory. However, non-standard analysis has proved its worth: since then there have been a number of additional proofs that were first discovered by non-standard methods.

**Names and Notation**

I thought that today I would have a short post on “names and notation.” I am working on a more technical post that will be done shortly.

Halmos said he often had a nightmare that someone would write, in their paper, a formula in which n goes to 0 and ε tends to infinity.

This is, of course, perfectly correct mathematics, but the conventions that we have for symbols make it hard on the eyes. It is not nice to have n go to 0, nor to have ε tend to infinity.

Notation and names are important, more than we often imagine. In our paper on “Turing Machines with Advice” we introduced to complexity theory the now-standard notation P/poly and the more general notion of advice classes. Actually, while the results of that famous paper are joint, the notation is solely due to Dick Karp. He was working on a draft of the final paper and thought of this wonderful notation. I wish we had a dollar for every use of the notation; even a reference every time it is used would be fine. Oh well.

Karp has a gift for naming things. The P=NP question itself is hard to beat: it is short, pretty, informative, just right. Would the question be nearly as cool if it were “Hypothesis K” or something else? When our new building was built at Princeton University, the chair of the department, Bob Sedgewick, had “P=NP?” encoded into the bricks on one side of the building: there are bricks that, by their shape, encode the phrase “P=NP?” in ASCII. I doubt this would have happened if the problem could not be stated in just five symbols.
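The brick trick is easy to reproduce: each character becomes its ASCII code in binary, with one brick shape per bit. A sketch (the building has its own brick convention; this just shows the bit-string):

```python
def to_ascii_bits(text):
    """Encode each character as 8 ASCII bits, e.g. for bricks of two shapes."""
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_ascii_bits("P=NP?"))
# 'P' is ASCII 80 = 01010000, '=' is 61 = 00111101, and so on.
```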

One of my advisors when I was in graduate school was a logician named Don Loveland. Don had worked on a type of theorem-proving system that he called *Linear Resolution*. Another researcher at almost the same time had worked on the same idea, but he called his version “Ancestral Filter Form”. Loveland’s name stuck, and certainly did not hurt his ability to get recognition.

Another perfect example concerns one of my favorite problems, what is now called the *Junta Problem*. I believe that the problem is originally due to Avrim Blum. The problem is: Suppose that f is a boolean function on n bits that depends on only k of them. The problem is to determine those k bits from random samples alone, i.e. you get to see

f(x)

for many randomly selected x. Typically k is much smaller than n; the main range that we are interested in is k on the order of log n. I plan a detailed post on this problem. There is a trivial bound of roughly n^k. This has been improved for arbitrary functions, and even better results are known for symmetric functions.
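The trivial bound comes from brute force: try every subset of k coordinates and check whether some function of just those coordinates is consistent with all the samples, which costs roughly n^k subsets. A sketch (my illustration, not any published algorithm):

```python
from itertools import combinations

def find_junta(samples, n, k):
    """Find a set of k coordinates consistent with all (x, f(x)) samples.

    Tries all C(n, k) subsets -- the trivial n^k-ish bound from the text.
    """
    for subset in combinations(range(n), k):
        table = {}      # maps restricted input -> observed output
        ok = True
        for x, y in samples:
            key = tuple(x[i] for i in subset)
            if table.setdefault(key, y) != y:
                ok = False   # same restricted input, different outputs
                break
        if ok:
            return subset
    return None

# Example: f depends only on bits 0 and 2 (f = x0 XOR x2), with n = 4, k = 2.
samples = [((a, b, c, d), a ^ c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]
```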

The name “the Junta Problem” is terrific: it describes the problem well and is fun. It should be called “Blum’s Problem”, but that is not to be.

**Open Problems**

My advisor Loveland used to tell me about naming concepts in your papers: either pick a really good name or a terrible one for a new concept. The idea of a terrible one is that then perhaps the community will name it after you. Perhaps if Karp had not used P=NP, the conjecture would now be the Karp Hypothesis or the Karp Conjecture. I kind of like P=NP. It just sounds right.

The open problem for you to consider is to pick good names for your new ideas. Although, as Don points out, perhaps you should pick terrible ones.

According to Fortnow, “NP” is not the best name for the complexity class.

http://weblog.fortnow.com/2006/07/naming-complexity-classes.html

Many of my non-TCS friends tend to forget what NP is and think that it means non-polynomial time.

Dear Dr. Lipton:

I agree with you that Halmos’ writings are powerful and influential.

When I was an undergraduate, my professors advised me to read “Naive Set Theory”, and my understanding of mathematics dramatically changed after reading it! Later, when I was in grad school, I read his article titled “The Heart of Mathematics” (in the Math Monthly). In this article, Halmos concludes that the heart of mathematics is not in grand theories or applications, but in well-posed (olympiad-like) problems. He then surveys problem books (including Polya-Szego, Sierpinski, etc.). I guess this article influenced many to take up puzzle and problem solving seriously. Halmos should also be commended for donating a sizable fortune to the MAA.

On the other hand, Halmos was too much of the old school and derided computers (and computer science) at every opportunity. One of his (in)famous statements was “Computers are important – but not for mathematics”.

You can find a tirade against Halmos (and Atiyah in the same article) by another charming (if garrulous) mathematician, Zeilberger. I heartily recommend his entries for the readers of this blog.

Dear Dr. Lipton:

I thought I had previously submitted this comment, but it may not have gotten through. Here’s another try:

Halmos and I overlapped at Indiana University, although we were in different departments. I too am an admirer of Halmos’ superb writing ability, on display in, e.g., “Selecta: Expository Writing” [Halmos (1983)] and in his fascinating “I Want to be a Mathematician” [Halmos (1988a)]. Halmos was also interested in photography [Halmos (1988b)] and in education. On page 258 of Halmos (1988a) he wrote:

“Some say that the only possible effect of the Moore method is to produce research mathematicians, but I don’t agree. The Moore method is, I am convinced the right way to teach anything and everything. It produces students who can understand and use what they have learned. It does, to be sure, instill the research attitude in the student – the attitude of questioning everything and wanting to learn answers actively — but that’s a good thing in every human endeavor, not only in mathematical research. There is an old Chinese proverb that I learned from Moore himself: ‘I hear, I forget; I see, I remember. I do, I understand.’ “

See also my post “Re: Teaching Problem Solving – Moore Method” [Hake (2002)].

Richard Hake, Emeritus Professor of Physics, Indiana University

24245 Hatteras Street, Woodland Hills, CA 91367

Honorary Member, Curmudgeon Lodge of Deventer, The Netherlands.

rrhake@earthlink.net

http://www.physics.indiana.edu/~hake/

http://www.physics.indiana.edu/~sdi/

http://HakesEdStuff.blogspot.com/

REFERENCES

Hake, R.R. 2002. “Re: Teaching Problem Solving – Moore Method,” online at http://tinyurl.com/3droym . Post of 22 Oct 2002 10:02:56-0700 to Phys-L, PhysLrnR, & Math-Learn. To access the archives of PhysLnR one needs to subscribe, but that takes only a few minutes by clicking on http://listserv.boisestate.edu/archives/physlrnr.html and then clicking on “Join or leave the list (or change settings).” If you’re busy, then subscribe using the “NOMAIL” option under “Miscellaneous.” Then, as a subscriber, you may access the archives and/or post messages at any time, while receiving NO MAIL from the list!

Halmos, P.R. 1983. “Selecta: Expository Writing” Springer Verlag. From the forward: “Paul Halmos has been famous for several decades as a past master in all forms of mathematical exposition: articles, books, lectures.”

Halmos, P.R. 1988a. “I Want to be a Mathematician: An Automathography in Three Parts,” Mathematical Association of America.

Halmos, P.R. 1988b. “I Have a Photographic Memory,” Mathematical Association of America. Contains a nice photo of Magnus R. Hestenes. Halmos’s caption reads: “Magnus was a part of the Chicago crowd when I first met him, but he has been at UCLA for a long time. I associate his name with the calculus of variations, the old Bliss school.” Magnus was also co-producer of that well-known geometric-calculus variant David Hestenes (2001).

Hestenes, D. 2001. “Oersted Medal Lecture: Reforming the Mathematical Language of Physics,” AAPT Announcer 31 (4): 64: online as a pdf at http://modelingnts.la.asu.edu/ / “Overview of GC” where “/” means “click on.” See also “Spacetime Physics with Geometric Algebra” loc. cit.

I also never met Halmos but was connected to him through several paths by professors and collaborators. Two comments to add to Dick Lipton’s:

1. Halmos understood the power of notation to conceal what is really going on. He is most likely the “PRH Anonymous” who was the author of the following (published in a 1957 Summer School on Logic):

If you think that your paper is vacuous

Use the first order functional calculus.

It then becomes logic and as if by magic

The obvious is hailed as miraculous.

2. Years before Dana Scott’s Boolean-valued models were used to simplify forcing arguments in set-theory independence proofs, Halmos became convinced that the algebraic formulation of predicate calculus was the right way to unify both the notation and the machinery of predicate logic. His 1956 exposition of algebraic logic in Volume 53 of The American Mathematical Monthly famously concludes: “In view of the algebraic characterization of syntactic completeness [the Gödel incompleteness theorem] can be rephrased thus: not every Peano algebra is a simple polyadic algebra…The conclusion is that the crowning glory of modern logic (algebraic or not) is the assertion: mathematics is not necessarily simple.”

I had the distinct pleasure of being one of Dr. Halmos’s students at UC Santa Barbara from 1976-1978. One particularly delightful course was an undergraduate number theory course which he conducted in accordance with the Moore Method. We had no text, and he basically challenged each of us to get up to the chalkboard and try to prove theorems from first principles. It was probably the most exciting math class I had ever encountered. Halmos’s vast interest in all of mathematics was contagious.

He invited me once to his office and took a picture of me with his Polaroid camera. We hit it off really well, probably because I shared his enthusiasm for the beauty of mathematics. He actually invited me to become his PhD student, although he indicated that UCSB was probably not the place to do it (he was dropping a hint that he was thinking about going elsewhere himself). Being young and foolish and not really knowing what I wanted to do with my life at the time, I failed to grasp that unique opportunity. I believe Sunder, who was a PhD student with Dr. Halmos at UCSB, turned out to be Halmos’s last PhD student. I might have had that distinction if I had taken him up on his invitation.

At any rate, Dr. Halmos was a superb teacher whose spirit has kept me interested in mathematics all throughout my subsequent career as an aerospace engineer. It was a unique privilege to have studied under him.