# Notation And Thinking

*Does notation shape our thinking?*

Kenneth Iverson was a mathematician most famous for designing APL. This was the name of his programming language, and it cleverly stood for “A Programming Language.” The language is unlike almost any other, and it contains many powerful and interesting ideas. He won the 1979 Turing Award for this and related work.

Today I want to talk about notation in mathematics and theory, and how notation can play a role in our thinking.

When I was a junior faculty member at Yale University, in the early 1970’s, APL was the language we used in our beginning programming class. The reason we used this language was simple: Alan Perlis, the leader of the department, loved the language. I was never completely sure why Alan loved it, but he did. And so we used it to teach our beginning students how to program.

Iverson had created his language first as a notation for describing complex digital systems. The notation was so powerful that even a complex object like a processor could be written in his language in relatively few lines of code: the lines might be close to unreadable, but they were few. Later the language was implemented and had a small but strong set of believers. Clearly, Alan was one of them; he once said:

> A language that doesn’t affect the way you think about programming, is not worth knowing.

The language APL was great for working with vectors, matrices, and even higher-order objects. It had an almost uncountable number of built-in symbols that could perform powerful operations on these objects. For example, the classic program to find all the primes up to $R$ is `(~R∊R∘.×R)/R←1↓⍳R`.

This is typical APL: very succinct, lots of powerful operators, and no keywords. The explanation of this program is here.
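The classic APL primes one-liner works by forming the outer product of the candidates with themselves (every pairwise product) and then filtering out anything that appears in that multiplication table. As a rough illustration of the same idea, here is a hedged sketch in Python; the function name `primes_below` is mine, not from the post:

```python
def primes_below(n):
    """Primes less than n, via the outer-product trick behind the
    classic APL one-liner: a candidate is composite exactly when it
    appears in the multiplication table of all the candidates."""
    r = list(range(2, n))                       # candidates, like APL's 1↓⍳n
    products = {a * b for a in r for b in r}    # outer product, like R∘.×R
    return [k for k in r if k not in products]  # keep the non-products

print(primes_below(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

This is quadratic work where a sieve would do less, but it is faithful to the array style: build everything at once, then select.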

One comment about APL is:

> Famous for its enormous character set, and for being able to write whole accounting packages or air traffic control systems with a few incomprehensible key strokes.

Not quite right, not very nice, but this comment actually captures the spirit of Iverson’s creation: making complex functions expressible as very short expressions was an interesting idea. For example, Michael Gertelman has written Conway’s Game of Life as one line of APL:

This should make it clear that this language was very powerful, perhaps too powerful.
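One-liners like this are possible because Life’s update rule is itself array-shaped: sum shifted copies of the board, then apply a threshold. Here is a hedged sketch of that scheme in Python, using plain lists with toroidal wrap-around (an APL version would express the shifts as rotations):

```python
def life_step(grid):
    """One Game of Life generation on a toroidal grid of 0s and 1s:
    count each cell's eight neighbors via wrapped index shifts, then
    apply the birth/survival rule (born on 3, survive on 2 or 3)."""
    n, m = len(grid), len(grid[0])
    counts = [[sum(grid[(r + dr) % n][(c + dc) % m]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
               for c in range(m)] for r in range(n)]
    return [[1 if counts[r][c] == 3 or (grid[r][c] and counts[r][c] == 2)
             else 0 for c in range(m)] for r in range(n)]
```

A horizontal blinker (three live cells in a row) becomes a vertical one after a single step, the standard sanity check.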

When I had to learn APL in order to teach the beginning class, I decided to start a project on building a better implementation. The standard implementation of the language was an interpreter, and one of my Yale Ph.D. students, Tim Budd, eventually wrote a compiler for APL. We also proved, in modern terms, a theorem about the one-liners of the language: each one-liner could be implemented in logspace. This was not completely trivial, since the operators were so powerful. Perhaps another time I will discuss this work in more detail. For now see this for Tim’s book on his compiler.

Let’s turn to discuss various notations used in math and theory.

## Good Notation

It is hard to imagine, but there was a time when mathematicians did not even have basic symbols to express their ideas. Many believe that notation helps shape the way you think; clearly, without basic symbols, modern mathematics would be impossible, or at least extremely difficult.

**Robert Recorde** is credited with introducing the equality symbol in 1557. He said:

> to avoid the tedious repetition of these words: “is equal to,” I will set (as I do often in work use) a pair of parallels of one length (thus =), because no two things can be more equal.

**René Descartes** is known for the first use of superscripts to denote powers: $x^2$, $x^3$, and so on.

François Viète introduced the idea of using vowels for unknowns and consonants for known quantities. Descartes changed this to: use letters at the end of the alphabet for unknowns and letters at the beginning for knowns.

Descartes thought that $x$, $y$, and $z$ would be used equally by mathematicians. But it is $x$ that dominates. The story—it may be only a legend—is that there is a reason $x$ became the dominant letter to denote an unknown. Printers had to set Descartes’ papers in type, and they used many $y$’s and $z$’s, since the French language uses those letters quite a bit. But French almost never uses $x$, so Descartes’ La Géométrie used $x$ as the variable most of the time.

**Isaac Newton and Gottfried Leibniz** invented **the** calculus. There is a controversy that continues to this day over who invented what and who invented it first. This is sometimes called the calculus war—see here for some information.

Independent of who invented what, they did use different notations; at least that is without controversy. Newton used the dot notation and Leibniz the differential notation: Newton would write $\dot{y}$ while Leibniz would write $\frac{dy}{dx}$ for the same thing. The clear winner, most agree, is the better notation of Leibniz. It is even claimed that British mathematicians, by sticking to Newton’s poorer notation, lagged behind the rest of Europe for decades.

**Leonhard Euler** introduced many of the common symbols and notation we still use today. He used $e$ and $\pi$ for the famous constants and $i$ for the square root of $-1$. For summation Euler used $\Sigma$, and he also introduced the notation for functions: $f(x)$. One can look at some of his old papers and see equations and expressions that look quite modern—of course they are old, but they look modern because we still use his notation.

**Johann Gauss** introduced many ideas and proved many great theorems, but one of his most important contributions concerns the congruence notion. Leonhard Euler introduced the notion earlier, but without Gauss’s brilliant notation for congruences, they would not be so easy to work with. Writing

$$a \equiv b \pmod{m}$$

is just magic. It looks like an equation, can be manipulated like an equation—well almost—and is an immensely powerful notation.
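The caveat “well almost” can be made precise. For a fixed modulus, congruences add and multiply exactly like equations:

```latex
a \equiv b \pmod{m} \ \text{and}\ c \equiv d \pmod{m}
\;\Longrightarrow\;
a + c \equiv b + d \pmod{m}
\quad\text{and}\quad
ac \equiv bd \pmod{m}.
```

What fails is cancellation: $6 \equiv 2 \pmod{4}$, yet dividing both sides by $2$ gives $3 \not\equiv 1 \pmod{4}$. A common factor $k$ may be cancelled only when $\gcd(k, m) = 1$.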

**Paul Dirac** introduced in 1939 his famous notation for vectors. The so-called *bra-ket* notation is used in quantum everything. It is a neat notation that takes some getting used to, but seems very powerful. I wonder why it is not used all through linear algebra. A simple example is:

$$| v \rangle$$

The power of the notation is that $v$ can be a symbol, an expression, or even words that describe the state values.

For a neater example, the outer product of a vector with itself, used in the APL program for primes above, is written this way in Dirac notation:

$$| v \rangle \langle v |$$

which flips around the inner product $\langle v \mid v \rangle$. Now to multiply the matrix formed by the outer product by a row vector on the left and a column vector on the right, we write

$$\langle u \mid \big( | v \rangle \langle v | \big) \mid w \rangle$$

The bra-kets then associate to reveal that this is the same as multiplying two inner products: $\langle u \mid v \rangle \langle v \mid w \rangle$. Another neat feature is that $\langle v |$ versus $| v \rangle$ captures the notion of a *dual vector*, and when $v$ has complex entries, the notation implicitly complex-conjugates them.
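For readers who want to see the re-association concretely, here is a small NumPy check (real vectors for simplicity; with complex entries the bra side would also need a conjugate). The vectors `u`, `v`, `w` are illustrative:

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])
w = np.array([0.0, 4.0])

ket_v_bra_v = np.outer(v, v)   # the matrix |v><v|
left = u @ ket_v_bra_v @ w     # <u| (|v><v|) |w>, matrix in the middle
right = (u @ v) * (v @ w)      # <u|v> <v|w>, two inner products

print(left, right)             # the two agree, by associativity
```

The point of the notation is that this identity is visible on the page: the adjacent bra and ket in the middle simply pair up.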

**Albert Einstein** introduced a notation to make his General Relativity equations more succinct. I have never used the notation, so I hope I can get it right. According to his rule, when an index variable appears twice in a single term, once as a superscript and once as a subscript, it implies a summation over all possible values. For instance,

$$y = c_i x^i$$

is

$$y = \sum_i c_i x^i,$$

where the $x^i$ are not powers but indexed objects.
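NumPy’s `einsum` implements the summation convention literally, which makes for a quick sanity check; the arrays here are arbitrary illustrative data:

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# Repeated index i is summed: c_i x^i means sum_i c_i * x_i
y = np.einsum('i,i->', c, x)
print(y)  # 1*4 + 2*5 + 3*6 = 32.0

# The same rule gives the matrix-vector product: (A x)^i = A^i_j x^j
A = np.arange(9.0).reshape(3, 3)
b = np.einsum('ij,j->i', A, x)
print(b)  # same as A @ x
```

The subscript string is the notation: indices that repeat across operands are contracted, indices that survive appear after the `->`.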

**Dick Karp** introduced the $\mathcal{C}/f(n)$ advice notation to complexity theory in our joint paper on the Karp-Lipton Theorem. Even though the notation was his invention, as a co-author I will take some credit.

## Good Notation?

Not all the notation we use is great; some may even be called “bad.” I would prefer to call these good with a question mark. Perhaps the power of a notation is for each individual to decide. In any event, here are a few “good?” notations.

**John von Neumann** was one of the great mathematicians of the last century, and he helped invent the modern computer. He once introduced the notation

$$f(x), \quad f((x)), \quad f(((x))), \ldots$$

where the number of parentheses modified the function $f$. It does not matter what they denoted; the notation could only be used by a brilliant mind like von Neumann’s. This notation is long gone as von Neumann used it, but in ideal theory $(a)$ is still not the same as $a$. Oh well.

The letter $\pi$ is fine, but maybe it denotes the wrong number? Since $2\pi$ measures the full unit circle and occurs all the time in physics, maybe we should have used a symbol for that number? Lance Fortnow and Bill Gasarch once posted about this here, with many interesting comments.

Why is the charge of the electron negative? Evidently it is because Benjamin Franklin believed the flow of an unseen fluid was opposite to the direction the electron particles were actually going.

Why has humanity been unable to establish that $\subset$ means proper subset and only $\subseteq$ means subset, by analogy to $<$ and $\leq$? Hence for proper subset one often resorts to the inelegant $\subsetneq$ notation.

I will end with one example of notation that many feel strongly is “good?”: $\frac{dy}{dx}$ to denote the derivative of a function. See here for a lively discussion.

## Open Problems

Does good notation help make mathematics easier? Are there some notions that are in need of some better notation? Would you rather discover a great theorem or invent a great notation?


As your nice article discusses Kenneth Iverson, I feel I have to highlight a perfect example of good notation, apparently invented by Iverson in APL: the convention of using [X] to mean a term which evaluates to 1 if the expression X is true, and 0 if X is false. Wikipedia knows this as the Iverson bracket.

This notation is used throughout “Concrete Mathematics” by Graham, Knuth and Patashnik to great effect, and I personally find it invaluable.
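Python’s True and False behave as 1 and 0, so the language effectively has the Iverson bracket built in: summing brackets turns a count into plain arithmetic, in the Concrete Mathematics style. A small sketch (the function is mine, for illustration):

```python
def count_multiples(n, d):
    """Count k in 1..n with d dividing k, written as the
    Iverson-bracket sum over k of [k mod d == 0]; each boolean
    contributes 1 or 0 to the sum."""
    return sum(k % d == 0 for k in range(1, n + 1))

print(count_multiples(100, 7))  # floor(100/7) = 14
```

The notation lets the predicate sit inside the sum instead of restricting the range of summation, which is often the cleaner way to manipulate it.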

One example is the use of commutative diagrams in algebra, which I am sure was essential to the development of homological algebra. I don’t know much about this, though.

Another fantastic example is string diagram notation in linear algebra. I think some version of it was first discovered by Penrose, but nowadays people recognize it is much more than just a notation: it is related to higher category theory, Feynman diagrams, quantum groups, braid groups, knot theory, topological quantum field theory… there is a nice collection of resources at this MO question.

Personally, I hate the notation $f(x)$ for a function. If you write $f$ as an arrow from a box labeled A going right to a box labeled B, and $g$ as an arrow from a box labeled B going right to a box labeled C, then diagrammatically their composite looks like it ought to be written $fg$, but it is actually written $gf$. To be consistent with the arrow notation one really ought to write $xf$ or $(x)f$, and then composition looks the way it ought to.

Qiaochu Yuan, I too am exceedingly fond of commutative diagrams in category theory … Feynman diagrams in field theory … block diagrams in control theory … projective diagrams in simulation theory … graph diagrams in … uh … graph theory … even dataflow diagrams in “G” (which is LabVIEW’s native language).

These various diagrammatic notations can be regarded (it seems to me) as one base language, that for cultural reasons has been instantiated in multiple dialects … rather like German, French, and English.

Regarding prefix-versus-postfix notations, it is interesting that (in effect) all of the above diagrammatic languages are postfix. Perhaps this is why both you and I (and many folks) find prefix symbolic conventions to be annoying nowadays: the ubiquity of graphical representations in mathematics is conditioning readers to associate a postfix representation to any mathematically natural expression.

A great virtue of *Mathematica* is that it supports an eclectic blend of prefix, postfix, and infix notations—just like the mathematical literature. Very conveniently, *Mathematica*’s front end automatically associates common mathematical symbols to built-in operators. For example, Mathematica automatically parses the string “f : a → b” to the internal representation “Colon[f, RightArrow[a, b]]”, where the built-in operators “Colon” and “RightArrow” have no assigned evaluation rules, and thus can readily be defined to mean anything one wants. Good!

In consequence, pretty much any “Yellow Book” symbolic notation automatically parses to a valid (internal) *Mathematica* expression. In exploiting this freedom, our experience has been that postfix conventions turn out to be generally the most natural choice for associating symbolic expressions to diagrams. That is why postfix mathematical notations are preferred for most purposes—preferred diagrammatically, preferred symbolically, and preferred computationally. And should this observation inspire a counter-post from ardent fans of prefix notation (or even infix notation) … well … heck … that would be fun!

I’d like to concur with the rant about the function composition notation fg or (f o g). In algebra texts discussing permutations I am never sure what AB represents exactly where A,B are permutations.

Interestingly, older functional languages like ML have the ‘o’ operator for composition, but in the language F# it was replaced by a ‘>>’ operator such that (f >> g) is the “postfix composition” of g over f.
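The two conventions in this thread can be sketched side by side: `compose` is the traditional (f ∘ g) reading, where g is applied first, while `then` mirrors F#’s `>>`, where f is applied first and the expression reads left to right like the arrow diagrams. The names are mine, for illustration:

```python
def compose(f, g):
    """Traditional (f o g): apply g first, then f."""
    return lambda x: f(g(x))

def then(f, g):
    """F#-style (f >> g): apply f first, then g; reads left to right."""
    return lambda x: g(f(x))

inc = lambda x: x + 1
dbl = lambda x: x * 2

print(compose(inc, dbl)(3))  # inc(dbl(3)) = 7
print(then(inc, dbl)(3))     # dbl(inc(3)) = 8
```

The same two functions, composed “the same way” under the two conventions, give different answers, which is exactly the ambiguity the commenters complain about with permutation products.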

Using the other arrow, f: B ← A makes the meaning (g∘f) obvious enough.

In a logic context, I still see people using $\supset$ for $\rightarrow$ (implication), which I find definitely confusing — it always takes a few seconds to adapt. (http://en.wikipedia.org/wiki/Material_implication)

Ha! Michael, you’ve supplied the one example that Dick and I decided to leave out of my revision of his original post. The reason it’s confusing, my note would have said, is this: Logically A implies B should be a subseteq relation, exactly the opposite, insofar as it means the denotation of A is contained in the denotation of B.

Feynman diagrams! In terms of showing the simplicity behind what used to look highly intimidating, Feynman diagrams rank waaay up there … probably alongside “=”.

Iverson also invented floor and ceiling notation, which is pretty indispensable. The old notation [x] for the floor of x is clearly much worse.

Thanks to all,

I did forget to include Iverson’s other notations. Correction: I knew some, but did not know about all. Thanks for all the comments.

I’m surprised you left out Iverson’s indicator notation. [P] equals 1 if the predicate P is true and 0 otherwise. This is one of my favourites. Knuth even thought it was worth writing half an article about:

http://mathdl.maa.org/images/upload_library/22/Ford/knuth403-422.pdf

(the other half is about Stirling numbers).

Numerals themselves are also notation, and I’ve always liked this from Alfred North Whitehead:

“By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race. Before the introduction of the Arabic notation, multiplication was difficult, and the division even of integers called into play the highest mathematical faculties. Probably nothing in the modern world would have more astonished a Greek mathematician than to learn that … a large proportion of the population of Western Europe could perform the operation of division for the largest numbers. This fact would have seemed to him a sheer impossibility … Our modern power of easy reckoning with decimal fractions is the almost miraculous result of the gradual discovery of a perfect notation. […] By the aid of symbolism, we can make transitions in reasoning almost mechanically, by the eye, which otherwise would call into play the higher faculties of the brain. […] It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”

–from An Introduction to Mathematics, 1911

bravo, great quote

I think it would be interesting to poll established mathematics users as well as students first learning the subjects. It could be that some notations are difficult to get a handle on (as you mentioned, bra-ket notation takes some getting used to), but eventually pay off, whereas others are obtuse at first and never really get past tolerable. Perhaps there are even some notations that are magically intuitive from the very beginning?

That’s the first time I’ve ever heard him called “Johann Gauss.” But what do you know, that actually was his legal first name! I know it’s rare to ever know everything about a topic, but somehow I’m still surprised that Gauss’s name was one of those topics for me. At any rate, “Carl Friedrich Gauss” is the usual way to refer to him.

“Johann Carl Friedrich Gauss (30 April 1777 – 23 February 1855)” from http://biblio.mat.uc.pt/bbsoft/woc_ucma/matematicos/November10EN.pdf

At first I thought: “Who is Johann Gauss?” , but after checking I discovered that Johann indeed was one of his first names. I think he is more known as “Carl Friedrich Gauss”, though.

Iverson was also responsible for the denotation of truth as 1 and false as 0, “Iverson’s Convention”. See my (now ancient) post trying to promote the name.

Why is improper use of asymptotic notation tolerated — even in research papers?

Good question – and a fine example: on one hand, asymptotic notation (the Omicron, Omega, and Theta) definitely shaped my thinking in a profound way. I believe that it does that to many students of Algorithmics (this is also a notation that Knuth found worthy of writing an article about).

On the other hand, it is a fact (which to me is puzzling) that many “Computer Scientists” do not use it properly – such as not distinguishing lower bounds from upper bounds. Personally, I do not tolerate such improper use, and point out the correct usage when I have the chance to do so.

One field which could really benefit from a notational genius like Leibniz: my own field, Logic. For just about any concept beyond the very basics (and sometimes even those), there are many different notations and the only thing they have in common is arbitrariness. I guess it’s all part of the mutation one develops from constant exposure to Church’s Thesis😉

“I wonder why [Dirac notation] is not used all through linear algebra.”

Well, probably because the notation is not really useful without the notion of scalar product, so it really makes most sense for Hilbert spaces. Surely linear algebra is not just about scalar product but rather about generic linear transformations.

Besides, the most interesting examples of Hilbert spaces are infinite-dimensional (at least from a physicists point of view) so this is not even linear algebra but rather functional analysis.

“I wonder why [Dirac notation] is not used all through linear algebra.”

One answer can be found in Michael Stone and Paul Goldbart’s recent graduate-level textbook *Mathematics for Physics*: nowadays, undergraduate-level quantum mechanics textbooks are pretty much the last citadel of undiluted Dirac bra-ket notation … most new physics textbooks develop geometry and dynamics (linear and nonlinear, classical and quantum) in the unified context that Cartan’s supple notation provides. A wonderfully entertaining polemic on this subject is William L. Burke’s unpublished (but on-line) 1996 manuscript *Div, Grad, Curl Are Dead*.

Sadly, Burke was killed in an accident before his book-length manuscript was published … the Stone-Goldbart textbook is (in effect) a well-written, 800-page realization of Burke’s pioneering geometric and dynamical vision.

Odd: This post does not show up at the front page: https://rjlipton.wordpress.com/

Thanks for the great article Mr. Lipton. I am a huge admirer of Mr. Iverson and his beautiful notation.

For another historical perspective of Iverson’s notation I highly recommend Prof. Donald McIntyre’s paper “Language as an intellectual tool: from hieroglyphics to APL”, which appeared in the IBM systems journal in 1991.

The article can be found here: http://www.electricscotland.com/mcintyre/index_f/menu_f/j_f/sysj_r_29.pdf

I am always wondering whether all those different types of notations for operators from different fields actually are a benefit for the reader. Instead of having superscript, subscript, over, under, parenthesized, bracketed, braced, infix, postfix, prefix notations, with lots of arbitrary precedence rules, and that mean different things depending on context, it might be better to have a uniform syntax for operators.

Ken Iverson’s notational approach involved just this manner of rationalization, making consistently linear what has been a hodge-podge of typographic relationships. There are costs involved with such a drastic reformation, but there is also the shining benefit that simple rules can be applied without exceptions.

Iverson settled on a combination of prefix notation (for simple functions) and postfix notation (for higher-order functions, a.k.a. Heaviside operators.) He retained infix notation as an extension of these. He abandoned precedence rules other than those implied by these same prefix/postfix conventions.

My experience has been that the longer I work with Iverson’s notation the better I like it as a means for expressing mathematics.

A very timely post. A few days ago I was having a discussion with some other programmers about the use of comprehensions in Python. They were arguing that comprehensions, while more concise, make the code less readable. My argument was that they make the code more readable, because, like Sigma notation for summation, they express an idea concisely.
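The analogy to Sigma and set-builder notation is quite direct: each piece of a comprehension corresponds to a piece of the mathematical expression, which is the readability argument in a nutshell.

```python
# Sigma_{k=1}^{10} k^2 corresponds to:
total = sum(k**2 for k in range(1, 11))
print(total)  # 385

# { 2k : k in S, k odd } corresponds to:
S = {1, 2, 3, 4, 5}
doubled_odds = {2 * k for k in S if k % 2 == 1}
print(sorted(doubled_odds))  # [2, 6, 10]
```

The generator clause plays the role of the index range, and the `if` clause plays the role of the side condition under the Sigma.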

Linear algebra prefers Householder notation. I think it is more of a concern on who got there first, but for sure Householder is much less verbose.

When it comes to “good?” notation, Principia Mathematica takes the cake: http://plato.stanford.edu/entries/pm-notation/

I believe it was Conway who invented symbols for the trigonometric functions when he was a child. For sine, a big lower-case sigma extended over the number like a square root symbol, and for tan a big lower-case theta. I can’t recall the symbol for cosine.

That was Feynman (it’s a story from “Surely You’re Joking…!”).

Never having learned much math notation when I was young, I find much of math difficult to comprehend, and consequently to remember unless used regularly – especially notations used in multiple contexts with different meanings. Looking at the math examples given, I can’t help but think how important the geometry of the symbols and syntax is to their comprehension.

The positional (or decimal) notation for numbers is considered one of the turning points in human culture and perhaps the greatest oversight of the ancient Greeks. (I think there is a quote of Gauss to that effect.)

Great topic. I wonder which notations we could have invented had we lived in 4 dimensions with 3-dimensional books…

Nice comment on a great post, but I don’t think that there is an intrinsic problem in creating 3-d notations, given that we are 3-d beings. Maybe it is still a little unnatural and awkward, because technology is not yet developed enough for it, but in principle it seems possible. Can you think of something that would improve if a 3-d notation were available?

Reading this article was a joy! Thanks for the good start in my day

Great read, many thanks

I am surprised you do not mention $\forall$ and $\exists$. Complex statements can almost never be understood correctly in natural language, or they take several sentences. This could well be extended to logical operators in general, without which precise proofs are hard to write down.

As a computer scientist, Landau notation immediately comes to my mind. It is a short way to grasp the essential growth of a function without all the hassle. It is often misused, though.

I think the entire idea of what I call “circumfix notation” is bad: by this I mean the case when an operator on some expression x is denoted by prefixing and suffixing x with the same symbol.

Of course the most prominent example is the popular method of denoting the cardinality of a set S by |S|. This is especially confusing when you combine it with set-builder notation using a vertical bar, e.g., |{x | 1 < x < 10}|. A clearly superior notation would be, for example, #{x | 1 < x < 10}.

Two well-accepted TCS-specific notations (of which there really are surprisingly few other than names of complexity classes): A ≤ B for reduction and superscripts to denote the use of oracles. The former clearly seems “good”. I am not sure about the latter.

Improvable with a minor tweak: The usual mutual information notation of I(X;Y) is symmetric but with the semi-colon looks asymmetric. Some authors write I(X:Y) which looks much better.

Often it seems that what we want to discuss are complicated processes rather than simple functions or relations and this may be the reason for a shortage of specific notations. A really accurate notation that is popular in crypto to handle this is to write probabilities that are defined based on the outcomes of specific program fragments, e.g.

Pr_{r’} [ A(C,r’)=b : b <~ {0,1}; r <~ U^{p(n)}; C <- E(m_b,r) ] <1/2 + epsilon.

Excellent article Mr. Lipton. Thank you.

I know this is slightly off topic, but I can’t resist bragging. I’ve inherited a copy of a talk given by Alan Perlis at an APL conference in 1978!

http://lathwellproductions.ca/wordpress/2010/10/10/alan-perlis-and-apl-is-more-like-french/

I thought that Dirac introduced his bra-ket notation in his famous book “The principles of quantum Mechanic” in 1930 (and not 1939)?

Another very powerful idea/notation contributed by Dirac is of course the Dirac delta distribution; legend has it that his entire Ph.D. thesis consisted of a single page explaining this delta distribution.

How did computers change the evolution of notation?

The constant evolution of programming languages is another great, and a very contemporary example of how notation can improve our expressive and deductive efficiency, and how computers can aid with it (although in this case, they only aid in communicating with themselves). Great efforts are taken until today to further improve notation in programming.

Typesetting software has also greatly simplified (part of) the process of preparing research papers for publication. Unfortunately, the fact that researchers now more often typeset their documents themselves also has some caveats. The effort of using a particular notation now depends very much on technical properties lying outside the scope of the particular topic. I think this fact tempts people to re-use existing, easy-to-access notation, sometimes overloading it with too many unrelated meanings.

I challenge every researcher to look at their hand-written notes (or blackboard scribbles). These mind-maps – made with notation-neutral tools – often innocently invent good notation. If you used little squares with marked corners to denote a particular thing there, use it in your papers as well. Don’t call it gamma or angle-bracket-something just because it’s easier to get your typesetting software to produce that. If more people read your paper because of this, get the grasp of it more quickly when scanning over it, or gain insights that would otherwise be hidden in odd notation, it pays off very well.

Probability theory also has some nice examples of good and hugely successful notation. Take for example the notation $\mathbb{P}(X \le c)$ as a simple way to write the probability measure of the set of all $\omega \in \Omega$ such that $X(\omega) \le c$, or the notation $\mathbb{E}[X \mid X \le c]$ for conditional expectations, which are mathematically not a trivial construction, but can be manipulated quite intuitively in this way.

Two comments. One is about $\subset$ vs $\subseteq$. What I personally find annoying is that in English, $\subset$ almost always means “proper subset” while in French, it almost always means “subset or equal”. I cannot remember the details, but I am pretty sure other notations also have different meanings depending on the language of the text you are reading.

My second comment is kind of disrespectful: I would have suggested putting the C/f(n) notation in the “good?” section. There is always a kind of confusion with a division sign, so much so that the Complexity Zoo Pronunciation Guide has to specify the way it is pronounced. Moreover, this “slash” pronunciation does not mean anything. I have to acknowledge that I do not have any better idea!

Anybody who is interested in the relationship between language and thought should start with the classic papers by Benjamin Lee Whorf reprinted in Language, Thought & Reality, MIT Press, 1956.

Tim,

Thanks for this pointer.

I am going to offer a few thoughts on notation that are adapted from two of Edsger Dijkstra’s most celebrated essays, namely his *On the cruelty of really teaching computing science* and his *GOTO statement considered harmful*.

It appears that Dick (and Ken) have not yet devoted a *Gödel’s Lost Letter and P=NP* post to Dijkstra’s work … any such Dijkstra-themed post would be well worth reading. As Lemony Snicket would say, students are advised to skip the rest of my post and read Dijkstra’s essays instead.

We can transpose the dyspeptic themes of Dijkstra’s essays into a friendlier, more optimistic key by focussing upon two koan-like questions: (1) “What is the notation that has no symbols?” (2) “What are the elements that have no names?”

These questions find natural answers in the language *G* (of LabVIEW), as discussed in an essay by LabVIEW cofounder Jeff Kodosky, titled *Is LabVIEW a general purpose programming language?* (all of the essays that this post mentions are readily found on-line).

To convert Kodosky’s essay into the answers to our two koans, we exploit the following natural duality between the mathematical diagrams of category theory and the dataflow diagrams of *G* … keeping in mind that these diagrams don’t represent a notation … they *ARE* a notation.

The duality is: (I) category diagram nodes ↔ *G* dataflow lines, and (II) category diagram lines ↔ *G* dataflow nodes. Then the koan’s answer is that the natural elements of category theory flow along *G* lines that have no names, and the natural maps of category theory are instantiated by *G* nodes that have no symbols (but rather are associated to pictograms).

Our experience (as engineers) has been that *G* is the only programming language such that the thesis software of Student A can (usually) be given to Student B without undue fuss and difficulty … and the lack of names and symbols in *G* is the primary reason for this portability.

The tremendous virtue of category theory as instantiated in *G* is that concrete realizations can be generated simply by entering elements via controls … and the induced flow of elements is completely transparent. You wanna integrate a concrete dynamical trajectory on a symplectic manifold? No problem … just wire-up the elements and have fun.

To the best of our knowledge, no textbook has ever been written that systematically instantiates category theory in a *G*-like environment. A near approach is Gerald Sussman’s and Jack Wisdom’s *Structure and Interpretation of Classical Mechanics* (SICM). This is the same Sussman who co-authored with Harold Abelson the classic *Structure and Interpretation of Computer Programs* (SICP).

But alas … Sussman and Wisdom implemented their notation in the symbol-burdened and name-laden language LISP rather than the zen-like *G* … moreover they restricted their formalism’s domain narrowly to dynamics rather than broadly to category theory … with the result that (by Amazon sales rank at least) SICM has not found the same popularity as SICP. But theirs was a brave attempt!

It is clear that category theory’s pioneering emphasis on mathematical naturality is becoming ubiquitous in all branches of the STEM enterprise, with consequences that we can hope are consonant with Dijkstra’s vision.

Aye, lasses and laddies … now *that’s* what good mathematical notation is all about!
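The “elements with no names” koan above can be toy-sketched even in a textual language: in a dataflow-style pipeline, intermediate values flow from node to node without ever being bound to a variable. A minimal Python sketch (the `pipe` helper and the node names are hypothetical, purely for illustration):

```python
from functools import reduce

def pipe(*nodes):
    """Compose nodes left-to-right; the values flowing between
    them act like unnamed dataflow wires."""
    return lambda x: reduce(lambda value, node: node(value), nodes, x)

# Three hypothetical nodes.
double = lambda v: 2 * v
inc = lambda v: v + 1

# Wire them up: the intermediate results 6 and 7 never get names.
wire = pipe(double, inc, double)
# wire(3) == 14, i.e. ((3*2)+1)*2
```

This is only a shadow of a graphical language, of course, since the nodes here still carry textual names.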

The above was prepared for today’s Litotica seminar, and the theme of the following seminar will also be Dijkstra-inspired, namely, “Dirac notation considered harmful.” When one focusses a seminar around a Dijkstra essay, the results are pretty much guaranteed to be lively and fun.

PS: At the seminar, my colleague Jon Jacky told us that the aphorism “APL is a mistake, carried through to perfection” is Dijkstra’s … Jon also has this funny poster of Dijkstra hanging above his desk.

In June of 1975, Edsger W. Dijkstra wrote an essay called “How do we tell truths that might hurt?”, which is a series of aphorisms about computer programming languages, one of which concerns APL.

This quote turned into the most disastrous anti-APL marketing campaign in history. I once counted the number of times it has been referenced in cyberspace and then decided this was a depressing track to follow. However, in his biography of Grace Hopper, Kurt W. Beyer makes the excellent case that innovation needs advocacy (see p. 318, “Proactive Invention: Inventor as Salesman”).

When I wrote about this publicly, I got a little bit of heat from my father (“I always liked Dijkstra”).

It’s a really interesting topic, I think.

As someone who has learnt to write things down very precisely from functional analysis lectures, I’m often annoyed with abuses of notation, especially in stochastics. I admit, it is handy to write {f < 0} for {ω∈Ω | f(ω) < 0}, supposing the space Ω you’re talking about is fixed at the moment. But then probabilists overdo it (imho) and write things like {lim X_n = 0}.
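On a finite sample space the shorthand can be unpacked mechanically; a small Python sketch (the die example is mine, just to make explicit which set the abbreviation names):

```python
# {f < 0} is shorthand for the event {ω ∈ Ω | f(ω) < 0}.
Omega = range(1, 7)          # sample space of a fair die
def f(omega):                # a random variable on Omega
    return omega - 3

# Spell the shorthand out as an actual set of outcomes.
event = {omega for omega in Omega if f(omega) < 0}
prob = len(event) / len(Omega)   # uniform probability measure
# event == {1, 2}, so P(f < 0) == 1/3
```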

Also, freezing of variables is probably a widely used notational technique. So Ω is often used as the symbol for a sample space, whereas in functional analysis X usually denotes some Banach or topological vector space, while in differential geometry X might be the standard symbol for some vector field. But staying in a certain (sufficiently small part of a) field of mathematics, symbols like that seldom clash. It’s different for objects used throughout most of mathematics, such as integers. Mostly, i, j, k, n and m are letters denoting integers, such as indices. Hardly anyone would write “let n be a complex number”, but in another context n might also denote some normal vector, which is why in this case I would always decorate n somehow.

Another thing is the use of different fonts and upper and lower case letters. Sets are usually denoted by standard upper case letters if they contain “basic” objects such as numbers, vectors, etc. When those sets are given and one introduces sets of “higher order”, such as their power sets, some people resort to script upper case letters. This can of course be iterated with other fonts.

Great article and comments — notation is an awesome topic! I really enjoyed reading Knuth’s article, in which he echoed the Whitehead quote: “I like to have mechanical operations like this available so that I can do manipulations reliably, without thinking.”

Every teenager knows that emoticons rank among the most powerful symbols.

I guess parentheses, absolute value |x|, the decimal point, percent signs, commas, bit operators (especially ~ ^ | &) and conjunction/disjunction are all too obvious to mention? Many more examples abound in programming languages, such as quoted strings, curly braces, and indentation.

Niklaus Wirth’s EBNF has had a huge impact on how we reason about formal languages and has become part of every modern programming language via regular expressions.

http://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
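As a small illustration of how an EBNF rule maps onto a regular expression, here is a hypothetical grammar fragment for signed integers, sketched in Python:

```python
import re

# EBNF:  integer = [ "-" ] digit { digit } ;
# The same shape as a regular expression: an optional "-",
# then one digit followed by any number of further digits.
integer = re.compile(r"-?[0-9]+")

assert integer.fullmatch("42")
assert integer.fullmatch("-7")
assert integer.fullmatch("4.2") is None
```

The bracket/brace notation of EBNF (optional, repetition) corresponds directly to the `?` and `+`/`*` quantifiers of regular expressions.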

I think it would be remiss not to mention orbifold notation (Thurston et al., popularized by Conway).

By understanding how to construct the concise “names” of the symmetry groups that describe periodic tilings of the strip, plane, sphere, space, etc., one can follow a very natural path to understanding those groups and why there are only so many of them. This is presented wondrously and accessibly (with many pretty pictures) in “The Symmetry of Things” by Conway et al. This synthesis of knowledge is surprisingly recent when compared to how long we humans have been looking at tiling patterns. By understanding the link from the symmetry to the topology of the pattern, one even begins to understand why particular symbols are used in the notation (which seem whimsical at first).

As a crystallographer, we learn about symmetries of tilings and they are useful (although not really necessary to understand deeply, with the amount of automation present these days). I used to be very frustrated with the fact that I could never really understand why there were only 17 ways to tile the plane and only 230 to tile space, and proofs using “semidirect products” were simply too opaque to me. Orbifold notation helps emphasize that it is the connection to topology that is key to understanding things. Other notations (Schönflies, Hermann–Mauguin) emphasize the generators of the group, and it is difficult to understand why we chose particular generators to use in the name. I often find myself in hotels determining the orbifold name for the wallpaper or carpet pattern.
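Conway’s notation even comes with an arithmetic check: by the “Magic Theorem” of The Symmetry of Things, a signature names a wallpaper group exactly when its symbol costs sum to 2. A small Python sketch (using an ASCII `x` for the × “miracle” symbol):

```python
from fractions import Fraction

# Orbifold signatures of the 17 wallpaper groups, in Conway's notation.
WALLPAPER = ["o", "2222", "333", "442", "632",
             "*2222", "*333", "*442", "*632",
             "2*22", "3*3", "4*2", "22*", "22x",
             "**", "*x", "xx"]

def cost(signature):
    """Cost per Conway's Magic Theorem: 'o' costs 2, '*' and 'x'
    cost 1 each, a digit n costs (n-1)/n before the first '*'
    and (n-1)/(2n) after it."""
    total = Fraction(0)
    seen_star = False
    for ch in signature:
        if ch == "o":
            total += 2
        elif ch == "*":
            seen_star = True
            total += 1
        elif ch == "x":
            total += 1
        else:
            n = int(ch)
            total += Fraction(n - 1, 2 * n) if seen_star else Fraction(n - 1, n)
    return total

# Every wallpaper signature costs exactly 2; anything else does not.
assert all(cost(s) == 2 for s in WALLPAPER)
```

This is exactly the sense in which the notation carries the topology: the count of 17 falls out of simple arithmetic on the names.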

To answer your question, I think a good notation helps make certain mathematics a *lot* easier. And I think I would rather invent a new notation than discover (or prove) a great theorem (that is probably a very good way to partition different types of people, come to think of it). I think that good notations are the result of someone who has gained a deep understanding of a subject, and some empathy for those who might want to learn it.

This is a wonderful thread with some wonderful posts … the above post by maneesh on orbifold notation is really excellent.

Regarding notation, how many folks have seen the Coen brothers’ recent movie A Serious Man? There is a scene in that movie that—for quantum systems engineers in particular—hilariously depicts the perils of even the most elegant mathematical notation. The accompanying dialog is:

Please let me say that I vehemently reject this movie’s point-of-view and yet I dearly cherish the movie itself!

But nonetheless, for a world of 21st century systems engineering, in which most enterprises blend discrete-with-continuous, classical-with-quantum, macroscale-with-microscale, and deterministic-with-stochastic, it’s undeniably a pretty tough challenge to create valid, verifiable system models … and no one notation is going to do the whole job.

For system-level engineering (AFAICT) the path forward will be via minimalist, visual formalisms (category theory? block diagrams? LabVIEW G-type languages?) that are naturality-oriented and (nearly) symbol-free and variable-free. Because when it takes a stack of books 4000 pages high just to define the formalism(s) that go into a system-level dynamical simulation, is any other path forward feasible?

The notation used for logarithms may be a significant part of why people have trouble understanding them. There are great discussions about this at The Exponential Curve, written by Dan Greene, and at f(t), written by Kate Nowak. I love Dan’s big L notation.
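Whatever notation one prefers, the underlying fact is a single triangle of relationships: b^c = x and log_b(x) = c say the same thing. A tiny Python sketch (the numbers are arbitrary):

```python
import math

# "log base b of x" answers: b raised to what power gives x?
b, x = 2, 32
c = math.log(x, b)           # c == 5.0 (up to floating point)

# The inverse reading of the same statement:
assert b ** round(c) == x    # 2**5 == 32
```

Much of the confusion the notation causes is arguably that `log_b(x)` hides this question-and-answer structure, which notations like Dan’s big L try to surface.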

I also wrote just last week about how the words and symbols used in math can introduce a lingering fear that makes it harder to learn a topic. ‘Eigen’ did that to me.

Great post. It took me through the different notations I used in college in such a breeze. When I looked at the notation, I envisioned the different problems I toiled over back in the day. So yes, I think notation influences the way we think.

I have to jump in with a favorite awkward notation…

Where .

I think this is sufficient context to see what the original author was driving at… I’m curious to hear what others think about this one. I’m also curious to know if it’s a common idiom. I don’t have more than an undergraduate training in mathematics, so I wouldn’t know.

This is indeed more than common, I’d say almost ubiquitous in CS papers.

A good notation is one thing, a computer language is another. Iverson and the teams of APL implementors, over the last 40 or so years, rather successfully fused the two things together. The resulting language offered a very high level of abstraction, thus when writing programs, many of the tedious details of programming simply boiled away into the vacuum of space, leaving a language which nicely supported and buttressed problem-oriented thinking.
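The post’s APL primes example gives a feel for this level of abstraction: the whole algorithm is one expression over arrays, with no explicit loops. A rough Python transcription of that idiom, step by step (the choice N = 50 is mine, just for illustration):

```python
# APL-style prime finding: build a multiplication table of all
# candidates, then keep the candidates that never appear in it.
N = 50
R = range(2, N + 1)                            # the candidates
products = {i * j for i in R for j in R}       # the "outer product" table
primes = [r for r in R if r not in products]   # "compress" out the composites
# primes == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

The tedious details (trial division, loop bounds) boil away, which is exactly the buttressing of problem-oriented thinking described above.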

What followed in the mainstream, possibly due to the exquisite timing of the development of the C language followed by the mass emergence of personal computing, is that a computing culture which embraces detail developed and took hold of mainstream thinking. One look at “newer” languages like C# and Java, languages which from an evolutionary perspective aren’t that far away from C, and it’s clear that a high level of abstraction is simply not valued by this culture.

Also, diversity in computing is going away. It’s best to know both languages, C# and Java.

One would hope that ideas which made APL great for problem solving would eventually make their way into the mainstream.

Except that APL is some sort of “write-only” programming language: APL code can be a mystery even to its own author after a while.

Even though it’s more than a year later, I can’t abide leaving this cheap shot as the last word. It’s usually made by someone who’s made no effort to learn the language.

As someone who programmed in APL for many years, as well as in numerous other languages, I think I can speak from a position of knowledge rather than one of ignorance when I dispute this. Poorly-written code in any language is hard to understand; many languages other than those in the APL family further obscure an algorithm by burying it under a load of unnecessary verbiage.

Conversely, the density of code written symbolically often requires more time to understand the meaning of a single line of it – which may accomplish quite a lot – as opposed to the very short time it takes to read a line in a more vacuous language which accomplishes very little.

I would second DevonMcC’s remarks. I’ve programmed in terse languages like APL and J for years, but I’ve also waded deep into the C/C#/C++ pools, and frankly the programming language is never the barrier when it comes to decoding someone else’s code. If the work is the product of superior programmers, there is always a clear pattern and theme. If it’s a hodge-podge hacked together under duress, the resulting mess will constantly frustrate your efforts to grasp it. Notation always cracks under a heavy load. Programmers have learned this the hard way.