# Who Invented Boolean Functions?

* Hint: I am not sure *

Calixto Badesa is a faculty member at the University of Barcelona. His department is called the Department of Logic, History, and Philosophy of Science. It is neat to see the idea of the “logic of science” added to the familiar ‘HPS’ acronym. Well, Badesa himself specializes in Logic—that is, the history of Logic as a discipline.

Today I want to discuss why propositional calculus as we now know it took so long to create.

I am currently reading Badesa’s book *The Birth of Model Theory*. His main focus is the fundamental Löwenheim-Skolem Theorem. The name actually refers to several theorems: the first was proved by Leopold Löwenheim in a 1915 paper, and it was later re-proved by Thoralf Skolem.

For Badesa the theorem is pivotal because it leverages the distinction between *syntax* and *semantics*. Model Theory is about this distinction, and understanding it makes Logic a mature field. But that’s almost saying Logic is less than 100 years old, while *logic* has been studied for 3,000 years. This makes Ken and me wonder how old our modern understanding of basic concepts really is.

## Löwenheim-Skolem

The main result is that any first-order theory with only a countable number of symbols that has an infinite model must also have a *countable* model. This sounds plausible and is correct, but on second thought it is paradoxical. Consider ZF, or any other reasonable first order theory of set theory. This theory can prove that there are “uncountable sets,” yet by the Löwenheim-Skolem Theorem it follows that the theory has a countable model. The trick is that outside the theory we can “see” that the model is countable, but inside the theory we cannot. Very neat.
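In modern notation, the downward version that drives this paradox can be stated as follows (a standard modern formulation, not Löwenheim’s original wording):

```latex
% Downward Löwenheim–Skolem Theorem, modern statement
\textbf{Theorem.} Let $T$ be a first-order theory in a countable language.
If $T$ has an infinite model $\mathcal{M}$, then $T$ has a countable model;
in fact, $\mathcal{M}$ has a countable elementary substructure
$\mathcal{N} \preceq \mathcal{M}$, so in particular $\mathcal{N} \models T$.
```

Applied to ZF, this yields the Skolem paradox: the countable model $\mathcal{N}$ satisfies the sentence “there is an uncountable set,” even though every set in $\mathcal{N}$ is countable when viewed from outside.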

The story of the Löwenheim-Skolem Theorem, like the theorem itself, is somewhat paradoxical. Löwenheim’s initial proof is unclear at several key places. Badesa spends the majority of his 220-page book discussing what Löwenheim actually did prove and what he did not, and likewise what Skolem proved and what he did not. Perhaps we can discuss this another time. Read his book.

Badesa’s first chapter is on the work of George Boole, William Jevons, Charles Peirce, and Ernst Schröder. All of them worked on the early days of what became the theory of propositional calculus. Let’s discuss what they did and what they did not do.

## A Joke

Who invented Boolean Functions? This sounds like an old joke of Groucho Marx’s. His radio/TV show *You Bet Your Life* was a game show where contestants, usually a couple, tried to win money by answering general-knowledge questions.

The best part of the show was not the game, but Groucho’s patter with the contestants. Contestants who won $25 or less were asked a trivial question by Groucho at the end of the show—he wanted them to have something more to take home. One “trivial” question he often used was:

Who is buried in Grant’s Tomb?

Ken grew up thinking the answer was, “his horse.” Actually the correct answer is “no one,” since Ulysses Grant and his wife are in an above-ground vault, rather than buried. Recall “buried” means in the ground. All answered “Grant” and were given a small prize, except one man who actually got the question right and shocked Groucho. The man pointed out that Grant’s Tomb is an above-ground mausoleum. Oh well.

So who actually did invent Boolean functions? Indeed.

## Boole’s Work

Back to Badesa’s book. In his first chapter he goes into quite a bit of detail on what Boole did, and also Jevons, Peirce, and Schröder. Boole used symbols such as *x* and *y* to represent “the class of individuals to which a particular name or description is applicable.” He used *xy* to denote the class of things represented by *x* and *y* where both are simultaneously applicable. Thus if *x* stands for “white” and *y* for “horse,” then *xy* stands for “white horses.” He used *x* + *y* for “or,” but stated that the objects in *x* and *y* should be disjoint. Although he did sometimes diverge and allow the classes *x* and *y* to overlap, it was really Jevons who later observed that the disjointness restriction is unnecessary.
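Boole’s calculus of classes can be sketched in a few lines of modern code, using sets to stand in for his classes. The function names and example classes below are mine, purely for illustration; Boole of course wrote nothing like this.

```python
# A toy model of Boole's calculus of classes, using Python sets.
# The product xy is the class where both descriptions apply (intersection).
# Boole's sum x + y was defined only for disjoint classes; Jevons later
# dropped that restriction, giving the inclusive union.

white = {"white horse", "white swan"}
horse = {"white horse", "black horse"}

def boole_product(x, y):
    """xy: the class of things to which both names apply."""
    return x & y

def boole_sum(x, y):
    """x + y in Boole's strict sense: defined only for disjoint classes."""
    if x & y:
        raise ValueError("Boole required the summands to be disjoint")
    return x | y

def jevons_sum(x, y):
    """Jevons' inclusive 'or': the disjointness restriction dropped."""
    return x | y

print(boole_product(white, horse))  # the class of white horses
```

Note that `boole_sum(white, horse)` raises an error, since the two classes overlap in “white horse,” while `jevons_sum` happily returns their union; that one-line difference is exactly Jevons’ simplification.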

The point that I think is the most important is what Boole did not say. He did not say:

There are two elements in the world: 0 and 1. Here are the operations that I use on these elements. Let us now turn and study their properties. Consider …

Schröder studied what Boole did and built a set of axioms that attempt to capture the general structure. He used a special symbol of his own in his theory (I cannot reproduce it exactly). One interesting point that Badesa discusses is the possibility that Schröder may have been the first ever to try to prove that a result is independent from a set of axioms. The issue was whether or not a particular axiom was needed. Peirce thought the axiom in question could be proved from the others, but lost his argument. Schröder in 1883 purported to show that the axiom was unprovable by finding a model that violates it but satisfies the other axioms. This would show that it did not follow from the other axioms.

What is interesting about this is not the particular result, but that he may have been the first to prove an independence result via model theory.

## Why So Long?

The real numbers were the basis of all the mathematics of the Greeks, since geometry by definition is all about the real plane. Later the whole thrust of western mathematics was on functions from real numbers to real numbers. This of course gave rise to the notions of continuous functions, derivatives, integrals, and more. These are beautiful, important, and actually quite delicate concepts. Part of the difficulty is that they all touch in one way or another on the *infinite*. Real numbers demand a good understanding of the infinite, as do all the other related concepts.

Any definition of a real number involves, in some way, the infinite. Whether you define them via infinite decimals, Dedekind cuts, Cauchy sequences of rationals, or otherwise, there is no way to avoid notions based on infinity. At least there are none that I know.

The same issues of infinity arise in the trouble that early mathematics had in defining precisely the notion of function. Again a proper definition requires that we have infinite sets. The number of real functions is not just “infinity,” but is a larger cardinality in the precise sense of Cantor.

Yet the notion of a Boolean space of just two elements is all too concrete. At first glance it seems almost too easy. There are no fundamental issues in defining operations on this space. You cannot understand what a real function really is just by giving its values in a table. Yet for a Boolean function, all is said by a finite table—simple like an accountant’s view of the world.
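The “finite table” point can be made concrete: a Boolean function of *k* inputs is nothing but its 2^*k* output bits, so for two inputs there are exactly 2^(2^2) = 16 functions, and we can simply list them all. A small sketch:

```python
from itertools import product

# A Boolean function of k inputs is completely specified by its finite
# truth table: one output bit for each of the 2^k input rows. Hence there
# are exactly 2^(2^k) such functions, and we can enumerate every one.

def all_boolean_functions(k):
    """Yield every Boolean function of k inputs as a truth table (dict)."""
    rows = list(product([0, 1], repeat=k))          # the 2^k input rows
    for outputs in product([0, 1], repeat=len(rows)):
        yield dict(zip(rows, outputs))              # table: inputs -> output

tables = list(all_boolean_functions(2))
print(len(tables))  # 16
```

No limits, no epsilons, no infinity anywhere: the entire space of two-input Boolean functions fits in a short list, which is precisely why it looked “too easy” to be interesting.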

So why was the creation of a theory of Boolean functions so hard? My theory is that it was too simple. I argue it is a huge surprise historically that practical notions based on infinity were developed long before Cantor, while it lay hidden until the 20th Century that the theory of functions mapping 0’s and 1’s to 0’s and 1’s is so rich and so important. How can functions that you can *see* because they are finite be so important?

I would argue that this is a huge surprise that only recently has become clear. How could early mathematicians know that discrete simple functions were as important as continuous ones?

## Logic in the Looking-Glass

I—Ken writing this section—think that we can perceive the answer by examining the work of another Victorian-era logician, Charles Dodgson. That is, Lewis Carroll. Carroll’s work in logic mainly involved both formalizing and visualizing arguments based on syllogisms. Syllogisms involve quantifiers, and their terms refer directly to categories and attributes.

Today we might say that the various syllogistic theories of his time are fragments of the full predicate calculus in which quantifier forms are limited in complexity. Bounded-quantifier logic and arithmetic are subtle in computational theory even today. Thus the academics were following the long shadow of Aristotle into meaningfully intricate work, and did not see the need to re-base it.

Carroll was influenced by John Venn and his diagrams for visualizing set-theoretic relations. He is known to have contacted Charles Babbage, but perhaps only insofar as Babbage’s devices could be used for numerical calculation.

Even the universality of the binary NAND and NOR functions was not published until 1913 by Henry Sheffer—again, just a hundred years ago. Peirce had observed it thirty years earlier, but didn’t publish it. Even here the thinking seems to have been elsewhere from simply functions of 0 and 1. NAND and NOR were viewed as “the same” since they are dual functions in a Boolean algebra—alike in a looking-glass. Even the Sheffer stroke originally stood for NOR before its current usage as NAND.
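The universality that Peirce noticed and Sheffer published is a one-screen exercise today. A minimal sketch, building NOT, AND, and OR out of NAND alone (the function names are mine):

```python
# Sheffer's 1913 result, anticipated by Peirce circa 1880: every Boolean
# function can be built from NAND alone. Here NOT, AND, OR -- and NAND's
# "looking-glass" dual NOR -- are all expressed via nand().

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)          # NAND of a thing with itself is its negation

def and_(a, b):
    return nand(nand(a, b), nand(a, b))   # negate the NAND

def or_(a, b):
    return nand(nand(a, a), nand(b, b))   # De Morgan: a OR b = NOT a NAND NOT b

def nor(a, b):
    return not_(or_(a, b))     # the dual gate, equally universal on its own
```

Composing three or four copies of one gate recovers the whole algebra, which is exactly why a single stroke symbol sufficed for Sheffer.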

All of these conceptualizations tended away from our modern ideas of logic as based on 0 and 1, of computation being based on this logic, and even of “it from bit.”

## Open Problems

So who really did invent Boolean functions? Why were they not invented a thousand years ago? And what about our current understanding may seem “immature” a century from now?

[fixed Pierce->Peirce]

The spelling is “Charles Sanders Peirce”, the last name pronounced like “purse”, the original family name being “Pers”.

Ah, thanks! I did note the “ampheck” usage while editing but not the spelling of his name…

Here is a useful reference:

Geraldine Brady, *From Peirce to Skolem: A Neglected Chapter in the History of Logic*.

> Why were they not invented a thousand years ago?

Is it really meant to be an OPEN question? In what sense?

Did mankind have a collective mind, able to make such a choice, a thousand years ago?

If it had, would every individual in that collective have contributed? And if our current version of Boolean functions were put in the language of that time, do you think things could have been different? Just wondering.

BTW, in the previous one, it obviously should be M0 -> M3 (not M1 ->M3). Sorry for the error.

See the PlanetMath entry, “Ampheck”, for one of Peirce’s later discussions of the NAND/NOR operators.

Maybe they were not invented because people didn’t have enough interest in them. What could they be used for? Logic was more philosophy than mathematics. On the other hand, this is not true about real numbers and functions over them.

The general concepts of real number and function as we know them are rather new. As you know, there were arguments among mathematicians over whether there can be a nowhere-differentiable continuous function. The naive notion of a real function was rather simple: lines on the plane that we can draw. They are far from the set-theoretic notion. Similarly, functions were not set-theoretic but simple rules to obtain output from input. These notions were not formal and rigorous but intuitive ones.

I think that Boolean logic was not invented because the European and Arab intellectual worlds were so in thrall to Aristotle. Schiller’s book on Formal Logic, published a century ago, has no reference to either Leibniz or Boole, the two possible claimants, but regards formal logic as Aristotelian syllogistic logic.

My comments on this are on my blog under Formal Logic.

Hi, Peter—indeed this is exactly what I agree with. I was unable to find a reference I read last summer or so that I recall as really drawing a connection between syllogistic theories and bounded-quantifier arguments.

I also read as a child someone’s assertion that had Archimedes been adopted rather than killed by the Romans, we might have had calculus some decades later and spaceflight by 500 A.D.

¿Hay por Google también? [Is there any on Google too?]

Actually I think “hay” is important—yet another reference I remember is that we needed hay to be invented to get anything off the ground, and that didn’t happen until closer to 900 AD.

Whoops: a campus network outage made me edit a post from my home Mac for the first time ever, and I forgot to sign out as Pip and back in as me.

The question recalls recent discussions of discovery and invention in the mathematical field, bringing back to mind questions I’ve wondered about for as long as I can remember.

Speaking as an unreconstructed Platonic realist, I am tempted to say that Boolean functions are mathematical objects that cannot be invented, only discovered. But speaking as a semiotic constructivist I would have to concede that we do indeed invent all sorts of syntactic systems for talking and thinking about these mathematical objects. And some calculi can even be better than others for the purpose of calculation, a fact that repays us to consider the alternatives as they work out in practice.

On the third hand, I have more lately been thinking that the concepts of discovery and invention, being human constructs like the proverbial concepts of particles and waves, may not be adequate in the final analysis to articulate the reality of the process at hand. It may well be that all of these questions are more like the question,

Who discovered Orion in the night sky?

Regarding discovery-versus-invention, Domokos Szasz attributes the following all-purpose quotation to Alfréd Rényi:

“Other mathematicians prove what they can, von Neumann what he wants.”

Until the 18th century, I think that many mathematicians and natural philosophers approached problems geometrically, since Euclid was the one source everyone knew. Look at the development of calculus: the two main problems that people wanted solved were the problems of tangents and rectification of various curves. Even Newton wrote the Principia using the language of geometry, since nobody would have understood the algebraic approach.

C. Shannon’s master’s thesis applied Boolean algebra to the design of switching circuits (sort of a precursor to digital logic design).

http://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits

Cf. Peirce to Allan Marquand (30 Dec 1886)

No, this is not true. The proper definition of function is in terms of cartesian closure, and this definition works perfectly well even with finite models. (For example, even if you drop the axiom of infinity from set theory, you can still construct the set of functions between any two sets.)

The difficulties 19th century mathematicians had with functions had more to do with the fact that they had not yet developed the formalism for handling free variables (a problem no one made progress on until Frege), which made giving an algebraic formulation of function very difficult. Church solved this problem with the invention of the lambda-calculus in the 1930s, and Lambek’s recognition in the early 70s that the lambda-calculus gives a syntax for cartesian closed categories was a major impetus for the import of categorical methods into theoretical computer science.
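The commenter’s first point — that the set of all functions between two sets can be constructed with no appeal to infinity when the sets are finite — is easy to exhibit directly. A small sketch (names and example sets are mine):

```python
from itertools import product

# For finite sets A and B, the function space B^A can be built outright:
# a function is just its finite graph, and there are exactly |B|^|A| of
# them. No axiom of infinity is needed anywhere in this construction.

def function_space(A, B):
    """Return all functions A -> B, each represented as a dict (its graph)."""
    A, B = list(A), list(B)
    return [dict(zip(A, images)) for images in product(B, repeat=len(A))]

fs = function_space({0, 1}, {"a", "b", "c"})
print(len(fs))  # 3^2 = 9 functions
```

Each dict here plays the role of a function-as-set-of-pairs, which is exactly the set-theoretic exponential the comment alludes to.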

Our brain is composed of finitely many neurones. Nonetheless, our mind can sometimes perceive the infinite. I don’t know if this fact is connected in any way with the Löwenheim-Skolem theorem, but the illusion of infinity is known to have misled the great Dedekind himself. In his essay on “The Nature and Meaning of Numbers”, he appeals to “[his] own realm of thoughts” to “prove” that “the totality of things which can be objects of [his] thought is infinite”…

I regard Dedekind as the true inventor of the Peano axioms, for he was the first to base a definition of the natural numbers on the successor operation. But to him, infinity was such an obviously granted concept that it didn’t require any axiom. In the present days that situation has been reversed and most people now find it hard to “believe” in infinity… witness the recurring debate on the validity of Cantor’s proof on this blog.

Computing, in its way, and science, in its broader way,

both involve the relation between what appears limited

and what appears not. Whether you believe in divinity

or not, and whether you believe that humanity contains

a spark of divinity or not, we have to acknowledge that

our powers as oracles are limited and, even if they were

not, problems of relative computability would still arise

in the striving of oracles to communicate with one another.


I think Beltrami’s construction of a model of the hyperbolic plane was a little earlier. This shows that the parallel postulate is not implied by the other axioms of Euclidean geometry.

I first thought Bolyai, Lobachevski or Gauss or somebody must have provided a model, but I cannot find any reference to this being so.

I just ran across some notes on model theory that I wrote out a decade ago when a number of online working groups were tackling the subject for the sake of applications to web ontologies and semantic webs. I focused especially on how the sibling subjects of formal semantics and representation theory grew out of their native ground in the natural semantics and pragmatics of mathematics.

• Model Theory Unplugged

Peirce’s published works are available as downloadable PDFs from the Institute for Studies in Pragmaticism at Texas Tech —

• Published Works of Charles Sanders Peirce

This is a sociological issue. People are not ready to accept new ideas if they do not see their application. Even Cantor’s uncountability argument was useful only at a certain time. As a side note, I make public crackpot notes that are ignored; in private e-mails I get responses. Look, it took 10 minutes (including my explanation) for my friend to point out the determinantal rings in my monomial example. Roughly speaking, society supports “my” writing to 100 top researchers either by each spending their time explaining to “me” what is going on, or by ignoring “me”. In the first case, you spend much more resources than needed; in the second case, you possibly throw away the baby. I remember an in-depth analysis of a particular countability argument to convince an author he is wrong, whereas my simple intuitive arguments were ignored (I’m not pushing it, I do not care, but others do too). I know we are speaking different languages, mine of course that of an extraterrestrial alien. It is not truth that is sought by this society, but sparseness, leading to domination. No building starts with rigor; it starts with a draft drawing. It is only when people like the draft that they push the process.

Now how can I believe you, mathematicians, telling me you are searching for truth, when even the Cantor diagonal argument is not as simple as you are trying to tell, and is more a teaching convenience than the truth?

Draft (stop reading here, until I send it to you by e-mail).

We start with a random table of “0”s and “1”s, each entry drawn from a Bernoulli process. Rows represent numbers. Going from top to bottom we write the Cantor diagonal number. First we write a digit that is different from the first digit in the first row, so we have a number that is certainly different from the first number; then, by induction, if we have a number that differs from the top k numbers in the first k digits, we write the (k+1)-st digit of the Cantor diagonal number to be different from the (k+1)-st digit in the (k+1)-st row. Therefore, by induction, the Cantor diagonal number is different from all numbers in the table. Next we take the Cantor diagonal number and show that it is in the table, moreover countably many times. We start from the first column and select those numbers that have the same first digit as the Cantor diagonal number – there are countably many of them. Induction step: from the selected countably infinite set, where each number coincides with the Cantor diagonal number up to digit k, we select a countably infinite subset of rows which coincide in digit k+1 with digit k+1 of the Cantor diagonal number. Therefore, by the induction argument, there are countably infinitely many numbers in the table that coincide with the Cantor diagonal number. QEDraft.

If we assign weights to the columns () then we immediately see (constructing Cauchy sequences from the sequence of selected sets, which, by the way, once drawn are well ordered by the row index) that there are infinitely many numbers in the table at distance 0 from the Cantor diagonal number.

Finally, thanks to MrX, the main issue here is that if numbers are represented as paths in a binary tree, this tree is self-similar, i.e. every sub-tree is isomorphic to the whole tree. That probably is the reason for the incompleteness theorem, although it is interesting how algebraic operations stratify the tree.

Sincerely yours, never seeking truth, only fame, a crackpot trisector.

The interplay of mathematical inquiry and social interaction, especially when taken over historical time, is certainly an interesting subject. But this looks like the big middle of a discussion whose beginning I must have missed. Just by way of identifying the ballpark, is your concern more about uncountability or more about indirect proofs?

It’s my sense of things that people who are teetotalers about constructive mathematics or intuitionistic logic have voluntarily chosen to work under a non-classical aesthetic, and here as in all matters of taste there is little chance of arguing to a settled conclusion unless the decision of nature or reality or some other umpire is final.

I have no concerns. The internet is the place where information should be sorted and presented for advances. The ideal case is a social network of mental workers, providing algorithms for mechanical devices that produce goods for the mental workers to at least support their lives, and a resource-distribution mechanism not connected to human beings. As a first step: no ads paid for by the producer.

1) our days are poorly equipped for merriment;

2) our days are poorly equipped for joy;

3) our days are poorly equipped for happiness;

4) our life is poorly equipped for merriment;

5) our life is poorly equipped for joy;

6) our life is poorly equipped for happiness;

7) for merriments our planet is poorly equipped;

8) for gaieties our planet is poorly equipped;

9) our planet is not especially equipped for merriments;

10) our planet is not especially equipped for merriment;

11) our little planet is not very well equipped for pleasures;

and, finally, the last, the 12th —

12) for merriment our planet is poorly equipped.

In English all functions begin with fun.

My statistics teacher liked to complain that he thought it unfair how such a simple idea could get one’s name attached to it for all time. On the other hand, he would say, I can’t remember which brother it was, so maybe there is compensation. On a related note, my favorite quote about the discrete and continuous properties of mathematical study…

“The sole natural object of mathematical thought is the whole number. It is the external world that has imposed the continuous upon us…

“…we have devoted almost all our time and all our strength to the study of the continuous. Who will regret it; who will think that this time and this strength have been wasted? Analysis unfolds before us infinite perspectives that arithmetic never suspects; it shows us at a glance a majestic assemblage whose array is simple and symmetric; on the contrary, in the theory of numbers, where reigns the unforeseen, the view is, so to speak, blocked at every step.” Poincaré at the first Int’l Congress of Mathematics, Zurich, 1897.

A comment that I meant to make here is turning into a work in progress, so here’s the first installment —

• Indicator Functions : 1

(Note. I’m trying to see if I can copy an image to a comment box. If it doesn’t work then try the link in my previous comment.)

One of the things that it helps to understand about 19th Century mathematicians, and those who built the bridge to the 20th, is that they were capable of high abstraction — in Peirce’s case a cut above what is common today — and yet they remained close enough to the point where abstract forms were teased away from the concrete materials of mathematical inquiry that they were able to maintain a sense of connection between the two. There are few better places to see this connection than in the medium of Venn diagrams. But Venn diagrams are such familiar pictures that it’s easy to overlook their subtleties, so it may be useful to spend some time developing the finer points of what they picture.

There are actually several types of Boolean functions that are represented in a typical Venn diagram. They all have the Boolean domain B = {0, 1}, or one of its powers, as their functional codomains, but their functional domains may vary, not all being limited to finite cardinalities. As an aid to sorting out their variety, consider the array of functional arrows in the following figure.

Suppose X is a universe of discourse represented by the rectangular area of a Venn diagram. Note that the set X itself may have any cardinality. The most general type of Boolean function is a map f : X → B. This is known as a Boolean-valued function, since only its functional values need be in B.

A function of the type f : X → B is called a characteristic function in set theory, or an indicator function in probability and statistics, since it can be taken to characterize or indicate a particular subset S of X, namely, the fiber or inverse image of the value 1, for which we have the notation and definition f⁻¹(1) = { x ∈ X : f(x) = 1 }. The notation χ_S is often used for the characteristic function of a subset S of X. All together then, we have …
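The subset/indicator correspondence described above can be sketched in a few lines; the function names and example sets here are mine, for illustration only.

```python
# The characteristic (indicator) function chi_S : X -> {0, 1} of a subset
# S of X, and the fiber f^{-1}(1) that recovers S back from the function.
# The two constructions are mutually inverse, as the example shows.

def indicator(S, X):
    """chi_S : X -> {0, 1}, with chi_S(x) = 1 iff x is in S."""
    return {x: (1 if x in S else 0) for x in X}

def fiber_of_one(f):
    """The inverse image f^{-1}(1): the subset that the function indicates."""
    return {x for x, v in f.items() if v == 1}

X = {1, 2, 3, 4}
S = {2, 4}
chi = indicator(S, X)
print(fiber_of_one(chi) == S)  # True: the fiber recovers S
```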

To be continued …