Plus a teaching idea that’s no joke?

 Cropped from Maths History source

Emil Post was the first to use the formal notion of reduction between problems. We discussed Post’s wonderful work and its relevance to complexity earlier here.

Today Ken and I want to discuss the notion of reduction, and also perhaps some jokes for your amusement.

Reductions are used throughout mathematics. We regard Post’s definition from 1944 as quintessential. His are called many-one reductions. Here is the definition in a functional style that we’ve tried to promote in other cases: A problem ${A}$ reduces to a problem ${B}$ if there is a computable function ${f}$ such that for any instance ${x}$ of problem ${A}$,

$\displaystyle A(x) = B(f(x)).$

If ${A}$ and ${B}$ are languages then this gives the familiar condition ${x \in A \iff f(x) \in B}$. But the idea can be more general: ${A}$ and ${B}$ can be functions.
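As a minimal executable illustration (a toy of our own, not from Post), take ${A}$ = "is ${x}$ even?", ${B}$ = "is ${y}$ odd?", and ${f(x) = x + 1}$:

```python
# Toy many-one reduction: A = "x is even" reduces to B = "y is odd"
# via the computable map f(x) = x + 1, since x is even iff x + 1 is odd.

def A(x):          # membership in problem A
    return x % 2 == 0

def B(y):          # membership in problem B
    return y % 2 == 1

def f(x):          # the reduction: one x, one y, as in Post's notion
    return x + 1

# Post's condition A(x) = B(f(x)) holds for every instance x.
assert all(A(x) == B(f(x)) for x in range(1000))
```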

Alan Turing had earlier defined an even more general kind of reduction, in which given ${x}$ one computes multiple ${y_i}$, gets the answer ${B(y_i)}$ for each of them, and finally pieces together the answers to obtain ${A(x)}$. But this feels more like “expanding” than “reducing.” We want it to be: one ${x}$, one ${y}$. That was Post’s notion.

Dating Reductions and Jokes

The idea of reduction is simple: When confronted with some problem, be lazy: Instead of solving the problem, show that it can be changed and then solved by some previously known method. That is, show that your problem is reduced to someone’s already solved problem. Be lazy.

In this form, we can think of two mathematical examples that are much older than Post’s:

• Logarithms. Multiplying two numbers ${a}$ and ${b}$ with many digits or decimals can be cumbersome. If you can compute ${f}$ and its inverse such that

$\displaystyle a\cdot b = f^{-1}(f(a) + f(b)),$

then your multiplication is reduced to a much easier addition. John Napier gave ${f}$ the name “logarithm” in 1614 and soon William Oughtred built a physical device to automate this reduction. If you are over 40, perhaps you have seen one.

• Integration. Do you have a hard integral ${\int A(x) dx}$? Often tricks of defining ${y = f(x)}$, so ${dy = f'(x) dx}$, and/or integrating by parts, yields an integral ${\int B(y) dy}$ that is easier to solve, and whose solution completes your answer.
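The logarithm reduction can be written directly in Post's functional style. Here is a minimal sketch in Python, assuming positive inputs; floating-point arithmetic makes the recovered product only approximately exact:

```python
import math

# Napier's reduction: multiplication reduces to addition via f = log,
# since a * b = f^{-1}(f(a) + f(b)) with f^{-1} = exp.

def multiply_by_logs(a, b):
    return math.exp(math.log(a) + math.log(b))

product = multiply_by_logs(6.0, 7.0)
assert abs(product - 42.0) < 1e-9   # equal up to floating-point rounding
```

This is exactly what a slide rule does mechanically: adding lengths on logarithmic scales.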

The latter example is in Wikipedia’s article on reduction. The former, however, has the signature idea of reduction between problems, more than massaging instances of the same problem. What we really want to know is:

When was the process of transforming between problems first known by the term reduction?

We wonder if tracing an old kind of joke can help. The point is that reductions are a neat source of jokes. Here is a classic one, taken from a big list of math jokes. The list gives a second form of the joke based on reductions and there are many others. More on the jokes later.

A physicist and a mathematician are sitting in a faculty lounge. Suddenly, the coffee machine catches on fire. The physicist grabs a waste basket, empties the basket, leaps towards the sink, fills the basket with water, and puts out the fire. Since the coffee machine has caught fire before, they agree to keep a waste basket filled with water next to it.

The next day, the same two are sitting in the same lounge. Again, the coffee machine catches on fire. This time, the mathematician stands up, grabs the waste basket that is filled with water, empties it, places some trash in the basket, and hands it to the physicist, thus reducing the problem to a previously solved one.

The following video applies the same idea not to putting out a fire but to retrieving a shoe that is floating away.

Another Example

Here is an example of a reduction that is not so silly and a little less simple.

Imagine that Alice and Bob are at it again. Bob wants to be able to multiply integers fast and he plans on building a hardware system that stores the answers in a table. Then his hardware system will be able to compute the product of two integers by just looking up the answers. Okay, there are really better ways to do this, but just play along for the moment.

Bob’s table is big and he is troubled. A table just for multiplying numbers less than ${10}$ already has ${100}$ entries. Clearly, for a more extensive table the cost grows fast. He asks his friend Alice for some help. She says: “Just store the diagonal values and I can show you how to handle the general case.” Here is her old trick.

$\displaystyle a \times b = \frac{\left(\left(a + b\right)^{2} - a^{2} - b^{2}\right)}{2}.$

Using this allows Bob to just store the diagonal of the multiplication table, and forget all the rest. It is a powerful reduction that shows:

One can reduce integer multiplication to addition and taking the square of a number.

For example,

$\displaystyle \begin{array}{rcl} 37 \times 15 &=& ( 52^{2} - 37^{2} - 15^{2} )/2 \\ &=& (2704 - 1369 - 225)/2 \\ &=& 555. \end{array}$
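Alice’s trick is easy to check in code. Here is a sketch in which Bob stores only the squares he needs, namely up to ${2(N-1)}$ for factors below ${N}$:

```python
# Bob stores only the diagonal of the table: the squares.  To multiply
# numbers below N he needs squares of sums up to 2(N - 1).
N = 100
squares = {k: k * k for k in range(2 * N)}   # the slimmed-down lookup table

def multiply(a, b):
    # Alice's identity: a * b = ((a + b)^2 - a^2 - b^2) / 2
    return (squares[a + b] - squares[a] - squares[b]) // 2

assert multiply(37, 15) == 555               # the worked example above
assert all(multiply(a, b) == a * b for a in range(N) for b in range(N))
```

So Bob trades ${N^2}$ table entries for about ${2N}$, at the cost of two additions, two subtractions, and a halving per lookup.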

Complexity Reductions: No Joke?

Wikipedia actually gives the above as the first example in its article on the complexity kind of reduction. These kinds of reductions not only define ${\mathsf{NP}}$-hardness and completeness; they are needed to understand what the ${\mathsf{P=NP}}$ problem is really about.

For example, once one accepts that a language ${A}$ in ${\mathsf{NP}}$ can be represented by a uniform family of Boolean circuits ${C_n}$, one for each length ${n}$ of instances ${x}$ for ${A}$, the reduction to SAT is quickly defined: The ${C_n}$ have auxiliary inputs ${y_1,\dots,y_q}$, where ${q}$ is polynomial in ${n}$, such that there is a ${y \in \{0,1\}^q}$ making ${C_n(x,y) = 1}$ if and only if ${x \in A}$. The circuit ${C_n}$ can consist of binary NAND gates ${g}$, each with input wires ${u,v}$ (which may be inputs ${x_i}$ or ${y_j}$) and one or more output wires ${w}$ (which may be the overall output ${w_0}$). The reduction ${f}$ first constructs the Boolean formula

$\displaystyle \phi_n = (w_0) \wedge \bigwedge_{g} (u \vee w) \wedge (v \vee w) \wedge (\bar{u} \vee \bar{v} \vee \bar{w}).$

To apply ${f}$ to a given ${x}$, simply substitute the bits of ${x}$ for the variables ${x_i}$ and simplify to make the formula ${\phi_x = f(x)}$. Then ${\phi_x}$ is satisfiable if and only if ${x \in A}$. The reduction not only proves the ${\mathsf{NP}}$-completeness of SAT (indeed, of 3SAT) instantly; it conveys the character of SAT and of what ${\mathsf{NP}}$ is all about.
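To see the construction concretely, here is a small self-contained sketch. The encoding and variable names are our own illustration, and brute-force search is used only to check the tiny example:

```python
from itertools import product

# Each NAND gate (w, u, v) with w = NAND(u, v) contributes the clauses
# (u or w), (v or w), (not u or not v or not w); we add the unit clause (w0).

def circuit_to_cnf(gates, output):
    """Return CNF clauses; a literal is (variable, is_positive)."""
    clauses = [[(output, True)]]          # the unit clause (w0)
    for w, u, v in gates:
        clauses.append([(u, True), (w, True)])
        clauses.append([(v, True), (w, True)])
        clauses.append([(u, False), (v, False), (w, False)])
    return clauses

def substitute(clauses, assignment):
    """Plug in the fixed x-bits and simplify, yielding phi_x = f(x)."""
    out = []
    for clause in clauses:
        lits, satisfied = [], False
        for var, pos in clause:
            if var in assignment:
                if assignment[var] == pos:
                    satisfied = True
            else:
                lits.append((var, pos))
        if not satisfied:
            out.append(lits)
    return out

def brute_force_sat(clauses):
    vars_ = sorted({v for c in clauses for v, _ in c})
    for bits in product([False, True], repeat=len(vars_)):
        a = dict(zip(vars_, bits))
        if all(any(a[v] == pos for v, pos in c) for c in clauses):
            return True
    return False

# Circuit computing AND via two NANDs: t = NAND(x1, y1), w0 = NAND(t, t),
# so C(x1, y1) = x1 AND y1, and A = {1}: x1 = 1 is accepted with witness y1 = 1.
gates = [("t", "x1", "y1"), ("w0", "t", "t")]
phi = circuit_to_cnf(gates, "w0")
print(brute_force_sat(substitute(phi, {"x1": True})))   # x in A: satisfiable
print(brute_force_sat(substitute(phi, {"x1": False})))  # x not in A: unsatisfiable
```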

This leads us to wonder something about teaching complexity theory. Maybe reductions, not languages, should be the principal objects of study.

This may seem like a joke but there are benefits. The reductions are composable in ways that languages are not. They carry the source and target problems with them, at least in the form of the function’s arguments and values. Emphasizing reductions highlights the greatest success of complexity theory to date, which is proving relations between problems, rather than its failure to classify languages via lower bounds.
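One small point in favor: many-one reductions compose exactly like functions. A toy sketch, with example problems of our own choosing:

```python
# If f reduces A to B and g reduces B to C, then g o f reduces A to C.

def compose(g, f):
    return lambda x: g(f(x))

# A(x): "x is even", B(y): "y is divisible by 4", C(z): "z is divisible by 12"
f = lambda x: 2 * x      # reduces A to B: x even      <=>  2x divisible by 4
g = lambda y: 3 * y      # reduces B to C: y div by 4  <=>  3y divisible by 12
h = compose(g, f)        # reduces A to C directly: h(x) = 6x

assert all((x % 2 == 0) == (h(x) % 12 == 0) for x in range(100))
```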

More Jokes

Here is a selection, from a big list of math jokes, of those that speak most to computing, plus one from here. We have embellished a few of them:

• A theory professor is one who talks in someone else’s sleep.

• Two is the oddest prime of all, because it’s the only one that’s even.

• A student comes to the department with a shiny new coffee cup, the sort you get for winning something. They explain: “I won it in my programming class contest. They asked what ${7 + 7}$ is. I said ${12}$ and got 3rd place.”

• There are 10 types of people in the world, those who see it in binary and those who don’t.

• An engineer thinks that his equations are an approximation to reality. A physicist thinks reality is an approximation to his equations. A mathematician doesn’t care. A computer scientist makes reality equal the equations.

• There is no logical foundation for mathematical proof, and Gödel proved it!

• Mathematics is like checkers in being suitable for the young, not too difficult, amusing, and without peril to the state. [Plato wrote that 2,400 years ago; they did have checkers.]
Coding, on the other hand…

Open Problems

How should the concept of reduction be taught and emphasized? Can you trace the age of the classic reduction joke?

There is a setting for multiplication where our Alice and Bob reduction example fails. Two of the jokes in the list above might suggest it to you. What is the issue?

1. February 28, 2020 2:00 pm

The basic idea behind problem reduction goes back to Aristotle’s απαγωγη, variously translated as “abduction”, “reduction”, or “retroduction”, and even earlier to Plato’s question, “Can virtue be taught?” See the following discussion:

2. February 28, 2020 2:33 pm

In my experience, the concept of reduction is misunderstood, even by experts. People (novices and experts alike) often see it as a magical way to refute lower bound arguments. They tend to think that if one problem is shown to be impossible to solve in polynomial time, but is known to be poly-time reducible to a second problem, and somebody clever then gives a poly-time algorithm for that second problem, then one could solve the first problem in polynomial time via the reduction. They ignore the fact that in such a case the second problem cannot possibly be in P, since the first problem, which is not in P, is poly-time reducible to it.

This has been my experience in trying to convince people that P is not NP.

3. February 28, 2020 4:36 pm

“Mick and Paddy are walking through the jungle, when a lion comes charging towards them.

Mick picks up a brick (this is an Irish jungle), throws it at the lion, and shouts, “Run!”

Paddy replies: “Sure I’m not running, it was you who threw the brick!”

This ‘genius of the Irish’ joke can be reduced to this source:

“The lion is never afraid, but rather, with a bold spirit in fiery combat with a multitude of hunters, always seeks to injure the first one who injures him.”

Leonardo da Vinci

4. February 28, 2020 7:34 pm

I had never seen that diagonal multiplication trick before. I like it.

February 28, 2020 10:00 pm

Dear Joseph Nebus:

Thanks. I do like the trick myself. It is used in various places to replace x*y by squaring: in complexity theory to relate multiplication to squaring for error-correction tricks, and in algebra to replace certain polynomial maps by simpler ones.

Best

Dick

5. March 1, 2020 3:16 pm

The concept of reduction is very useful in software development and should not be taught just as a theoretical mathematical technique, as sometimes happens.
I used to work at a geoinformatics company, and at some point we needed an algorithm that would detect a specific mapping error in road networks; in other words, a search for spatial relationships among the features of a network of 2-dimensional objects. A colleague proposed a brute-force search algorithm that ran for more than 20 hours. The built-in solution in the GIS software we were using required some preparation of the input data and then 30 minutes of running time; the total was 1 or 2 hours.
Then I built an algorithm by reducing the search over 2-dimensional objects to a search over correlations of 1-dimensional objects. To put it simply, I used the nodes of the network instead of the edges that defined the roads to solve an edge-defined problem. The result was an algorithm that produced the desired output in 10 minutes and worked on raw input data.
So I really believe students should learn how to include reduction in designing algorithms.