
Just Arvind

May 29, 2020


Theory and practice

[ MIT ]

Arvind Mithal—almost always referred to as Arvind—is now the head of the faculty of computer science at a Boston trade school. The school, also known as MIT, is of course one of the top places for all things computer science. From education to service to startups to research, MIT is perhaps the best in the world.

Today I thought we might discuss one of Arvind’s main research themes.

Although this work started in the 1980’s I believe that it underscores an important point. This insight might apply to current research topics like quantum computers.

But first I cannot resist saying something personal about him. Arvind is a long-time friend of mine, and someone who scores many hours on the “stuck in an elevator” measure. This measure is one I made up, but I hope you get the idea.

Arvind is the only name I have ever known for Arvind. I was a bit surprised to see that the Wikipedia reference for him states his “full” name. He told me that he found having one name—not two—was a challenge. For example, he sometimes had to explain when arriving at a check-in desk at a conference that he only had one name. His name tag often came out garbled, because the conference software could not handle people with one name.

Dataflow—The prehistory

About forty years ago one of the major open problems in CS was how to make computers go faster. It is still an issue, but in the 1980’s this problem was an all-hands-on-deck affair. It was worked on by software engineers, by electrical engineers, and by researchers of all kinds, including complexity theorists. Conferences like FOCS and STOC—hardcore theory conferences—often contained papers on how to speed up computations.

Two examples come to mind:

{\bullet } The idea of systolic arrays, led by Hsiang-Tsung Kung, then at CMU. His students, especially Charles Leiserson, also made important contributions.

{\bullet } The idea of the Ultracomputer, led by Jacob Schwartz at NYU. An ultracomputer has {N} processors, {N} memories, and an {N \log N} message-passing switch connecting them. Note the use of “{N},” so you can tell that it came from a theorist.

Arvind tried to make computers faster by inventing a new type of computer architecture. Computers then were based on the classic von Neumann architecture or control flow architecture. He and his colleagues worked for many years trying to replace this architecture by dataflow.

A bottleneck in von Neumann style machines is caused by the program-counter fetch cycle. Each cycle, the program counter determines which instruction to fetch, and thus which data to fetch. Instructions and data travel over the same hardware channel, which causes the famous von Neumann bottleneck. We have modified Wikipedia’s graphic to make the bottleneck aspect clearer:

Dataflow—The promise

Arvind is usually identified with the invention of dataflow computer architecture. The key idea of this architecture is to avoid the above bottleneck by eliminating the program counter. If there are no instructions to fetch, then it would seem that we can beat the bottleneck. Great idea.

Dataflow architectures do not have a program counter, and so data is king. Roughly speaking, data objects move around in such a machine and eventually appear at computational units. At a high level, these machines operate like a directed graph from complexity theory. Data moves along edges to nodes that compute values. Moreover, a node computes as soon as all of its input data is present. After the node computes its value, the new data is sent on to the next node; and so on.
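The firing rule can be sketched in a few lines of Python. The graph, node names, and input values here are all hypothetical; a real dataflow machine implements token matching in hardware, not in a software loop.

```python
import operator

# Hypothetical dataflow graph computing (a + b) * (c + d).
# Each node: (name, operation, input edges, output edge).
nodes = [
    ("plus1", operator.add, ("a", "b"), "s1"),
    ("plus2", operator.add, ("c", "d"), "s2"),
    ("times", operator.mul, ("s1", "s2"), "out"),
]

# Initial data tokens sitting on the input edges.
tokens = {"a": 2, "b": 3, "c": 4, "d": 5}

# Fire any node whose inputs are all present; there is no program counter,
# so the order of discovery does not matter.
fired = set()
progress = True
while progress:
    progress = False
    for name, op, ins, out in nodes:
        if name not in fired and all(i in tokens for i in ins):
            tokens[out] = op(*(tokens[i] for i in ins))
            fired.add(name)
            progress = True

print(tokens["out"])  # (2 + 3) * (4 + 5) = 45
```

Note that `plus1` and `plus2` are both ready at the start, so a machine with two functional units could fire them in parallel. That parallelism is the hoped-for speedup.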

The hope was that this would yield a way to increase the performance of computers. No program counter, no instruction fetching, could make dataflow machines faster.

Dataflow—The actual

The dataflow idea is clever. The difficulty is that while dataflow can work, classic von Neumann machines have been augmented in various ways. The competition rarely stays put. These updates did not eliminate the von Neumann bottleneck, but they reduced its cost and let von Neumann machines continue to get faster. Caches are one example: a cache for program instructions makes it likely that the fetch from main memory does not actually happen. This attacks the main advantage of dataflow machines and makes them less attractive.

The paper by David Culler, Klaus Schauser, and Thorsten von Eicken titled Two Fundamental Limits on Dataflow Multiprocessing discusses these issues in detail. They start by saying:

The advantages of dataflow architectures were argued persuasively in a seminal 1983 paper by Arvind and Iannucci and in a 1987 revision entitled “Two Fundamental Issues in Multiprocessing”. However, reality has proved less favorable to this approach than their arguments would suggest. This motivates us to examine the line of reasoning that has driven dataflow architectures and fine-grain multithreading to understand where the argument went awry.

Dataflow—Lessons

What are the lessons? Is there one?

I claim there are lessons. The work of Arvind on dataflow was and is important. It did not lead to the demise of von Neumann machines. I am using one right now to write this.

Dataflow did lead to insights on programming that have many applications. The dataflow idea may yet impact special computational situations: there is interest in using such machines for data-intensive applications.

Ken adds that dataflow has been realized in other ways. Caches and pipelines and subsequent architecture innovations profit from designs that enhance locality. The MapReduce programming model gives a general framework that fits this well. The paradigm of streaming is an even better fit. Note that Wikipedia says both that it is “equivalent to dataflow programming” and “was explored within dataflow programming”—so perhaps the parent lost out to the wider adaptability of her children.

I think there are lessons: Practical goals, like making hardware go faster, are complex and many-faceted. A pure theory approach such as systolic arrays or ultracomputers or dataflow machines is unlikely to suffice on its own. Also, an existing technology like von Neumann machines will continue to evolve.

A question is: Could quantum computers be subject to the same lesson? Will non-quantum machines continue to evolve in a way to make the quantum advantage less than we think? Or is this type of new architecture different? Could non-quantum machines incorporate tricks from quantum and {\dots}

Open Problems

Speaking about theory and practice: Noga Alon, Phillip Gibbons, Yossi Matias, and Mario Szegedy are the winners of the 2019 ACM Paris Kanellakis Theory and Practice Award.

We applaud them on receiving this award. Sadly the Paris award reminds us that Paris Kanellakis died in the terrible plane crash of American Airlines Flight 965 on December 20, 1995. Also on that flight were his wife, Maria Otoya, and their two children, Alexandra and Stephanos.

The citation for the award says:

They pioneered a framework for algorithmic treatment of streaming massive datasets, and today their sketching and streaming algorithms remain the core approach for streaming big data and constitute an entire subarea of the field of algorithms.

In short they invented streaming.

Proof of the Diagonal Lemma in Logic

May 25, 2020


Why is the proof so short yet so difficult?

Saeed Salehi is a logician at the University of Tabriz in Iran. Three years ago he gave a presentation at a Moscow workshop on proofs of the diagonal lemma.

Today I thought I would discuss the famous diagonal lemma.

The lemma is related to Georg Cantor’s famous diagonal argument yet is different. The logical version imposes requirements on when the argument applies, and requires that it be expressible within a formal system.

The lemma underpins Kurt Gödel’s famous 1931 proof that arithmetic is incomplete. However, Gödel did not state it as a lemma or proposition or theorem or anything else. Instead, he focused his attention on what we now call Gödel numbering. We consider this today as “obvious” but his paper’s title ended with “Part I”. And he had readied a “Part II” with over 100 pages of calculations should people question that his numbering scheme was expressible within the logic.

Only after his proof was understood did people realize that one part, perhaps the trickiest part, could be abstracted into a powerful lemma. The tricky part is not the Gödel numbering. People granted that it can be brought within the logic once they saw enough of Gödel’s evidence, and so we may write {\ulcorner \phi \urcorner} for the function giving the Gödel number of any formula {\phi} and use that in other formulas. The hard part is what one does with such expressions.

This is what we will try to motivate.

Tracing the Lemma

Rudolf Carnap is often credited with the first formal statement, in 1934, for instance by Elliott Mendelson in his famous textbook on logic. Carnap was a member of the Vienna Circle, which Gödel frequented, and Carnap is considered a giant among twentieth-century philosophers. He worked on sweeping grand problems of philosophy, including logical positivism and the analysis of human language via syntax before semantics. Yet it strikes us as ironic that his work on the lemma may be what is best remembered.

Who did the lemma first? Let’s leave that for others and move on to the mystery of how to prove the lemma once it is stated. I must say the lemma is easy to state, easy to remember, and has a short proof. But I believe that the proof is not easy to remember or even follow.

Salehi’s presentation quotes others’ opinions about the proof:

{\bullet } Sam Buss: “Its proof [is] quite simple but rather tricky and difficult to conceptualize.”

{\bullet} György Serény (we jump to Serény’s paper): “The proof of the lemma as it is presented in textbooks on logic is not self-evident to say the least.”

{\bullet } Wayne Wasserman: “It is `Pulling a Rabbit Out of the Hat’—Typical Diagonal Lemma Proofs Beg the Question.”

So I am not alone, and I thought it might be useful to try and unravel its proof. This exercise helped me and maybe it will help you.

Here goes.

Stating the Lemma

Let {S(x)} be a formula in Peano Arithmetic ({PA}). We claim that there is some sentence {\phi} so that

\displaystyle  PA \vdash \phi \iff S(\ulcorner \phi \urcorner).

Formally,

Lemma 1 Suppose that {S(x)} is some formula in {PA}. Then there is a sentence {\phi} so that

\displaystyle  PA \vdash \phi \iff S(\ulcorner \phi \urcorner).

The beauty of this lemma is that it was used by Gödel and others to prove various powerful theorems. For example, the lemma quickly proves this result of Alfred Tarski:

Theorem 2 Suppose that {PA} is consistent. Then truth cannot be defined in {PA}. That is, there is no formula {Tr(x)} so that for all sentences {\phi}, {PA} proves

\displaystyle  \phi \iff Tr(\ulcorner \phi \urcorner).

The proof is this. Assume there is such a formula {Tr(x)}. Then apply the diagonal lemma to {\neg Tr(x)} and get a sentence {\phi} with

\displaystyle  \phi \iff \neg Tr(\ulcorner \phi \urcorner).

This shows that

\displaystyle  \phi \iff \neg Tr(\ulcorner \phi \urcorner) \iff Tr(\ulcorner \phi \urcorner).

This is a contradiction. A short proof.

The Proof

The key is to define the function {F(n)} as follows: Suppose that {n} is the Gödel number of a formula of the form {A(x)} for some variable {x}. Then

\displaystyle  F(n) = \ulcorner A(\ulcorner A(x) \urcorner) \urcorner.

If {n} is not of this form then define {F(n)=0}. This is a strange function, a clever function, but a perfectly fine function. It certainly maps numbers to numbers. It is certainly recursive; actually, it is clearly computable in polynomial time for any reasonable Gödel numbering. Note: the function {F} does depend on the choice of the variable {x}. Thus,

\displaystyle  F(\ulcorner y=0 \urcorner) = \ulcorner (\ulcorner y=0 \urcorner)=0 \urcorner,

and

\displaystyle  F(\ulcorner x=0 \urcorner) = \ulcorner (\ulcorner x=0 \urcorner)=0 \urcorner.
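As a sanity check, one can implement a toy version of {F} for a miniature setting where formulas are just strings and the Gödel number of a string is the integer formed from its bytes. Everything here (the numbering, and substitution by string replacement) is a simplification for illustration, not Gödel's actual encoding.

```python
def godel(s: str) -> int:
    """Toy Gödel numbering: read the string's bytes as one big integer."""
    return int.from_bytes(s.encode(), "big")

def ungodel(n: int) -> str:
    """Inverse of the toy numbering."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode()

def F(n: int, var: str = "x") -> int:
    """If n codes a formula A(x), return the code of A(<A(x)>), where the
    quoted <A(x)> is written as the numeral for n.  Otherwise return 0."""
    try:
        a = ungodel(n)
    except (UnicodeDecodeError, ValueError):
        return 0
    if var not in a:
        return 0  # n does not code a formula in the chosen variable
    return godel(a.replace(var, str(n)))

n = godel("x=0")
print(ungodel(F(n)))  # the numeral for n, substituted into "x=0"
```

The printed formula says “the Gödel number of `x=0`, when plugged in for `x`, equals 0,” which is exactly the self-referential shape the proof needs.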

Now we make two definitions:

\displaystyle  \begin{array}{rcl}        g(w) &\equiv& S(F(w)) \\        \phi &\equiv& g(\ulcorner g(x) \urcorner). \end{array}

Now we compute just using the definitions of {F, g, \phi}:

\displaystyle  \begin{array}{rcl}        \phi &=& g(\ulcorner g(x) \urcorner) \\                 &=& S(F(\ulcorner g(x) \urcorner)) \\           &=& S(\ulcorner g(\ulcorner g(x) \urcorner) \urcorner) \\               &=& S(\ulcorner \phi \urcorner). \end{array}

We are done.

But …

Where did this proof come from? Suppose that you forgot the proof but remember the statement of the lemma. I claim that we can then reconstruct the proof.

First let’s ask: Where did the definition of the function {F} come from? Let’s see. Imagine we defined

\displaystyle  \begin{array}{rcl}        g(w) &\equiv& S(F(w)) \\        \phi &\equiv& g(\ulcorner g(x) \urcorner). \end{array}

But left {F} undefined for now. Then

\displaystyle  \begin{array}{rcl}        \phi &=& g(\ulcorner g(x) \urcorner) \\                 &=& S(F(\ulcorner g(x) \urcorner)). \end{array}

But we want {\phi = S(\ulcorner \phi \urcorner)}, which happens provided:

\displaystyle  \ulcorner g(\ulcorner g(x) \urcorner) \urcorner = F(\ulcorner g(x) \urcorner).

This essentially gives the definition of the function {F}. Pretty neat.

But but …

Okay where did the definition of {g} and {\phi} come from? It is reasonable to define

\displaystyle  g(w) \equiv S(F(w)),

for some {F}. We cannot change {S} but we can control the input to the formula {S}, so let’s put a function there. Hence the definition for {g} is not unreasonable.

Okay how about the definition of {\phi}? Well we could argue that this is the magic step. If we are given this definition then {F} follows, by the above. I would argue that {\phi} is not completely surprising. The name of the lemma is after all the “diagonal” lemma. So defining {\phi} as the application of {g} to itself is plausible.

Taking an Exam

Another way to think about the diagonal lemma is imagine you are taking an exam in logic. The first question is:

Prove in {PA} that for any {S(x)} there is a sentence {\phi} so that

\displaystyle  \phi \iff S(\ulcorner \phi \urcorner).

You read the question again and think: “I wish I had studied harder. I should not have checked Facebook last night. And then gone out and {\dots}” But you think: let’s not panic, let’s think.

Here is what you do. You say let me define

\displaystyle  g(x) = S(F(x)),

for some {F}. You recall there was a function that depends on {S}, and changing the input from {x} to {F(x)} seems to be safe. Okay you say, now what? I need the definition of {F}. Hmmm let me wait on that. I recall vaguely that {F} had a strange definition. I cannot recall it, so let me leave it for now.

But you think: I need a sentence {\phi}. A sentence cannot have an unbound variable. So {\phi} cannot be {g(x)}. It could be {g(m)} for some {m}. But what could {m} be? How about {\ulcorner g(x) \urcorner}? This makes

\displaystyle  \phi = g(\ulcorner g(x) \urcorner).

It is after all the diagonal lemma. Hmmm, does this work? Let’s see. Wait: as above, I get that {F} is now forced to satisfy

\displaystyle  F(\ulcorner g(x) \urcorner) = \ulcorner g(\ulcorner g(x) \urcorner) \urcorner.

Great, this works. I think this is the proof. Wonderful. Got the first question.

Let’s look at the next exam question. Oh no {\dots}

Open Problems

Does this help? Does this unravel the mystery of the proof? Or is it still magic?



Math Tells

May 22, 2020


How to tell what part of math you are from

Gerolamo Cardano is often credited with introducing the notion of complex numbers. In 1545, he wrote a book titled Ars Magna. He introduced us to numbers like {\sqrt{-5}} in his quest to understand solutions to equations. Cardano was often short of money and gambled and played a certain board game to make money—see the second paragraph here.

Today, for amusement, Ken and I thought we’d talk about tells.

What are tells? Wikipedia says:

A tell in poker is a change in a player’s behavior or demeanor that is claimed by some to give clues to that player’s assessment of their hand.

Other Tells

Ken and I have been thinking of tells in a wider sense—when and whether one can declare inferences amid uncertain information. Historians face this all the time. So do biographers, at least when their subjects are no longer living. We would also like to make inferences in our current world, such as about the pandemic. The stakes can be higher than in poker. In poker, if your “tell” inference is wrong and you lose, you can play another hand—unless you went all in. With science and other academic areas the attitude must be that you’re all-in all the time.

Cardano furnishes several instances. Wikipedia—which we regard as an equilibrium of opinions—says that Cardano

acknowledged the existence of imaginary numbers … [but] did not understand their properties, [which were] described for the first time by his Italian contemporary Rafael Bombelli.

This is a negative inference from how one of Cardano’s books stops short of treating imaginary numbers as objects that follow rules.

There are also questions about whether Cardano can be considered “the father of probability” ahead of Blaise Pascal and Pierre de Fermat. Part of the problem is that Cardano’s own writings late in life recounted his first erroneous reasonings as well as final understanding in a Hamlet-like fashion. Wikipedia doubts whether he really knew the rule of multiplying probabilities of independent events, whereas the essay by Prakash Gorroochurn cited there convinces us that he did. Similar doubt extends to how much Cardano knew about the natural sciences, as correct inferences (such as mountains with seashell fossils having once been underwater) are mixed in with what we today regard as howlers.

Every staging of Shakespeare’s Hamlet shows a book by Cardano—or does it? In Act II, scene 2, Polonius asks, “What do you read, my lord?”; to which Hamlet first replies “Words words words.” Pressed on the book’s topic, Hamlet perhaps references the section “Misery of Old Age” in Cardano’s 1543 book De Consolatione but what he says is so elliptical it is hard to tell. The book also draws particular parallels between sleep and death that go into Hamlet’s soliloquy opening Act III. The book had been published in England in 1573 as Cardan’s Comfort under the aegis of the Earl of Oxford so it was well-known. Yet the writer Italo Calvino held back from the inference:

To conclude from this that the book read by Hamlet is definitely Cardano, as is held by some scholars of Shakespeare’s sources, is perhaps unjustified.

To be sure, there are some who believe Shakespeare’s main source was Oxford, in manuscripts if not flesh and blood. One reason we do not go there is that we do not see the wider community as having been able to establish reliable principles for judging what kinds of inferences are probably valid. We wonder if one could do an experiment of taking resolved cases, removing most of the information to take them down to the level of unresolved cases, and seeing what kinds of inferences from partial information would have worked. That’s not our expertise, but within our expertise in math and CS, we wonder if a little experiment will be helpful.

To set the idea, note that imaginary numbers are also called complex numbers. Yet the term complex numbers can mean other things. Besides numbers like {2 + 3i} it also can mean how hard it is to construct a number.

In number theory, the integer complexity of an integer is the smallest number of ones that can be used to represent it using ones and any number of additions, multiplications, and parentheses. It is always within a constant factor of the logarithm of the given integer.
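The definition just quoted translates directly into a small dynamic program, sketched here in Python. The only observation needed is that a minimal expression for {n > 1} is either a sum {a + (n-a)} or a product {d \cdot (n/d)} of minimal subexpressions.

```python
def integer_complexities(N):
    """c[n] = least number of 1s needed to build n from 1s using + and *."""
    c = [0] * (N + 1)
    if N >= 1:
        c[1] = 1
    for n in range(2, N + 1):
        # best way to split n as a sum a + (n - a)
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        # best way to split n as a product d * (n // d)
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = integer_complexities(12)
print(c[6], c[12])  # 5 and 7: 6 = (1+1)*(1+1+1), 12 = (1+1)*(1+1)*(1+1+1)
```

One can also use this to eyeball the logarithmic bound mentioned above: the values grow like a small constant times {\log n}.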

How easy is it to tell what kind of “complex” is meant if you only have partial information? We don’t only mean scope-of-terminology issues; often a well-defined math object is used in multiple areas. Let’s try an experiment.

Math Tells

Suppose you walk into, or log into, a talk without any idea of the topic. If the speaker uses one of these terms, can you tell what her talk might be about? Several have multiple meanings. What are some of them? A passing score is {\dots}

  1. She says let {S} be a c.e. set.

  2. She says let {k} be in {\omega}.

  3. She says by the König principle.

  4. She says {S} is a prime.

  5. She says {n} is a prime.

  6. She says {G} is solvable.

  7. She says let its degree be {0}.

  8. She says there is a run.

  9. She says it is reducible.

  10. She says it is satisfiable.

Open Problems

What are your answers? Do you have some tells of your own?

The 2020 Y Prize

May 16, 2020


Can one maintain privacy while publicly congratulating?

[ Cropped from X facemask source ]

X, who shall go nameless here, recently won a prize for research.

Today we congratulate X and wish to talk a bit about their work while safeguarding X’s identity.

X also won another prize Z—but we will not mention which one. In both cases the honor is about the creation of a new field rather than the solution of a long-standing open problem. Her—OK, X is a she—brilliant work has allowed many others to write papers in these areas. And it has led to countless conferences, meetings, and talks.

Oops, we used the word “brilliant.” That has already leaked her identity, given where you are reading this. Just type it into our search box.

We assume you get the point. One can leak information about people without mentioning them directly.





The Real Start of the Post

Her work in Algorithmic Fairness too

Cynthia Dwork is the winner of the 2020 Donald E. Knuth Prize. The prize is awarded jointly by ACM SIGACT and the IEEE Computer Society Technical Committee. The ceremony will include a lecture by Cynthia at FOCS 2020, which is still projected to meet in-person at Duke next November 16–19.

Today we congratulate Cynthia and talk a little about her work.

I, Dick, have known her for many years. I am excited to see that she has won the Knuth prize.

She has always been a top researcher and a terrific speaker. I fondly recall some of her earlier talks at Princeton and elsewhere. There is something about her ability to explain complex ideas in a lucid manner. I have often thought that this ability must be one of the reasons she has been so successful in her research.

We will talk a little about differential privacy (DP) and then her other joint work which led to another prize. We should be quick to mention that others have been instrumental in both developments.

Quantifying an Idea

DP might have been appreciated long before there were large databases and computer systems. The genius is not just the simplicity of the idea but the development of mathematical tools to govern its application.

For a simple example, suppose a professor has a general policy of posting exam average scores in her class of 100. Suppose she posts that the average is 73.3 after 98 students have taken the exam but two need to make it up a week later. Then suppose she re-posts the average after they take it and it is 72.5. The difference may not seem like much. But the class learns two things:

  • Both students had their makeup scores included in the average. (If the 73.3 is exact then it is not even possible to get a rounded 72.5 with just one score of zero, pending the other.)

  • The two students collectively bombed the exam. (Just possibly one got a zero while the other did OK, but even then the other’s score was below average.)
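The parenthetical claim in the first bullet can be checked with a few lines of arithmetic, sketched here in Python under the stated assumption that 73.3 is the exact average of the first 98 scores.

```python
avg98 = 73.3
total98 = avg98 * 98  # total points of the first 98 students

# If only one makeup score (a zero) were included, the reported average
# would be taken over 99 students:
avg_with_one_zero = round(total98 / 99, 1)
print(avg_with_one_zero)  # 72.6, not 72.5 -- so both makeups are included

# With both makeups included, a reported 72.5 pins down their combined total:
combined = round(72.5 * 100 - total98, 1)
print(combined)  # roughly 66.6 points between the two of them
```

Since the reported 72.5 is itself rounded, the true combined total lies in a small range around this value, but either way the two students averaged far below the class: exactly the leak described above.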

Your first thought might be that the fault lies in the excessive precision of the grade reporting. If she had rounded to the nearest integer then both averages would have been reported as “73” and there would be no problem. But the protection from rounding is accidental. The new average could have been 72.49, which would be rounded down to “72,” and then the set of possible interpretations is worse than before.

In wide-scale situations where queries for multiple averages at different times are allowed, effects of rounding may cancel so that a large statistical analysis can subtract them out. Thus passively reducing precision is not enough. The key is to affect the precision actively by adding a random {\pm \epsilon} to the reported results, where {\epsilon} is judiciously chosen. The point is:

Adding {\pm \epsilon}, better than rounding, makes inference much harder.

In the exam-score example, randomly adding or subtracting a number between 0 and 2 changes the calculus so that seeing “73.3” and then “72.5” does not allow a high-confidence inference about the makeup students’ scores. One could reason that the first number could easily have been 72.3 and the second 73.5, in line with interpretations where the makeup students upped the average. Actually, a jump from 72.3 to 73.5 in the true averages is impossible from just two new scores, so the class could deduce that the reported averages are fudged. But that is OK. Two basic purposes of reporting the averages are met despite the fudging:

  • to show the exam was consistent with past years—especially good to know this year;

  • to give you some idea (knowing your own score exactly) where you stand relative to the class.
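The active approach described above can be sketched in Python. This assumes uniform noise, which conveys the idea; real differential privacy calibrates Laplace noise to the query's sensitivity and the privacy budget.

```python
import random

def noisy_report(true_avg, eps=2.0, rng=random):
    """Report the average plus uniform noise in [-eps, eps], to one decimal."""
    return round(true_avg + rng.uniform(-eps, eps), 1)

rng = random.Random(0)  # seeded only so the sketch is reproducible
first = noisy_report(73.3, rng=rng)   # lands anywhere in 71.3 .. 75.3
second = noisy_report(72.5, rng=rng)  # lands anywhere in 70.5 .. 74.5
print(first, second)
```

The two reported ranges overlap heavily, so seeing the pair of numbers no longer supports a high-confidence inference about the makeup students' scores, while both reports stay close enough to the truth to serve the two purposes listed above.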

In related situations, one could argue that a mean of {\pm 1} for {\epsilon} is not enough. If the exam has 100 one-point questions, each with a 73.3% expectation of being answered correctly by everyone, then the standard deviation about the mean is almost {2}. Using a mean change of {\pm 2} seems fine in this case as well, whereas rounding to the nearest 5 (i.e., reporting “70” or “75” as the average) might be too coarse.

We do not intend our example to be definitive, only that it conveys relevant issues. The example is enough to convey that DP leads into deep matters in statistics and what can be inferred by statistical methods. The latter melds the complexity-theoretic notion of knowledge gain from interactive protocols with statistical (machine) learning. And as with quantum computing, theorems and methods from the subfield have come back to inform fundamental questions about complexity (of course, this is true of crypto in general).

Privacy and Online Chess

I, Ken, am the lead on this post, and have had data-privacy affect me personally. The pandemic has forced the cancellation of all over-the-board (OTB) tournaments, some as far ahead as August. Chess has all moved online, as heralded by two recent articles in the New York Times and one in the Wall Street Journal proclaiming, “Sports: Chess Is the New King of the Pandemic.”

As these articles also note, cheating has long been a major concern for online chess. I have been a recourse for second opinions in online cheating cases but have not tried to be primary in this domain. In a great conversation in November 2006, Daniel Sleator (who—besides being famous for splay trees and amortized analysis—co-founded the Internet Chess Club) told me how online cheating detection avails itself of much information about the manner of play that is not applicable or reliably available in OTB chess. Also for academic reasons, I chose a minimal approach of using no information besides the moves of the games and the Elo ratings of the players. An unexpected advantage of that choice is exemplified by an e-mail I received—fifteen minutes before a conference call on Wednesday with representatives of the International Chess Federation (FIDE) and other online platforms—from a different chess organizer requesting a statement that:

…you will only use the data of the games, and not any personal data that might be obtained from the [tournaments] for your research, such as player name etc. … in writing as a standard precaution … to make sure that there isn’t third party usage of the data or usage outside of its intent.

Some recent legal decisions have confirmed that the moves of the games are public data and cannot be copyrighted. The e-mail did not mention the players’ ratings. As used by FIDE and the US and most online platforms (and much more widely), those are 4-digit numbers, and they are public. Knowing all four digits might be enough to pinpoint the identity of a player.

Thus I suggested rounding the rating to the nearest 10 or even 20. This makes little difference to my statistical cheating tests, which already give 25 Elo points benefit-of-doubt for uncertainty about the rating. Tournament officials I’ve served have done this in a couple of individual cases but now the need is en masse.

The question based on the previous section above is whether I should instead (or also) suggest that they implement full DP by adding {\pm \epsilon} with a mean of {\pm 10} or {\pm 20} or so, with-or-without rounding. A caveat is that it is easy for someone with a tournament spreadsheet to round a column of figures before exporting text to me, but applying a random function is less familiar and prone to major mess-ups.
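Both options, rounding a rating column versus adding noise to it, are one-liners; the function names and parameters here are hypothetical. The practical point in the text is that the first is easy to do correctly in a spreadsheet, while the second is less familiar and easier to get wrong.

```python
import random

def round_rating(rating, step=20):
    """Snap a 4-digit Elo rating to the nearest multiple of `step`."""
    return step * round(rating / step)

def noisy_rating(rating, eps=20, rng=random):
    """Add uniform noise in [-eps, eps].  The noisy value must be drawn
    once and reused; re-drawing it per query lets repeated queries
    average the noise away."""
    return round(rating + rng.uniform(-eps, eps))

print(round_rating(2437))  # 2440
```

Either transformation stays well inside the 25-Elo benefit-of-doubt margin already built into the statistical tests, so it costs essentially nothing in accuracy.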

The Other Prize

The Knuth Prize is not Cynthia’s only win for 2020. In December she was awarded the 2020 IEEE Richard W. Hamming Medal. A video gives a one-minute summary of differential privacy and her work on non-malleable cryptosystems via lattice bases. The latter stems from her 2000 paper with Danny Dolev and Moni Naor (DDN).

We have not covered this crypto topic before. It involves chosen-ciphertext attacks where Eve, the eavesdropper, first gets hold of a transmitted ciphertext {y}. The cryptosystem is malleable if Eve can know enough about the structure of {y} to make a legal ciphertext {y'}, one that may decode to a plaintext {x'} that advances her interests compared to the intended {x}. Wikipedia’s article gives an example where {y} embeds a numerical field {r} that Eve can recognize and alter en-route. The paper gives a similar example where Eve cannot disrupt the transmission of {r} but can separately send her own {y'} with a value {r'} that gives her advantage. This may be so even without being able to decode or gain any information about the plain value of {r}, except that {r'} is different in a meaningful way. For instance, {r'} could be a lower bid on a contract or higher bid in a secure auction.

One would think that {r} should just be encoded via an error-correcting code whose range is secure so that an attacker would be overwhelmingly unlikely to find another value {r'} in the legal range of the code. However, this clashes with the goal of homomorphic encryption which allows numerical operations on encoded values. Or the technology used may be too simple, as in the paper’s mention of submitting bids to a fax number.

Note: both fax machines and the paper date to the previous millennium. But the paper next gives an example that is staying with us in this millennium, where {y} is a proof of {\mathsf{P \neq NP}}. Suppose Alice arranges to transmit {y} to a referee Ralph using a zero-knowledge protocol, but Eve is able to intercept their messages. Eve can impersonate Ralph to Alice and impersonate Alice to Ralph as Ralph asks questions in the protocol. It may seem impossible to forestall such a “man-in-the-middle” attack without extrinsic considerations of hardware and timing, but DDN design an ingenious scheme for non-malleable zero-knowledge proofs.

A Video

En-passant of this, we want to note a great video that was made for next month’s Women in Theory workshop. It has an all-female cast and lyrics by Avi Wigderson parodying the song “I Will Survive.” Cynthia is not in the video, but she likewise has been a great spokesperson for the experiences of women in our professional life.

Open Problems

Our congratulations again to Cynthia—by name—on the Knuth and Hamming prizes.

Consistency and P=NP

May 11, 2020


Can we at least show that {P\neq NP} is consistent?

[ From personal page ]

Jan Krajicek is an expert on linking computational complexity and mathematical logic. He has authored four books in this area, including the often-cited text Bounded Arithmetic and Complexity Theory. Moreover, he has studied the consistency of various theories and whether they can prove {\mathsf{P \neq NP}} and related statements.

Today I thought we would discuss consistency proofs in general.

Read more…

Mathematics of COVID-19

May 1, 2020


It’s not just {R_0}

[ Sir Francis Galton by Charles Wellington Furse ]

Francis Galton is a perfect example of a Victorian-era scientist. Galton, who was knighted in 1909, had many roles: statistician, sociologist, psychologist, anthropologist, tropical explorer, geographer, meteorologist, geneticist, and inventor. He coined the phrase “nature versus nurture” and more.

Today we trade in jokes for some mathematics of the virus.

Read more…

Time For Some Jokes

April 26, 2020


Can we still smile?

[ Hardy and Littlewood]

John Littlewood lived through the 1918–1919 flu pandemic, yet he appears not to have remarked on it in print. Nor can we find mention of it by Godfrey Hardy in A Mathematician’s Apology—though Hardy did write about the ravages of WW I.

Today, Ken and I thought you might like some fun comments that are not about the current pandemic.

This is not to say we are ignoring it. We are all fighting the virus in one way or another. Our hearts go out to those of you fighting it directly. We are all worried about ourselves and others. We are stuck at home, at least most of us. We are all in this terrible time together. We hope you all are safe and well.

We thought we would list a few jokes and stories that you might enjoy. We wrote recently about one kind of mathematical joke that can be given various proportions of pure levity and mathematical content. Our friends Lance Fortnow and Bill Gasarch, plus commenters in their item, collected some jokes on the computer science side.

Littlewood’s notion of “mathematical joke” leaned more on mathematical content, though his memoir A Mathematician’s Miscellany includes many funny stories as well. At the end of his introduction to the book, he wrote:

A good mathematical joke is better, and better mathematics, than a dozen mediocre papers.

We will start at the levity end. This is almost a math joke:

The Daily News published a story saying that one-half of the MPs (Members of Parliament) were crooks.
The Government took great exception to that and demanded a retraction and an apology.
The newspaper responded the next day with an apology and reported that one-half of the MPs were not crooks.

We like this one, even if it is not really a hardcore math one. It does rely on the fact that {\frac{1}{2} + \frac{1}{2} = 1.}

Jokes and More

The following are some examples that we hope you all like. They are from a variety of sources:

  • Jokes that mathematicians think are funny.

  • Some are from StackExchange.

  • Others are from Andrej Cherkaev’s page.

We have lightly edited a few.

{\bullet } “My age is two billion years old,” said Paul Erdős. The point is:

When I was seventeen years old it was said the earth was two billion years old. Now they say it is four billion years old. So my age is about two billion years old.

{\bullet } There was a statistician who drowned crossing a river {\dots} It was 3 feet deep on average.

{\bullet } An infinite number of mathematicians walk into a bar. The first one orders a beer. The second orders half a beer. The third orders a third of a beer. The bartender bellows, “Get the heck out of here, are you trying to ruin me?”

{\bullet } A chemist, a physicist, and a mathematician are stranded on an island when a can of food rolls ashore. The chemist and the physicist come up with many ingenious ways to open the can. Then suddenly the mathematician gets a bright idea: “Assume we have a can opener {\dots}”

{\bullet } A theorist decides she wants to learn more about practical problems. She sees a seminar with the title “The Theory of Gears.” So she goes. The speaker stands up and begins, “The theory of gears with a finite number of teeth is well known {\dots}”

{\bullet } The reason that every major university maintains a department of mathematics is that it is cheaper to do this than to institutionalize all those people.

Regarding the last one, Littlewood did after all write in his book:

Mathematics is a dangerous profession; an appreciable proportion of us go mad.

This appears to have been a playful swipe at Hardy’s decision to leave Cambridge for Oxford. It was couched in a discussion of events that would seem to have had tiny probabilities before they happened.

The last two we’ve picked out from the above sites verge into philosophy:

{\bullet } The cherry theorem: Question: What is a small, red, round thing that has a cherry pit inside?
Answer: A cherry.

{\bullet } René Descartes went into his favorite bar and the bartender asked, “Would you like your usual drink tonight, Monsieur Descartes?” Descartes replied, “I think not.” Then he promptly ceased to exist.

Wrong Derivations, Right Results

Littlewood’s standards for a “mathematical joke” were higher than ours, but we will start by adapting an example from this MathOverflow discussion of Littlewood-style jokes. Sometimes we can play a joke on ourselves by deriving a result we know is right but with an incorrect proof. Here is the example:

{\bullet} Casting out 6’s. Suppose we want to simplify the fraction {\frac{166}{664}}. We can use the rule of casting out 6’s to get

\displaystyle  \frac{166}{664} = \frac{16}{64} = \frac{1}{4}.

The rule works quite generally:

\displaystyle  \frac{1666}{6664} = \frac{16666}{66664} = \frac{166666}{666664} = \cdots

\displaystyle  \frac{26}{65} = \frac{266}{665} = \frac{2666}{6665} = \frac{26666}{66665} = \cdots

You can even turn the paper upside down and cast out the {6}‘s that you see then:

\displaystyle  \frac{19}{95} = \frac{199}{995} = \frac{1999}{9995} = \frac{19999}{99995} = \cdots

\displaystyle  \frac{49}{98} = \frac{499}{998} = \frac{4999}{9998} = \frac{49999}{99998} = \cdots

Note, this is a joke: The rule of course does not actually work all the time:

\displaystyle  \frac{56}{65} = \frac{5}{5} = 1.
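A few lines of Python can check which of these cancellations happen to land on the truth. The digit-striking helper below is our own, purely for illustration:

```python
from fractions import Fraction

def cast_out(num, den, digit="6"):
    # Naively strike every occurrence of `digit` from numerator and denominator.
    strip = lambda x: int(str(x).replace(digit, ""))
    return Fraction(strip(num), strip(den))

# Cases where the bogus rule happens to give the right value:
assert cast_out(166, 664) == Fraction(166, 664) == Fraction(1, 4)
assert cast_out(26666, 66665) == Fraction(26666, 66665) == Fraction(2, 5)
# The upside-down versions cast out 9's instead:
assert cast_out(1999, 9995, digit="9") == Fraction(1999, 9995) == Fraction(1, 5)
# ...and the case where the joke gives itself away:
assert cast_out(56, 65) == 1
assert Fraction(56, 65) != 1
```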


We thought we would try to come up with our own examples, or at least blend in other sources. It once struck me (Ken), on reading a column by Martin Gardner on difference equations, that they give a “convincing proof” of {0^0 = 1}. Consider the powers of a natural number {k}, say {k = 5}. Take differences like so:

\displaystyle  \begin{array}{ccccccccccc} 1 & & 5 & & 25 & & 125 & & 625 & & \dots\\ & 4 & & 20 & & 100 & & 500 & & \dots\\ & & 16 & & 80 & & 400 & & \dots & & \\ & & & 64 & & 320 & & \dots & & &\\ & & & & 256 & & & & \\ & & & & & \ddots & & & \end{array}

The powers of {k-1} always appear on the bottom diagonal. Thus we have: {(k-1)^0 = 1}, {(k-1)}, {(k-1)^2}, and so on. Now do this for {k = 1}:

\displaystyle \begin{array}{ccccccccccc} 1 & & 1 & & 1 & & 1 & & 1 & & \dots\\ & 0 & & 0 & & 0 & & 0 & & \dots\\ & & 0 & & 0 & & 0 & & \dots & & \\ & & & 0 & & 0 & & \dots & & &\\ & & & & 0 & & & & \\ & & & & & \ddots & & & \end{array}

The diagonal now holds the powers of {0}. It thus follows that {0^0 = 1}.
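The diagonal can be checked mechanically. This small sketch (our own, not Gardner’s) builds the difference table and reads off its leading entries:

```python
def diff_diagonal(k, n):
    # Row 0 holds the powers k^0, k^1, ..., k^n; each later row holds the
    # successive differences of the row above. Collect the first entry of
    # each row, i.e. the table's left diagonal.
    row = [k**i for i in range(n + 1)]
    diag = [row[0]]
    for _ in range(n):
        row = [b - a for a, b in zip(row, row[1:])]
        diag.append(row[0])
    return diag

assert diff_diagonal(5, 4) == [4**i for i in range(5)]   # powers of k-1 = 4
assert diff_diagonal(1, 4) == [1, 0, 0, 0, 0]            # "powers of 0", so 0^0 = 1
```

Of course, the “proof” only shows that {0^0 = 1} is the value consistent with the difference-table pattern, which is exactly the spirit of the joke.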

Open Problems

What are your favorite mathematical jokes? Please send them to us. Be safe. Be well.

[fixed equations in last main section]